---
author:
- 'R. Arcidiacono'
- 'M. Berretti'
- 'E. Bossini'
- 'M. Bozzo'
- 'N. Cartiglia'
- 'M. Ferrero'
- 'V. Georgiev'
- 'T. Isidori'
- 'R. Linhart'
- 'N. Minafra'
- 'M. M. Obertino'
- 'V. Sola'
- 'N. Turini'
bibliography:
- 'Diam-2016-DD-base-J.bib'
title: Test of UFSD Silicon Detectors for the TOTEM Upgrade Project
---

Timing Detector for the TOTEM Proton Time of Flight Measurement at the LHC {#sec:det}
==========================================================================

The TOTEM experiment will install new timing detectors to measure the time of flight (TOF) of protons produced in central diffractive (CD) collisions at the LHC [@Albrow:1753795].\
The CD interactions measured by TOTEM at $\sqrt{s}=13$ TeV are characterized by two high-energy protons (with momentum greater than 5 TeV) scattered at less than 100 $\mu$rad from the beam axis. In the presence of pile-up[^1] events, the reconstruction of the proton interaction vertex makes it possible to associate the physics objects reconstructed by the CMS experiment with the particles generated from that vertex. The TOF detectors installed in the TOTEM Roman Pots (RPs)[^2] will measure with high precision the arrival time of the CD protons on each side of the interaction point. They will operate in the LHC under moderate pile-up ($\mu\sim$1), and a time precision of at least 50 ps per arm is required to efficiently identify the event vertex [@CERN-LHCC-2014-024]. Since the difference of the arrival times is directly proportional to the longitudinal position of the interaction vertex ($z_{VTX} = c \Delta t/2$), a precision of 50 ps allows the longitudinal position of the interaction vertex to be determined to better than 1 cm. The timing detector will be installed in four vertical RPs located at 210 m from interaction point 5 (IP5) of the LHC.
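Since $z_{VTX} = c\,\Delta t/2$, the quoted timing precision maps directly onto a vertex resolution. A minimal numerical sketch (the function name is illustrative; $c$ is the speed of light in vacuum):

```python
# Longitudinal vertex position from the proton arrival-time difference,
# z_vtx = c * dt / 2, with c the speed of light in vacuum.
C = 299_792_458.0  # m/s

def vertex_z(delta_t_s):
    """Longitudinal vertex position (m) from the arrival-time difference (s)."""
    return C * delta_t_s / 2.0

# A 50 ps time-difference precision gives sigma_z ~ 7.5 mm,
# i.e. below the 1 cm quoted in the text.
sigma_z = vertex_z(50e-12)
```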
The detector comprises four identical stations, each consisting of four hybrid boards[^3] equipped either with an ultra fast silicon detector (UFSD) [@DallaBetta2015154], [@Sadrozinski20147], [@Sadrozinski2013226], [@Cartiglia2015141] or with a single crystal chemical vapor deposition (scCVD) diamond sensor [@timing-nov-15], [@Berretti:2016sfj]. The board contains 12 independent amplifiers, each bonded to a single pad (pixel) of the sensors. The typical time precision of one plane equipped with scCVD is in the range of 50–100 ps, while it is in the 30–100 ps range for one equipped with a UFSD sensor. Combining TOF measurements from 4 detector planes will provide an ultimate time precision better than $\sim$50 ps, which translates into a precision on the longitudinal position of the interaction vertex of $\sigma_z\,<$ 1 cm.

Ultra Fast Silicon Detector {#sec:ufsd}
===========================

Ultra Fast Silicon Detectors, a new concept in silicon detector design, combine the best characteristics of standard silicon sensors with the main feature of Avalanche Photo Diodes (APD).

![Comparison of the structures of a silicon diode (left) and a Low-Gain Avalanche Diode (right). The additional $p^+$ layer near the $n^{++}$ electrode creates, when depleted, a large electric field that generates charge multiplication.[]{data-label="fig:LGAD"}](sketch_new){width="70.00000%"}

UFSD are thin (typically $50\,\mu$m thick) silicon Low Gain Avalanche Diodes (LGAD) [@FernandezMartinez201198], [@Pellegrini201412] that produce large signals and hence a large $dV/dt$, a characteristic necessary to measure time accurately. Charge multiplication in silicon sensors happens when the charge carriers drift in electric fields of the order of $E \sim 300$ kV/cm. Under this condition the drifting electrons acquire sufficient kinetic energy to generate additional e/h pairs.
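The multiplication mechanism follows an exponential gain law, $N(l) = N_0\,e^{\alpha(E)\,l}$. As an illustrative order-of-magnitude check (the values of $\alpha$ and $l$ below are assumptions chosen for illustration, not taken from the paper):

```latex
% Gain G over a multiplication path l; a gain of 10 requires
% \alpha l = \ln 10. With an illustrative l of 5 um this gives
% \alpha of order 5 x 10^3 cm^{-1}.
G = \frac{N(l)}{N_0} = e^{\alpha(E)\,l}, \qquad
G = 10 \;\Rightarrow\; \alpha\,l = \ln 10 \approx 2.30,
\quad l \approx 5\,\mu\mathrm{m} \;\Rightarrow\;
\alpha \approx 4.6\times10^{3}\,\mathrm{cm}^{-1}.
```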
A field of 300 kV/cm in a semiconductor can be obtained by implanting an appropriate charge density, around $N_D \sim 10^{16}/cm^3$, which locally generates the required very high field. Indeed, in the LGAD design (Figure \[fig:LGAD\]) an additional doping layer is added at the $n-p$ junction which, when fully depleted, generates the high field necessary to achieve charge multiplication. The gain depends exponentially on the value of the electric field E, $N(l) = N_o e^{\alpha (E)l}$, where $\alpha$ is a strong function of E and $l$ is the mean path length in the high field region. First results on the time resolution of thin LGADs (UFSD), based on beam test measurements, were published in 2016 [@Cartiglia:2016voy].

![Sensor geometry of the TOTEM UFSD prototype, made up of 15 pixels of different dimensions.[]{data-label="geogeo"}](geometry){width="0.65\linewidth" height="0.4\textheight"}

Radiation tolerance studies have shown [@Baldassarri2016], [@1748-0221-10-07-P07006] that LGAD sensors can withstand up to $10^{14} \; n_{eq}/cm^2$ without loss of performance. LGAD sensors can be built in many sizes and shapes, ranging from thin strips to large pads. The measurements reported here have been performed on a 2 cm$^2$, 50 $\mu$m thick UFSD sensor, manufactured by CNM[^4] with a structure specifically designed for the TOTEM experiment, mounted on a standard TOTEM hybrid board [@timing-nov-15].

Description of the UFSD-based Timing Board {#sec:descr}
==========================================

The UFSD sensor used for the prototype timing plane has 15 pixels with the layout shown in Figure \[geogeo\].

  --------- ------------ ------------- -----------------------
  Pixel N   Surface      Capacitance   Preamplifier feedback
            \[mm$^2$\]   \[pF\]        \[$\Omega$\]
  1         1.8          3.1           1 k
  2         2.2          4.4           1 k
  3         3.0          6.0           1 k
  4         7.0          14            1 k
  5         14           28            300
  --------- ------------ ------------- -----------------------

  : Characteristics of the 50 $\mu$m UFSD pixels used in the tests.
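The areal capacitance implied by the table above can be checked in a few lines; a minimal sketch (the dictionary simply transcribes the table; the $\sim$2 pF/mm$^2$ scaling is quoted later in the text):

```python
# Pixel areas (mm^2) and capacitances (pF), transcribed from the table above.
pixels = {
    1: (1.8, 3.1),
    2: (2.2, 4.4),
    3: (3.0, 6.0),
    4: (7.0, 14.0),
    5: (14.0, 28.0),
}

# Areal capacitance C/A for each pixel: the 50 um thick sensor comes
# out close to 2 pF/mm^2 for every pixel.
areal = {n: cap / area for n, (area, cap) in pixels.items()}
```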
\[table:dimensions\]

Prior to the gluing of the sensor on the hybrid board, each of the 15 pixels had been tested in the lab to determine its maximum operating voltage.

![The UFSD sensor mounted on the TOTEM hybrid board.[]{data-label="pianlab"}](Piano){width="0.4\linewidth"}

Only pixels with a breakdown voltage higher than 180 V and a leakage current lower than 0.1 mA were bonded to the amplification channel by means of standard 25 $\mu m$ aluminum wires (Figure \[pianlab\]). The UFSD output pulse shape, simulated with the program Weightfield2[^5], developed specifically for LGAD devices [@Cenna2015149], assuming a bias voltage of 200 V and a sensor gain of 10, is shown in Figure \[currr\]. The detector generates a current whose maximum is about 8 $\mu$A.

![Simulation of the pulse shape from a 50$\,\mu$m UFSD with a gain of 10 (from [@Cartiglia:2015iua]). The plot shows the contribution of each component of the generated charge.[]{data-label="currr"}](Fig3ufsd){width="0.58\linewidth"}

The capacitance of the 50 $\mu$m thick UFSD pixels scales linearly with their area, as $\sim\,$2 pF/mm$^2$: the dimensions and corresponding capacitances of the pixels measured here are summarized in Table \[table:dimensions\].

Front End Electronics {#sec:Feel}
=====================

Given the intrinsic charge amplification of the UFSD, one expects the primary charge presented at the input of the amplifier to be 10–100 times larger than that expected from a diamond sensor. The TOTEM hybrid, originally designed for scCVD diamonds [@timing-nov-15], was modified for the UFSD by eliminating the second amplification stage, referred to elsewhere as ABA. The amplification chain now has only 3 active elements (one BFP840ESD and two BFG425W BJT transistors).

![Event display of several MCP (top) and UFSD (bottom) signals.
The oscilloscope record was triggered by the UFSD signal.[]{data-label="edisp"}](edisp){width="0.6\linewidth"}

Moreover, since the UFSD pixels have a larger capacitance than diamond sensors, in order to maintain a fast rise time the feedback resistor of the preamplification chain has been reduced to 1 k$\Omega$ or 300 $\Omega$, according to the capacitance of the pixel (see Table \[table:dimensions\]).

Test Beam Measurements {#sec:meas}
======================

The time precision of the UFSD sensors has been measured at the H8 beam line of the CERN SPS, a 180 GeV/c pion beam, by computing the time difference of the signals produced by particles crossing a Micro Channel Plate (MCP) PLANACON$^{TM}$ 85011-501[^6] and one of the UFSD pixels.

![Signal-to-noise ratio of the MCP and of the 2.2 mm$^2$ UFSD pixel.[]{data-label="ston"}](ston){width="0.7\linewidth" height="0.3\textheight"}

The particle rate was $\sim 10^3\,$/mm$^2$; the HV on the UFSD was set initially at 180 V, the maximum voltage before pixel breakdown, and varied down to 140 V. The maximum current allowed in the present measurement was 0.1 mA. A screen shot from the oscilloscope with the signals from the MCP and the UFSD detectors is shown in Figure \[edisp\]. The UFSD pixels that we tested have areas ranging between 1.8 mm$^2$ and 14 mm$^2$. The $2.2\, {\rm mm}^2$ UFSD pixel shows an average S/N of $\sim$60 (Figure \[ston\]) and a risetime of 0.6 ns (Figure \[rise\]). The UFSD S/N distribution for the events used in this analysis does not show the typical Landau tail; this is due to the saturation of $\sim\,$10% of the signals and may include the effect of a non-linearity in the modified amplification chain.

![Risetime of the MCP and of the 2.2 mm$^2$ UFSD pixel.[]{data-label="rise"}](rise){width="0.7\linewidth"}

Signals are recorded with a 20 GSa/s Agilent DSO9254A oscilloscope. The time difference between the MCP and the $2.2\, {\rm mm}^2$ UFSD pixel is shown in Figure \[deltaT\].
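The arrival times entering this difference are extracted offline with a constant-fraction discriminator at 30% of the pulse maximum. A minimal sketch of such a software CFD, assuming already-digitized waveforms (the function name and synthetic pulse are illustrative, not the experiment's actual analysis code):

```python
import numpy as np

def cfd_time(t, v, fraction=0.3):
    """Leading-edge time at `fraction` of the pulse maximum,
    found by linear interpolation between adjacent samples."""
    t, v = np.asarray(t, float), np.asarray(v, float)
    peak = int(v.argmax())
    thr = fraction * v[peak]
    # Walk back from the peak to the first threshold crossing.
    for i in range(peak, 0, -1):
        if v[i - 1] < thr <= v[i]:
            f = (thr - v[i - 1]) / (v[i] - v[i - 1])
            return t[i - 1] + f * (t[i] - t[i - 1])
    return t[0]
```

Applied identically to the MCP and UFSD waveforms, the difference of the two CFD times gives $\Delta t$, independently of the absolute pulse amplitudes.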
The difference is computed off-line using constant-fraction discrimination with a threshold at 30% of the maximum for both the UFSD and the MCP signal.

![Difference of the arrival times measured by the MCP and by the 2.2 mm$^2$ UFSD pixel biased at 180 V.[]{data-label="deltaT"}](deltaT){width="0.7\linewidth"}

The MCP time precision was obtained from other measurements and is $(40\pm 5)\,$ps. The results of the measurements are summarized in Table \[table:results\]. Figures \[summary-C\] and \[summaryhv\] show the UFSD time precision as a function of the pixel capacitance and of the applied bias voltage, respectively; the second set of measurements was performed on the pixel with an area of 2.2 mm$^2$. The uncertainty of the measurement is mainly due to the uncertainty with which the MCP time precision is known.

  ---------------- ------------- ------- ----------------
  Surface          Capacitance   HV      Time precision
  \[mm$^2$\]       \[pF\]        \[V\]   \[ps\]
  1.8              3.1           180     32
  2.2              4.4           180     33
  3.0              6.0           180     38
  7.0              14            180     57
  14               28            180     102
  2.2              4.4           140     49
  2.2              4.4           160     41
  2.2              4.4           180     33
  ---------------- ------------- ------- ----------------

  : Results of the time precision measurements as a function of the pixel capacitance, pixel surface area and applied bias voltage. The uncertainty on the measured values is $\sim 5\,$ps and depends essentially on the uncertainty of the MCP reference measurement.

\[table:results\]

![UFSD time precision as a function of the pixel capacitance for a bias of 180 V.[]{data-label="summary-C"}](prec-vs-C){width="0.7\linewidth"}

![UFSD time precision (2.2 mm$^2$ pixel) as a function of the applied bias voltage.[]{data-label="summaryhv"}](prec-vs-bias){width="0.7\linewidth"}

The trend of the measurements suggests that a time precision of less than 30 ps could be reached for the smallest-area pixel biased at 200 V.

Conclusions {#sec:concl}
===========

We have described the timing performance of a 50 $\mu$m thick UFSD detector in a beam of minimum ionizing particles.
A time precision in the range of 30–100 ps has been measured, depending on the pixel capacitance. The UFSD technology will be used by the TOTEM experiment in the vertical RPs together with scCVD sensors.

Acknowledgments {#sec:ack .unnumbered}
===============

We thank Florentina Manolescu and Jan Mcgill for the realization of the unusual bonding of the sensors. Support for some of us to travel to CERN for the beam tests was provided by AIDA-2020-CERN-TB-2016-11. This work was supported by the institutions listed on the front page and also by the project LM2015058 from the Czech Ministry of Education, Youth and Sports. Part of this work has been financed by the European Union’s Horizon 2020 Research and Innovation funding program, under Grant Agreement no. 654168 (AIDA-2020) and Grant Agreement no. 669529 (ERC UFSD669529), and by the Italian Ministero degli Affari Esteri and INFN Gruppo I and V. The design was supported by the National program of sustainability LO1607 Rice-Netesis of the Ministry of Education, Youth and Sports, Czech Republic.

[^1]: The probability that more than one interaction is produced during the same bunch crossing.

[^2]: A special movable insertion in the LHC vacuum beam pipe that allows a detector edge to be moved very close to the circulating beam.

[^3]: The particle sensor and the amplification electronics are mounted on the same PCB.

[^4]: [Centro Nacional de Microelectrónica](http://www.cnm.es), Campus Universidad Autónoma de Barcelona, 08193 Bellaterra (Barcelona), Spain.

[^5]: The open-source code may be found at <http://personalpages.to.infn.it/~cartigli/Weightfield2/Main.html>

[^6]: PLANACON$^{TM}$ Photomultiplier tube assembly 85011-501 from BURLE.
---
abstract: 'The magnetic phases of a triangular-lattice antiferromagnet, CuCrO$_2$, were investigated in magnetic fields along the $c$ axis, $H$ // \[001\], up to 120 T. Faraday rotation and magneto-absorption spectroscopy were used to unveil the rich physics of the magnetic phases. An up-up-down (UUD) magnetic structure phase was observed around 90–105 T at temperatures around 10 K. Additional distinct anomalies adjacent to the UUD phase were uncovered, and the Y-shaped and the V-shaped phases are proposed as viable candidates. These ordered phases emerge as a result of the interplay of geometrical spin frustration, single-ion anisotropy and thermal fluctuations in an environment of extremely high magnetic fields.'
author:
- Atsuhiko Miyata
- Oliver Portugall
- Daisuke Nakamura
- Kenya Ohgushi
- Shojiro Takeyama
title: 'Ultrahigh Magnetic Field Phases in Frustrated Triangular-lattice Magnet CuCrO$_2$'
---

In geometrically frustrated magnets, the competition between different magnetic interactions produces highly degenerate magnetic ground states that are vulnerable to tiny perturbations, leading to diverse novel magnetic phases [@textfrust]. Among them, one typical state is a multiferroic state in which ferroelectricity is induced by unconventional magnetic structures that arise from geometrical magnetic frustration [@kimura03; @kimura06]. Since changes in the spin structure alter the ferroelectricity, the application of magnetic fields plays an important role in elucidating the rich variety of magnetic and ferroelectric phases in geometrically frustrated magnet systems. Typical triangular-lattice antiferromagnets that are also multiferroic are CuFeO$_2$ [@kimura06] and CuCrO$_2$ [@seki08; @kimura09], both of which are delafossite oxides and have been intensively investigated in the past decade. CuFeO$_2$ has a Curie-Weiss temperature of around -88 K and exhibits two successive phase transitions around 14 and 11 K [@kimura06].
Below 11 K, its magnetic structure becomes a four-sublattice collinear antiferromagnetic structure. When a magnetic field is applied to this state, a ferroelectric phase appears between $\sim$7 and 13 T, which is induced by a proper-screw magnetic structure. This phenomenon is well described by a theoretical model proposed by Arima [@arima07]. Interestingly, additional magnetic phase transitions occur successively at higher magnetic fields in CuFeO$_2$, and magnetization plateaus with values of 1/5 and 1/3 of the saturation moment have been reported [@lummen10]. To date, several theoretical models have been proposed to explain this rich variety of magnetic and ferroelectric phases. For example, a theory proposed by Fishman *et al.* [@fishman12] suggested a spin Hamiltonian incorporating magnetic interactions up to the third-nearest neighbors as well as easy-axis single-ion anisotropy. The importance of spin-phonon couplings was suggested by Wang and Vishwanath [@wang08]. However, none of these theories has been able to provide a general explanation of the magnetic and electric properties of multiferroic CuFeO$_2$, which therefore still remain an open issue. To illuminate the complicated phases in delafossite oxides forming a triangular lattice, it is crucial to reveal the magnetic phases of another delafossite oxide, CuCrO$_2$, which is known to have a much smaller easy-axis single-ion anisotropy $D$ with respect to its primary nearest-neighbor interaction $J_1$ than CuFeO$_2$ [@ye07; @poienar10; @yamaguchi10; @fujita13]. For example, their ratio $D/J_1$ has been estimated by electron spin resonance (ESR) measurements as $D/J_1\sim0.017$ for CuCrO$_2$, much smaller than the $\sim0.097$ of CuFeO$_2$ [@yamaguchi10; @fujita13].
CuCrO$_2$ has Curie-Weiss temperatures of -211 K (magnetic field applied perpendicular to the triangular-lattice plane) and -203 K (parallel to the plane), and exhibits two successive phase transitions around 24.2 and 23.6 K [@seki08; @kimura09]. Below 23.6 K, its magnetic structure becomes an incommensurate proper-screw magnetic structure, as identified by neutron studies [@soda09], which induces ferroelectricity. This mechanism is described by the theoretical model of Arima [@arima07]. Remarkably, a recent study of CuCrO$_2$ under magnetic fields of up to 65 T applied parallel to the \[001\] axis by Mun *et al.* showed a rich magnetic-field-induced phase diagram including a few ferroelectric phases [@mun14], which are not reproduced by the theoretical model incorporating further-neighbor interactions and easy-axis single-ion anisotropy proposed for CuCrO$_2$ by Fishman [@fishman11]. Lin *et al.* conducted Monte Carlo calculations with a model including “spatially” anisotropic nearest-neighbor interactions and easy-axis single-ion anisotropy terms, which showed good agreement with their new results obtained from an experiment performed under higher magnetic fields, up to 92 T [@lin14]. As a consequence of their different magnetic interactions and anisotropies, the two delafossite compounds, CuFeO$_2$ and CuCrO$_2$, show clearly different magnetic properties at low temperatures. Therefore, unveiling the high-magnetic-field phases of CuCrO$_2$ could provide further insight not only into the rich magnetic and ferroelectric properties of this material but also into those of delafossite oxides in general. In this paper, we present magneto-optical studies (Faraday rotation and magneto-optical spectral absorption measurements) of CuCrO$_2$ carried out in ultrahigh magnetic fields up to 120 T and at temperatures down to 5 K. We reveal magnetic phases newly found in CuCrO$_2$, including the up-up-down (UUD) magnetic structure phase around 90–105 T at $\sim$10 K.
In our experiments, a single-turn coil (STC) ultra-high magnetic field generator (UHMFG) at the Institute for Solid State Physics, University of Tokyo, was used to generate magnetic fields exceeding 100 T [@nakao85]. Faraday rotation and magneto-optical spectral absorption measurements were conducted up to 120 T using a horizontally aligned STC-UHMFG. The optical alignment around the STC was similar to that described in Refs. 19 and 20. Single crystals of CuCrO$_2$ were grown by a flux growth method using Bi$_2$O$_3$ [@kimura08]. Plate-like samples parallel to the (001) crystal plane, about 10 $\times$ 10 $\times$ 1 mm$^3$ in size, were thus obtained. A sample of CuCrO$_2$ with 2 mm diameter was cut, polished to 50 $\mu$m thickness and finally attached to a quartz substrate. The magnetic field was applied parallel to the \[001\] axis in all measurements. A non-metallic helium-flow cryostat was used to cool the sample to temperatures of $\sim$5 K [@takeyama87]. Figure 1 shows the normalized magneto-optical transmission $T$($B$)/$T$(0) at a photon energy of 1.943 eV (a wavelength of 638 nm), the Faraday rotation angle $\theta_\text{F}$, and the corresponding magnetization $M$, which is deduced by assuming the proportionality relation $\theta_\text{F} \propto M$, of CuCrO$_2$ under magnetic fields of up to 120 T at 5 K. A magnetization curve obtained by Yamaguchi *et al.* using a non-destructive pulsed magnet up to 50 T at 1.3 K [@yamaguchi10] is also shown by a dashed line as a reference in Fig. 1. At 76 T, we observed a clear anomaly with hysteresis in both the magneto-optical transmission $T$($B$)/$T$(0) and the Faraday rotation angle $\theta_\text{F}$, indicating a first-order phase transition.
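The conversion from Faraday rotation angle to magnetization relies only on the assumed proportionality $\theta_\text{F} \propto M$ plus one calibration point; a minimal sketch (the function name and calibration values are illustrative, not from the paper):

```python
def magnetization_from_faraday(theta_f, theta_ref, m_ref):
    """Scale Faraday rotation angles to magnetization, assuming
    theta_F is proportional to M and calibrating against a single
    reference point (theta_ref, m_ref), e.g. taken from a low-field
    magnetization curve measured independently."""
    scale = m_ref / theta_ref
    return [scale * th for th in theta_f]

# Illustrative use: angles of 0, 1 and 2 (arb. units) with a reference
# point (theta = 2.0 corresponding to 1.0 mu_B) map onto 0, 0.5, 1.0 mu_B.
m = magnetization_from_faraday([0.0, 1.0, 2.0], theta_ref=2.0, m_ref=1.0)
```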
The anomaly was also observed in electric polarization measurements performed by Lin *et al.* [@lin14], who suggested that it can be attributed to a phase transition from a commensurate Y-shaped phase (three spins form a “Y” shape) to the 1/3 magnetization plateau (UUD phase). However, according to their Monte Carlo calculations, a transition to the UUD phase cannot be of the first order. In addition, the magnetic moment deduced from the FR angles turned out to be $\sim$0.83 $\mu$$_\text{B}$/Cr$^{3+}$ at 76 T, which is smaller than what would be expected for the 1/3 magnetization plateau (1 $\mu$$_\text{B}$/Cr$^{3+}$). Therefore, it is natural to regard the phase just above 76 T as another magnetic phase preceding the UUD phase. Details of this phase will be discussed later. The magnetization deduced from FR angles reaches 1 $\mu$$_\text{B}$/Cr$^{3+}$ around $\sim$95 T, but there is no clear evidence of a plateau-like phase in Fig. 1. The following scenario is the most likely: the 1/3 magnetization plateau is known to be caused by easy-axis anisotropy in classical Heisenberg triangular-lattice antiferromagnets [@yun15]. However, the anisotropy can be released by applying a magnetic field, especially above the first-order phase transition at 76 T, which is possibly associated with a lattice distortion. Note that the easy-axis anisotropy of CuCrO$_2$ is rather small even in the absence of a magnetic field ($D/J_1\sim0.017$) [@yamaguchi10]. The reduction of the easy-axis anisotropy in magnetic fields has been taken into account, for example in the sister compound CuFeO$_2$, to explain its magnetic-field-induced phases [@lummen10; @fishman12]. Even without easy-axis anisotropy, thermal fluctuations can induce the UUD phase in classical Heisenberg triangular-lattice antiferromagnets. This has been studied as the so-called “order by disorder” [@yun15; @kawamura85]. However, the magnetization then appears as an almost linear curve smeared out by temperature, as shown in Ref. 24.
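The 1 $\mu_\text{B}$/Cr$^{3+}$ plateau value quoted above follows from standard numbers (Cr$^{3+}$ has $S = 3/2$ and $g \approx 2$; these are textbook values, not quantities measured in this work):

```latex
% Saturation moment per Cr^{3+} and the 1/3 (up-up-down) plateau value:
% two sublattices up, one down.
M_{\mathrm{sat}} = g S \mu_\mathrm{B}
    \approx 2 \times \tfrac{3}{2}\,\mu_\mathrm{B}
    = 3\,\mu_\mathrm{B}/\mathrm{Cr}^{3+}, \qquad
M_{\mathrm{UUD}} = \frac{S + S - S}{3S}\,M_{\mathrm{sat}}
    = \tfrac{1}{3}\,M_{\mathrm{sat}}
    = 1\,\mu_\mathrm{B}/\mathrm{Cr}^{3+}.
```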
This is a viable reason why the 1/3 plateau is scarcely observed in the present magnetization measurements.

![\[fig:Faradayrotation\] (Color online) Normalized magneto-optical transmissions $T$($B$)/$T$(0) and Faraday rotation angles $\theta_\text{F}$ obtained by STC at 5 K, together with magnetizations $M$ obtained at 1.3 K in Ref. 12 ($H$ // \[001\]). Arrows show the hysteresis observed at 76 T. ](fig1.eps){width="45.00000%"}

To further investigate the details of the magnetic phases in CuCrO$_2$, we conducted magneto-optical transmission spectroscopy (MOTS) of exciton–magnon transitions (EMT). MOTS of EMTs is sensitive to magnetic phase transitions [@miyata13; @miyata11] because EMTs occur only when a magnon is required to compensate the spin and angular momentum changes of an otherwise optically forbidden excitonic transition. Spectral structures associated with EMT thus provide strong evidence for a change of both the magnetic and crystal structures. Fig. 1 demonstrates that $T$($B$)/$T$(0) indeed responds very sensitively to phase transitions. Figure 2 shows the optical absorption ($-\log T(B)$) spectra of the $d$-$d$ transition and EMT in CuCrO$_2$ measured at 10 K; they are consistent with a previous report by Schmidt *et al.* [@schmidt13]. The inset shows how the absorption spectrum evolves in magnetic fields up to 100 T in the wavelength region where EMT occurs. The peak intensity of the exciton-magnon absorption first decreases gradually up to 70 T and then increases with further increase of the magnetic field.

![\[fig:spectra\] (Color online) Optical absorption spectra of the $d$-$d$ transition and EMT in CuCrO$_2$ at 0 T and 10 K. The inset shows magneto-optical absorption spectra in the region of EMTs at several magnetic fields along $H$ // \[001\] at 10 K, obtained by streak spectroscopy. The absorption peak intensity between 660 and 680 nm is integrated and plotted as a function of magnetic field in Fig. 3. ](fig2.eps){width="45.00000%"}

In Fig.
3, the intensity measured at wavelengths between 660 and 680 nm and at temperatures of 7 and 10 K is integrated (integrated absorption intensity: IAI) and plotted as a function of magnetic field, together with the magnetization curve of CuCrO$_2$ deduced from the Faraday rotation angles at 5 K. Three distinct anomalies are observed in the IAI at $\sim$75, 90, and 105 T (marked by black triangles in Fig. 3). The anomaly at $\sim$75 T corresponds to that observed at 76 T in the magnetization (the corresponding magnetic field differs slightly because of differences in the measurement temperature). A remarkable recovery of the IAI is observed above $\sim$75 T. In conventional antiferromagnets, the EMT monotonically loses its intensity with increasing magnetic field, since fewer magnons ($\Delta S_z=+1$) compensate for the spin angular momentum during the exciton transition as the spin structure transforms from the antiparallel to the canted configuration under magnetic fields. Magnon creation ($\Delta S_z=+1$) is finally quenched in a fully spin-polarized phase [@eremenko75]. Therefore, the recovery of the EMT intensity reflects a change of the spin structure above $\sim$75 T. The Y-shaped spin structure is the most likely candidate. In this structure, the spins approach an antiparallel configuration with increasing magnetic field, which contributes to an increase of the exciton–magnon absorption intensity. In fact, an increase in the EMT intensity was observed in another multiferroic material, BiFeO$_3$, at the phase transition from the spin-spiral to the canted antiferromagnetic phase, which causes an increase in “antiparallelism” [@xu09].

![\[fig:EMT\_int\] (Color online) Integrated absorption intensity in the EMT of CuCrO$_2$ at 7 and 10 K at wavelengths between 660 and 680 nm (shaded area in the inset of Fig. 2) and the magnetization curve at 5 K deduced from Faraday rotation angles ($H$ // \[001\]).
Arrows illustrate the spin structures of the Y-shaped, UUD, and V-shaped magnetic phases. Broken lines are a guide to the eye for the phase boundaries. ](fig3.eps){width="45.00000%"}

In Fig. 3, around 90–105 T, the EMT intensity enters a flat-top region (i.e., the maximum of antiparallelism), which indicates that the spins form a collinear up-up-down structure (i.e., the UUD phase). The 1/3 plateau is scarcely visible in the magnetization ($M$) data. However, the deduced magnetization at 95 T reaches 1 $\mu$$_\text{B}$/Cr$^{3+}$, corresponding to the value expected for a 1/3 plateau. A slight widening of the flat-topped region is recognized upon increasing the temperature from 7 to 10 K. The UUD phase is known to be stabilized by thermal fluctuations [@yun15; @kawamura85]. Above 105 T, the EMT intensity decreases again. A plausible magnetic phase above the UUD phase is a V-shaped magnetic phase, in which two parallel spins and one other spin form a “V” shape (illustrated by arrows in Fig. 3). V-shaped and Y-shaped magnetic phases have been reported to appear above and below the UUD phase, respectively, in the phase diagram for a classical Heisenberg antiferromagnet on a triangular lattice with relatively weak easy-axis anisotropy [@yun15].

![\[fig:phasediagram\] (Color online) Magnetic phase diagram of CuCrO$_2$ ($H$ // \[001\]). For lower magnetic fields, the data are taken from Refs. 17 and 30. Arrows illustrate the spin structures of the Y-shaped, UUD, and V-shaped magnetic phases. Broken lines are a guide to the eye for the phase boundaries. ](fig4.eps){width="45.00000%"}

Figure 4 shows the magnetic phase diagram of CuCrO$_2$ up to 120 T. The data for phase transitions in lower magnetic fields refer to measurements of the electric polarization $P$ ($H$ // \[001\] and $P$ // \[110\]) [@lin14] and nuclear magnetic resonance (NMR, $H$ // \[001\]) [@sakhratov16].
Sakhratov *et al.* have assigned regions I and III to a three-dimensionally (3D) ordered incommensurate planar spin structure phase and a 2D-ordered (or 3D-polar) incommensurate planar spin structure phase, respectively [@sakhratov16]. Region II is an intermediate phase between I and III with hysteretic behavior. At temperatures below 10 K, the boundary of region “N” was observed in electric polarization measurements and assigned to an incommensurate umbrella-like spin structure (cycloidal spiral) phase [@lin14]. Region “C” was attributed to a collinear spin-structure phase that could be connected to the collinear UUD phase that we observed around 90–105 T. The connection of the two collinear phases has been theoretically suggested, since thermal fluctuations stabilize the UUD phase even in the zero-field limit [@yun15]. This behavior has been observed in the magnetic phase diagrams of other triangular-lattice Heisenberg antiferromagnets with easy-axis single-ion anisotropy, Rb$_4$Mn(MoO$_4$)$_3$ [@ishii11] and Ba$_3$MnNb$_2$O$_9$ [@lee14]. A striking difference between the magnetic phases of CuCrO$_2$ and CuFeO$_2$ is that collinear magnetic structures are unstable in CuCrO$_2$ in the low-temperature limit. The collinear 1/5 magnetization plateau and the collinear four-sublattice antiferromagnetic phase observed in CuFeO$_2$ have not been found in CuCrO$_2$. This difference arises from the extremely small easy-axis single-ion anisotropy of CuCrO$_2$ ($D/J_1\sim0.017$) [@yamaguchi10] in contrast to that of CuFeO$_2$ ($D/J_1\sim0.097$) [@fujita13]. In summary, magneto-optical measurements of CuCrO$_2$ in ultrahigh magnetic fields up to 120 T applied along the \[001\] axis revealed that the UUD phase exists around 90–105 T at 7–10 K. Furthermore, additional anomalies were observed in the optical absorption intensities of the EMT, which revealed the existence of magnetic phases (presumably the Y-shaped and canted V-shaped phases) below and above the UUD phase.
These magnetic phases emerge as a result of the interplay of geometrical frustration, the magnetic field, and the subtle perturbations of a tiny easy-axis single-ion anisotropy and thermal fluctuations.

Acknowledgments {#acknowledgments .unnumbered}
===============

We acknowledge Masayuki Hagiwara and Hironori Yamaguchi for providing their magnetization data shown in Fig. 1, and Kenta Kimura and Masashi Tokunaga for their helpful assistance in growing the samples. A. M. acknowledges support from a Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Fellows.

[99]{} *Introduction to Frustrated Magnetism*, edited by C. Lacroix, P. Mendels, and F. Mila, Springer Series in Solid-State Sciences, Vol. 164 (Springer, Berlin, 2011). T. Kimura, T. Goto, H. Shintani, K. Ishizaka, T. Arima, and Y. Tokura, Nature **426**, 55 (2003). T. Kimura, J. C. Lashley, and A. P. Ramirez, Phys. Rev. B **73**, 220401 (2006). S. Seki, Y. Onose, and Y. Tokura, Phys. Rev. Lett. **101**, 067204 (2008). K. Kimura, H. Nakamura, S. Kimura, M. Hagiwara, and T. Kimura, Phys. Rev. Lett. **103**, 107201 (2009). T. Arima, J. Phys. Soc. Jpn. **76**, 073702 (2007). T. T. A. Lummen, C. Strohm, H. Rakoto, and P. H. M. van Loosdrecht, Phys. Rev. B **81**, 224420 (2010). R. S. Fishman, G. Brown, and J. T. Haraldsen, Phys. Rev. B **85**, 020405 (2012). F. Wang and A. Vishwanath, Phys. Rev. Lett. **100**, 077201 (2008). F. Ye, J. A. Fernandez-Baca, R. S. Fishman, Y. Ren, H. J. Kang, Y. Qiu, and T. Kimura, Phys. Rev. Lett. **99**, 157201 (2007). M. Poienar, F. Damay, C. Martin, J. Robert, and S. Petit, Phys. Rev. B **81**, 104411 (2010). H. Yamaguchi, S. Ohtomo, S. Kimura, M. Hagiwara, K. Kimura, T. Kimura, T. Okuda, and K. Kindo, Phys. Rev. B **81**, 033104 (2010). T. Fujita, S. Kimura, T. Kida, T. Kotetsu, and M. Hagiwara, J. Phys. Soc. Jpn. **82**, 064712 (2013). M. Soda, K. Kimura, T. Kimura, M. Matsuura, and K. Hirota, J. Phys. Soc. Jpn. **78**, 124703 (2009). E. Mun, M. Frontzek, A.
Podlesnyak, G. Ehlers, S. Barilo, S. V. Shiryaev, and V. S. Zapf, Phys. Rev. B **89**, 054411 (2014). R. S. Fishman, Phys. Rev. B **84**, 052405 (2011). S.-Z. Lin, K. Barros, E. Mun, J.-W. Kim, M. Frontzek, S. Barilo, S. V. Shiryaev, V. S. Zapf, and C. D. Batista, Phys. Rev. B **89**, 220405(R) (2014). K. Nakao, F. Herlach, T. Goto, S. Takeyama, T. Sakakibara, and N. Miura, J. Phys. E **18**, 1018 (1985). A. Miyata, H. Ueda, Y. Ueda, Y. Motome, N. Shannon, K. Penc, and S. Takeyama, J. Phys. Soc. Jpn. **80**, 074709 (2011). A. Miyata, H. Ueda, Y. Ueda, Y. Motome, N. Shannon, K. Penc, and S. Takeyama, J. Phys. Soc. Jpn. **81**, 114701 (2012). K. Kimura, H. Nakamura, K. Ohgushi, and T. Kimura, Phys. Rev. B **78**, 140401(R) (2008). S. Takeyama, M. Kobayashi, A. Matsui, K. Mizuno, and N. Miura, in *High Magnetic Fields in Semiconductor Physics*, edited by G. Landwehr, Springer Series in Solid State Sciences, Vol. 71 (Springer, Berlin, 1987), p. 555. M. Yun and G. S. Jeon, J. Phys.: Conf. Ser. **592**, 012111 (2015). H. Kawamura and S. Miyashita, J. Phys. Soc. Jpn. **54**, 4530 (1985). A. Miyata, S. Takeyama, and H. Ueda, Phys. Rev. B **87**, 214424 (2013). A. Miyata, H. Ueda, Y. Ueda, H. Sawabe, and S. Takeyama, Phys. Rev. Lett. **107**, 207203 (2011). M. Schmidt, Zhe Wang, Ch. Kant, F. Mayr, S. Toth, A. T. M. N. Islam, B. Lake, V. Tsurkan, A. Loidl, and J. Deisenhofer, Phys. Rev. B **87**, 224424 (2013) V. V. Eremenko, Yu. G. Litvinenko, and V. V. Shapiro, Fiz. Nizk. Temp. **1**, 1077 (1975). X. S. Xu, T. V. Brinzari, S. Lee, Y. H. Chu, L. W. Martin, A. Kumar, S. McGill, R. C. Rai, R. Ramesh, V. Gopalan, S. W. Cheong, and J. L. Musfeldt, Phys. Rev. B **79**, 134425 (2009). Yu. A. Sakhratov, L. E. Svistov, P. L. Kuhns, H. D. Zhou, and A. P. Reyes, Phys. Rev. B **94**, 094410 (2016). R. Ishii, S. Tanaka, K. Onuma, Y. Nambu, M. Tokunaga, T. Sakakibara, N. Kawashima, Y. Maeno, C. Broholm, D. P. Gautreaux, J. Y. Chan, and S. Nakatsuji, Europhys. Lett. **94**, 17001 (2011). 
M. Lee, E. S. Choi, X. Huang, J. Ma, C. R. Dela Cruz, M. Matsuda, W. Tian, Z. L. Dun, S. Dong, and H. D. Zhou, Phys. Rev. B **90**, 224402 (2014).
---
author:
- 'Michael Krämer'
- 'Benjamin Summ'
- 'Alexander Voigt'
bibliography:
- 'paper.bib'
title: 'Completing the scalar and fermionic Universal One-Loop Effective Action'
---

TTK–19–31\
P3H–19–026

Introduction
============

With the discovery of the Higgs boson at the Large Hadron Collider (LHC) [@Aad:2012tfa; @Chatrchyan:2012xdj], the Standard Model of Particle Physics (SM) is formally complete. While existing deviations between some SM predictions and experiment, such as the anomalous magnetic moment of the muon (see for example [@Bennett:2006fi; @Jegerlehner:2018zrj]), are not conclusive, the SM is certainly not a complete description of nature: it neither accounts for astrophysical phenomena such as dark matter, nor does it incorporate gravity. Searches for physics beyond the SM have not been successful thus far, and exclusion limits for new particles introduced by SM extensions often exceed the TeV scale. These results suggest that new physics either interacts weakly with the SM, or that the masses of new particles are significantly above the electroweak scale. A well-known example is the Minimal Supersymmetric Standard Model (MSSM) [@Haber:1984rc], which requires at least TeV-scale stops in order to correctly predict the mass of the SM-like Higgs boson of about $125{\ensuremath{\;\text{GeV}}}$, see for example [@Allanach:2018fif; @Bahl:2018zmf]. The construction and phenomenological analysis of new physics models with heavy particles is therefore a suitable path to develop viable theories beyond the SM that are consistent with experimental results. The observables predicted in models with large mass hierarchies, however, usually suffer from large logarithmic quantum corrections, which should be resummed in order to obtain precise predictions. Effective Field Theories (EFTs) are a well-suited tool to resum these large logarithmic corrections.
Conventional matching procedures using Feynman diagrams, however, are often cumbersome, in particular if the new physics model contains many new heavy particles and/or complicated interactions. The Universal One-Loop Effective Action (UOLEA) [@Drozd:2015rsp; @Ellis:2017jns; @Summ:2018oko], which has been developed using functional methods [@Gaillard:1985uh; @Cheyette:1987qz; @Haba:2011vi; @Henning:2014wua; @Henning:2016lyp; @Ellis:2016enq; @Fuentes-Martin:2016uol; @Zhang:2016pja], is a very promising tool to overcome these difficulties. It represents a generic one-loop expression for the Wilson coefficients of an effective Lagrangian for a given ultra-violet (UV) model with a large mass hierarchy. Compared to the conventional matching using Feynman diagrams, the calculation of the Wilson coefficients with the UOLEA is straightforward, as it is expressed directly in terms of derivatives of the UV Lagrangian w.r.t. the fields and simple rational functions. In particular, no loop integration is necessary and spurious infrared (IR) divergences are absent by construction. To date, however, the UOLEA is not completely known: Only contributions from scalar particles [@Drozd:2015rsp; @Ellis:2017jns] as well as conversion terms between dimensional regularization and dimensional reduction [@Summ:2018oko] have been calculated at the generic one-loop level up to dimension 6. Whereas some contributions from fermion loops can be calculated using these results by squaring the fermionic trace, this treatment is incomplete when the couplings depend on gamma matrices. Furthermore, contributions from loops containing both scalars and fermions as well as terms with open covariant derivatives are unknown. In this publication we present all one-loop operators of the UOLEA up to dimension 6 that involve both scalars and fermions in a generic form, excluding contributions from open covariant derivatives. 
Thus, our results go beyond the scope of [@Drozd:2015rsp; @Ellis:2017jns] and allow the UOLEA to be applied to a broader set of new physics models. We publish our generic expressions as a Mathematica ancillary file `UOLEA.m` in the arXiv submission of this publication. Due to their generic structure, the expressions are well suited for implementation in generic spectrum generators such as [`SARAH`]{}[@Staub:2009bi; @Staub:2010jh; @Staub:2012pb; @Staub:2013tta] or [`FlexibleSUSY`]{}[@Athron:2014yba; @Athron:2017fvs], or in EFT codes in the spirit of [`CoDEx`]{}[@Bakshi:2018ics; @DasBakshi:2019vzr]. This paper is structured as follows: In [section \[sec:calculation\]]{} we present the calculation of the UOLEA involving both scalars and fermions. We discuss the results in [section \[sec:results\]]{} and apply our generic expressions to various EFTs of the SM and the MSSM in [section \[sec:applications\]]{}. Our conclusions are presented in [section \[sec:conclusions\]]{}, and the appendices collect further formulae and calculational details.

Calculation of the scalar and fermionic UOLEA {#sec:calculation}
=============================================

Functional matching in a scalar theory {#sec: intro}
--------------------------------------

In this section we briefly review the most important steps of the functional matching approach at one-loop level in a scalar theory and fix the notation for the subsequent sections. Most of what is discussed here is well documented in the literature and more details can be found in [@Henning:2014wua; @Henning:2016lyp; @Fuentes-Martin:2016uol; @Zhang:2016pja]. We consider a generic UV theory that contains heavy real scalar fields, collectively denoted by $\Phi$, with masses of the order $M$ and light real scalar fields, denoted by $\phi$, with masses of the order $m$. We assume that $m/M \ll 1$ such that an EFT expansion in the mass ratio $m/M$ is valid.
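The hierarchy $m/M \ll 1$ implies that the resulting effective Lagrangian can be organized as a tower of local operators suppressed by powers of the heavy scale. Schematically (a sketch with generic Wilson coefficients $c_i$, truncated at dimension 6 as in the UOLEA; not a formula from this paper),

```latex
\mathcal{L}_{\text{EFT}}[\phi]
  = \mathcal{L}_{d\leq 4}[\phi]
  + \sum_{i}\frac{c_i^{(5)}}{M}\,\mathcal{O}_i^{(5)}[\phi]
  + \sum_{i}\frac{c_i^{(6)}}{M^{2}}\,\mathcal{O}_i^{(6)}[\phi]
  + \mathcal{O}\!\left(M^{-3}\right),
```

where the matching procedure reviewed below determines the coefficients $c_i$ in terms of the UV parameters.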
To perform the functional matching, the background field method is used to calculate the generator of 1-light-particle-irreducible (1LPI) Green’s functions in the UV theory, $\Gamma_{\text{L,UV}}[{{\phi_{\text{cl}}}}]$, and the generator of 1-particle-irreducible (1PI) Green’s functions in the EFT, $\Gamma_{{\ensuremath{\text{EFT}}\xspace}}[{{\phi_{\text{cl}}}}]$, where ${{\phi_{\text{cl}}}}$ are light background fields which obey the classical equation of motion. For the determination of these generating functionals beyond tree-level a regularization scheme must be specified, which is chosen to be dimensional regularization.[^1] This introduces a dependence on the unphysical renormalization scale $\mu$ in both generating functionals, and the matching condition becomes $$\begin{aligned} \Gamma_\text{L,UV}[{{\phi_{\text{cl}}}}]=\Gamma_{\ensuremath{\text{EFT}}\xspace}[{{\phi_{\text{cl}}}}], \label{eq:scalar_matching_condition}\end{aligned}$$ which is imposed at the matching scale $\mu$, order by order in perturbation theory. In principle the matching scale can be chosen arbitrarily; however, in order to avoid large logarithms the choice $\mu=M$ is preferred.
To calculate $\Gamma_\text{L,UV}[{{\phi_{\text{cl}}}}]$ one starts from the generating functional of Green’s functions $$\begin{aligned} Z_{\ensuremath{\text{UV}}\xspace}[J_\Phi,J_\phi]=\int \mathcal{D}\Phi \mathcal{D}\phi \exp\left \{i \int {\ensuremath{\mathrm{d}}}^d x \, \big[{\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}[\Phi,\phi]+J_{\Phi}(x) \Phi(x)+J_{\phi}(x) \phi(x) \big]\right\}\end{aligned}$$ with sources $J_\Phi$ and $J_\phi$ and splits both the heavy and the light fields into background parts ${{\Phi_{\text{cl}}}}$ and ${{\phi_{\text{cl}}}}$, respectively, and fluctuations $\delta \Phi$ and $\delta \phi$, respectively, as $$\begin{aligned} \Phi&={{\Phi_{\text{cl}}}}+\delta \Phi, \\ \phi&={{\phi_{\text{cl}}}}+\delta \phi.\end{aligned}$$ The background fields are defined to satisfy the classical equations of motion, $$\begin{aligned} \frac{\delta {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \Phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}]+J_\Phi &= 0, & \frac{\delta {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}]+J_\phi &= 0.\end{aligned}$$ The generating functional of the 1LPI Green’s functions of the UV model, $\Gamma_\text{L,UV}[{{\phi_{\text{cl}}}}]$, is then given by $$\begin{aligned} \Gamma_\text{L,UV}[{{\phi_{\text{cl}}}}]=-i \log Z_{\ensuremath{\text{UV}}\xspace}[J_\Phi=0,J_\phi]-\int {\ensuremath{\mathrm{d}}}^d x \, J_\phi(x) {{\phi_{\text{cl}}}}(x),\end{aligned}$$ where $J_\Phi=0$ since we are only interested in Green’s functions with light external particles. 
Expanding the Lagrangian together with the source terms around the background fields yields $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}[\Phi,\phi]+J_{\Phi}\Phi+J_{\phi}\phi &= {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}] + J_{\Phi}{{\Phi_{\text{cl}}}}+ J_{\phi}{{\phi_{\text{cl}}}}-\frac{1}{2}\begin{pmatrix} \delta \Phi^T && \delta \phi^T \end{pmatrix} {\mathcal{Q}}\begin{pmatrix} \delta \Phi \\ \delta \phi \end{pmatrix} + \cdots , \label{eq:actionExpansion} \\ \intertext{where the matrix} {\mathcal{Q}}&\equiv - \begin{pmatrix} \frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \Phi \delta \Phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}] && \frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \Phi \delta \phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}] \\ \frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \phi \delta \Phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}] && \frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{UV}}\xspace}}{\delta \phi \delta \phi}[{{\Phi_{\text{cl}}}},{{\phi_{\text{cl}}}}] \end{pmatrix}\end{aligned}$$ is referred to as the fluctuation operator and the dots indicate higher order terms in the expansion. Through the equations of motion with $J_\Phi=0$ the heavy background fields can be expressed in terms of the light ones such that ${{\Phi_{\text{cl}}}}={{\Phi_{\text{cl}}}}[{{\phi_{\text{cl}}}}]$. In general, ${{\Phi_{\text{cl}}}}[{{\phi_{\text{cl}}}}]$ is a non-local object and has to be expanded using a local operator expansion. 
The one-loop part of $\Gamma_\text{L,UV}[{{\phi_{\text{cl}}}}]$ is then found to be $$\begin{aligned} \Gamma^\text{1{\ensuremath{\ell}}}_\text{L,UV}[{{\phi_{\text{cl}}}}]= \frac{i}{2} \log \det {\mathcal{Q}}.\end{aligned}$$ The above can be re-written as [@Fuentes-Martin:2016uol] $$\begin{aligned} \Gamma^\text{1{\ensuremath{\ell}}}_\text{L,UV}[{{\phi_{\text{cl}}}}]&=\frac{i}{2} \log \det \left({\mathcal{Q}}_{11} - {\mathcal{Q}}_{12} {\mathcal{Q}}_{22}^{-1} {\mathcal{Q}}_{21}\right) +\frac{i}{2}\log \det {\mathcal{Q}}_{22}.\end{aligned}$$ Using similar arguments for the Lagrangian of the EFT, ${\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}[\phi]$, which only depends on the light fields, the generator of 1PI Green’s functions in the EFT can be calculated at one-loop as $$\begin{aligned} \Gamma^\text{1{\ensuremath{\ell}}}_{\ensuremath{\text{EFT}}\xspace}[{{\phi_{\text{cl}}}}]=\int {\ensuremath{\mathrm{d}}}^d x \, {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}}[{{\phi_{\text{cl}}}}]+\frac{i}{2} \log \det \left(-\frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{\ensuremath{\text{tree}}\xspace}}{\delta \phi \delta \phi}[{{\phi_{\text{cl}}}}] \right),\end{aligned}$$ where ${\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}}$ is the effective Lagrangian whose couplings are given by the one-loop heavy or heavy/light field contributions. The second term contains one-loop contributions constructed from the tree-level part of the effective Lagrangian ${\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{\ensuremath{\text{tree}}\xspace}$. 
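The rewriting of the functional determinant above is an instance of the block-determinant (Schur complement) identity $\det {\mathcal{Q}} = \det\left({\mathcal{Q}}_{11} - {\mathcal{Q}}_{12} {\mathcal{Q}}_{22}^{-1} {\mathcal{Q}}_{21}\right) \det {\mathcal{Q}}_{22}$, valid whenever ${\mathcal{Q}}_{22}$ is invertible. A minimal numerical sanity check of this identity on a random finite-dimensional matrix (an illustration added here, not part of the original derivation):

```python
import numpy as np

# Check det(Q) = det(Q11 - Q12 Q22^{-1} Q21) * det(Q22)
# on a random, well-conditioned block matrix.
rng = np.random.default_rng(42)
n = 3  # block size, arbitrary
Q = rng.normal(size=(2 * n, 2 * n)) + 5.0 * np.eye(2 * n)

Q11, Q12 = Q[:n, :n], Q[:n, n:]
Q21, Q22 = Q[n:, :n], Q[n:, n:]

# Schur complement of the Q22 block
schur = Q11 - Q12 @ np.linalg.inv(Q22) @ Q21
assert np.isclose(np.linalg.det(Q),
                  np.linalg.det(schur) * np.linalg.det(Q22))
print("Schur determinant identity verified")
```

Taking the logarithm of both sides of the identity yields exactly the split into the heavy/light mixed part and the pure ${\mathcal{Q}}_{22}$ part used above.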
The matching condition then implies $$\begin{aligned} \label{eq:matchingCond} \int {\ensuremath{\mathrm{d}}}^d x \, {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}}[\phi] ={}& \frac{i}{2} \log \det \left({\mathcal{Q}}_{11} - {\mathcal{Q}}_{12} {\mathcal{Q}}_{22}^{-1} {\mathcal{Q}}_{21}\right) +\frac{i}{2} \log \det {\mathcal{Q}}_{22} \nonumber \\ & -\frac{i}{2} \log \det \left(-\frac{\delta ^2 {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{\ensuremath{\text{tree}}\xspace}}{\delta \phi \delta \phi}[{{\phi_{\text{cl}}}}] \right).\end{aligned}$$ The functional determinants can be calculated using the relation $\log \det A = \operatorname{Tr}\log A$ and then evaluating the trace. This includes a trace in the Hilbert space as constructed in [@Ball:1988xg]. It is convenient to calculate this trace in position space and to insert the identity in terms of a complete set of momentum eigenstates. The calculation then involves an integral over the four-momentum, and expansion by regions [@Beneke:1997zp; @Jantzen:2011nz] can be applied to the integrals [@Fuentes-Martin:2016uol; @Zhang:2016pja]. It can then be shown [@Zhang:2016pja] that $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}}[\phi]&=\frac{i}{2} \int \frac{{\ensuremath{\mathrm{d}}}^dq}{(2\pi)^d} \operatorname{tr}\log \left. \left({\mathcal{Q}}_{11} - {\mathcal{Q}}_{12} {\mathcal{Q}}_{22}^{-1} {\mathcal{Q}}_{21}\right)\right \rvert ^{P\rightarrow P-q} _\text{hard}, \label{eq:scalarres}\end{aligned}$$ where the final result is given by the hard part of the integrals, i.e. the part for which the integrands can be expanded in the region $|q^2| \sim M^2 \gg m^2$, and where $P_\mu=i D_\mu$ with $D_\mu$ being the gauge-covariant derivative. In eq. \[eq:scalarres\] the trace over the Hilbert space has already been performed and “$\operatorname{tr}$” designates a trace over all indices.
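The two steps behind this result can be sketched explicitly (a schematic recap of the standard covariant-derivative-expansion manipulations, with signs depending on the Fourier conventions). Inserting plane-wave momentum eigenstates shifts the covariant derivative, and the logarithm is subsequently expanded around the free heavy propagator $\Delta \equiv -P^2+M^2$, writing schematically ${\mathcal{Q}}_{11} - {\mathcal{Q}}_{12} {\mathcal{Q}}_{22}^{-1} {\mathcal{Q}}_{21} = \Delta + \mathcal{X}$:

```latex
\operatorname{Tr} f(P_\mu)
  = \int \mathrm{d}^d x \int \frac{\mathrm{d}^d q}{(2\pi)^d}\,
    \operatorname{tr}\Big[ e^{-iq\cdot x}\, f(P_\mu)\, e^{iq\cdot x} \Big]
  = \int \mathrm{d}^d x \int \frac{\mathrm{d}^d q}{(2\pi)^d}\,
    \operatorname{tr} f(P_\mu - q_\mu),
\qquad
\operatorname{tr}\log\left(\Delta + \mathcal{X}\right)
  = \operatorname{tr}\log \Delta
  - \sum_{n\geq 1}\frac{(-1)^{n}}{n}\,
    \operatorname{tr}\big(\Delta^{-1}\mathcal{X}\big)^{n}.
```

The momentum integral over the first, $\mathcal{X}$-independent term only contributes to the vacuum energy, while the terms of the series generate the operators whose coefficients form the UOLEA.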
To derive the currently known form of the purely scalar UOLEA [@Drozd:2015rsp; @Ellis:2017jns] from eq. \[eq:scalarres\], one expands the logarithm in a power series, which is evaluated up to terms giving rise to operators of mass dimension 6, and calculates the corresponding coefficients arising from the momentum integral. In order to keep gauge invariance manifest in the resulting ${\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}}$ a covariant derivative expansion [@Gaillard:1985uh; @Cheyette:1987qz] is used, where $P^\mu$ is kept as a whole and not split into a partial derivative and gauge fields.

Fermionic contributions to the UOLEA {#sec:calc}
------------------------------------

In this section we consider a more general theory which contains both scalar and fermionic fields and calculate their contributions to the UOLEA.[^2] This extends the results provided in [@Ellis:2017jns] by including contributions to the matching from loops containing both scalars and fermions as well as contributions from purely fermionic loops. The latter are partially contained in the results of [@Ellis:2017jns], since they can be computed by squaring the purely fermionic trace. However, in this approach contributions are missed whenever the interaction terms among fermions contain gamma matrices. These terms would be classified as terms with open covariant derivatives in the language used in [@Ellis:2017jns]. In our treatment no assumptions are made about the spin structure of the fermionic interactions. In principle, the calculation can be performed using the method of covariant diagrams introduced in [@Zhang:2016pja]; however, we present the calculation starting from first principles for the following reason. There is some freedom in choosing the degrees of freedom to integrate over in the path integral. For complex scalar fields, for example, these can be the real and imaginary parts of the field.
Alternatively one can choose the field and its conjugate as independent degrees of freedom. For fermions similar choices can be made. The explicit form of the fluctuation operator and the transformations necessary to bring the Gaussian path integral into a form where it can be trivially performed depend on this choice. To reduce the number of these transformations we use a formalism in which Dirac and Majorana fermions are treated together in one multiplet in the diagonalization step. Our formalism has the additional advantage that the resulting expressions are more compact than when Dirac and Majorana fermions are treated separately. In the following we present our formalism in detail and introduce the notation used in the final result. As mentioned above, there is some freedom in the choice of degrees of freedom to be integrated over. In order to treat real and complex scalar fields on the same footing one could split all complex fields into a real part and an imaginary part and perform the calculation using these as the fundamental fields. However, for scalars it is often desirable to maintain the complex fields as they might have some physical interpretation in the effective theory. We therefore use the field and its complex conjugate as independent degrees of freedom. Similarly, in order to treat Dirac and Majorana fermions simultaneously without diagonalizing the fluctuation operator among these, it is convenient to treat any Dirac fermion and its charge conjugate as independent degrees of freedom. We collect all light and heavy scalars into the multiplets $\phi$ and $\Phi$, respectively, and all light and heavy fermions into the multiplets $\xi$ and $\Xi$, respectively, see table \[table1\].
  Multiplet   Components                                  Description
  ----------- ------------------------------------------- ----------------
  $\Xi$       $\big(\Omega, {\Omega^C}, \Lambda\big)^T$   heavy fermions
  $\Phi$      $\big(\Sigma, \Sigma^*, \Theta\big)^T$      heavy scalars
  $\xi$       $\big(\omega, {\omega^C}, \lambda\big)^T$   light fermions
  $\phi$      $\big(\sigma, \sigma^*, \theta\big)^T$      light scalars

  : Contents of the different multiplets appearing in the calculation.[]{data-label="table1"}

The charge conjugate of the Dirac spinor $\Omega$ is denoted as ${\Omega^C}={\mathcal{C}}\bar{\Omega}^T$, with ${\mathcal{C}}$ being the charge conjugation matrix. Similarly, we define for a light Dirac spinor $\omega$, ${\omega^C}={\mathcal{C}}\bar{\omega}^T$. With these definitions we may write the second variation of the Lagrangian as follows $$\begin{aligned} \delta^2 {\mathcal{L}}&= \delta^2 {\mathcal{L}}_\text{S} +\frac{1}{2}\delta \Xi ^T \mathbf{\Delta}_\Xi \delta \Xi - \frac{1}{2} \delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \Xi} \delta \Xi -\frac{1}{2} \delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \Xi} \delta \Xi \nonumber \\ & \quad+\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi +\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \delta \xi+\frac{1}{2}\delta \xi ^T \mathbf{\Delta}_\xi \delta \xi -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi +\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \xi} \delta \xi \nonumber \\ & \quad -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \xi} \delta \xi, \label{eq:variation-part1}\end{aligned}$$ where the pure scalar part is given by $$\begin{aligned} \delta^2 {\mathcal{L}}_\text{S}=-\frac{1}{2} \delta \Phi ^T \mathbf{\Delta}_{\Phi} \delta \Phi -\frac{1}{2} \delta \phi ^T \mathbf{\Delta}_{\phi} \delta \phi -\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \phi} \delta \phi-\frac{1}{2}\delta \phi ^T
\tilde{\mathbf{X}}_{\phi \Phi} \delta \Phi. \label{eq:variation-part2}\end{aligned}$$ In eqs.  and we introduced the following abbreviations: $$\begin{aligned} \mathbf{\Delta}_\Xi &= \begin{pmatrix} X_{\Omega \Omega} && {\mathcal{C}}(\slashed{P}_{{\Omega^C}}-M_\Omega+{\mathcal{C}}^{-1} X_{\Omega \bar{\Omega}}{\mathcal{C}}^{-1}) && X_{\Omega \Lambda} \\ {\mathcal{C}}(\slashed{P}_\Omega-M_\Omega+X_{\bar{\Omega} \Omega}) && {\mathcal{C}}X_{\bar{\Omega} \bar{\Omega}} {\mathcal{C}}^{-1} && {\mathcal{C}}X_{\bar{\Omega} \Lambda} \\ X_{\Lambda \Omega} && X_{\Lambda \bar{\Omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}(\slashed{P}_\Lambda-M_\Lambda+{\mathcal{C}}^{-1} X_{\Lambda \Lambda}) \end{pmatrix} \label{eq:initialDeltaXi}, \\ \tilde{\mathbf{X}}_{\Xi \Phi} &= \begin{pmatrix} X_{\Omega \Sigma} && X_{\Omega \Sigma^{*}} && X_{\Omega \Theta} \\ {\mathcal{C}}X_{\bar{\Omega} \Sigma} && {\mathcal{C}}X_{\bar{\Omega} \Sigma ^*} && {\mathcal{C}}X_{\bar{\Omega} \Theta} \\ X_{\Lambda \Sigma} && X_{\Lambda \Sigma ^*} && X_{\Lambda \Theta} \end{pmatrix}, \\ \tilde{\mathbf{X}}_{\Phi \Xi} &= \begin{pmatrix} X_{\Sigma \Omega} && X_{\Sigma \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Sigma \Lambda} \\ X_{\Sigma ^* \Omega} && X_{\Sigma ^* \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Sigma ^* \Lambda} \\ X_{\Theta \Omega} && X_{\Theta \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Theta \Lambda} \end{pmatrix}, \\ \tilde{\mathbf{X}}_{\Xi \xi} &= \begin{pmatrix} X_{\Omega \omega} && X_{\Omega \bar{\omega}}{\mathcal{C}}^{-1} && X_{\Omega \lambda} \\ {\mathcal{C}}X_{\bar{\Omega} \omega} && {\mathcal{C}}X_{\bar{\Omega} \bar{\omega}} {\mathcal{C}}^{-1} && {\mathcal{C}}X_{\bar{\Omega} \lambda} \\ X_{\Lambda \omega} && X_{\Lambda \bar{\omega}} {\mathcal{C}}^{-1} && X_{\Lambda \lambda} \end{pmatrix}, \\ \mathbf{\Delta}_{\Phi} &= \begin{pmatrix} X_{\Sigma \Sigma} && -P_{\Sigma^{*}}^2+M_\Sigma^2+X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\ -P_{\Sigma}^2+M_\Sigma^2+X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && 
X_{\Sigma^* \Theta} \\ X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && -P_\Theta^2+M_\Theta^2+X_{\Theta \Theta} \end{pmatrix}, \label{eq:DeltaPhi}\end{aligned}$$ with similar definitions for $\Phi\rightarrow \phi$ and $\Xi\rightarrow \xi$. Here $P^\mu \equiv i D^\mu$, with $D^\mu$ being the gauge-covariant derivative, is a matrix diagonal in field space for which the subscript indicates which gauge group generators are to be used. Furthermore we have defined $$\begin{aligned} (X_{A B})_{ij}\equiv -\frac{\delta ^2 {{\mathcal{L}}_{\text{UV,int}}}}{\delta A_i \delta B_j},\end{aligned}$$ where ${{\mathcal{L}}_{\text{UV,int}}}$ is the interaction Lagrangian of the UV theory and $A$ and $B$ designate arbitrary (scalar or fermionic) fields, if not stated otherwise. Here the indices $i$ and $j$ collectively denote all of the indices carried by the fields $A$ and $B$. Note that if $P^\mu _\Omega$ contains generators $T^a _r$ of a representation $r$, then $P^\mu_{{\Omega^C}}$ contains the generators of the conjugate representation $\bar{r}$, denoted by $T^{a} _{\bar{r}}$. The same holds for the generators contained in $P^\mu _\Sigma$ and $P^\mu _{\Sigma^*}$. Note also that eq. \[eq:variation-part1\] is in principle equivalent to the quadratic term in eq. \[eq:actionExpansion\], with the difference that in eq. \[eq:actionExpansion\] all scalar fields are assumed to be real, while in eqs. \[eq:variation-part1\] and \[eq:variation-part2\] complex and real fields are kept separate. The different signs in the fermionic terms in eq. \[eq:variation-part1\] result from using the anti-commutation relation between fermions and derivatives w.r.t. fermions.
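As a simple illustration of this definition (a hypothetical toy interaction, not taken from any model considered in this paper), consider ${{\mathcal{L}}_{\text{UV,int}}} = -\tfrac{\lambda}{4}\,\Phi^2\phi^2 - y\,\phi\,\bar{\Omega}\Omega$ with a single real heavy scalar $\Phi$, a real light scalar $\phi$ and a heavy Dirac fermion $\Omega$. Then, evaluated on the background fields,

```latex
X_{\Phi\phi}
  = -\frac{\delta^2 \mathcal{L}_{\text{UV,int}}}{\delta\Phi\,\delta\phi}
  = \lambda\,\Phi\,\phi ,
\qquad
X_{\bar{\Omega}\Omega}
  = -\frac{\delta^2 \mathcal{L}_{\text{UV,int}}}{\delta\bar{\Omega}\,\delta\Omega}
  = y\,\phi ,
```

up to the ordering and sign conventions for Grassmann derivatives discussed above.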
Before proceeding it is convenient to define $$\begin{aligned} \tilde{\mathds{1}}\equiv \begin{pmatrix} 0 && \mathds{1} && 0 \\ \mathds{1} && 0 && 0 \\ 0 && 0 && \mathds{1} \end{pmatrix}, \label{eq:fermID}\end{aligned}$$ and rewrite eq. \[eq:initialDeltaXi\] as $$\begin{aligned} \mathbf{\Delta}_\Xi &= {\mathcal{C}}\tilde{\mathds{1}} (\slashed{P}-M_\Xi) +\tilde{\mathbf{X}}_{\Xi \Xi}, \end{aligned}$$ where $$\begin{aligned} \slashed{P}-M_\Xi &= \begin{pmatrix} \slashed{P}_{\Omega}-M_\Omega && 0 && 0 \\ 0 && \slashed{P}_{{\Omega^C}}-M_\Omega && 0 \\ 0 && 0 && \slashed{P}_\Lambda-M_\Lambda \end{pmatrix},\\ \tilde{\mathbf{X}}_{\Xi \Xi} &= \begin{pmatrix} X_{\Omega \Omega} && X_{\Omega \bar{\Omega}}{\mathcal{C}}^{-1} && X_{\Omega \Lambda} \\ {\mathcal{C}}X_{\bar{\Omega} \Omega} && {\mathcal{C}}X_{\bar{\Omega} \bar{\Omega}} {\mathcal{C}}^{-1} && {\mathcal{C}}X_{\bar{\Omega} \Lambda} \\ X_{\Lambda \Omega} && X_{\Lambda \bar{\Omega}}{\mathcal{C}}^{-1} && X_{\Lambda \Lambda} \end{pmatrix}.\end{aligned}$$ We rewrite eq. \[eq:DeltaPhi\] in a similar way as $$\begin{aligned} \mathbf{\Delta}_\Phi= \tilde{\mathds{1}}(-P^2+M^2_\Phi)+\tilde{\mathbf{X}}_{\Phi \Phi},\end{aligned}$$ with $$\begin{aligned} -P^2+M^2_\Phi &= \begin{pmatrix} -P_{\Sigma} ^2 + M^2_\Sigma && 0 && 0 \\ 0 && -P_{\Sigma^*} ^2 + M^2_{\Sigma^*} && 0 \\ 0 && 0 && -P_{\Theta} ^2 + M^2_{\Theta} \end{pmatrix}, \\ \tilde{\mathbf{X}}_{\Phi \Phi} &= \begin{pmatrix} X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\ X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && X_{\Sigma^* \Theta} \\ X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix}.\end{aligned}$$ The calculation now proceeds by diagonalizing the quadratic variation in terms of statistics in order to be able to perform the (Gaussian) path integral.
We first eliminate terms that mix scalar fluctuations and fluctuations of light fermions $\xi$ by rewriting the second variation as $$\begin{aligned} \delta ^2 {\mathcal{L}}_\xi ={}& \frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi +\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \delta \xi+\frac{1}{2}\delta \xi ^T \mathbf{\Delta}_\xi \delta \xi -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi +\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \xi} \delta \xi \nonumber \\ & -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \xi} \delta \xi \\ ={}& \frac{1}{2} \left(\delta \xi^T+\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\overleftarrow{\mathbf{\Delta}}_\xi^{-1}\right)\mathbf{\Delta}_\xi \nonumber \\ & \times \left(\delta \xi+\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]\right) \nonumber \\ & -\frac{1}{2} \left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right].\end{aligned}$$ In the last step we have introduced $\mathbf{\Delta}_\xi ^{-1}$, which is the matrix-valued Green’s function of $\mathbf{\Delta}_\xi$. 
The matrix multiplication occurring here also implies an integration, that is $$\begin{gathered} \left(\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]\right)(x)\equiv \\ \int {\ensuremath{\mathrm{d}}}^dy \; \mathbf{\Delta}_\xi^{-1}(x,y)\left[\tilde{\mathbf{X}}_{\xi \Xi}(y) \delta \Xi(y)-\tilde{\mathbf{X}}_{\xi \Phi}(y) \delta \Phi(y)-\tilde{\mathbf{X}}_{\xi \phi}(y) \delta \phi(y)\right].\end{gathered}$$ Similar to $\mathbf{\Delta}_\xi ^{-1}$ we define $\overleftarrow{\mathbf{\Delta}}_\xi^{-1}$ in such a way that $$\begin{aligned} \int {\ensuremath{\mathrm{d}}}^dy \; f(y) \overleftarrow{\mathbf{\Delta}}_\xi ^{-1}(y,x) \overleftarrow{\mathbf{\Delta}} _\xi (x)=f(x),\end{aligned}$$ where $\overleftarrow{\mathbf{\Delta}} _\xi (x)=-\overleftarrow{\slashed{P}}-M_{\xi}$. Next, we shift the light fermion field as $$\begin{aligned} \delta \xi' &= \delta \xi+\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right], \label{eq:xishift} \\ \delta \xi'^T &= \delta \xi^T+\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\overleftarrow{\mathbf{\Delta}}_\xi^{-1}, \label{eq:xishift_T}\end{aligned}$$ under which the path integral measure is invariant. Since $\xi$ is a multiplet of Majorana-like spinors, the two shifts \[eq:xishift\] and \[eq:xishift\_T\] are not independent. The required relation between the two shifts is proven in [appendix \[sec:shifts\]]{}.
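The shift is the field-theoretic analogue of completing the square in an ordinary Gaussian integral. Schematically, for a symmetric invertible matrix $\Delta$ and a source-like vector $J$ (a finite-dimensional sketch, not part of the original derivation),

```latex
\frac{1}{2}\,\xi^T \Delta\, \xi + J^T \xi
  = \frac{1}{2}\left(\xi + \Delta^{-1} J\right)^T \Delta \left(\xi + \Delta^{-1} J\right)
  - \frac{1}{2}\, J^T \Delta^{-1} J ,
```

so that the shifted variable decouples from the source, while the induced $-\tfrac{1}{2}\, J^T \Delta^{-1} J$ term is the finite-dimensional counterpart of the $\tilde{\mathbf{X}}\,\mathbf{\Delta}_\xi^{-1}\,\tilde{\mathbf{X}}$ structures that remain after the shift.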
After the shifts have been performed we arrive at $$\begin{aligned} \delta ^2 {\mathcal{L}}_\xi ={}& \frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'-\frac{1}{2}\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi+\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi+\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi \nonumber \\ & +\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi \nonumber \\ & +\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi. \label{eq:original_2nd_variation}\end{aligned}$$ We proceed by eliminating terms that mix scalar fluctuations and fluctuations of heavy fermions $\Xi$. 
It is convenient to first introduce $$\begin{aligned} \bar{\mathbf{X}}_{A B}&\equiv \tilde{\mathbf{X}}_{A B}-\tilde{\mathbf{X}}_{A \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi B}, \label{eq:quantitiesWithTilde_1} \\ \bar{\mathbf{\Delta}}_A&\equiv \mathbf{\Delta}_A-\tilde{\mathbf{X}}_{A \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi A}, \label{eq:quantitiesWithTilde_2}\end{aligned}$$ and write the second variation as $$\begin{aligned} \delta^2 {\mathcal{L}}= \delta^2 \bar{{\mathcal{L}}}_\text{S}+\frac{1}{2}\delta \Xi ^T \bar{\mathbf{\Delta}}_{\Xi} \delta \Xi- \frac{1}{2} \delta \Xi ^T \bar{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\frac{1}{2} \delta \Phi ^T \bar{\mathbf{X}}_{\Phi \Xi} \delta \Xi -\frac{1}{2} \delta \Xi ^T \bar{\mathbf{X}} _{\Xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \bar{\mathbf{X}} _{\phi \Xi} \delta \Xi. \label{eq:d2_Lag_step_1}\end{aligned}$$ The first term on the r.h.s., $\delta^2 \bar{{\mathcal{L}}}_\text{S}$, is obtained from $\delta^2 {\mathcal{L}}_\text{S}$ by replacing $\tilde{\mathbf{X}}_{A B}$ and $\mathbf{\Delta}_A$ via the relations \[eq:quantitiesWithTilde\_1\]–\[eq:quantitiesWithTilde\_2\].
By shifting the $\delta \Xi$ in a similar way, $$\begin{aligned} \delta \Xi' &= \delta \Xi-\bar{\mathbf{\Delta}}_\Xi^{-1}\left[\bar{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\bar{\mathbf{X}}_{\Xi \phi} \delta \phi\right], \\ \delta \Xi'^T &= \delta \Xi ^T+\left[\delta \Phi^T \bar{\mathbf{X}}_{\Phi \Xi} +\delta \phi ^T \bar{\mathbf{X}}_{\phi \Xi} \right] \overleftarrow{\bar{\mathbf{\Delta}}}_\Xi^{-1} \label{eq:Xishift}\end{aligned}$$ one finds $$\begin{aligned} \delta^2 {\mathcal{L}}={}& -\frac{1}{2} \delta \Phi ^T (\bar{\mathbf{\Delta}}_{\Phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi}) \delta \Phi -\frac{1}{2} \delta \phi ^T (\bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}) \delta \phi \nonumber \\ & -\frac{1}{2} \delta \Phi ^T (\bar{\mathbf{X}}_{\Phi \phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}) \delta \phi \nonumber \\ & -\frac{1}{2}\delta \phi ^T (\bar{\mathbf{X}}_{\phi \Phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi}) \delta \Phi+\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\ ={}& -\frac{1}{2} \begin{pmatrix} \delta \Phi^T && \delta \phi^T \end{pmatrix} \begin{pmatrix} \bar{\mathbf{\Delta}}_{\Phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi} && \bar{\mathbf{X}}_{\Phi \phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi} \\ \bar{\mathbf{X}}_{\phi \Phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi} && \bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi} \end{pmatrix} \begin{pmatrix} \delta \Phi \\ \delta \phi \end{pmatrix}\nonumber \\ & +\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta 
\xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\ \equiv{}& -\frac{1}{2}\begin{pmatrix} \delta \Phi^T && \delta \phi^T \end{pmatrix} {\mathcal{Q}_{\text{S}}}\begin{pmatrix} \delta \Phi \\ \delta \phi \end{pmatrix} +\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\ \equiv{}& \delta^2{\mathcal{L}}_{\text{SF}} + \delta^2{\mathcal{L}}_{\text{F}} \label{eq:second_var}\end{aligned}$$ with $$\begin{aligned} \delta^2{\mathcal{L}}_{\text{SF}} &= -\frac{1}{2} \begin{pmatrix}\delta \Phi^T && \delta \phi^T\end{pmatrix} {\mathcal{Q}_{\text{S}}}\begin{pmatrix}\delta \Phi \\ \delta \phi\end{pmatrix}, \\ \delta^2{\mathcal{L}}_{\text{F}} &= \frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi' + \frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi'.\end{aligned}$$ At this point there are no terms including both a scalar and a fermionic fluctuation and the path integrals over scalars and fermions can be performed separately. As has been pointed out in [@Fuentes-Martin:2016uol] it is convenient to diagonalize the scalar part such that $$\begin{aligned} {\mathcal{Q}_{\text{S}}}= \begin{pmatrix} \hat{\mathbf{\Delta}}_\Phi-\hat{\mathbf{X}}_{\Phi \phi} \hat{\mathbf{\Delta}}_{\phi}^{-1}\hat{\mathbf{X}}_{\phi \Phi} && 0 \\ 0 && \hat{\mathbf{\Delta}}_\phi \end{pmatrix},\end{aligned}$$ where $$\begin{aligned} \hat{\mathbf{\Delta}}_A &= \bar{\mathbf{\Delta}}_{A}-\bar{\mathbf{X}}_{A \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi A},\\ \hat{\mathbf{X}}_{A B} &= \bar{\mathbf{X}}_{A B}-\bar{\mathbf{X}}_{A \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi B},\end{aligned}$$ with $A,B \in \{\phi, \Phi\}$. 
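The block-diagonal form of ${\mathcal{Q}_{\text{S}}}$ is a Schur-complement decomposition: it leaves the Gaussian integral unchanged because the determinant factorizes, $\det\mathcal{Q} = \det(\hat{\mathbf{\Delta}}_\phi)\,\det(\hat{\mathbf{\Delta}}_\Phi - \hat{\mathbf{X}}_{\Phi\phi}\hat{\mathbf{\Delta}}_\phi^{-1}\hat{\mathbf{X}}_{\phi\Phi})$. A small exact check of this determinant identity with generic invertible $2\times 2$ blocks standing in for the operator blocks (illustrative values only):

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def inv2(A):  # inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

def det(A):  # determinant via Gaussian elimination (exact with Fractions)
    A = [row[:] for row in A]
    n, d = len(A), F(1)
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

# generic blocks standing in for the heavy/light scalar operators and mixings
DPhi = [[F(5), F(1)], [F(2), F(7)]]
Dphi = [[F(3), F(1)], [F(0), F(4)]]
X    = [[F(1), F(2)], [F(0), F(1)]]
Xt   = [[F(2), F(0)], [F(1), F(1)]]

# assemble the full 4x4 matrix [[DPhi, X], [Xt, Dphi]]
Q = [DPhi[i] + X[i] for i in range(2)] + [Xt[i] + Dphi[i] for i in range(2)]

# Schur complement with respect to the light-scalar block
schur = sub(DPhi, mul(mul(X, inv2(Dphi)), Xt))

# the determinant (and hence the tr log) factorizes exactly
assert det(Q) == det(schur) * det(Dphi)
```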
The contribution from this mixed scalar/fermionic part to the effective action is then given by $$\begin{aligned} {\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}\text{,SF}}^\text{1{\ensuremath{\ell}}}&=\frac{i}{2}\int \frac{{\ensuremath{\mathrm{d}}}^d q}{(2\pi)^d}\left[ \operatorname{tr}\log \left(\hat{\mathbf{\Delta}}_\Phi-\hat{\mathbf{X}}_{\Phi \phi} \hat{\mathbf{\Delta}}_{\phi}^{-1}\hat{\mathbf{X}}_{\phi \Phi}\right) + \operatorname{tr}\log \left. \hat{\mathbf{\Delta}}_\phi\right]\right \rvert ^{P\rightarrow P-q} _\text{hard} \label{eq:scalarcontribution}\end{aligned}$$ and it can be calculated using a covariant derivative expansion as outlined e.g. in [@Zhang:2016pja]. However, care has to be taken since $\hat{\mathbf{\Delta}}_\phi$ contains contributions from heavy fermions and hence does not vanish completely in the hard region of the momentum integration. The corresponding contributions can be calculated by using $$\begin{aligned} \log \det \left(\bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right)=\log \det \left(\bar{\mathbf{\Delta}}_{\phi}\right)+\log \det \left(\mathds{1}-\bar{\mathbf{\Delta}}_{\phi}^{-1}\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right),\end{aligned}$$ where the first term on the right hand side vanishes in the hard region as it only contains contributions from light fields. Since many terms are generated when re-expressing the hatted and barred quantities in terms of the quantities arising in the original variation, we abstain from writing out the result explicitly. It is, however, useful to consider the expansion of the hatted operators in order to understand the ingredients entering the final result. In particular, we will show that it is possible to absorb all explicit factors of $\tilde{\mathds{1}}$ and ${\mathcal{C}}$ by appropriate re-definitions of $\tilde{\mathbf{X}}_{A B}$.
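The splitting of the logarithm used here is the finite-dimensional identity $\log\det(A - B) = \log\det A + \log\det(\mathds{1} - A^{-1}B)$. It can be checked numerically for generic matrices; the values below are arbitrary stand-ins for $\bar{\mathbf{\Delta}}_\phi$ and the heavy-fermion insertion:

```python
import math

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

# A plays the role of Delta_phi-bar, B of the heavy-fermion insertion (generic values)
A = [[2.0, 0.3], [0.1, 3.0]]
B = [[0.4, 0.1], [0.2, 0.5]]

lhs = math.log(det2([[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]))

AinvB = mul2(inv2(A), B)
one_minus = [[(1.0 if i == j else 0.0) - AinvB[i][j] for j in range(2)]
             for i in range(2)]
rhs = math.log(det2(A)) + math.log(det2(one_minus))

# log det(A - B) = log det(A) + log det(1 - A^{-1} B)
assert abs(lhs - rhs) < 1e-12
```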
To absorb these factors we first expand $(\mathbf{\Delta}_\xi ^{-1})_{P_\mu\to P_\mu-q_\mu}\equiv \mathbf{\Delta}_\xi ^{-1}(q)$ as $$\begin{aligned} \mathbf{\Delta}_\xi ^{-1}(q)&=\left[{\mathcal{C}}\tilde{\mathds{1}}(\slashed{P}-\slashed{q}-M_\xi) +\tilde{\mathbf{X}}_{\xi \xi}\right]^{-1} \\ &= \left[\mathds{1}-\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}{\mathcal{C}}^{-1}\left(-{\mathcal{C}}\tilde{\mathds{1}}\slashed{P}-\tilde{\mathbf{X}}_{\xi \xi}\right)\right]^{-1}\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}{\mathcal{C}}^{-1} \\ &= \sum_{n=0} ^\infty \left[\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}{\mathcal{C}}^{-1}\left(-{\mathcal{C}}\tilde{\mathds{1}}\slashed{P}-\tilde{\mathbf{X}}_{\xi \xi}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}{\mathcal{C}}^{-1} \\ &= \sum_{n=0} ^\infty \left[\left(-\slashed{q}-M_{\xi}\right)^{-1}\left(-\slashed{P}-\mathbf{X}_{\xi \xi}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}{\mathcal{C}}^{-1}, \label{eq:DeltaxiInv}\end{aligned}$$ where we defined $$\begin{aligned} \mathbf{X}_{\xi \xi}\equiv \tilde{\mathds{1}}{\mathcal{C}}^{-1} \tilde{\mathbf{X}}_{\xi \xi}.\end{aligned}$$ Then the definitions of $\bar{\mathbf{X}}_{A B}$ and $\bar{\mathbf{\Delta}}_A$ become $$\begin{aligned} \bar{\mathbf{X}}_{A B}&= \tilde{\mathbf{X}}_{A B}-\tilde{\mathbf{X}}_{A \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi B}, \\ \bar{\mathbf{\Delta}}_A&= \mathbf{\Delta}_A-\tilde{\mathbf{X}}_{A \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi A},\end{aligned}$$ where we introduced $\mathbf{X}_{\xi B}\equiv {\mathcal{C}}^{-1}\tilde{\mathds{1}} \tilde{\mathbf{X}}_{\xi B}$.
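The geometric series above is a Neumann expansion of the propagator around its kinetic part. The following sketch verifies the pattern $(K - V)^{-1} = \sum_n (K^{-1}V)^n K^{-1}$ in a case where the series terminates exactly after one term (a nilpotent $V$ standing in for the interaction insertions; all values illustrative):

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

K = [[F(2), F(0)], [F(0), F(3)]]   # invertible "kinetic" part, like (-qslash - M)
V = [[F(0), F(5)], [F(0), F(0)]]   # interaction insertion; nilpotent here, so
                                   # (K^{-1} V)^2 = 0 and the series terminates
Kinv = inv2(K)

# (K - V)^{-1} = sum_n (K^{-1} V)^n K^{-1}, truncated at n = 1
series = [[Kinv[i][j] + mul(mul(Kinv, V), Kinv)[i][j] for j in range(2)]
          for i in range(2)]
exact = inv2([[K[i][j] - V[i][j] for j in range(2)] for i in range(2)])
assert series == exact
```

For a generic (non-nilpotent) insertion the truncated series approximates the inverse order by order, which is exactly how the expansion is used in the heavy-mass expansion above.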
Next we consider $$\begin{aligned} \bar{\mathbf{\Delta}}_\Xi ^{-1}(q) &= \Bigg[{\mathcal{C}}\tilde{\mathds{1}}\left(-\slashed{q}-M_{\Xi}\right) +{\mathcal{C}}\tilde{\mathds{1}} \slashed{P}+\tilde{\mathbf{X}}_{\Xi \Xi} \nonumber \\ &~~~~~~~ -\tilde{\mathbf{X}}_{\Xi \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi \Xi} \Bigg]^{-1} \\ &=\sum _{m=0} ^{\infty} \left\{\mathcal{K}_\Xi ^{-1} \left(-\mathbf{X} _{\Xi \Xi}-\slashed{P}\right) + \mathcal{K}_\Xi ^{-1} \mathbf{X}_{\Xi \xi}\sum _{n=0} ^{\infty} \left[\mathcal{K}_\xi ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \mathcal{K}_\xi ^{-1} \mathbf{X}_{\xi \Xi} \right\}^m \mathcal{K}_\Xi ^{-1} {\mathcal{C}}^{-1} \tilde{\mathds{1}}, \label{eq:DeltaXiTildeInv}\end{aligned}$$ where $$\begin{aligned} \mathcal{K}_A &\equiv \left(-\slashed{q}-M_A\right), \\ \mathbf{X}_{\Xi \xi} &\equiv {\mathcal{C}}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}}_{\Xi \xi}.\end{aligned}$$ Note that the expressions for $\mathbf{\Delta}_\xi ^{-1}$ and $\bar{\mathbf{\Delta}}^{-1}_{\Xi}$ derived above contain the factor ${\mathcal{C}}^{-1} \tilde{\mathds{1}}$ on the very right.
This means that in the combination $$\begin{aligned} \bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi B}&=\bar{\mathbf{\Delta}}^{-1}_{\Xi} (\tilde{\mathbf{X}}_{\Xi B}-\tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} {\mathcal{C}}{\mathcal{C}}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}} _{\xi B}) \\ &=\bar{\mathbf{\Delta}}^{-1}_{\Xi} \tilde{\mathds{1}} {\mathcal{C}}{\mathcal{C}}^{-1} \tilde{\mathds{1}} (\tilde{\mathbf{X}}_{\Xi B}-\tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} {\mathcal{C}}{\mathcal{C}}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}} _{\xi B}) \\ &=\bar{\mathbf{\Delta}}^{-1}_{\Xi} \tilde{\mathds{1}} {\mathcal{C}}(\mathbf{X}_{\Xi B}-\mathbf{X}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} {\mathcal{C}}\mathbf{X}_{\xi B}),\end{aligned}$$ all appearances of ${\mathcal{C}}$ and $\tilde{\mathds{1}}$ cancel once $\bar{\mathbf{\Delta}}^{-1}_{\Xi}$ and $\mathbf{\Delta}_\xi ^{-1}$ are inserted and $\tilde{\mathbf{X}}_{A B}$ is expressed in terms of $\mathbf{X}_{A B}$ with $\mathbf{X}_{A B}={\mathcal{C}}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}}_{A B}$. A similar property holds for $\tilde{\mathbf{X}}_{\Phi B}$ and $\tilde{\mathbf{X}}_{\phi B}$, which only appear as $\mathbf{X}_{\Phi B}=\tilde{\mathds{1}}\tilde{\mathbf{X}}_{\Phi B}$ and $\mathbf{X}_{\phi B}=\tilde{\mathds{1}}\tilde{\mathbf{X}}_{\phi B}$. Hence, the result can be expressed entirely through the matrices $\mathbf{X}_{A B}$ and neither $\tilde{\mathds{1}}$ nor ${\mathcal{C}}$ explicitly appears in the final operator structures. 
To complete the calculation we need to compute the purely fermionic part of the second variation, which reads $$\begin{aligned} \delta^2 {\mathcal{L}}_\text{F} = \frac{1}{2}\delta \Xi'^T \bar{\mathbf{\Delta}}_\Xi \delta \Xi' + \frac{1}{2} \delta \xi'^T \mathbf{\Delta}_\xi \delta \xi'.\end{aligned}$$ Again, we are only interested in the contribution from the hard region, where the light-only part $\mathbf{\Delta} _\xi$ does not contribute. Hence we only need to consider $\bar{\mathbf{\Delta}}_\Xi$. We find $$\begin{aligned} \operatorname{tr}\log \Big(&\mathbf{\Delta}_\Xi(q) - \mathbf{X}_{\Xi \xi}\Delta_\xi ^{-1}(q) \mathbf{X}_{\xi \Xi}\Big)\nonumber \\ &= \operatorname{tr}\log \left({\mathcal{C}}\tilde{\mathds{1}} \mathcal{K}_\Xi+{\mathcal{C}}\tilde{\mathds{1}} \slashed{P}+\tilde{\mathbf{X}}_{\Xi \Xi}-\mathbf{X}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1}(q) \tilde{\mathbf{X}}_{\xi \Xi}\right) \\ &= \operatorname{tr}\log \left( {\mathcal{C}}\tilde{\mathds{1}}\mathcal{K}_\Xi \right) + \operatorname{tr}\log \left[ \mathds{1}-\mathcal{K}_\Xi^{-1} \left(-\slashed{P}-\mathbf{X}_{\Xi \Xi}+\mathbf{X}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1}(q) \tilde{\mathbf{X}}_{\xi \Xi} \right) \right], \label{eq:trlog_fermionic}\end{aligned}$$ where the first term on the r.h.s. is absorbed in the normalization of the path integral.
Inserting the expansion of $\mathbf{\Delta}_\xi ^{-1}(q)$ derived above yields $$\begin{aligned} {\mathcal{L}}_\text{{\ensuremath{\text{EFT}}\xspace},F}^{1{\ensuremath{\ell}}} &= \frac{i}{2} \sum_{n=1}^{\infty} \frac{1}{n} \operatorname{tr}\left[\mathcal{K}_\Xi^{-1}\left(-\slashed{P}-\mathbf{X}_{\Xi \Xi}+\mathbf{X} _{\Xi \xi}\sum _{m=0} ^{\infty} \left[\mathcal{K} _\xi ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^m \mathcal{K} _\xi ^{-1} \mathbf{X} _{\xi \Xi} \right) \right]^n.\end{aligned}$$ In order to obtain the final UOLEA from the sum $$\begin{aligned} {\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}}^{1{\ensuremath{\ell}}} = {\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}\text{,SF}}^{1{\ensuremath{\ell}}} + {\mathcal{L}}_\text{{\ensuremath{\text{EFT}}\xspace},F}^{1{\ensuremath{\ell}}} \label{eq:UOLEA_final}\end{aligned}$$ one needs to expand all functional traces on the r.h.s. to a given mass dimension and calculate the coefficients and operator structures. In this expansion we keep $P^\mu$ as a whole to obtain a manifestly gauge-invariant effective Lagrangian. It can be shown, by using the Baker-Campbell-Hausdorff formula, that every $P_\mu$ appears in commutators of the form $[P_\mu,\bullet]$ [@Gaillard:1985uh; @Cheyette:1987qz]. To combine all $P^\mu$ operators into commutators one can either explicitly use the Baker-Campbell-Hausdorff formula in the calculation, as was done in [@Drozd:2015rsp], or construct a basis for these commutators and then solve a system of equations to fix the coefficients of the basis elements, as was pointed out in [@Zhang:2016pja]. In this publication the second method was employed. Our final expression for ${\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}}^{1{\ensuremath{\ell}}}$ is contained in the ancillary file `UOLEA.m` in the arXiv submission of this publication and will be described further in the next section.
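The $\sum_n \frac{1}{n}\operatorname{tr}[\cdots]^n$ structure is the Mercator expansion of the logarithm, $\operatorname{tr}\log(\mathds{1}-M) = -\sum_{n\geq 1} \operatorname{tr}(M^n)/n$, valid when the spectral radius of $M$ is below one; in the hard-region expansion this holds formally, order by order in the heavy-mass expansion. A numerical sketch with a generic small matrix:

```python
import math

M = [[0.10, 0.05], [0.02, 0.08]]   # generic "small" matrix, spectral radius < 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# left side: log det(1 - M), computed directly (and tr log = log det)
one_minus = [[(1.0 if i == j else 0.0) - M[i][j] for j in range(2)]
             for i in range(2)]
direct = math.log(one_minus[0][0] * one_minus[1][1]
                  - one_minus[0][1] * one_minus[1][0])

# right side: -sum_{n>=1} tr(M^n)/n, truncated
series, P = 0.0, [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 40):
    P = matmul(P, M)
    series -= (P[0][0] + P[1][1]) / n

assert abs(direct - series) < 1e-12
```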
Discussion of the result {#sec:results} ======================== Published operators and coefficients {#sec:results_ops_coeffs} ------------------------------------ In the following we describe the calculated scalar/fermionic operators, which we publish in the ancillary file `UOLEA.m` in the arXiv submission of this publication. The file contains the following four lists: - `mixedLoopsNoP`: Mixed scalar/fermionic operators without $P^\mu$. - `mixedLoopsWithP`: Mixed scalar/fermionic operators with $P^\mu$. - `fermionicLoopsNoP`: Purely fermionic operators without $P^\mu$. - `fermionicLoopsWithP`: Purely fermionic operators with $P^\mu$. For convenience, the additional list `uolea` is defined, which is the union of the four lists from above. The lists contain the calculated operators in the form $\{F^\alpha(M_i,M_j,\dots),\mathcal{O}^\alpha_{ij\cdots}\}$, where the coefficient $F^\alpha(M_i,M_j,\dots)$ of the operator $\mathcal{O}^\alpha_{ij\cdots}$ is expressed through the integrals ${\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j \dots n_L} _{i j \dots 0}$ defined in [appendix \[sec:loop\_functions\]]{}.
The operators $\mathcal{O}^\alpha_{ij\cdots}$ are expressed in terms of the symbols $X[\text{A},\text{B}][i,j]$, with $\text{A}, \text{B}\in \{\text{S},\text{s},\text{F},\text{f}\}$, which correspond to the matrices defined in [section \[sec:calc\]]{} as follows: $$\begin{aligned} X[\text{S},\text{F}] &\equiv \mathbf{X}_{\Phi \Xi}= \begin{pmatrix} X_{\Sigma ^* \Omega} && X_{\Sigma ^* \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Sigma ^* \Lambda} \\ X_{\Sigma \Omega} && X_{\Sigma \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Sigma \Lambda} \\ X_{\Theta \Omega} && X_{\Theta \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\Theta \Lambda} \end{pmatrix}, \nonumber \\ X[\text{s},\text{F}] &\equiv \mathbf{X}_{\phi \Xi}= \begin{pmatrix} X_{\sigma ^* \Omega} && X_{\sigma ^* \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\sigma ^* \Lambda} \\ X_{\sigma \Omega} && X_{\sigma \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\sigma \Lambda} \\ X_{\theta \Omega} && X_{\theta \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\theta \Lambda} \end{pmatrix}, \nonumber \\ X[\text{S},\text{f}] &\equiv \mathbf{X}_{\Phi \xi}=\begin{pmatrix} X_{\Sigma ^* \omega} && X_{\Sigma ^* \bar{\omega}} {\mathcal{C}}^{-1} && X_{\Sigma ^* \lambda} \\ X_{\Sigma \omega} && X_{\Sigma \bar{\omega}} {\mathcal{C}}^{-1} && X_{\Sigma \lambda} \\ X_{\Theta \omega} && X_{\Theta \bar{\omega}} {\mathcal{C}}^{-1} && X_{\Theta \lambda} \end{pmatrix}, \nonumber \\ X[\text{s},\text{f}] &\equiv \mathbf{X}_{\phi \xi}= \begin{pmatrix} X_{\sigma ^* \omega} && X_{\sigma ^* \bar{\omega}} {\mathcal{C}}^{-1} && X_{\sigma ^* \lambda} \\ X_{\sigma \omega} && X_{\sigma \bar{\omega}} {\mathcal{C}}^{-1} && X_{\sigma \lambda} \\ X_{\theta \omega} && X_{\theta \bar{\omega}} {\mathcal{C}}^{-1} && X_{\theta \lambda} \end{pmatrix}, \nonumber \\ X[\text{F},\text{S}] &\equiv \mathbf{X}_{\Xi \Phi} = \begin{pmatrix} X_{\bar{\Omega} \Sigma} && X_{\bar{\Omega} \Sigma ^*} && X_{\bar{\Omega} \Theta} \\ {\mathcal{C}}^{-1} X_{\Omega \Sigma} && {\mathcal{C}}^{-1} X_{\Omega \Sigma^{*}} && 
{\mathcal{C}}^{-1} X_{\Omega \Theta} \\ {\mathcal{C}}^{-1} X_{\Lambda \Sigma} && {\mathcal{C}}^{-1} X_{\Lambda \Sigma ^*} && {\mathcal{C}}^{-1} X_{\Lambda \Theta} \end{pmatrix}, \nonumber \\ X[\text{f},\text{S}] &\equiv \mathbf{X}_{\xi \Phi} = \begin{pmatrix} X_{\bar{\omega} \Sigma} && X_{\bar{\omega} \Sigma ^*} && X_{\bar{\omega} \Theta} \\ {\mathcal{C}}^{-1} X_{\omega \Sigma} && {\mathcal{C}}^{-1} X_{\omega \Sigma^{*}} && {\mathcal{C}}^{-1} X_{\omega \Theta} \\ {\mathcal{C}}^{-1} X_{\lambda \Sigma} && {\mathcal{C}}^{-1} X_{\lambda \Sigma ^*} && {\mathcal{C}}^{-1} X_{\lambda \Theta} \end{pmatrix}, \nonumber \\ X[\text{F},\text{s}] &\equiv \mathbf{X}_{\Xi \phi} = \begin{pmatrix} X_{\bar{\Omega} \sigma} && X_{\bar{\Omega} \sigma ^*} && X_{\bar{\Omega} \theta} \\ {\mathcal{C}}^{-1} X_{\Omega \sigma} && {\mathcal{C}}^{-1} X_{\Omega \sigma^{*}} && {\mathcal{C}}^{-1} X_{\Omega \theta} \\ {\mathcal{C}}^{-1} X_{\Lambda \sigma} && {\mathcal{C}}^{-1} X_{\Lambda \sigma ^*} && {\mathcal{C}}^{-1} X_{\Lambda \theta} \end{pmatrix}, \nonumber \\ X[\text{f},\text{s}] &\equiv \mathbf{X}_{\xi \phi} = \begin{pmatrix} X_{\bar{\omega} \sigma} && X_{\bar{\omega} \sigma ^*} && X_{\bar{\omega} \theta} \\ {\mathcal{C}}^{-1} X_{\omega \sigma} && {\mathcal{C}}^{-1} X_{\omega \sigma^{*}} && {\mathcal{C}}^{-1} X_{\omega \theta} \\ {\mathcal{C}}^{-1} X_{\lambda \sigma} && {\mathcal{C}}^{-1} X_{\lambda \sigma ^*} && {\mathcal{C}}^{-1} X_{\lambda \theta} \end{pmatrix}, \nonumber \\ X[\text{F},\text{F}] &\equiv \mathbf{X}_{\Xi \Xi}=\begin{pmatrix} X_{\bar{\Omega} \Omega} && X_{\bar{\Omega} \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\bar{\Omega} \Lambda} \\ {\mathcal{C}}^{-1} X_{\Omega \Omega} && {\mathcal{C}}^{-1} X_{\Omega \bar{\Omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\Omega \Lambda} \\ {\mathcal{C}}^{-1} X_{\Lambda \Omega} && {\mathcal{C}}^{-1} X_{\Lambda \bar{\Omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\Lambda \Lambda} \end{pmatrix}, \nonumber \\ X[\text{f},\text{f}] &\equiv 
\mathbf{X}_{\xi \xi}=\begin{pmatrix} X_{\bar{\omega} \omega} && X_{\bar{\omega} \bar{\omega}} {\mathcal{C}}^{-1} && X_{\bar{\omega} \lambda} \\ {\mathcal{C}}^{-1} X_{\omega \omega} && {\mathcal{C}}^{-1} X_{\omega \bar{\omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\omega \lambda} \\ {\mathcal{C}}^{-1} X_{\lambda \omega} && {\mathcal{C}}^{-1} X_{\lambda \bar{\omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\lambda \lambda} \end{pmatrix}, \nonumber \\ X[\text{F},\text{f}] &\equiv \mathbf{X}_{\Xi \xi}=\begin{pmatrix} X_{\bar{\Omega} \omega} && X_{\bar{\Omega} \bar{\omega}} {\mathcal{C}}^{-1} && X_{\bar{\Omega} \lambda} \\ {\mathcal{C}}^{-1} X_{\Omega \omega} && {\mathcal{C}}^{-1} X_{\Omega \bar{\omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\Omega \lambda} \\ {\mathcal{C}}^{-1} X_{\Lambda \omega} && {\mathcal{C}}^{-1} X_{\Lambda \bar{\omega}} {\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\Lambda \lambda} \end{pmatrix}, \nonumber \\ X[\text{f},\text{F}] &\equiv \mathbf{X}_{\xi \Xi}=\begin{pmatrix} X_{\bar{\omega} \Omega} && X_{\bar{\omega} \bar{\Omega}} {\mathcal{C}}^{-1} && X_{\bar{\omega} \Lambda} \\ {\mathcal{C}}^{-1} X_{\omega \Omega} && {\mathcal{C}}^{-1} X_{\omega \bar{\Omega}}{\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\omega \Lambda} \\ {\mathcal{C}}^{-1} X_{\lambda \Omega} && {\mathcal{C}}^{-1} X_{\lambda \bar{\Omega}} {\mathcal{C}}^{-1} && {\mathcal{C}}^{-1} X_{\lambda \Lambda} \end{pmatrix}, \nonumber \\ X[\text{S},\text{S}] &\equiv \mathbf{X}_{\Phi \Phi}=\begin{pmatrix} X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && X_{\Sigma^* \Theta} \\ X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\ X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix}, \nonumber \\ X[\text{S},\text{s}] &\equiv \mathbf{X}_{\Phi \phi}=\begin{pmatrix} X_{\Sigma ^* \sigma} && X_{\Sigma ^* \sigma ^*} && X_{\Sigma^* \theta} \\ X_{\Sigma \sigma} && X_{\Sigma \sigma ^{*}} && X_{\Sigma \theta} \\ X_{\Theta \sigma} && X_{\Theta \sigma ^*} 
&& X_{\Theta \theta}\end{pmatrix}, \nonumber \\ X[\text{s},\text{S}] &\equiv \mathbf{X}_{\phi \Phi}=\begin{pmatrix} X_{\sigma ^* \Sigma} && X_{\sigma ^* \Sigma ^*} && X_{\sigma^* \Theta} \\ X_{\sigma \Sigma} && X_{\sigma \Sigma ^{*}} && X_{\sigma \Theta} \\ X_{\theta \Sigma} && X_{\theta \Sigma ^*} && X_{\theta \Theta}\end{pmatrix}, \nonumber \\ X[\text{s},\text{s}] &\equiv \mathbf{X}_{\phi \phi}=\begin{pmatrix} X_{\sigma ^* \sigma} && X_{\sigma ^* \sigma ^*} && X_{\sigma^* \theta} \\ X_{\sigma \sigma} && X_{\sigma \sigma ^{*}} && X_{\sigma \theta} \\ X_{\theta \sigma} && X_{\theta \sigma ^*} && X_{\theta \theta}\end{pmatrix} .\end{aligned}$$ The indices $i,j\in\mathbb{N}$ label a specific element of the respective matrix. The full one-loop effective action is then obtained as $$\begin{aligned} {\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}}^{1{\ensuremath{\ell}}} = \kappa\sum_\alpha \sum _{ij \cdots} F^\alpha(M_i,M_j,\dots) \mathcal{O}^\alpha_{ij\cdots}, \label{eq:L_all_generic}\end{aligned}$$ where $\kappa=1/(4\pi)^2$ and the sum over $\alpha$ runs over all operators and their corresponding coefficients. Several comments regarding the use of these operators are in order. First, no assumptions have been made about the dependence of the second derivatives $X_{A B}$ on gamma matrices. The result is valid for any spin-$1/2$ spinor structure appearing in these derivatives. Second, care has to be taken to retain the poles of the coefficients, since the gamma algebra has to be performed in $d = 4 - {\epsilon}$ dimensions, which may generate finite contributions when combined with the poles. The function `ExpandEps`, contained in the ancillary Mathematica file `LoopFunctions.m` in the arXiv submission of this paper, can be used to extract these finite contributions. Third, some of the coefficients diverge in the case of degenerate masses if the degenerate limit is not taken carefully.
The most convenient way to deal with degenerate masses may be to first set the masses equal, which modifies the integrals appearing in the coefficients $F^\alpha(M_i,M_j,\dots)$, and to then calculate these integrals using the reduction algorithm implemented in the ancillary Mathematica file `LoopFunctions.m`. Last, there are no $c_s$ or $c_F$ factors appearing in the final result, in contrast to [@Drozd:2015rsp; @Ellis:2017jns; @Summ:2018oko]. In our formulation these prefactors have been fixed by our treatment of the different kinds of fields and are absorbed in the coefficients. Infrared and ultra-violet divergences ------------------------------------- Some of the operator coefficients exhibit infrared divergences, which might be surprising, as the infrared physics should cancel in the matching. The reason for the appearance of such poles is that expansion by regions was used to perform the calculation, as discussed in [section \[sec: intro\]]{}. For a heavy-light loop this means that the one-loop integral $I_\text{full}$ in the full integration region is split into a part $I_\text{soft}$, calculated in the soft region, and a part $I_\text{hard}$, calculated in the hard region, $$\begin{aligned} I_\text{full} = I_\text{soft} + I_\text{hard}.\end{aligned}$$ Only the hard part remains, since the soft part is canceled in the matching by the EFT contribution. If, for example, $I_\text{full}$ is finite, a UV-divergence in the soft part of the integration region cancels against an IR-divergence in the hard part by virtue of the identification $$\begin{aligned} \frac{1}{{{\epsilon}_{\text{UV}}}}=\frac{1}{{{\epsilon}_{\text{IR}}}}, \label{eq: epsrel}\end{aligned}$$ which ensures that scaleless integrals vanish in dimensional regularization. Since the soft part is removed in the matching, the IR-divergence of the hard part remains. However, such an IR-divergence should be interpreted as a subtracted UV-divergence coming from the EFT, as indicated by the identification of the poles above.
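As a toy illustration of this splitting, consider the one-dimensional heavy-light integral $\int_0^\infty \mathrm{d}k\,[(k^2+m^2)(k^2+M^2)]^{-1}$ with $m \ll M$: expanding the integrand in the soft ($k \sim m$) and hard ($k \sim M$) regions and integrating each term over the full range (with scaleless pieces dropped by analytic continuation) reproduces the full result order by order in $m/M$. The closed forms below are for this toy integral only, not for the loop functions of the paper:

```python
import math

# Toy "heavy-light" integral:
#   I(m, M) = \int_0^inf dk / ((k^2+m^2)(k^2+M^2)) = pi / (2 m M (m + M))
m, M = 0.01, 1.0
full = math.pi / (2 * m * M * (m + M))

# Soft region (k ~ m): expand 1/(k^2+M^2) in k^2/M^2, integrate term by term
soft = math.pi / (2 * m * M**2) + math.pi * m / (2 * M**4)

# Hard region (k ~ M): expand 1/(k^2+m^2) in m^2/k^2; each term is defined
# by analytic continuation of \int_0^inf k^{a-2n} dk / (k^2+M^2)
hard = -math.pi / (2 * M**3) - math.pi * m**2 / (2 * M**5)

# The two regions together reproduce the full result up to O(m^3/M^6)
assert abs(full - (soft + hard)) < math.pi * m**3 / M**6
```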
It is not surprising that these divergences do not cancel in the matching since the UV behavior of the EFT is modified as compared to the UV-theory. However, since these genuine UV-divergences may still combine with an ${\epsilon}$ from the gamma algebra to yield finite contributions they must be treated in the same way as $1/{\epsilon}$ poles stemming from the UV behavior of the UV-theory. After performing the trace and the gamma algebra, remaining terms containing $1/{\epsilon}$ poles can be discarded, which amounts to performing a matching calculation in the ${\ensuremath{\overline{\text{MS}}}\xspace}$ scheme. Application to models with massive vector fields {#sec:results_vectors} ------------------------------------------------ The operators calculated in this paper can be used to treat massive vector fields in Feynman gauge as described in [@Zhang:2016pja]. Furthermore, couplings of fermions to massless gauge bosons can be correctly accounted for as well using the same technique and the treatment is complete when the UV-theory is renormalizable. This follows from the fact that the gauge-kinetic term of a fermion $\psi$ is linear in the covariant derivative so that $X_{A_\mu \psi}$ is independent of $P_\mu$. This is not the case for scalar fields, since the kinetic term is quadratic in $P_\mu$, which means that even for a renormalizable UV-theory there are further operators stemming from the coupling of scalar fields to massless gauge bosons. Of course, once one considers the matching of a UV-theory that already contains higher dimensional operators with covariant derivatives to an EFT, further operators arise also for fermions. These missing operators all stem from open covariant derivatives and are currently unknown. 
Extraction of $\beta$-functions ------------------------------- As was pointed out in [@Henning:2016lyp], functional methods can be used to calculate $\beta$-functions since they allow for the computation of the loop-corrected generator of 1PI Green’s functions. At one loop we have $$\begin{aligned} \Gamma[\Phi]=\Gamma^{\ensuremath{\text{tree}}\xspace}[\Phi]+\Gamma^{1{\ensuremath{\ell}}}[\Phi],\end{aligned}$$ where $\Gamma^{\ensuremath{\text{tree}}\xspace}[\Phi]=S[\Phi]$ is the tree-level generator of 1PI Green’s functions, which is simply the classical action. Assume that $\Gamma^{\ensuremath{\text{tree}}\xspace}[\Phi]$ contains a kinetic term $\mathcal{O}_K[\Phi]$ and an interaction term $g \mathcal{O}_g[\Phi]$. Then, in general, the one-loop contribution will contain corrections to these, which depend on the renormalization scale $\mu$, so that $$\begin{aligned} \Gamma[\Phi] \supset \int {\ensuremath{\mathrm{d}}}^4 x \; \big\{a_K(\mu) \mathcal{O}_K[\Phi]+a_g(\mu) \mathcal{O}_g[\Phi]\big\}.\end{aligned}$$ Canonically normalizing the kinetic term for the field $\Phi$ yields $$\begin{aligned} \Gamma[\Phi] \supset \int {\ensuremath{\mathrm{d}}}^4 x \; \big\{\mathcal{O}_K[\Phi]+a'_g(\mu) \mathcal{O}_g[\Phi]\big\},\end{aligned}$$ where $$\begin{aligned} \mu \frac{{\ensuremath{\mathrm{d}}}}{{\ensuremath{\mathrm{d}}}\mu}a'_g(\mu)=0 \label{eq: running equation}\end{aligned}$$ due to the Callan-Symanzik equation [@Callan:1970yg; @Symanzik:1970rt]. This equation can be solved for the one-loop $\beta$-function of the coupling $g$. In a specific sense, the UOLEA represents an expression for $\Gamma^{1{\ensuremath{\ell}}}$ of a model with operators up to dimension 6, and it can thus be used to calculate the one-loop $\beta$-functions of all dimension 6 operators for any given Lagrangian as described above.
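A toy numerical illustration of this procedure, with a hypothetical one-loop coefficient $b$ not taken from any model: if $a'_g(\mu) = g(\mu) + b\,g^3(\mu)\log(\mu/\Lambda)$, then imposing $\mu\,\mathrm{d}a'_g/\mathrm{d}\mu = 0$ at leading order fixes $\beta_g = -b\,g^3$; along the corresponding exact flow, $a'_g$ is constant up to $\mathcal{O}(g^5)$ while $g$ itself runs at $\mathcal{O}(g^3)$:

```python
import math

b, g0 = 1.0, 0.1           # hypothetical one-loop coefficient, coupling at t = 0

def g(t):                   # exact solution of dg/dt = -b g^3, with t = log(mu/Lambda)
    return g0 / math.sqrt(1.0 + 2.0 * b * g0**2 * t)

def a_g(t):                 # loop-corrected coupling a'_g(mu) = g(mu) + b g(mu)^3 t
    return g(t) + b * g(t)**3 * t

drift = abs(a_g(1.0) - a_g(0.0))   # residual mu-dependence of a'_g
shift = abs(g(1.0) - g(0.0))       # one-loop running of g itself

assert drift < 2.0 * g0**5   # residual is (3/2) b^2 g0^5 t^2 + higher orders
assert shift > 0.5 * g0**3   # the coupling runs at one-loop order
```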
In order to calculate $\Gamma^{1{\ensuremath{\ell}}}$, the UOLEA operators must be re-interpreted as follows: Since one is interested in the full $\Gamma^{1{\ensuremath{\ell}}}$, no distinction between heavy and light fields is made and all fields are treated as “heavy” fields. As a consequence, the one-loop effective action of a scalar theory is given by $$\begin{aligned} \Gamma[\Phi] = S[\Phi] + \frac{i}{2} \log\det\left(-\frac{\delta^2 {\mathcal{L}}_\text{int}}{\delta\Phi\delta\Phi}\right), \label{eq:gamma_1L_heavy}\end{aligned}$$ where $\Phi$ represents the collection of all scalar fields contained in the model. The expression on the r.h.s. can be expanded as outlined e.g. in [@Drozd:2015rsp; @Henning:2016lyp; @Fuentes-Martin:2016uol] and one arrives at the heavy-only part of the UOLEA, which contains only operators built out of derivatives of the Lagrangian with respect to “heavy” $\Phi$ fields. This procedure is not restricted to a theory with only scalars and can also be applied to models with both scalars and fermions using the heavy-only part of our result. However, higher-dimensional operators with covariant derivatives have not been treated in this work and hence their influence on the running of the couplings cannot be determined using our result. Applications {#sec:applications} ============ Integrating out the top quark from the Standard Model ----------------------------------------------------- As a simple first example we consider the corrections to the Higgs tadpole and mass parameter that arise when integrating out the top quark from the Standard Model. The considered interaction Lagrangian contains only one coupling, $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{SM}}\xspace}\supset -\frac{g_t}{\sqrt{2}}h \bar{t}t,\end{aligned}$$ where $h$ denotes the physical Higgs field, $t$ is the top quark and $g_t$ is the top Yukawa coupling.
The relevant operators of the UOLEA are given by $$\begin{aligned} \frac{1}{\kappa} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} = \operatorname{tr}\Bigg\lbrace & \frac{1}{4} m_{\Xi i} m_{\Xi j}^3 {\tilde{\mathcal{I}}}^{13} _{ij} [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}][P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \nonumber \\ & -\frac{1}{2} {\tilde{\mathcal{I}}}[q^4] ^{22} _{ij} \gamma^\nu [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\nu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \nonumber \\ & - {\tilde{\mathcal{I}}}[q^4] ^{22} _{ij} \gamma^\nu [P_\nu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\mu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \nonumber \\ & +\frac{1}{2} m_{\Xi i} {\tilde{\mathcal{I}}}^1 _i (\mathbf{X}_{\Xi \Xi})_{ii} \nonumber \\ & -\frac{1}{4} m_{\Xi i} m_{\Xi j} {\tilde{\mathcal{I}}}^{11} _{ij} (\mathbf{X}_{\Xi \Xi})_{ij} (\mathbf{X}_{\Xi \Xi})_{ji} \nonumber \\ & -\frac{1}{4} {\tilde{\mathcal{I}}}[q^2] ^{11} _{ij} \gamma ^\mu (\mathbf{X}_{\Xi \Xi})_{ij} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{ji} \Bigg\rbrace, \label{eq:UOLEALAG-topout}\end{aligned}$$ where $m_{\Xi i}$ denotes the mass of the $i$th component of $\Xi$. The matrix $(\mathbf{X}_{\Xi \Xi})$ is given by $$\begin{aligned} (\mathbf{X}_{\Xi \Xi})_{\alpha \beta ij} = \begin{pmatrix} (X_{\bar{t}t})_{\alpha \beta ij} & 0 \\ 0 & {\mathcal{C}}^{-1} _{\alpha \rho} (X_{t\bar{t}})_{\rho \sigma ij} {\mathcal{C}}^{-1} _{\sigma \beta} \end{pmatrix} = -\frac{g_t}{\sqrt{2}}h \delta_{\alpha \beta} \delta_{ij} \mathbf{1}_{2\times 2}, \label{eq:top-derivative}\end{aligned}$$ with $\alpha,\beta = 1,\ldots,4$ being spinor indices and $i,j = 1,2,3$ being color indices. We included the terms with two covariant derivatives in order to obtain the field redefinition of the Higgs field that is necessary to canonically normalize the corresponding field $\hat{h}$ in the effective theory. Since this redefinition arises from the correction to the kinetic term only, we can set $P^\mu = i\partial ^\mu$.
Inserting into and calculating the trace yields $$\begin{aligned} \frac{1}{\kappa}{\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} ={}& -3g_t^2 \left(m_t^4 {\tilde{\mathcal{I}}}^4 _t-2d{\tilde{\mathcal{I}}}[q^4]^4 _{t}-4 {\tilde{\mathcal{I}}}[q^4] ^4 _{t} \right) (\partial_\mu h) (\partial^\mu h) \nonumber \\ & -3g_t^2 \left({\tilde{\mathcal{I}}}^2_t m_t^2+d {\tilde{\mathcal{I}}}[q^2]^2_t \right)h^2-\frac{12}{\sqrt{2}} g_t m_t {\tilde{\mathcal{I}}}^1_t h, \label{eq: HiggsEFT}\end{aligned}$$ where $d = 4 - {\epsilon}= g^\mu _\mu$ has to be retained since the integrals contain poles in $1/\epsilon$. The loop functions ${\tilde{\mathcal{I}}}$ are defined in [appendix \[sec:loop\_functions\]]{}. It is customary to introduce the canonically normalized field $\hat{h}$ which is related to $h$ through $$\begin{aligned} \hat{h}=\left(1+\frac{1}{2}\delta Z_h \right) h.\end{aligned}$$ From one can read off $\delta Z_h$ to be $$\begin{aligned} \delta Z_h = -6g_t^2\left(m_t^4 {\tilde{\mathcal{I}}}^4 _t-2d{\tilde{\mathcal{I}}}[q^4]^4 _{t}-4 {\tilde{\mathcal{I}}}[q^4] ^4 _{t}\right) = -6g_t^2\left(m_t^4 {\tilde{\mathcal{I}}}^4_t - 12 {\tilde{\mathcal{I}}}[q^4] ^4_{t} + \frac{1}{6}\right). 
\label{eq:delta_Zh_top}\end{aligned}$$ The loop functions that appear in and can be calculated with the Mathematica file `LoopFunctions.m` and read $$\begin{aligned} {\tilde{\mathcal{I}}}^1_t &= 2 {\tilde{\mathcal{I}}}[q^2]^2_t = m_t^2 \left(\frac{2}{{\epsilon}} + 1 - \log\frac{m_t^2}{\mu^2}\right), \\ {\tilde{\mathcal{I}}}^2_t &= 24 {\tilde{\mathcal{I}}}[q^4]^4_t = \frac{2}{{\epsilon}}-\log\frac{m_t^2}{\mu^2}, \\ {\tilde{\mathcal{I}}}^4_t &= \frac{1}{6 m_t^4}.\end{aligned}$$ MSSM threshold correction to the quartic Higgs coupling {#sec:lambdacalc} ------------------------------------------------------- As a first nontrivial application and a check we reproduce the one-loop threshold correction of the quartic Higgs coupling $\lambda$ when matching the MSSM to the SM at one-loop [@Bagnaschi:2014rsa] in the unbroken phase. As discussed in [@Bagnaschi:2014rsa] there are several contributions of distinct origins. The scalar contribution $\Delta \lambda^{1{\ensuremath{\ell}},\phi}$ arises from interactions of the SM-like Higgs with heavy Higgs bosons, squarks and sleptons, and the relevant interaction Lagrangian is given by $$\begin{aligned} {\mathcal{L}}_{\phi} ={}& - \frac{g_t^2}{2} h^2 ({\tilde{t}_{L}}^* {\tilde{t}_{L}} + {\tilde{t}_{R}}^*{\tilde{t}_{R}})-\frac{1}{\sqrt{2}} g_t X_t h ({\tilde{t}_{L}}^* {\tilde{t}_{R}} + {\tilde{t}_{L}}{\tilde{t}_{R}}^*) \nonumber \\ & -\frac{1}{8} c_{2\beta} h^2\sum_{i} \left[\left(g_2^2 - \frac{g_1^2}{5}\right) {\tilde{u}}^* _{Li} {\tilde{u}}_{Li}+ \frac{4}{5} g_1^2 {\tilde{u}}_{Ri}^* {\tilde{u}}_{Ri}- \left(g_2^2 + \frac{g_1^2}{5}\right) {\tilde{d}}_{Li}^* {\tilde{d}}_{Li}- \frac{2}{5} g_1^2 {\tilde{d}}_{Ri}^* {\tilde{d}}_{Ri}\right] \nonumber \\ &-\frac{1}{8} c_{2\beta} h^2\sum_{i} \left[\left(g_2^2 + g_1^2\frac{3}{5}\right) {\tilde{\nu}}^* _{Li} {\tilde{\nu}}_{Li}- \left(g_2^2 - g_1^2\frac{3}{5}\right) {\tilde{e}}_{Li}^* {\tilde{e}}_{Li}- \frac{6}{5} g_1^2 {\tilde{e}}_{Ri}^* {\tilde{e}}_{Ri}\right] \nonumber \\ & +\frac{1}{16} 
c_{2\beta}^2 \left(\frac{3}{5} g_1^2 + g_2^2\right) h^2 A^2- \frac{1}{8} \left((1 + s_{2\beta}^2) g_2^2 - \frac{3}{5} g_1^2 c_{2\beta}^2\right) h^2 H^{-} H^{+} \nonumber \\ & - \frac{1}{16} \left(\frac{3}{5} g_1^2 + g_2^2\right) (3 s_{2\beta}^2 - 1) h^2 H^2- \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^3 H \nonumber \\ & + \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^2 (G^{-} H^{+} + H^{-} G^{+})+ \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^2 G^0 A.\end{aligned}$$ Here $g_1$ and $g_2$ are the GUT-normalized electroweak gauge couplings, $X_t$ is the stop mixing parameter, and $g_t = y_t s_\beta$ with $y_t$ being the MSSM top Yukawa coupling and $s_\beta=\sin (\beta)$. The three generations of left- and right-handed squarks and sleptons are denoted as ${\tilde{u}}_{Li}$, ${\tilde{u}}_{Ri}$, ${\tilde{d}}_{Li}$, ${\tilde{d}}_{Ri}$, ${\tilde{e}}_{Li}$, ${\tilde{e}}_{Ri}$, ${\tilde{\nu}}_{Li}$ ($i=1,2,3$), respectively, where ${\tilde{t}_{L}} \equiv {\tilde{u}}_{L3}$ and ${\tilde{t}_{R}} \equiv {\tilde{u}}_{R3}$ are the left- and right-handed stops. Furthermore we have defined $h=\sqrt{2}\, {\ensuremath{\Re\mathfrak{e}}}(\mathcal{H}^0)$, where $\mathcal{H}^0$ is the neutral component of the SM-like Higgs doublet $\mathcal{H}$ related to the Higgs doublets $H_u$ and $H_d$ through $$\begin{aligned} \mathcal{H} = - c_\beta {\varepsilon}H^{*}_d + s_\beta H_u, \label{eq:rot_H}\end{aligned}$$ where ${\varepsilon}$ is the antisymmetric tensor with ${\varepsilon}_{12}=1$ and $c_\beta = \cos(\beta)$, $s_{2\beta} = \sin(2\beta)$ and $c_{2\beta} = \cos(2\beta)$. The fields $G^0$ and $G^{\pm}$ are Goldstone bosons arising from the same Higgs doublet. The heavy Higgs bosons $H$, $A$ and $H^\pm$ arise from the heavy doublet $\mathcal{A}$, which is related to the MSSM doublets through $$\begin{aligned} \mathcal{A} = s_\beta {\varepsilon}H^{*}_d + c_\beta H_u. 
\label{eq:rot_A}\end{aligned}$$ Note, that since we work in the unbroken phase, $\beta$ should not be regarded as a ratio of vacuum expectation values, but as the fine-tuned mixing angle which rotates the two MSSM Higgs doublets $H_u$ and $H_d$ into $\mathcal{H}$ and $\mathcal{A}$ as given in – [@Bagnaschi:2014rsa]. The fermionic contribution $\Delta \lambda ^{1{\ensuremath{\ell}},\chi}$ to the threshold correction of $\lambda$ originates from interactions of the Higgs boson with charginos $\tilde{\chi}^{+}_i$ ($i=1,2$) and neutralinos $\tilde{\chi}^0_i$ ($i=1,\ldots,4$) described by the interaction Lagrangian $$\begin{aligned} {\mathcal{L}}_\chi ={}& - \frac{g_2}{\sqrt{2}} h c_\beta (\overline{\tilde{\chi}^{+}_1} P_R \tilde{\chi}^{+} _2 + \overline{\tilde{\chi}^{+}_2} P_L \tilde{\chi}^{+} _1)- \frac{g_2}{\sqrt{2}} h s_\beta (\overline{\tilde{\chi}^{+}_2} P_R \tilde{\chi}^{+} _1 + \overline{\tilde{\chi}^{+}_1} P_L \tilde{\chi}^{+}_2 )\nonumber \\ & +i \frac{g_Y}{2\sqrt{2}} (c_\beta - s_\beta) h \overline{\tilde{\chi}^0_1} \gamma^5 \tilde{\chi}^0_3-\frac{g_Y}{2\sqrt{2}} (c_\beta + s_\beta)h \overline{\tilde{\chi}^0_1} \tilde{\chi}^0_4 \nonumber \\ & -i \frac{g_2}{2\sqrt{2}} (c_\beta - s_\beta) h \overline{\tilde{\chi}^0_2} \gamma^5 \tilde{\chi}^0_3+ \frac{g_2}{2\sqrt{2}} (c_\beta + s_\beta) h \overline{\tilde{\chi}^0_2} \tilde{\chi}^0_4 ,\end{aligned}$$ where $\overline{\tilde{\chi}^0_i} = (\tilde{\chi}^0_i)^T {\mathcal{C}}$ and $g_Y = \sqrt{3/5}\, g_1$. 
To calculate the one-loop threshold correction for $\lambda$, the following contributions with purely scalar and purely fermionic operators from our generic UOLEA are relevant, $$\begin{aligned} \frac{1}{\kappa} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} = \operatorname{tr}\Bigg\lbrace & \frac{1}{2} {\tilde{\mathcal{I}}}^{1} _{i} (\mathbf{X}_{\Phi \Phi})_{ii}+\frac{1}{2} {\tilde{\mathcal{I}}}[q^2]^{22} _{ij} [P_\mu, (\mathbf{X}_{\Phi \Phi})_{ij}] [P^\mu, (\mathbf{X}_{\Phi \Phi})_{ji}]\nonumber \\ & +\frac{1}{4} {\tilde{\mathcal{I}}}^{11} _{ij} (\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{ji}+\frac{1}{6} {\tilde{\mathcal{I}}}^{111} _{ijk}(\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{jk}(\mathbf{X}_{\Phi \Phi})_{ki} \nonumber \\ &+\frac{1}{8} {\tilde{\mathcal{I}}}^{1111} _{ijkl} (\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{jk} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li} + \frac{1}{2} {\tilde{\mathcal{I}}}^{1} _{i} (\mathbf{X}_{\Phi \phi})_{ij}(\mathbf{X}_{\phi \Phi})_{ji} \nonumber \\ & -\frac{1}{8} m_{\Xi i}m_{\Xi j} m_{\Xi k} m_{\Xi l} {\tilde{\mathcal{I}}}^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}(\mathbf{X}_{\Xi \Xi})_{jk} (\mathbf{X}_{\Xi \Xi})_{kl} (\mathbf{X}_{\Xi \Xi})_{li} \nonumber \\ & -\frac{1}{2} m_{\Xi i}m_{\Xi j} {\tilde{\mathcal{I}}}[q^2] ^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}(\mathbf{X}_{\Xi \Xi})_{jk}\gamma^\mu (\mathbf{X}_{\Xi \Xi})_{kl} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{li} \nonumber \\ & -\frac{1}{4} m_{\Xi i}m_{\Xi k} {\tilde{\mathcal{I}}}[q^2] ^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}\gamma^\mu(\mathbf{X}_{\Xi \Xi})_{jk} (\mathbf{X}_{\Xi \Xi})_{kl} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{li} \nonumber \\ & -\frac{1}{8} g_{\mu \nu \rho \sigma} {\tilde{\mathcal{I}}}[q^4] ^{1111} _{ijkl}\gamma^\mu (\mathbf{X}_{\Xi \Xi})_{ij}\gamma^\nu(\mathbf{X}_{\Xi \Xi})_{jk} \gamma^\rho (\mathbf{X}_{\Xi \Xi})_{kl} \gamma^\sigma (\mathbf{X}_{\Xi \Xi})_{li} \nonumber \\ & +\frac{1}{4} m_{\Xi 
i} m_{\Xi j}^3 {\tilde{\mathcal{I}}}^{13} _{ij} [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}][P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \nonumber \\ & -\frac{1}{2} {\tilde{\mathcal{I}}}[q^4] ^{22} _{ij} \gamma^\nu [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\nu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \nonumber \\ & - {\tilde{\mathcal{I}}}[q^4] ^{22} _{ij} \gamma^\nu [P_\nu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\mu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}] \Bigg\rbrace, \label{eq:UOLEAToLambda}\end{aligned}$$ where $\kappa=1/(4\pi)^2$. The operators containing covariant derivatives can be removed by a field-strength renormalization of the Higgs field to canonically normalize the kinetic term. This field renormalization propagates into every Higgs coupling that has a non-vanishing tree-level contribution and hence also into the quartic coupling. Next, we compute the $\mathbf{X}_{AB}$ matrices as the second derivatives of the Lagrangian with respect to the different kinds of fields. We start with $$\begin{aligned} \mathbf{X}_{\Phi \Phi}=\begin{pmatrix} X_{\Sigma ^* \Sigma} && X_{\Sigma ^{*} \Sigma ^{*}} && X_{\Sigma^* \Theta} \\ X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\ X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix} \label{eq: heavy-scalar-heavy-scalar}\end{aligned}$$ and define $$\begin{aligned} \Sigma &= \begin{pmatrix} {\tilde{u}}_{Li} & {\tilde{u}}_{Ri} & {\tilde{d}}_{Li} & {\tilde{d}}_{Ri} & {\tilde{e}}_{Li} & {\tilde{e}}_{Ri} & {\tilde{\nu}}_{Li} & H^{+} \end{pmatrix}^T, & \Theta &= \begin{pmatrix} A & H \end{pmatrix}^T ,\end{aligned}$$ where $i=1,2,3$ denotes the generation index. 
The non-vanishing derivatives with respect to two heavy scalar fields read $$\begin{aligned} X_{{\tilde{u}}_{Li}^* {\tilde{u}}_{Lj}}&=X_{{\tilde{u}}_{Li} {\tilde{u}}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2-\frac{1}{5}g_1^2\right)+\delta_{3i}\delta_{3j}\frac{g_t^2}{2}h^2, \\ X_{{\tilde{u}}_{Ri}^* {\tilde{u}}_{Rj}}&=X_{{\tilde{u}}_{Ri} {\tilde{u}}_{Rj}^*}=\frac{1}{10}c_{2\beta}h^2 \delta_{ij}g_1^2+\delta_{3i}\delta_{3j}\frac{g_t^2}{2}h^2, \\ X_{{\tilde{d}}_{Li}^* {\tilde{d}}_{Lj}}&=X_{{\tilde{d}}_{Li} {\tilde{d}}_{Lj}^*}=-\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2+\frac{1}{5}g_1^2\right), \\ X_{{\tilde{d}}_{Ri}^* {\tilde{d}}_{Rj}}&=X_{{\tilde{d}}_{Ri} {\tilde{d}}_{Rj}^*}=\frac{1}{20}c_{2\beta}h^2 \delta_{ij}g_1^2, \\ X_{{\tilde{e}}_{Li}^* {\tilde{e}}_{Lj}}&=X_{{\tilde{e}}_{Li} {\tilde{e}}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2-\frac{3}{5}g_1^2\right), \\ X_{{\tilde{e}}_{Ri}^* {\tilde{e}}_{Rj}}&=X_{{\tilde{e}}_{Ri} {\tilde{e}}_{Rj}^*}=-\frac{1}{20}c_{2\beta}h^2 \delta_{ij}g_1^2, \\ X_{{\tilde{\nu}}_{Li}^* {\tilde{\nu}}_{Lj}}&=X_{{\tilde{\nu}}_{Li} {\tilde{\nu}}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2+\frac{3}{5}g_1^2\right), \\ X_{H^+ H^-}&=X_{H^- H^+}=\frac{1}{8}h^2 \left[(1+s_{2\beta}^2)g_2^2-\frac{3}{5}g_1^2 c_{2\beta}^2\right] \\ X_{AA}&=-\frac{1}{16}c_{2\beta}^2\left(\frac{3}{5}g_1^2+g_2^2\right)h^2, \\ X_{HH}&=\frac{1}{16}(2s_{2\beta}^2-1)\left(\frac{3}{5}g_1^2+g_2^2\right)h^2, \\ X_{{\tilde{u}}_{Li}^* {\tilde{u}}_{Rj}}&=X_{{\tilde{u}}_{Li} {\tilde{u}}_{Rj}^*}=\frac{1}{\sqrt{2}}\delta_{3i}\delta_{3j}g_t X_t h.\end{aligned}$$ Given these derivatives we find that $\mathbf{X}_{\Phi \Phi}$ is block-diagonal with the blocks being $$\begin{aligned} X_{\Sigma^* \Sigma}&=\begin{pmatrix} X_{{\tilde{u}}_{Li}^* {\tilde{u}}_{Lj}} & X_{{\tilde{u}}_{Li}^* {\tilde{u}}_{Rj}} & \mathbf{0}_{1\times 6} \\ X_{{\tilde{u}}_{Ri}^* {\tilde{u}}_{Lj}} & X_{{\tilde{u}}_{Ri}^* {\tilde{u}}_{Rj}} & \mathbf{0}_{1\times 6} \\ 
\mathbf{0}_{6\times 1} & \mathbf{0}_{6\times 1} & X_{\Pi^* \Pi} \end{pmatrix}, \\ X_{\Pi^* \Pi}&={\mathop{\rm diag}}(X_{{\tilde{d}}_{Li}^* {\tilde{d}}_{Lj}},X_{{\tilde{d}}_{Ri}^* {\tilde{d}}_{Rj}},X_{{\tilde{e}}_{Li}^* {\tilde{e}}_{Lj}},X_{{\tilde{e}}_{Ri}^* {\tilde{e}}_{Rj}},X_{{\tilde{\nu}}_{Li}^* {\tilde{\nu}}_{Lj}},X_{H^+ H^-}), \\ X_{\Sigma \Sigma^*}&=\begin{pmatrix} X_{{\tilde{u}}_{Li} {\tilde{u}}_{Lj}^*} & X_{{\tilde{u}}_{Li} {\tilde{u}}_{Rj}^*} & \mathbf{0}_{1\times 6} \\ X_{{\tilde{u}}_{Ri} {\tilde{u}}_{Lj}^*} & X_{{\tilde{u}}_{Ri} {\tilde{u}}_{Rj}^*} & \mathbf{0}_{1\times 6} \\ \mathbf{0}_{6\times 1} & \mathbf{0}_{6\times 1} & X_{\Pi \Pi^*} \end{pmatrix}, \\ X_{\Pi \Pi^*}&={\mathop{\rm diag}}(X_{{\tilde{d}}_{Li} {\tilde{d}}_{Lj}^*},X_{{\tilde{d}}_{Ri} {\tilde{d}}_{Rj}^*},X_{{\tilde{e}}_{Li} {\tilde{e}}_{Lj}^*},X_{{\tilde{e}}_{Ri} {\tilde{e}}_{Rj}^*},X_{{\tilde{\nu}}_{Li} {\tilde{\nu}}_{Lj}^*},X_{H^- H^+}), \\ X_{\Theta \Theta}&={\mathop{\rm diag}}(X_{AA},X_{HH}),\end{aligned}$$ where $\mathbf{0}_{m\times n}$ denotes the $m \times n$ matrix of only zeros. We next calculate $\mathbf{X}_{\phi \Phi}$ and $\mathbf{X}_{\Phi\phi}$, which contain derivatives with respect to one heavy and one light scalar field. We define the light scalar field multiplets as $$\begin{aligned} \sigma &= (G^+), & \theta &= \begin{pmatrix} h & G^0 \end{pmatrix}^T.\end{aligned}$$ As discussed in [section \[sec: intro\]]{} the derivatives w.r.t. the fields are evaluated at the background field configurations, and the heavy background fields are expressed in terms of the light ones using a local operator expansion.[^3] This corresponds to an expansion in $\Box/M^2$ for a heavy scalar field of mass $M$ and hence it leads to contributions suppressed by at least $1/M^2$. Since we are not interested in these suppressed contributions here, we only consider derivatives of the Lagrangian which exclusively contain light background fields and set all other derivatives to zero. 
The non-vanishing derivatives are given by $$\begin{aligned} X_{Hh} &= X_{hH}=\frac{3}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2 ,\\ X_{AG^0} &= X_{G^0A}=-\frac{1}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2, \\ X_{H^{+}G^{-}} &= X_{H^{-}G^{+}}=-\frac{1}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2.\end{aligned}$$ We then find that $\mathbf{X}_{\Phi \phi}$ is block-diagonal with the blocks being $$\begin{aligned} X_{\Sigma^* \sigma}&=\begin{pmatrix} \mathbf{0}_{7 \times 1} \\ X_{H^{-} G^{+}} \end{pmatrix}, \\ X_{\Sigma \sigma^*}&=\begin{pmatrix} \mathbf{0}_{7 \times 1} \\ X_{H^{+} G^{-}} \end{pmatrix}, \\ X_{\Theta \theta}&=\begin{pmatrix} 0 & X_{A G^0}\\ X_{Hh} & 0 \end{pmatrix}.\end{aligned}$$ Similarly, $\mathbf{X}_{\phi \Phi}$ is block-diagonal with diagonal entries $$\begin{aligned} X_{\sigma^* \Sigma}&=\begin{pmatrix} \mathbf{0}_{1 \times 7} & X_{G^{-} H^{+}} \end{pmatrix}, \\ X_{\sigma \Sigma^*}&=\begin{pmatrix} \mathbf{0}_{1 \times 7} & X_{G^{+} H^{-}} \end{pmatrix}, \\ X_{\theta \Theta}&=\begin{pmatrix} 0 & X_{h H}\\ X_{G^0 A} & 0 \end{pmatrix}.\end{aligned}$$ Finally, we need the derivatives with respect to two heavy fermions to construct the matrix $\mathbf{X}_{\Xi \Xi}$. 
We define $$\begin{aligned} \Omega &= \begin{pmatrix} \tilde{\chi}^+ _1 & \tilde{\chi}^+ _2 \end{pmatrix}^T, & \Lambda &= \begin{pmatrix} \tilde{\chi}^0 _1 & \tilde{\chi}^0 _2 & \tilde{\chi}^0 _3 & \tilde{\chi}^0 _4 \end{pmatrix}^T\end{aligned}$$ and the matrix $\mathbf{X}_{\Xi \Xi}$ is again block-diagonal with the non-vanishing entries $$\begin{aligned} X_{\bar{\Omega} \Omega}&={\mathcal{C}}^{-1} X^T_{\Omega \bar{\Omega}} {\mathcal{C}}^{-1}=-\frac{g_2}{\sqrt{2}}h\begin{pmatrix} 0 & c_\beta P_R+ s_\beta P_L \\ c_\beta P_L+s_\beta P_R & 0 \end{pmatrix}, \\ {\mathcal{C}}^{-1} X_{\Lambda \Lambda}&=\frac{h}{2\sqrt{2}}\begin{pmatrix} 0 & 0 & i g_Y(c_\beta-s_\beta)\gamma^5 & -g_Y(c_\beta+s_\beta) \\ 0 & 0 & -ig_2(c_\beta-s_\beta)\gamma^5 & g_2(c_\beta+s_\beta) \\ i g_Y(c_\beta-s_\beta)\gamma^5 & -ig_2(c_\beta-s_\beta)\gamma^5 & 0 & 0 \\ -g_Y(c_\beta+s_\beta) & g_2(c_\beta+s_\beta) & 0 & 0 \end{pmatrix}, \end{aligned}$$ where the relations of [appendix \[sec: spinor algebra\]]{} were used to simplify the expressions. Note, that in the calculation of $X_{\Lambda \Lambda}$ for a given Majorana fermion $\lambda$ the two fields $\bar{\lambda}$ and $\lambda$ are not independent, but are related via $\bar{\lambda}=\lambda^T {\mathcal{C}}$. 
Inserting all of the derivatives into , summing over all indices and canonically normalizing the kinetic term for the SM-like Higgs boson as $$\begin{aligned} h ={}& \left(1 - \frac{1}{2} \delta Z_h\right) \hat{h}, \\ \delta Z_h ={}& -6 g_t^2 X_t^2 {\tilde{\mathcal{I}}}[q^2]_{{\tilde{q}}{\tilde{u}}}^{22}+\frac{s_{2 \beta}}{2} \mu \left(g^2_Y M_1 \mu^2 {\tilde{\mathcal{I}}}^{13}_{1\mu}+g^2_Y M_1^3 {\tilde{\mathcal{I}}}^{31}_{1\mu}-3g^2_2 M_2 \mu^2 {\tilde{\mathcal{I}}}^{13}_{2\mu}-3g^2_2 M_2^3 {\tilde{\mathcal{I}}}^{31}_{2\mu}\right) \nonumber \\& +2(2+d)\left(-g_Y^2 {\tilde{\mathcal{I}}}[q^4]^{22}_{1\mu}+3g_2^2 {\tilde{\mathcal{I}}}[q^4]^{22}_{2\mu}\right) ,\end{aligned}$$ one finds the following effective Lagrangian $$\begin{aligned} {\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace}}^{1{\ensuremath{\ell}}} = \frac{1}{2}(\partial \hat{h})^2 - \frac{\lambda}{8} \hat{h}^4 + \cdots\end{aligned}$$ with $$\begin{aligned} \lambda &= \frac{1}{4} \left( \frac{3}{5} g_1^2 + g_2^2 \right) c_{2\beta}^2 + \kappa \Delta\lambda^{1{\ensuremath{\ell}}} , \\ \Delta\lambda^{1{\ensuremath{\ell}}} &= \Delta\lambda^{1{\ensuremath{\ell}},\text{reg}} + \Delta\lambda^{1{\ensuremath{\ell}},\phi} + \Delta\lambda^{1{\ensuremath{\ell}},\chi},\end{aligned}$$ and $$\begin{aligned} \Delta \lambda^{1{\ensuremath{\ell}},\phi} ={}& g_t^4 \left[ -3 X_t^4 {\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}}^{1111} -6 X_t^2 \left({\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{q}}{\tilde{u}}}^{111} + {\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{u}}{\tilde{u}}}^{111}\right) -3 \left({\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{q}}}^{11} + {\tilde{\mathcal{I}}}_{{\tilde{u}}{\tilde{u}}}^{11}\right) \right] \nonumber \\ &+\frac{3}{10} g_t^2 c_{2\beta} \Big\{ X_t^2 \left[ 2 c_{2\beta} \left(3 g_1^2+5g_2^2\right) {\tilde{\mathcal{I}}}[q^2]_{{\tilde{q}}{\tilde{u}}}^{22} +\left(g_1^2-5 g_2^2\right) {\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{q}}{\tilde{u}}}^{111} -4 g_1^2 
{\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{u}}{\tilde{u}}}^{111}\right] \nonumber \\ &~~~~~~~~~~~~~~~~ + \left(g_1^2-5 g_2^2\right) {\tilde{\mathcal{I}}}_{{\tilde{q}}{\tilde{q}}}^{11} -4 g_1^2 {\tilde{\mathcal{I}}}_{{\tilde{u}}{\tilde{u}}}^{11}\Big\} \nonumber \\ & -\frac{c_{2\beta}^2}{200} \sum_{i=1}^3 \Big[ 3 \left(g_1^4+25 g_2^4\right) {\tilde{\mathcal{I}}}_{{\tilde{q}}_i{\tilde{q}}_i}^{11} +24 g_1^4 {\tilde{\mathcal{I}}}_{{\tilde{u}}_i{\tilde{u}}_i}^{11} +6 g_1^4 {\tilde{\mathcal{I}}}_{{\tilde{d}}_i{\tilde{d}}_i}^{11} \nonumber \\ &~~~~~~~~~~~~~~~ +\left(9 g_1^4+25 g_2^4\right) {\tilde{\mathcal{I}}}_{{\tilde{l}}_i{\tilde{l}}_i}^{11} +18 g_1^4 {\tilde{\mathcal{I}}}_{{\tilde{e}}_i{\tilde{e}}_i}^{11} \Big] \nonumber \\ &+ \frac{1}{200} \Big\{6 c_{2\beta}^2 \left(c_{2\beta}^2-1\right) \left(3 g_1^2+5 g_2^2\right)^2 {\tilde{\mathcal{I}}}_{A0}^{11} - \Big[9 \left(3 c_{2\beta}^4-3 c_{2\beta}^2+1\right) g_1^4 \nonumber \\ &~~~~~~~~~~ +30 \left(3 c_{2\beta}^4-4 c_{2\beta}^2+1\right) g_1^2 g_2^2+25 \left(3 c_{2\beta}^4-5 c_{2\beta}^2+3\right) g_2^4\Big] {\tilde{\mathcal{I}}}_{AA}^{11}\Big\},\\ \Delta\lambda^{1{\ensuremath{\ell}},\chi} ={}& -\frac{1}{4} \Big\{-d \big(2 g_Y^4 M_1^2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{1 \mu} + 2 g_2^4 M_2^2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} + g_Y^4 \mu^2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{1 \mu} \nonumber \\ &\qquad ~~~~~~~~ - g_Y^4 \mu^2 c_{4 \beta} {\tilde{\mathcal{I}}}[q^2] ^{22} _{1 \mu} + g_2^4 \mu^2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} - g_2^4 \mu^2 c_{4 \beta} {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} \nonumber \\ &\qquad ~~~~~~~~ + 4 g_Y^2 g_2^2 M_1 M_2 {\tilde{\mathcal{I}}}[q^2] ^{112} _{1 2 \mu} + 2 g_Y^2 g_2^2 \mu^2 {\tilde{\mathcal{I}}}[q^2] ^{112} _{1 2 \mu} - 2 g_Y^2 g_2^2 \mu^2 c_{4 \beta} {\tilde{\mathcal{I}}}[q^2] ^{112} _{1 2 \mu}\big) \nonumber \\ &\qquad~~ - d (2 + d) \big(2 g_Y^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{1 \mu} + 2 g_2^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{2 \mu} + 4 g_Y^2 g_2^2 
{\tilde{\mathcal{I}}}[q^4] ^{112} _{1 2 \mu}\big) \nonumber \\ &\qquad~~ - g_2^4 \big[2d (2 + d) (3 + c_{4 \beta}) {\tilde{\mathcal{I}}}[q^4] ^{22} _{2 \mu} + 16 c_{\beta} s_{\beta} (d M_2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} (\mu + M_2 c_{\beta} s_{\beta}) \nonumber \\ &\qquad ~~~~~~~~~ + \mu \{M_2^2 \mu c_{\beta} {\tilde{\mathcal{I}}}^{22} _{2 \mu} s_{\beta} + d {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} (M_2 + \mu c_{\beta} s_{\beta})\})\big] \nonumber \\ &\qquad~~ - 4 d \mu \big(2 g_Y^4 M_1 {\tilde{\mathcal{I}}}[q^2] ^{22} _{1 \mu} + 2 g_2^4 M_2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{2 \mu} + 2 g_Y^2 g_2^2 M_1 {\tilde{\mathcal{I}}}[q^2] ^{112} _{1 2 \mu} \nonumber \\ &\qquad ~~~~~~~~~~~ + 2 g_Y^2 g_2^2 M_2 {\tilde{\mathcal{I}}}[q^2] ^{112} _{1 2 \mu}\big) s_{2 \beta} \nonumber \\ &\qquad~~ - 2 \mu^2 \big(g_Y^4 M_1^2 {\tilde{\mathcal{I}}}^{22} _{1 \mu} + g_2^2 M_2 (g_2^2 M_2 {\tilde{\mathcal{I}}}^{22} _{2 \mu} + g_Y^2 M_1 2{\tilde{\mathcal{I}}}^{112} _{12 \mu} + )\big) s_{2 \beta}^2 \nonumber \\ &\qquad~~ - 2 g_2^2 (g_Y^2 + g_2^2) c_{ 2 \beta}^2 \big(-4 (2 + d) {\tilde{\mathcal{I}}}[q^4] ^{22} _{2 \mu} + M_2 \mu (\mu^2 {\tilde{\mathcal{I}}}^{13} _{2 \mu} + M_2^2 {\tilde{\mathcal{I}}}^{31} _{2 \mu}) s_{2 \beta}\big)\nonumber \\ &\qquad~~ - (g_Y^2 + g_2^2) c_{ 2 \beta}^2 \big(-4 (2 + d) g_Y^2 {\tilde{\mathcal{I}}}[q^4] ^{22} _{1 \mu} - 4 (2 + d) g_2^2 {\tilde{\mathcal{I}}}[q^4] ^{22} _{2 \mu} \nonumber \\ &\qquad ~~~~~~ + \mu \{g_Y^2 M_1 \mu^2 {\tilde{\mathcal{I}}}^{13} _{1 \mu} + g_Y^2 M_1^3 {\tilde{\mathcal{I}}}^{31} _{1 \mu} + g_2^2 M_2 (\mu^2 {\tilde{\mathcal{I}}}^{31} _{2 \mu} + M_2^2 {\tilde{\mathcal{I}}}^{31} _{2 \mu})\} s_{2 \beta}\big)\Big\}.\end{aligned}$$ The subscripts $1$ and $2$ of the loop functions are shorthand for $M_1$ and $M_2$, respectively. The terms involving $d = 4 - {\epsilon}$ originate from contractions of gamma matrices and metric tensors, see [appendix \[sec:DREG\_DRED\]]{}. 
Note, that $\lambda$ is expressed entirely in terms of the MSSM gauge couplings, in contrast to [@Bagnaschi:2014rsa]. It is sensible to regularize the MSSM using dimensional reduction (DRED) [@Siegel:1979wq], whereas the SM is more naturally regularized in dimensional regularization (DREG) [@Bollini:1972ui; @Ashmore:1972uj; @Cicuta:1972jf; @tHooft:1972tcz; @tHooft:1973mfk]. Such a regularization scheme change leads to further contributions to the threshold correction denoted by $\Delta\lambda^{1{\ensuremath{\ell}},\text{reg}}$, which can be obtained using the DRED–DREG regularization scheme translating operators presented in [@Summ:2018oko]. This contribution originates from the operator $$\begin{aligned} \frac{1}{\kappa} {\epsilon}{\mathcal{L}}_{{\ensuremath{\text{EFT}}\xspace},{\epsilon}}^{1{\ensuremath{\ell}}} = \frac{1}{2} \operatorname{tr}\{{\ensuremath{\breve{X}}}^{\mu \nu}_{{\epsilon}{\epsilon}} {\ensuremath{\breve{X}}}_{{\epsilon}{\epsilon}\mu \nu} \}, \label{eq: epsilonconttolambda}\end{aligned}$$ where on the r.h.s. ${\epsilon}$ denotes all epsilon scalars that couple to the Higgs and $$\begin{aligned} {\ensuremath{\breve{X}}}^{\mu \nu}_{{\epsilon}{\epsilon}} = {\ensuremath{\breve{g}}}^\mu_\sigma {\ensuremath{\breve{g}}}^\nu_\rho {\ensuremath{\mathring{X}}}^{\sigma\rho}_{{\epsilon}{\epsilon}}\end{aligned}$$ is the projection of the $4$-dimensional ${\ensuremath{\mathring{X}}}^{\sigma\rho}_{{\epsilon}{\epsilon}}$ onto the ${\epsilon}$-dimensional $Q{\epsilon}S$ space [@Stockinger:2005gx; @Summ:2018oko] with ${\ensuremath{\breve{g}}}^{\mu\nu}{\ensuremath{\breve{g}}}_{\mu\nu} = {\epsilon}$, see [appendix \[sec:DREG\_DRED\]]{}. 
In the MSSM we have the following couplings to epsilon scalars to the SM-like doublet $\mathcal{H}$, $$\begin{aligned} {\mathcal{L}}_{{\epsilon}\mathcal{H}}=\mathcal{H}^*_i {\ensuremath{\breve{g}}}_{\mu \nu}\left(g_2 ^2 T^a_{ij} T^b_{jl} a^{a\mu} a^{b\nu}+\sqrt{\frac{3}{5}} g_1 g_2 T^a_{il} a^{a\mu} b^\nu +\frac{3}{20}g_1^2 b^\mu b^\nu \delta_{il}\right)\mathcal{H}_l,\end{aligned}$$ where the indices $i,j,l$ are $SU(2)_L$ indices of the fundamental representation with the generators $T^a_{ij}$. The fields $a^{a \mu}$ and $b^\mu$ denote the epsilon scalars corresponding to $SU(2)_L$ and $U(1)_Y$, respectively. One obtains the derivative $$\begin{aligned} {\ensuremath{\breve{X}}}^{\mu \nu}_{{\epsilon}{\epsilon}}=-{\ensuremath{\breve{g}}}^{\mu \nu} \begin{pmatrix} \mathcal{H}^{*}_i g_2^2 \{T^a,T^b\}_{il} \mathcal{H}_l & \sqrt{\frac{3}{5}} g_1 g_2 \mathcal{H}^{*}_i T^a _{il} \mathcal{H}_l \\ \sqrt{\frac{3}{5}} g_1 g_2 \mathcal{H}^{*}_i T^a _{il} \mathcal{H}_l & \frac{3}{10} g_1^2 \mathcal{H}^{*}_i \mathcal{H}_i \end{pmatrix}.\end{aligned}$$ Inserting this into we obtain $$\begin{aligned} \Delta\lambda^{1{\ensuremath{\ell}},\text{reg}} &= - \frac{9}{100}g_1^4 - \frac{3}{10} g_1^2 g_2^2 -\frac{3}{4} g_2^4.\end{aligned}$$ We do not find the term proportional to $c_{2\beta}^2$ given in [@Bagnaschi:2014rsa] since this term only arises once the tree-level expression for $\lambda$ is expressed in terms of SM gauge couplings, as opposed to MSSM parameters as in our case. Up to terms arising from this conversion the one-loop threshold corrections agree with the results of [@Bagnaschi:2014rsa]. 
Integrating out stops and the gluino from the MSSM {#sec:matching_MSSM_to_SMEFT} -------------------------------------------------- As a second nontrivial application we reproduce known threshold corrections from the MSSM to the Standard Model Effective Field Theory (SMEFT) from heavy stops and the gluino in the gaugeless limit ($g_1 = g_2 = 0$) in the unbroken phase and for vanishing Yukawa couplings, except for the one of the top quark. In particular we reproduce the Wilson coefficient of the higher-dimensional $\hat{h}^6$ operator calculated in [@Drozd:2015rsp; @Bagnaschi:2017xid]. Furthermore, this example application again represents a scenario, where a heavy Majorana fermion is integrated out and the formalism introduced in [section \[sec:calculation\]]{} must be carefully applied. We consider the following part of the MSSM Lagrangian $$\begin{aligned} \begin{split} {\mathcal{L}}_{\ensuremath{\text{MSSM}}\xspace}\supset{}& |\partial{\tilde{t}_{L}}|^2 - {m^2_{{\tilde{q}}}}|{\tilde{t}_{L}}|^2 + |\partial{\tilde{t}_{R}}|^2 - {m^2_{{\tilde{u}}}}|{\tilde{t}_{R}}|^2 + \frac{1}{2}({\tilde{g}^{a}})^T {\mathcal{C}}(i\slashed{\partial} - m_{{\tilde{g}^{}}}) {\tilde{g}^{a}}\\ & - \frac{y_t s_\beta}{\sqrt{2}} h \bar{t} t - \frac{y_t^2 s_\beta^2}{2} h^2 \left(|{\tilde{t}_{L}}|^2 + |{\tilde{t}_{R}}|^2\right) - \frac{y_t s_\beta X_t}{\sqrt{2}} h \left({\tilde{t}_{L}}^* {\tilde{t}_{R}} + {\text{h.c.}}\right) \\ & - \sqrt{2} g_3 \left[ \bar{t} P_R {\tilde{g}^{a}} T^a {\tilde{t}_{L}} - \bar{t} P_L {\tilde{g}^{a}} T^a {\tilde{t}_{R}} + {\tilde{t}_{L}}^* ({\tilde{g}^{a}})^T T^a {\mathcal{C}}P_L t - {\tilde{t}_{R}}^* ({\tilde{g}^{a}})^T T^a {\mathcal{C}}P_R t \right] , \end{split} \label{eq:LMSSM_stop}\end{aligned}$$ where we use the same notation as in [section \[sec:lambdacalc\]]{} and $g_3$ is the strong gauge coupling. 
The top quark is denoted as $t$ and is defined as a Dirac fermion built from the upper component of the left-handed quark-doublet $q_L$ and the right-handed top $t_R$. The gluino is denoted as ${\tilde{g}^{a}}$ and we have used the relation $\overline{{\tilde{g}^{a}}} = ({({\tilde{g}^{a}})^C})^T {\mathcal{C}}= ({\tilde{g}^{a}})^T {\mathcal{C}}$ to express in terms of the gluino Majorana spinor ${\tilde{g}^{a}}$. Upon integrating out the heavy stops and the gluino the Lagrangian of the effective theory becomes $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{SMEFT}}\xspace}\supset - \frac{y_t s_\beta}{\sqrt{2}} h \bar{t} t + {\mathcal{L}}_{\ensuremath{\text{SMEFT}}\xspace}^\text{1{\ensuremath{\ell}}}.\end{aligned}$$ In our limit the one-loop term ${\mathcal{L}}_{\ensuremath{\text{SMEFT}}\xspace}^\text{1{\ensuremath{\ell}}}$ receives contributions from the following generic operators from $$\begin{aligned} \frac{1}{\kappa} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} \supset{}& \frac{1}{2} {\tilde{\mathcal{I}}}^1 _i (\mathbf{X}_{\Phi \Phi})_{ii}+\frac{1}{4} {\tilde{\mathcal{I}}}^{11} _{ik} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{ki}+\frac{1}{6} {\tilde{\mathcal{I}}}^{111} _{lik} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li}\nonumber \\ & +\frac{1}{8} {\tilde{\mathcal{I}}}^{1111} _{likn} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{ni}\nonumber \\ & +\frac{1}{10} {\tilde{\mathcal{I}}}^{11111} _{iklnp} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{np} (\mathbf{X}_{\Phi \Phi})_{pi}\nonumber \\ & +\frac{1}{12} {\tilde{\mathcal{I}}}^{111111} _{iklnpr} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{np} (\mathbf{X}_{\Phi \Phi})_{pr} (\mathbf{X}_{\Phi \Phi})_{ri}\nonumber \\ & 
+\frac{1}{2} {\tilde{\mathcal{I}}}[q^2]^{22} _{ki} [P_\mu, (\mathbf{X}_{\Phi \Phi})_{ik}] [P^\mu, (\mathbf{X}_{\Phi \Phi})_{ki}]\nonumber \\ & -{\tilde{\mathcal{I}}}[q^2] ^{21}_{il} (\mathbf{X}_{\Phi \Xi})_{il} \gamma^\mu [P_\mu, (\mathbf{X}_{\Xi \Phi})_{li}]\nonumber \\ & - \frac{1}{2} m_{\Xi_{k}} {\tilde{\mathcal{I}}}^{111} _{ikl} (\mathbf{X}_{\Phi \Xi})_{ik} (\mathbf{X}_{\Xi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li} . \label{eq:L_SMEFT_operators}\end{aligned}$$ We furthermore set $P_\mu \equiv i \partial_\mu$ to omit contributions from gauge bosons. In our scenario we identify $\Sigma = ({\tilde{t}_{L}}, {\tilde{t}_{R}})$ as the vector of (complex) heavy stops and $\Lambda = {\tilde{g}^{a}}$ as the heavy gluino. From we then obtain the following non-vanishing derivatives $$\begin{aligned} (X_{{\tilde{t}_{L}}^* {\tilde{t}_{L}}})_{ij} &= (X_{{\tilde{t}_{L}} {\tilde{t}_{L}}^*})_{ij} = (X_{{\tilde{t}_{R}}^* {\tilde{t}_{R}}})_{ij} = (X_{{\tilde{t}_{R}} {\tilde{t}_{R}}^*})_{ij} = \frac{1}{2}(y_t s_\beta h)^2 \delta_{ij},\\ (X_{{\tilde{t}_{L}}^* {\tilde{t}_{R}}})_{ij} &= (X_{{\tilde{t}_{L}} {\tilde{t}_{R}}^*})_{ij} = (X_{{\tilde{t}_{R}}^* {\tilde{t}_{L}}})_{ij} = (X_{{\tilde{t}_{R}} {\tilde{t}_{L}}^*})_{ij} = \frac{1}{\sqrt{2}} y_t s_\beta h X_t \delta_{ij},\\ (X_{{\tilde{t}_{L}} {\tilde{g}^{a}}})_{i\alpha}^a &= (X_{{\tilde{g}^{a}} {\tilde{t}_{L}}})_{i\alpha}^a = -\sqrt{2} g_3 (\bar{t}_j P_R)_{\alpha} T^a_{ji}, \label{eq:sign_1}\\ (X_{{\tilde{t}_{R}} {\tilde{g}^{a}}})_{i\alpha}^a &= (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}})_{i\alpha}^a = \sqrt{2} g_3 (\bar{t}_j P_L)_{\alpha} T^a_{ji}, \label{eq:sign_2}\\ (X_{{\tilde{g}^{a}} {\tilde{t}_{L}}^*})_{i\alpha}^a &= (X_{{\tilde{t}_{L}}^* {\tilde{g}^{a}}})_{i\alpha}^a = \sqrt{2} g_3 T^a_{ij} ({\mathcal{C}}P_L t_j)_{\alpha},\\ (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}^*})_{i\alpha}^a &= (X_{{\tilde{t}_{R}}^* {\tilde{g}^{a}}})_{i\alpha}^a = -\sqrt{2} g_3 T^a_{ij} ({\mathcal{C}}P_R t_j)_{\alpha},\end{aligned}$$ where $i,j=1,2,3$ and 
$a=1,\ldots,8$ are color indices and $\alpha=1,\ldots,4$ is a 4-component spinor index. Note the flipped sign in eqs. – due to one anti-commutation of the spinor $\bar{t}$ with the derivative w.r.t.the spinor ${\tilde{g}^{a}}$. The bold derivative matrices thus become $$\begin{aligned} \mathbf{X}_{\Phi \Phi} &= \begin{pmatrix} X_{\Sigma ^* \Sigma} & X_{\Sigma ^* \Sigma ^*} \\ X_{\Sigma \Sigma} & X_{\Sigma \Sigma ^{*}} \end{pmatrix} = \begin{pmatrix} (X_{{\tilde{t}_{L}}^* {\tilde{t}_{L}}})_{ij} & (X_{{\tilde{t}_{L}}^* {\tilde{t}_{R}}})_{ij} & 0 & 0 \\ (X_{{\tilde{t}_{R}}^* {\tilde{t}_{L}}})_{ij} & (X_{{\tilde{t}_{R}}^* {\tilde{t}_{R}}})_{ij} & 0 & 0 \\ 0 & 0 & (X_{{\tilde{t}_{L}} {\tilde{t}_{L}}^*})_{ij} & (X_{{\tilde{t}_{L}} {\tilde{t}_{R}}^*})_{ij} \\ 0 & 0 & (X_{{\tilde{t}_{R}} {\tilde{t}_{L}}^*})_{ij} & (X_{{\tilde{t}_{R}} {\tilde{t}_{R}}^*})_{ij} \end{pmatrix} \\ &= \delta_{ij} \; \mathbf{1}_{2\times 2} \otimes \begin{pmatrix} \frac{1}{2}(y_t s_\beta h)^2 & \frac{1}{\sqrt{2}} y_t s_\beta h X_t \\ \frac{1}{\sqrt{2}} y_t s_\beta h X_t & \frac{1}{2}(y_t s_\beta h)^2 \end{pmatrix}, \\ \mathbf{X}_{\Phi \Xi} &= \begin{pmatrix} X_{\Sigma ^* \Lambda} \\ X_{\Sigma \Lambda} \end{pmatrix} = \begin{pmatrix} (X_{{\tilde{t}_{L}}^* {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{R}}^* {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{L}} {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{R}} {\tilde{g}^{a}}})_{i\alpha}^a \end{pmatrix} = \sqrt{2} g_3 \begin{pmatrix} T^a_{ij} ({\mathcal{C}}P_L t_j)_{\alpha} \\ -T^a_{ij} ({\mathcal{C}}P_R t_j)_{\alpha} \\ -(\bar{t}_j P_R)_{\alpha} T^a_{ji} \\ (\bar{t}_j P_L)_{\alpha} T^a_{ji} \end{pmatrix}, \\ \mathbf{X}_{\Xi \Phi} &= \begin{pmatrix} {\mathcal{C}}^{-1} X_{\Lambda \Sigma}, && {\mathcal{C}}^{-1} X_{\Lambda \Sigma ^*} \end{pmatrix} \\ &= ({\mathcal{C}}^{-1})_{\alpha\beta} \begin{pmatrix} (X_{{\tilde{g}^{a}} {\tilde{t}_{L}}})_{i\beta}^a, && (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}})_{i\beta}^a, && (X_{{\tilde{g}^{a}} 
{\tilde{t}_{L}}^*})_{i\beta}^a, && (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}^*})_{i\beta}^a \end{pmatrix} \\ &= \sqrt{2} g_3 ({\mathcal{C}}^{-1})_{\alpha\beta} \begin{pmatrix} -(\bar{t}_j P_R)_{\beta} T^a_{ji}, && (\bar{t}_j P_L)_{\beta} T^a_{ji}, && T^a_{ij} ({\mathcal{C}}P_L t_j)_{\beta}, && -T^a_{ij} ({\mathcal{C}}P_R t_j)_{\beta} \end{pmatrix} \\ &= \sqrt{2} g_3 \begin{pmatrix} -(\bar{t}_j P_R ({\mathcal{C}}^{-1})^T)_{\alpha} T^a_{ji}, && (\bar{t}_j P_L ({\mathcal{C}}^{-1})^T)_{\alpha} T^a_{ji}, && T^a_{ij} (P_L t_j)_{\alpha}, & -T^a_{ij} (P_R t_j)_{\alpha} \end{pmatrix} .\end{aligned}$$ By inserting the $\mathbf{X}_{AB}$ operators into and summing over all fields and colors we obtain $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^\text{1{\ensuremath{\ell}}} &= c_t h\bar{t}t + c_L\bar{t}i\slashed{\partial}P_Lt + c_R \bar{t}i\slashed{\partial}P_Rt + c_2' (\partial h)^2 + c_2 h^2 + c_4 h^4 + c_6 h^6 + \cdots,\end{aligned}$$ where $$\begin{aligned} c_t &= -\frac{4 \sqrt{2}}{3}\kappa g_3^2 y_t s_\beta m_{{\tilde{g}^{}}} X_t {\tilde{\mathcal{I}}}^{111} _{{\tilde{g}^{}}{\tilde{q}}{\tilde{u}}},\\ \begin{split} c_L &= \frac{16}{3}\kappa g_3^2 {\tilde{\mathcal{I}}}[q^2] ^{21} _{{\tilde{u}}{\tilde{g}^{}}}, \end{split}\\ c_R &= c_L|_{{\tilde{q}}\to {\tilde{u}}},\\ c_2' &= -3 \kappa (y_t s_\beta)^2 X_t^2 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{q}}{\tilde{u}}},\\ c_2 &= \frac{3}{2}\kappa (y_t s_\beta)^2 \left[{\tilde{\mathcal{I}}}^{1} _{{\tilde{q}}} + {\tilde{\mathcal{I}}}^{1} _{{\tilde{u}}} + X_t^2 {\tilde{\mathcal{I}}}^{11} _{{\tilde{q}}{\tilde{u}}}\right], \\ c_4 &= \frac{3}{8}\kappa (y_t s_\beta)^4 \left[ {\tilde{\mathcal{I}}}^{11} _{{\tilde{q}}{\tilde{q}}} + {\tilde{\mathcal{I}}}^{11} _{{\tilde{u}}{\tilde{u}}} + 2 X_t^2 ({\tilde{\mathcal{I}}}^{111} _{{\tilde{q}}{\tilde{q}}{\tilde{u}}} + {\tilde{\mathcal{I}}}^{111} _{{\tilde{q}}{\tilde{u}}{\tilde{u}}}) + X_t^4 {\tilde{\mathcal{I}}}^{1111} _{{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}}\right],\\ 
\begin{split} c_6 &= \frac{1}{8}\kappa (y_t s_\beta)^6 \big[ {\tilde{\mathcal{I}}}^{111} _{{\tilde{q}}{\tilde{q}}{\tilde{q}}} + {\tilde{\mathcal{I}}}^{111} _{{\tilde{u}}{\tilde{u}}{\tilde{u}}} + 3 X_t^2 ( {\tilde{\mathcal{I}}}^{1111} _{{\tilde{q}}{\tilde{q}}{\tilde{q}}{\tilde{u}}} + {\tilde{\mathcal{I}}}^{1111} _{{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}} + {\tilde{\mathcal{I}}}^{1111} _{{\tilde{q}}{\tilde{u}}{\tilde{u}}{\tilde{u}}} ) \\ & ~~~~~~~~~~~~~~~~~~ + 3 X_t^4 ( {\tilde{\mathcal{I}}}^{11111} _{{\tilde{q}}{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}} + {\tilde{\mathcal{I}}}^{11111} _{{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}{\tilde{u}}} ) + X_t^6 {\tilde{\mathcal{I}}}^{111111} _{{\tilde{q}}{\tilde{q}}{\tilde{q}}{\tilde{u}}{\tilde{u}}{\tilde{u}}} \big] . \end{split}\end{aligned}$$ To canonically normalize the kinetic terms of ${\mathcal{L}}_{\ensuremath{\text{SMEFT}}\xspace}$ we re-define the Higgs and the top quark field as $$\begin{aligned} h &= \left(1 - \frac{1}{2} \delta Z_h\right) \hat{h} , \\ t_L &= \left(1 - \frac{1}{2} \delta Z_L\right) \hat{t}_L , \\ t_R &= \left(1 - \frac{1}{2} \delta Z_R\right) \hat{t}_R ,\end{aligned}$$ where the field renormalizations $\delta Z_{h/L/R}$ are given by $$\begin{aligned} \delta Z_h &= 2c_2', \\ \delta Z_L &= c_L, \\ \delta Z_R &= c_R.\end{aligned}$$ If we parameterize the [$\text{SMEFT}$]{}Lagrangian as $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{SMEFT}}\xspace}\supset - \frac{g_t}{\sqrt{2}} \hat{h} \bar{\hat{t}} \hat{t} + \frac{m^2}{2} \hat{h}^2 - \frac{\lambda}{8} \hat{h}^4 - \frac{\tilde{c}_6}{8} \hat{h}^6,\end{aligned}$$ then the SMEFT parameters $g_t$, $\lambda$ and $m^2$ are given by $$\begin{aligned} g_t &= y_t s_\beta \left[1 - \frac{1}{2}(c_L + c_R) - c_2' - \frac{\sqrt{2}c_t}{y_t s_\beta} \right], \\ m^2 &= 2 c_2,\\ \lambda &= -8 c_4 ,\\ \tilde{c}_6 &= -8 c_6,\end{aligned}$$ which agrees with the results calculated in [@Bagnaschi:2014rsa; @Bagnaschi:2017xid; @Huo:2015nka; @Drozd:2015rsp].[^4] 
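The final matching step above is simple arithmetic on the one-loop coefficients. As a compact numerical illustration, the sketch below (illustrative code, not part of the paper; all input values are hypothetical) combines the field renormalizations and coefficients into the SMEFT parameters according to the relations just given.

```python
import math

def match_smeft(yt, sb, c_t, c_L, c_R, c2p, c2, c4, c6):
    """Map the one-loop coefficients to the SMEFT parameters
    (g_t, m^2, lambda, c6-tilde) after canonical normalization
    of the Higgs and top-quark kinetic terms."""
    g_t = yt * sb * (1.0 - 0.5 * (c_L + c_R) - c2p
                     - math.sqrt(2.0) * c_t / (yt * sb))
    m2 = 2.0 * c2
    lam = -8.0 * c4
    c6t = -8.0 * c6
    return g_t, m2, lam, c6t

# With all loop coefficients switched off, the tree-level
# relation g_t = y_t * sin(beta) is recovered.
```

Switching off all loop coefficients reproduces the tree-level relations, which is a quick consistency check of the signs in the matching formulas.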
Integrating out the gluino from the MSSM with light stops {#sec: gluinoOut}
---------------------------------------------------------

In this section we calculate some of the terms that arise when integrating out the gluino from the MSSM. This [$\text{EFT}$]{} scenario is relevant when there is a large hierarchy between the gluino mass and the stop masses in the MSSM. This example is also a direct application of most of the operators calculated in [section \[sec:calc\]]{}, in particular operators where Majorana and Dirac fermions appear in loops at the same time. We consider the following part of the MSSM Lagrangian $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{MSSM}}\xspace}\supset{}& |\partial{\tilde{t}_{L}}|^2 - {m^2_{{\tilde{q}}}}|{\tilde{t}_{L}}|^2 + |\partial{\tilde{t}_{R}}|^2 - {m^2_{{\tilde{u}}}}|{\tilde{t}_{R}}|^2 + \frac{1}{2}({\tilde{g}^{a}})^T {\mathcal{C}}(i\slashed{\partial} - m_{{\tilde{g}^{}}}) {\tilde{g}^{a}} \nonumber \\ & -\sqrt{2} g_3 \left( \bar{t} P_R {\tilde{g}^{a}} T^a {\tilde{t}_{L}} - \bar{t} P_L {\tilde{g}^{a}} T^a {\tilde{t}_{R}} + {\tilde{t}_{L}}^* ({\tilde{g}^{a}})^T T^a {\mathcal{C}}P_L t - {\tilde{t}_{R}}^* ({\tilde{g}^{a}})^T T^a {\mathcal{C}}P_R t \right)\nonumber \\ & +\left(-y_t^2+\frac{g_3^2}{2}\right)({\tilde{t}_{L}}^*{\tilde{t}_{R}})({\tilde{t}_{L}}{\tilde{t}_{R}}^*)-\frac{g_3^2}{6}|{\tilde{t}_{L}}|^2|{\tilde{t}_{R}}|^2,\end{aligned}$$ where we use the same notation as in [section \[sec:matching\_MSSM\_to\_SMEFT\]]{}, with $t$ being the top quark, defined as a Dirac fermion, and ${\tilde{g}^{a}} = {({\tilde{g}^{a}})^C}$ denoting the gluino, which is a Majorana fermion. The complex scalar fields ${\tilde{t}_{L}}$ and ${\tilde{t}_{R}}$ represent the stops.
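The charge-conjugation matrix ${\mathcal{C}}$ appears in every gluino term above via the Majorana condition ${\tilde{g}^{a}} = ({\tilde{g}^{a}})^C$. Its defining properties can be checked numerically in an explicit representation; the sketch below uses the Dirac representation with ${\mathcal{C}} = i\gamma^2\gamma^0$, a conventional choice made here purely for illustration, since the derivation itself is representation independent.

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
zero = np.zeros((2, 2), complex)

def spatial_gamma(s):
    # gamma^k = ((0, sigma_k), (-sigma_k, 0)) in the Dirac representation
    return np.block([[zero, s], [-s, zero]])

g0 = np.block([[np.eye(2), zero], [zero, -np.eye(2)]])
gammas = [g0, spatial_gamma(s1), spatial_gamma(s2), spatial_gamma(s3)]

C = 1j * gammas[2] @ g0                  # charge-conjugation matrix
Cinv = np.linalg.inv(C)

antisymmetric = np.allclose(C.T, -C)     # C^T = -C
conjugation = all(np.allclose(Cinv @ g @ C, -g.T) for g in gammas)
# C^{-1} gamma^mu C = -(gamma^mu)^T for all mu
```

Both flags evaluate to `True`, confirming the antisymmetry and the conjugation property that are used implicitly when transposing the fermionic bilinears below.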
In the following we determine the one-loop Wilson coefficients of the following operators in the [$\text{EFT}$]{}: $$\begin{aligned} {\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} & \supset c_{t_L} \bar{t}_Li\slashed{\partial} t_L+c_{t_R} \bar{t}_Ri\slashed{\partial} t_R+c_{{\tilde{t}_{L}}} \partial_\mu{\tilde{t}_{L}}^*\partial^\mu {\tilde{t}_{L}}-\delta m_{{\tilde{q}}} ^2 |{\tilde{t}_{L}}|^2+c_{{\tilde{t}_{R}}} \partial_\mu{\tilde{t}_{R}}^*\partial^\mu {\tilde{t}_{R}}-\delta m_{{\tilde{u}}} ^2 |{\tilde{t}_{R}}|^2 \nonumber \\ & \quad +c^L_{41} \left({\tilde{t}_{Li}}^* {\tilde{t}_{Li}} \right)^2+c^L_{42} \left({\tilde{t}_{Li}}^* {\tilde{t}_{Lj}} \right) \left({\tilde{t}_{Lj}}^* {\tilde{t}_{Li}}\right)+c^R_{4} \left({\tilde{t}_{R}}^* {\tilde{t}_{R}}\right)^2\nonumber \\ & \quad +c^{LR}_{41} \left({\tilde{t}_{Li}}^* {\tilde{t}_{Li}}\right)\left({\tilde{t}_{Rj}}^* {\tilde{t}_{Rj}}\right)+c^{LR}_{42} \left({\tilde{t}_{Li}}^* {\tilde{t}_{Lj}}\right)\left({\tilde{t}_{Rj}}^* {\tilde{t}_{Ri}}\right)+ c_G G^a_{\mu \nu} G_a ^{\mu \nu}\nonumber \\ & \quad +[c^{LL} _{51} (\bar{t}_{Li} T^a _{ij} {\tilde{t}_{Lj}})({t^C}_{Rk} T^a _{kl} {\tilde{t}_{Ll}})+c^{LL} _{52} ({\tilde{t}_{Li}}^* T^a _{ij} \overline{{t_{Rj}^C}})({\tilde{t}_{Lk}}^* T^a _{kl} t_{Ll})+(L \leftrightarrow R)]\nonumber \\ & \quad +[c^{LR} _{51} (\bar{t}_{Li} T^a _{ij} {\tilde{t}_{Lj}})({\tilde{t}_{Rk}}^* T^a_{kl} t_{Rl})+c^{LR} _{52} ({\tilde{t}_{Li}} {\tilde{t}_{Ri}}^*) (\bar{t}_{Lj} t_{Rj})+(L \leftrightarrow R)] \nonumber \\ & \quad +c_{61} ^L (\tilde{t}_{Li}^* \tilde{t}_{Li})^3+c_{62} ^L (\tilde{t}_{Li}^* \tilde{t}_{Li})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Lk}^* \tilde{t}_{Lj})+c_{63} ^L (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Lk}^* \tilde{t}_{Li})+c_6^R (\tilde{t}_{Ri}^* \tilde{t}_{Ri})^3 \nonumber \\ & \quad +[c_{61} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Li})^2(\tilde{t}_{Ri}^* \tilde{t}_{Ri}) +c_{62} ^{LR} (\tilde{t}_{Li}^* 
\tilde{t}_{Li})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Rk}^* \tilde{t}_{Rj})+c_{63} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Li})(\tilde{t}_{Rk}^* \tilde{t}_{Rk}) \nonumber \\ & \quad + c_{64} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Rk}^* \tilde{t}_{Ri})+c_{61} ^{RL} (\tilde{t}_{Ri}^* \tilde{t}_{Ri})^2(\tilde{t}_{Li}^* \tilde{t}_{Li}) +c_{62} ^{RL} (\tilde{t}_{Ri}^* \tilde{t}_{Ri})(\tilde{t}_{Rj}^* \tilde{t}_{Rk})(\tilde{t}_{Lk}^* \tilde{t}_{Lj})] \nonumber \\ & \quad +[c_{61} ^{L^\mu L_\mu}\left(\bar{t}_{Li} \gamma^\mu t_{Li}\right)\left(\bar{t}_{Lj} \gamma_\mu t_{Lj}\right)+c_{62} ^{L^\mu L_\mu}\left(\bar{t}_{Li} \gamma^\mu t_{Lj}\right)\left(\bar{t}_{Lj} \gamma_\mu t_{Li}\right)+(L \leftrightarrow R)]\nonumber \\ & \quad +c_{61}^{(LR)^\mu (RL)_\mu} \left(\overline{{t_{Ri}^C}} \gamma^\mu t_{Rj}\right)\left(\bar{t}_{Rj} \gamma_\mu {t^C}_{Ri}\right)+ c_{62}^{(LR)^\mu (RL)_\mu} \left(\overline{{t_{Rj}^C}} \gamma^\mu t_{Ri}\right)\left(\bar{t}_{Rj} \gamma_\mu {t^C}_{Ri}\right)\nonumber \\ & \quad + [c_{61} ^{LL}\left(\overline{{t_{Ri}^C}} t_{Li}\right)\left(\bar{t}_{Lj} {t^C}_{Rj}\right)+c_{62}^{LL}\left(\overline{{t_{Ri}^C}} t_{Lj}\right)\left(\bar{t}_{Lj} {t^C}_{Ri}\right)+(L\leftrightarrow R)]\nonumber \\ & \quad +c_{61} ^{(LR)(RL)} \left(\bar{t}_{Ri} t_{Lj}\right)\left(\bar{t}_{Lj} t_{Ri}\right) +c_{62} ^{(LR)(RL)} \left(\bar{t}_{Rj}t_{Li}\right)\left(\bar{t}_{Lj} t_{Ri}\right). \label{eq: gluonOutFirstEFTLag}\end{aligned}$$ These operators represent all one-loop stop interactions derived here in the gaugeless limit and in the unbroken phase, without contributions from higher-dimensional operators with covariant derivatives. Terms which involve SUSY particles beyond the stop are omitted for brevity. In this Lagrangian the color indices $i,j,k=1,2,3$ and $a=1,\ldots,8$ are written out explicitly.
Note that in general ${\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}}$ contains $SU(2)_L$ and $SU(3)_C$ invariant terms of the form $(\tilde{q}^\dagger_{Li}\tilde{q}_{Li})(\tilde{q}^\dagger_{Lj}\tilde{q}_{Lj})$ and $(\tilde{q}^\dagger_{Li}\tilde{q}_{Lj})(\tilde{q}^\dagger_{Lj}\tilde{q}_{Li})$, where the $SU(2)_L$ indices are contracted within parentheses, but the color indices are contracted differently among the terms. In the effective Lagrangian above, however, the corresponding terms with the couplings $c_{41}^L$ and $c_{42}^L$ have the same structure, because we have omitted the sbottom quark. The dimension 5 operators already receive contributions at tree level, which stem from the insertion of the gluino background field ${{\tilde{g}_{\text{cl}}}}$ into the Lagrangian of the MSSM. The necessary part of the gluino background field can be extracted from the equation of motion $$\begin{aligned} [{\mathcal{C}}(i\slashed{\partial}-m_{{\tilde{g}^{}}})]_{\alpha \beta} ({{\tilde{g}_{\text{cl}}}})_\beta^a=\sqrt{2} g_3 \left(- \bar{t}_{L \alpha} T^a {\tilde{t}_{L}} + \bar{t}_{R \alpha} T^a {\tilde{t}_{R}} + {\tilde{t}_{L}}^* T^a ({\mathcal{C}}t_L)_\alpha - {\tilde{t}_{R}}^* T^a ({\mathcal{C}}t_R)_\alpha \right),\end{aligned}$$ which yields $$\begin{aligned} ({{\tilde{g}_{\text{cl}}}})_\beta^a &= \sqrt{2} g_3 (i\slashed{\partial}-m_{{\tilde{g}^{}}})_{\beta \alpha}^{-1} \left[- (\bar{t}_L {\mathcal{C}}) _\alpha T^a {\tilde{t}_{L}} + (\bar{t}_R {\mathcal{C}})_\alpha T^a {\tilde{t}_{R}} + {\tilde{t}_{L}}^* T^a t_{L\alpha} - {\tilde{t}_{R}}^* T^a t_{R\alpha} \right] \\ &= \frac{\sqrt{2} g_3}{m_{{\tilde{g}^{}}}} \left[ (\bar{t}_L {\mathcal{C}}) _\beta T^a {\tilde{t}_{L}} - (\bar{t}_R {\mathcal{C}})_\beta T^a {\tilde{t}_{R}} - {\tilde{t}_{L}}^* T^a t_{L\beta} + {\tilde{t}_{R}}^* T^a t_{R\beta} + \cdots \right] , \label{eq: ClassicalGluinoField}\end{aligned}$$ where the ellipsis designates higher order terms of ${\ensuremath{\mathcal{O}(\partial/m_{{\tilde{g}^{}}})}}$ with at least one
derivative. Inserting this expression into both the kinetic term of the gluino and the interaction Lagrangian, one finds the tree-level values of $c_{5i} ^{AB}$ ($A,B\in \{L,R\}$) to be $$\begin{aligned} c_{51} ^{LL,{\ensuremath{\text{tree}}\xspace}}&=c_{52} ^{LL,{\ensuremath{\text{tree}}\xspace}}=c_{51} ^{RR,{\ensuremath{\text{tree}}\xspace}}=c_{52} ^{RR,{\ensuremath{\text{tree}}\xspace}}=\frac{g^2_3}{m_{{\tilde{g}^{}}}}, \\ c_{51} ^{LR,{\ensuremath{\text{tree}}\xspace}}&=c_{51} ^{RL,{\ensuremath{\text{tree}}\xspace}}=-\frac{2g^2_3}{m_{{\tilde{g}^{}}}}, \\ c_{52} ^{LR,{\ensuremath{\text{tree}}\xspace}}&=c_{52} ^{RL,{\ensuremath{\text{tree}}\xspace}}=0.\end{aligned}$$ At one-loop the relevant contributions from the UOLEA are $$\begin{aligned} \frac{1}{\kappa}{\mathcal{L}}_{\ensuremath{\text{EFT}}\xspace}^{1{\ensuremath{\ell}}} = \operatorname{tr}\Big\{&(-{\tilde{\mathcal{I}}}[q^4]^{31} _{{\tilde{g}^{}} 0} +\frac{m^2_{{\tilde{g}^{}}}}{12} {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}} 0}) \gamma_\mu [P^\nu,(\mathbf{X}_{\Xi \xi})^a_i] \gamma ^\mu [P_\nu,(\mathbf{X}_{\xi \Xi})^a_i] \nonumber \\ & +(-2{\tilde{\mathcal{I}}}[q^4]^{31} _{{\tilde{g}^{}} 0} +\frac{m^2_{{\tilde{g}^{}}}}{6} {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}} 0}) \gamma_\mu [P^\mu,(\mathbf{X}_{\Xi \xi})^a_i] \gamma ^\nu [P_\nu,(\mathbf{X}_{\xi \Xi})^a_i] \nonumber \\ & + (-{\tilde{\mathcal{I}}}[q^{2}]^{12} _{{\tilde{g}^{}}0}-2 m^2_{\phi_i} {\tilde{\mathcal{I}}}[q^2] ^{13} _{{\tilde{g}^{}}0}) (\mathbf{X}_{\phi \Xi})_i \gamma ^\mu [P_\mu,(\mathbf{X}_{\Xi \phi})_i]\nonumber \\ & +\frac{1}{4} {\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0} (\mathbf{X}_{\phi \Xi})_i \gamma^\mu (\mathbf{X}_{\Xi \phi})_j (\mathbf{X}_{\phi \Xi})_j \gamma_\mu (\mathbf{X}_{\Xi \phi})_i \nonumber \\ & -\frac{1}{2}m_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}^{12}_{{\tilde{g}^{}}0}(\mathbf{X}_{\phi \phi})_{ij} (\mathbf{X}_{\phi \Xi})_j (\mathbf{X}_{\Xi \phi})_i \nonumber \\ & +\frac{1}{4} m^2_{{\tilde{g}^{}}}
{\tilde{\mathcal{I}}}^{22}_{{\tilde{g}^{}}0} (\mathbf{X}_{\phi \Xi})_i (\mathbf{X}_{\Xi \phi})_j (\mathbf{X}_{\phi \Xi})_j (\mathbf{X}_{\Xi \phi})_i-\frac{1}{2} {\tilde{\mathcal{I}}}[q^2]^{11} _{{{\tilde{g}^{}}} 0} \gamma^\mu (\mathbf{X}_{\Xi \xi})_i \gamma_\mu (\mathbf{X}_{\xi \Xi})_i \nonumber \\ & -\frac{1}{4}m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i (\mathbf{X}_{\Xi \xi})^b_j \gamma_\mu (\mathbf{X}_{\xi \Xi})^a_j \nonumber \\ & -\frac{1}{4} {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0} g_{\mu \nu \rho \sigma} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i \gamma^\nu (\mathbf{X}_{\Xi \xi})^b_j \gamma^\rho (\mathbf{X}_{\xi \Xi})^a_j \gamma^\sigma \nonumber \\ & -\frac{1}{2}m^2 _{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0} g_{\mu \nu \rho \sigma} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i (\mathbf{X}_{\Xi \xi})^b_j \gamma^\nu (\mathbf{X}_{\xi \Xi})^c_j \gamma^\rho (\mathbf{X}_{\Xi \xi})^c_k \gamma^\sigma (\mathbf{X}_{\xi \Xi})^a_k \nonumber \\ & -\frac{1}{6} {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0} g_{\mu \nu \rho \sigma \kappa \lambda} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i \gamma^\nu (\mathbf{X}_{\Xi \xi})^b_j \gamma^\rho (\mathbf{X}_{\xi \Xi})^c_j \gamma^\sigma (\mathbf{X}_{\Xi \xi})^c_k \gamma^\kappa (\mathbf{X}_{\xi \Xi})^a_k \gamma^\lambda \nonumber \\ & +\frac{1}{6}{\tilde{\mathcal{I}}}^{2} _{{\tilde{g}^{}}}[P_\mu,P_\nu][P^\mu,P^\nu]\Big\}, \label{eq: UOLEAContrGluino}\end{aligned}$$ where $g_{\mu \nu \cdots}$ is the combination of metric tensors which is totally symmetric in all indices, see [appendix \[sec:loop\_functions\]]{}. 
The derivatives with respect to the stops and the gluino have already been calculated in [section \[sec:matching\_MSSM\_to\_SMEFT\]]{} and are given by $$\begin{aligned} \mathbf{X}_{\phi \Xi} &= \begin{pmatrix} X_{\sigma ^* \Lambda} \\ X_{\sigma \Lambda} \end{pmatrix} = \begin{pmatrix} (X_{{\tilde{t}_{L}}^* {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{R}}^* {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{L}} {\tilde{g}^{a}}})_{i\alpha}^a \\ (X_{{\tilde{t}_{R}} {\tilde{g}^{a}}})_{i\alpha}^a \end{pmatrix} = \sqrt{2} g_3 \begin{pmatrix} T^a_{ij} ({\mathcal{C}}P_L t_j)_{\alpha} \\ -T^a_{ij} ({\mathcal{C}}P_R t_j)_{\alpha} \\ -(\bar{t}_j P_R)_{\alpha} T^a_{ji} \\ (\bar{t}_j P_L)_{\alpha} T^a_{ji} \end{pmatrix}, \\ \mathbf{X}_{\Xi \phi} &= \begin{pmatrix} {\mathcal{C}}^{-1} X_{\Lambda \sigma}, & {\mathcal{C}}^{-1} X_{\Lambda \sigma ^*} \end{pmatrix} \\ &= ({\mathcal{C}}^{-1})_{\alpha\beta} \begin{pmatrix} (X_{{\tilde{g}^{a}} {\tilde{t}_{L}}})_{i\beta}^a, & (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}})_{i\beta}^a, & (X_{{\tilde{g}^{a}} {\tilde{t}_{L}}^*})_{i\beta}^a, & (X_{{\tilde{g}^{a}} {\tilde{t}_{R}}^*})_{i\beta}^a \end{pmatrix} \\ &= \sqrt{2} g_3 \begin{pmatrix} -(\bar{t}_j P_R {\mathcal{C}})_{\alpha} T^a_{ji}, & (\bar{t}_j P_L {\mathcal{C}})_{\alpha} T^a_{ji}, & T^a_{ij} (P_L t_j)_{\alpha}, & -T^a_{ij} (P_R t_j)_{\alpha} \end{pmatrix},\end{aligned}$$ the difference being that the stops are now considered to be light fields. 
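The rational color factors appearing in the Wilson coefficients below (such as $16/3$ or $40/9$) originate from contractions of the $SU(3)$ generators $T^a$ in these derivative matrices. As a generic cross-check of the group-theory input (standard identities, not code from the paper), one can verify numerically the normalization $\operatorname{tr}(T^aT^b)=\delta^{ab}/2$ and the quadratic Casimir $\sum_a T^aT^a = C_F\mathbf{1}$ with $C_F=4/3$:

```python
import numpy as np

# Gell-Mann matrices lambda^a; the SU(3) generators are T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

# tr(T^a T^b) = delta^{ab} / 2
norm_ok = np.allclose(np.einsum('aij,bji->ab', T, T), np.eye(8) / 2)
# sum_a T^a T^a = C_F * 1 with C_F = 4/3 for the fundamental of SU(3)
casimir = sum(T[a] @ T[a] for a in range(8))
```

The Casimir value $4/3$ is precisely the factor multiplying, e.g., the single-gluino-exchange contributions quoted below.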
For the purpose of this application we also need the derivatives with respect to a top and a gluino, which read $$\begin{aligned} (X_{\bar{t} {\tilde{g}^{a}}})_{i \alpha \beta}^a&=-\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha\beta}{\tilde{t}_{Lj}}-(P_L)_{\alpha\beta}{\tilde{t}_{Rj}}\right],\\ (X_{t {\tilde{g}^{a}}})_{i \alpha \beta}^a&=-\sqrt{2}g_3T^a_{ji}\left[-{\tilde{t}_{Lj}}^*({\mathcal{C}}P_L)_{\beta \alpha}+{\tilde{t}_{Rj}}^*({\mathcal{C}}P_R)_{\beta \alpha}\right],\\ (X_{{\tilde{g}^{a}}\bar{t}})_{i \alpha \beta}^a&=\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\beta \alpha}{\tilde{t}_{Lj}}-(P_L)_{\beta \alpha}{\tilde{t}_{Rj}}\right],\\ (X_{ {\tilde{g}^{a}} t})_{i \alpha \beta}^a&=\sqrt{2}g_3T^a_{ji}\left[-{\tilde{t}_{Lj}}^*({\mathcal{C}}P_L)_{ \alpha \beta}+{\tilde{t}_{Rj}}^*({\mathcal{C}}P_R)_{\alpha \beta}\right],\end{aligned}$$ and are collected into $$\begin{aligned} \mathbf{X}_{\Xi \xi}&=\begin{pmatrix} {\mathcal{C}}^{-1} X_{\Lambda \omega}, & {\mathcal{C}}^{-1} X_{\Lambda \bar{\omega}} {\mathcal{C}}^{-1} \end{pmatrix}\\ &=\begin{pmatrix} ({\mathcal{C}}^{-1}X_{ {\tilde{g}^{a}} t})_{i \alpha \beta}^a , & ({\mathcal{C}}^{-1} X_{{\tilde{g}^{a}}\bar{t}} {\mathcal{C}}^{-1})_{i \alpha \beta}^a \end{pmatrix} \\ &= \begin{pmatrix} -\sqrt{2}g_3T^a_{ji}\left[{\tilde{t}_{Lj}}^*(P_L)_{ \alpha \beta}-{\tilde{t}_{Rj}}^*(P_R)_{ \alpha \beta}\right] , & -\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha \beta}{\tilde{t}_{Lj}}-(P_L)_{\alpha \beta}{\tilde{t}_{Rj}}\right] \end{pmatrix}, \\ \mathbf{X}_{\xi \Xi}&=\begin{pmatrix} X_{\bar{\omega} \Lambda} \\ {\mathcal{C}}^{-1} X_{\omega \Lambda} \end{pmatrix}=\begin{pmatrix} (X_{\bar{t} {\tilde{g}^{a}}})_{i \alpha \beta}^a \\ ({\mathcal{C}}^{-1}X_{t {\tilde{g}^{a}}})_{i \alpha \beta}^a \end{pmatrix}=\begin{pmatrix} -\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha\beta}{\tilde{t}_{Lj}}-(P_L)_{\alpha\beta}{\tilde{t}_{Rj}}\right]\\ -\sqrt{2}g_3T^a_{ji}\left[{\tilde{t}_{Lj}}^*(P_L)_{\alpha \beta}-{\tilde{t}_{Rj}}^*(P_R)_{\alpha \beta}\right] 
\end{pmatrix}.\end{aligned}$$ Finally we give the derivatives with respect to two stops $$\begin{aligned} \mathbf{X}_{\phi \phi} &= \begin{pmatrix} \mathbf{Y}_{\phi \phi} & \mathbf{0}_{2\times2} \\ \mathbf{0}_{2\times2} & (\mathbf{Y}_{\phi \phi})^* \end{pmatrix},\\ \mathbf{Y}_{\phi \phi} &= \begin{pmatrix} x_t {\tilde{t}_{Rj}}^* {\tilde{t}_{Ri}}-\frac{g_3 ^2}{6 }{\tilde{t}_{R}}^* {\tilde{t}_{R}} \delta_{ij} && x_t \delta_{ij} {\tilde{t}_{L}} {\tilde{t}_{R}}^*-\frac{g_3^2}{6}{\tilde{t}_{Li}} {\tilde{t}_{Rj}}^* \\ x_t \delta_{ij} {\tilde{t}_{L}}^* {\tilde{t}_{R}}-\frac{g_3^2}{6}{\tilde{t}_{Ri}} {\tilde{t}_{Lj}}^* && x_t {\tilde{t}_{Lj}}^* {\tilde{t}_{Li}}-\frac{g_3 ^2}{6 }{\tilde{t}_{L}}^* {\tilde{t}_{L}} \delta_{ij} \end{pmatrix},\end{aligned}$$ where we have introduced the abbreviation $x_t \equiv y_t^2-g_3 ^2/2$. Substituting these derivatives into and summing over all indices one finds $$\begin{aligned} c_{t_L}&=\frac{16}{3}g^2_3\left({\tilde{\mathcal{I}}}[q^{2}]^{12} _{{\tilde{g}^{}}0}+2 m^2_{\tilde{q}} {\tilde{\mathcal{I}}}[q^2] ^{13} _{{\tilde{g}^{}}0}\right), \\ c_{t_R}&=\frac{16}{3}g^2_3\left({\tilde{\mathcal{I}}}[q^{2}]^{12} _{{\tilde{g}^{}}0}+2 m^2_{\tilde{u}} {\tilde{\mathcal{I}}}[q^2] ^{13} _{{\tilde{g}^{}}0}\right), \\ c_{{\tilde{t}_{L}}}&=c_{{\tilde{t}_{R}}}=\frac{32}{3}g^2_3(d+2)\left(-{\tilde{\mathcal{I}}}[q^{4}]^{31} _{{\tilde{g}^{}}0}+ \frac{m^2_{\tilde{q}}}{2} {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}\right),\\ c_{61}^{L^\mu L_\mu}&=c_{61} ^{R^\mu R_\mu}=\frac{7}{6}g^4_3 {\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{62} ^{L^\mu L_\mu}&=c_{62} ^{ R^\mu R_\mu}=\frac{1}{18}g^4_3 {\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{61} ^{(LR)^\mu (RL)_\mu}&=\frac{10}{9}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{62}^{(LR)^\mu (RL)_\mu}&=-\frac{2}{9}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{61} ^{LL}&=c_{61} ^{ R R}=\frac{5}{18}g^4_3 m^2_{{\tilde{g}^{}}}{\tilde{\mathcal{I}}}[q^2] 
^{22}_{{\tilde{g}^{}}0}, \\ c_{62} ^{LL}&=c_{62} ^{ R R}=-\frac{1}{6}g^4_3 m^2_{{\tilde{g}^{}}}{\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{61} ^{(LR)(RL)}&=\frac{7}{6}g_3^4 m^2_{{\tilde{g}^{}}}{\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ c_{62} ^{(LR) (RL)}&=\frac{1}{18}g_3^4 m^2_{{\tilde{g}^{}}}{\tilde{\mathcal{I}}}[q^2] ^{22}_{{\tilde{g}^{}}0}, \\ \delta m_{{\tilde{q}}} ^2 &= \delta m_{{\tilde{u}}} ^2 =\frac{16}{3}dg^2 _3 {\tilde{\mathcal{I}}}[q^2] ^{11} _{{\tilde{g}^{}}0}, \\ c^L _{41} &= -\frac{40}{9}m^2_{{\tilde{g}^{}}}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}-\frac{1}{9}d(d+2)g_3^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0}, \\ c^R _{4} &= -\frac{16}{3}m^2_{{\tilde{g}^{}}}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}-\frac{22}{9}d(d+2)g_3^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0}, \\ c^L _{42} &= \frac{8}{3}m^2_{{\tilde{g}^{}}}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}-\frac{7}{3}d(d+2)g_3^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0}, \\ c^{LR} _{41} &= -\frac{8}{9}m^2_{{\tilde{g}^{}}}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}-\frac{20}{9}d(d+2)g_3^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0}, \\ c^{LR} _{42} &= -\frac{56}{3}m^2_{{\tilde{g}^{}}}g_3^4 {\tilde{\mathcal{I}}}[q^2] ^{22} _{{\tilde{g}^{}}0}+\frac{4}{9}d(d+2)g_3^4 {\tilde{\mathcal{I}}}[q^4] ^{22} _{{\tilde{g}^{}}0}, \\ c^{L} _{61} &= \frac{1}{54}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}+\frac{2}{81}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{L} _{62} &=- \frac{2}{3}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{2}{9}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{L} _{63} &= \frac{1}{2}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{4}{3}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} 
_{{\tilde{g}^{}}0}, \\ c^{R} _{6} &=-\frac{4}{27}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{124}{81}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{LR} _{61} &= \frac{1}{18}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0} +\frac{2}{27}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0},\\ c^{LR} _{62} &=- \frac{12}{9}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{10}{9}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{LR} _{63} &= -\frac{1}{6}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{14}{9}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{LR} _{64} &= \frac{2}{9}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c^{RL} _{61} &= -\frac{1}{9}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}-\frac{40}{27}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0},\\ c^{RL} _{62} &=- \frac{12}{9}d(d+2)g_3^6 m^2_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}[q^4] ^{33} _{{\tilde{g}^{}}0}+\frac{8}{9}d(d^2+6d+8)g_3^6 {\tilde{\mathcal{I}}}[q^6] ^{33} _{{\tilde{g}^{}}0}, \\ c_{51} ^{LR\text{,1{\ensuremath{\ell}}}}&=c_{51} ^{RL\text{,1{\ensuremath{\ell}}}}=-\frac{g_3^4}{3}m_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}^{12}_{{\tilde{g}^{}}0}, \\ c_{52} ^{LR\text{,1{\ensuremath{\ell}}}}&=c_{52} ^{RL\text{,1{\ensuremath{\ell}}}}=-\frac{8}{3}g_3^4 x_t m_{{\tilde{g}^{}}} {\tilde{\mathcal{I}}}^{12}_{{\tilde{g}^{}}0}, \\ c_G&=-\frac{g_3^2}{2}{\tilde{\mathcal{I}}}^{2} _{{\tilde{g}^{}}}.\end{aligned}$$ In the calculation of these corrections the relations $g^{\mu\nu}g_{\mu\nu} = d = 4 - {\epsilon}$ and were used repeatedly. 
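Many of the $d$ and $(d+2)$ prefactors in these coefficients come from contracting the totally symmetric metric combination $g_{\mu\nu\rho\sigma} = g_{\mu\nu}g_{\rho\sigma}+g_{\mu\rho}g_{\nu\sigma}+g_{\mu\sigma}g_{\nu\rho}$ with $g^{\mu\nu}$. A small numerical sanity check at integer dimension $d=4$ (the text of course keeps $d=4-\epsilon$ symbolic; this sketch only illustrates the contraction rules):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])     # mostly-plus-minus metric, d = 4
ginv = np.linalg.inv(g)                  # numerically equal to g here

# totally symmetric combination g_{mu nu rho sigma}
G = (np.einsum('mn,rs->mnrs', g, g)
     + np.einsum('mr,ns->mnrs', g, g)
     + np.einsum('ms,nr->mnrs', g, g))

trace_g = np.einsum('mn,mn->', ginv, g)          # g^{mu nu} g_{mu nu} = d
contracted = np.einsum('mn,mnrs->rs', ginv, G)   # = (d + 2) g_{rho sigma}
```

At $d=4$ the single contraction gives $4$ and the contraction of $g_{\mu\nu\rho\sigma}$ gives $6\,g_{\rho\sigma}$, matching the pattern $d$ and $(d+2)$ in the expressions above.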
The one-loop corrections $\delta m_{{\tilde{q}}}^2$ and $\delta m_{{\tilde{u}}}^2$ to the third generation squark mass parameters have already been calculated in [@Aebischer:2017aqa] and our results agree with the expressions found there. Since supersymmetry is only softly broken in the MSSM it is convenient to use DRED as a regulator. Once the gluino is integrated out from the theory, supersymmetry is explicitly broken and it is natural to regularize the EFT in DREG. This switch in the regularization scheme introduces further contributions to the couplings of the EFT coming from the epsilon scalars. In the formalism of the UOLEA the relevant operators which contribute here are given by [@Summ:2018oko] $$\begin{aligned} \begin{split} \frac{{\epsilon}}{\kappa} {\mathcal{L}}^{1\ell}_\text{reg} = &-\sum _{i} (m^2_{{\epsilon}})_{i} ({\ensuremath{\breve{X}}}^\mu _{{\epsilon}{\epsilon}\mu})_{ii} + \frac{1}{2} \sum_{ij} ({\ensuremath{\breve{X}}}^{\mu}_{{\epsilon}{\epsilon}\nu})_{ij} ({\ensuremath{\breve{X}}}^{\nu}_{{\epsilon}{\epsilon}\mu})_{ji} \\ &+\sum_{ij} 2^{c_{F_j}} \left\{2 m_{\psi j} ({\ensuremath{\breve{X}}}^\mu_{{\epsilon}\psi})_{ij} ({\ensuremath{\breve{X}}} _{\bar{\psi} {\epsilon}\mu})_{ji} + ({\ensuremath{\breve{X}}}^\mu_{{\epsilon}\psi})_{ij} \gamma^\nu \left[P_\nu,({\ensuremath{\breve{X}}}_{\bar{\psi} {\epsilon}\mu})_{ji}\right]\right\} \\ &-\sum_{i j k} 2^{c_{F_j}+c_{F_k}-1} ({\ensuremath{\breve{X}}}^\mu_{{\epsilon}\psi})_{ij} \gamma ^\nu (X_{\bar{\psi} \psi})_{jk} \gamma_{\nu} ({\ensuremath{\breve{X}}}_{\bar{\psi} {\epsilon}\mu})_{ki} \\ & + \frac{{\epsilon}}{12} \operatorname{tr}\left[ G'_{\mu \nu} G'^{\mu \nu} \right], \end{split} \label{eq:epsilon-scalar contributions}\end{aligned}$$ The ${\ensuremath{\breve{X}}}$ operators are projections of the corresponding $4$-dimensional ones ${\ensuremath{\mathring{X}}}$ onto the ${\epsilon}$-dimensional $Q{\epsilon}S$ space, i.e. 
$$\begin{aligned} {\ensuremath{\breve{X}}}^\mu &= {\ensuremath{\breve{g}}}^\mu_\sigma {\ensuremath{\mathring{X}}}^\sigma, \\ {\ensuremath{\breve{X}}}^{\mu\nu} &= {\ensuremath{\breve{g}}}^\mu_\sigma {\ensuremath{\breve{g}}}^\nu_\rho {\ensuremath{\mathring{X}}}^{\sigma\rho},\end{aligned}$$ see [appendix \[sec:DREG\_DRED\]]{}. Furthermore, $G'_{\mu\nu} = -ig_3 G^a_{\mu\nu} T^a$ is the gluon field strength tensor. For the top quark (a Dirac fermion) we have $c_F = 0$, and for the gluino (a Majorana fermion) $c_F = 1$. From these operators we obtain the following additional contributions to the couplings of the EFT $$\begin{aligned} (\delta m^2 _{{\tilde{q}}})_{\epsilon}&=(\delta m ^2 _{{\tilde{u}}})_{\epsilon}=-\frac{4}{3} g_3^2 m_{\epsilon}^2, \label{eq:delta_m2_eps} \\ (c_{t_L})_{\epsilon}&= (c_{t_R})_{\epsilon}=\frac{4}{3}g_3^2, \\ (c^L_{41})_{\epsilon}&= \frac{1}{72}g_3^4, \\ (c^L_{42})_{\epsilon}&=\frac{7}{24}g_3^4, \\ (c^R_{4})_{\epsilon}&= \frac{11}{36}g_3^4, \\ (c^{LR}_{41})_{\epsilon}&= \frac{1}{36}g_3^4, \\ (c^{LR}_{42})_{\epsilon}&= \frac{7}{12}g_3^4, \\ (c^{LL}_{51})_{\epsilon}&= (c^{LL}_{52})_{\epsilon}= (c^{RR}_{51})_{\epsilon}=(c^{RR}_{52})_{\epsilon}= \frac{3g_3^4 }{2 m_{{\tilde{g}^{}}}}d, \\ (c^{LR}_{51})_{\epsilon}&= (c^{RL}_{52})_{\epsilon}= -\frac{3g_3^4 }{m_{{\tilde{g}^{}}}}d, \\ (c_G)_{\epsilon}&= -\frac{g_3^2}{4}.\end{aligned}$$ The term $\propto m_{\epsilon}^2$ on the r.h.s. of the squark mass shift can be removed by switching from the [$\overline{\text{DR}}$]{} to the [$\overline{\text{DR}}'$]{} scheme [@Jack:1994rk], which involves shifting $m^2_{{\tilde{q}}}$ and $m ^2_{{\tilde{u}}}$ by finite terms.
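The mechanism of this scheme change can be sketched schematically: the finite shift is defined precisely so that the epsilon-scalar mass drops out of the matched squark mass parameter. The code below is a minimal arithmetic sketch of this cancellation only, with loop-normalization factors such as $\kappa$ suppressed; it is not a statement about the full scheme conversion.

```python
def delta_m2_eps(g3, m_eps):
    # epsilon-scalar contribution to the squark mass parameter
    # (loop normalization factors suppressed for illustration)
    return -4.0 / 3.0 * g3**2 * m_eps**2

def to_DRbar_prime(m2_DRbar, g3, m_eps):
    # finite DRbar -> DRbar' shift, chosen to cancel the m_eps^2 term
    return m2_DRbar - delta_m2_eps(g3, m_eps)

def matched_mass(m2, g3, m_eps):
    # after the shift, the matched mass no longer depends on m_eps
    return to_DRbar_prime(m2, g3, m_eps) + delta_m2_eps(g3, m_eps)
```

By construction `matched_mass` is independent of `m_eps`, which is exactly the defining property of the primed scheme for this term.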
Notice also that the one-loop DRED–DREG conversion corrections to the coefficients of the dimension 5 operators arise from the third line of the epsilon-scalar contributions above, which among other terms contains the term $$\begin{aligned} ({\ensuremath{\breve{X}}}^\mu_{{\epsilon}t})\gamma ^\nu (X_{\bar{t} {\tilde{g}^{}}}) \gamma_{\nu} ({\ensuremath{\breve{X}}}_{\bar{{\tilde{g}^{}}} {\epsilon}\mu}).\end{aligned}$$ Here $({\ensuremath{\breve{X}}}_{\bar{{\tilde{g}^{}}} {\epsilon}\mu})$ has an explicit dependence on the gluino spinor ${\tilde{g}^{}}$, $$\begin{aligned} ({\ensuremath{\breve{X}}}_{\bar{{\tilde{g}^{}}} {\epsilon}\mu})^{ba}=\frac{ig_3}{2}{\ensuremath{\breve{\gamma}}}^\mu f^{abc}{\tilde{g}^{c}},\end{aligned}$$ which must be eliminated by inserting the gluino background field obtained earlier. As noted above, the threshold corrections for the two stop masses agree with the results derived in [@Aebischer:2017aqa] when the effect of the sbottom quarks is neglected. Conclusions {#sec:conclusions} =========== In this paper we have presented an extension of the Universal One-Loop Effective Action (UOLEA) by all one-loop operators up to dimension 6 for generic theories with scalar and fermionic fields, excluding operators stemming from open covariant derivatives in the UV Lagrangian. Our generic results can be used to derive the analytic expressions of all one-loop Wilson coefficients up to dimension 6 of an effective Lagrangian from a given UV theory with heavy scalar or fermionic particles, as long as second derivatives of the UV Lagrangian w.r.t. the fields do not contain covariant derivatives. Thus, our new results allow for an application of the UOLEA to a broader class of UV models than before. To illustrate and test our generic results we have applied the UOLEA to different EFTs of the SM and the MSSM, where parts of the spectrum are heavy. We were able to reproduce known results from the literature, including the prediction of some one-loop Wilson coefficients of higher-dimensional operators of the SMEFT.
We have published our results in the form of the two ancillary Mathematica files `UOLEA.m` and `LoopFunctions.m`, which allow for a direct use of our expressions and a potential implementation into generic tools such as [`CoDEx`]{} or spectrum generator generators such as [`SARAH`]{} and [`FlexibleSUSY`]{}. Fermionic shifts {#sec:shifts} ================ In this section we discuss the consistency of the shift of the light fermion multiplet $\xi$. The treatment of the other fermionic shift is analogous but somewhat more involved. Since $\xi$ is a multiplet of Majorana-like component spinors, for the shift $$\begin{aligned} \delta \xi' = \delta \xi+\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right] \label{eq:xishift2}\end{aligned}$$ to be consistent it is necessary and sufficient that $$\begin{aligned} \left(\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]\right)^{T}=\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\overleftarrow{\mathbf{\Delta}}_\xi^{-1}. \label{eq:shiftCondition}\end{aligned}$$ In the following we show that this condition holds.
We first construct $\mathbf{\Delta}_\xi ^{-1}$ in position space through its Neumann series[^5] $$\begin{aligned} \mathbf{\Delta}_\xi ^{-1}(x,y) &= \sum _{n=0} ^{\infty} \left(\prod _{\substack{i=1 \\ n>0}} ^{n} \int {\ensuremath{\mathrm{d}}}^d x_i \; \mathbf{S}(x_{i-1},x_i) \left(-\mathbf{X}_{\xi \xi}(x_i)\right)\right) \mathbf{S}(x_n,y) \tilde{\mathds{1}} {\mathcal{C}}^{-1} \nonumber \\ & \equiv \sum _{n=0} ^{\infty} \left(\prod _{\substack{i=1 \\ n>0}} ^{n} \mathbf{S}_{x_{i-1} x_i} \left(-\mathbf{X}_{\xi \xi x_i}\right)\right) \mathbf{S}_{x_n y} \tilde{\mathds{1}} {\mathcal{C}}^{-1}, \end{aligned}$$ where $x_0\equiv x$ and $\mathbf{S}(x,y)$ is the matrix-valued Green’s function for $(\slashed{P}-M_\xi)$, which itself can be expressed through a Neumann series. To keep expressions short we also introduced the convention of denoting space-time points by indices, where repeated indices are integrated over. We may write $(\slashed{P}-M_\xi) = (i\slashed{\partial}-M_\xi-\mathbf{A})$ with $$\begin{aligned} \mathbf{A}=i\sum_j g_j\slashed{A}_j^a T_j^a,\end{aligned}$$ where we sum over all factors of the gauge group for a direct product group and $T_j^a$ is a block-diagonal matrix which generates the reducible representation of $\xi$. Due to the fact that $\xi$ contains $\omega$, ${\omega^C}$ and $\lambda$ (see ), the generator is of the form $$\begin{aligned} T^a=\begin{pmatrix} T^a _{R(\omega)} && 0 && 0 \\ 0 && T^a _{\bar{R}(\omega)} && 0 \\ 0 && 0 && T^a_{R( \lambda )} \end{pmatrix},\end{aligned}$$ where $R(\omega)$ is the representation under which $\omega$ transforms, $\bar{R}(\omega)$ its conjugate representation and $R(\lambda)$ is the representation of $\lambda$, which is necessarily real. 
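The Neumann-series construction of $\mathbf{\Delta}_\xi^{-1}$ above has a simple finite-dimensional analogue: for an invertible "free" part $D$ and a perturbation $X$ with spectral radius $\rho(D^{-1}X)<1$, the partial sums $\sum_n (D^{-1}X)^n D^{-1}$ converge to $(D-X)^{-1}$. A toy linear-algebra illustration (generic matrices standing in for the Green's function $\mathbf{S}$ and the interaction $\mathbf{X}_{\xi\xi}$, with the sign convention of $X$ chosen for convenience; these are not the field-theoretic objects themselves):

```python
import numpy as np

rng = np.random.default_rng(42)
D = np.diag([2.0, 3.0, 4.0, 5.0])        # invertible "free" operator
X = 0.2 * rng.standard_normal((4, 4))    # small perturbation
S = np.linalg.inv(D)                     # analogue of the free Green's function

# Partial sums of the Neumann series (D - X)^{-1} = sum_n (D^{-1} X)^n D^{-1}
approx = np.zeros((4, 4))
term = S.copy()
for _ in range(60):
    approx += term
    term = S @ X @ term                  # next order: S X (previous term)

exact = np.linalg.inv(D - X)
```

With this choice of $D$ and $X$ the series converges geometrically, and the truncated sum agrees with the exact inverse to machine precision.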
We then have $$\begin{aligned} \mathbf{S}_{x y}=\sum _{k=0} ^{\infty} \left(\prod _{\substack{i=1 \\ k>0}} ^{k} \mathbf{S}_{f,x_{i-1} x_i} \mathbf{A}_{x_i}\right) \mathbf{S}_{f, x_k y},\end{aligned}$$ where again $x_0 \equiv x$ and $\mathbf{S}_{f,x y}$ is the matrix containing the Green’s function of the free Dirac equation on its diagonal. It can be verified by explicit calculation that $$\begin{aligned} \mathbf{S}_{x y} \left(-i\overleftarrow{\slashed{\partial}_y}-M_\xi-\mathbf{A}_y \right) &= \delta_{x y},\end{aligned}$$ which means that $$\begin{aligned} \mathbf{\Delta}_{\xi,x y} ^{-1}\overleftarrow{\mathbf{\Delta}}_{\xi, y}=\delta_{x y}\end{aligned}$$ and therefore $\overleftarrow{\mathbf{\Delta}}_{\xi,yx} ^{-1}=\mathbf{\Delta}_{\xi,yx}^{-1}$. Hence reads $$\begin{aligned} \left(\mathbf{\Delta}_{\xi, x y}^{-1} \left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]_y\right)^{T}=\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]_y\mathbf{\Delta}_{\xi, yx}^{-1}.\end{aligned}$$ It is then useful to calculate $$\begin{aligned} {\mathcal{C}}\tilde{\mathds{1}} \mathbf{S}^T_{xy}&={\mathcal{C}}\tilde{\mathds{1}}\sum _{k=0} ^{\infty} \mathbf{S}^T_{f,x_k y} \left(\prod _{\substack{i=k \\ k>0}} ^{1} \mathbf{A}^T_{x_i}\mathbf{S}^T_{f,x_{i-1}x_i} \right) \\ &= {\mathcal{C}}\tilde{\mathds{1}}\sum _{k=0} ^{\infty} {\mathcal{C}}\mathbf{S}_{f,y x_k} {\mathcal{C}}^{-1} \left(\prod _{\substack{i=k \\ k>0}} ^{1} \mathbf{A}^T _{x_i}{\mathcal{C}}\mathbf{S}_{f,x_i x_{i-1}}{\mathcal{C}}^{-1} \right) \\ &= -\sum _{k=0} ^{\infty} \tilde{\mathds{1}} \mathbf{S}_{f,y x_k} \tilde{\mathds{1}} \tilde{\mathds{1}} \left(\prod _{\substack{i=k \\ k>0}} ^{1} (-\mathbf{A}^t _{x_i})\tilde{\mathds{1}} \tilde{\mathds{1}} \mathbf{S}_{f,x_i x_{i-1}} \tilde{\mathds{1}} \tilde{\mathds{1}} \right){\mathcal{C}}^{-1} \\ &= -\sum _{k=0} 
^{\infty} \mathbf{S}_{f, y x_k} \left(\prod _{\substack{i=k \\ k>0}} ^{1} (-\tilde{\mathds{1}}\mathbf{A}^t_{x_i}\tilde{\mathds{1}})\mathbf{S}_{f,x_i x_{i-1}} \right)\tilde{\mathds{1}}{\mathcal{C}}^{-1} \\ &= -\sum _{k=0} ^{\infty} \mathbf{S}_{f,y x_k} \left(\prod _{\substack{i=k \\ k>0}} ^{1} \mathbf{A}_{x_i} \mathbf{S}_{f,x_i x_{i-1}} \right)\tilde{\mathds{1}}{\mathcal{C}}^{-1} \\ &= -\mathbf{S}_{yx}\tilde{\mathds{1}}{\mathcal{C}}^{-1},\end{aligned}$$ where $\mathbf{A}^t$ means taking the transpose of the gauge group generators only and we used that $$\begin{aligned} \tilde{\mathds{1}}\begin{pmatrix} A && 0 && 0 \\ 0 && B && 0 \\ 0 && 0 && C \end{pmatrix} \tilde{\mathds{1}}=\begin{pmatrix} B && 0 && 0 \\ 0 && A && 0 \\ 0 && 0 && C \end{pmatrix}.\end{aligned}$$ We then find $$\begin{aligned} \left(\mathbf{\Delta}_{\xi, xy} ^{-1}\right)^T &= {\mathcal{C}}\tilde{\mathds{1}}\sum _{n=0} ^{\infty} \mathbf{S}_{x_n,y}^T \left(\prod _{\substack{i=1 \\ n>0}} ^{n} \left(-\mathbf{X}_{\xi \xi, x_i}\right)^T \mathbf{S}_{x_{i-1}x_i}^T \right) \\ &= \sum _{n=0} ^{\infty} \mathbf{S}_{y x_n} \tilde{\mathds{1}}{\mathcal{C}}\left(\prod _{\substack{i=1 \\ n>0}} ^{n} \left(-\mathbf{X}_{\xi \xi, x_i}\right)^T \mathbf{S}_{x_{i-1} x_i}^T \right) \\ &= \sum _{n=0} ^{\infty} \mathbf{S}_{y x_n} \tilde{\mathds{1}}{\mathcal{C}}\left(\prod _{\substack{i=1 \\ n>0}} ^{n} \left(-\mathbf{X}_{\xi \xi, x_i}\right)^T \tilde{\mathds{1}}{\mathcal{C}}^{-1} \tilde{\mathds{1}}{\mathcal{C}}\mathbf{S}_{x_{i-1} x_i}^T \tilde{\mathds{1}}{\mathcal{C}}^{-1} \tilde{\mathds{1}}{\mathcal{C}}\right) \\ &= \sum _{n=0} ^{\infty} \mathbf{S}_{y x_n} \tilde{\mathds{1}}{\mathcal{C}}\left(\prod _{\substack{i=1 \\ n>0}} ^{n} \left(-\mathbf{X}_{\xi \xi, x_i}\right)^T \tilde{\mathds{1}}{\mathcal{C}}^{-1} \mathbf{S}_{x_i x_{i-1}} \tilde{\mathds{1}}{\mathcal{C}}\right) \\ &= -\sum _{n=0} ^{\infty} \mathbf{S}_{y x_n} \left(\prod _{\substack{i=1 \\ n>0}} ^{n} \left(-\mathbf{X}_{\xi \xi, x_i}\right) \mathbf{S}_{x_i x_{i-1}} 
\right) \tilde{\mathds{1}}{\mathcal{C}}^{-1} \\ &= -\mathbf{\Delta}_{\xi yx} ^{-1},\end{aligned}$$ where we used that $$\begin{aligned} {\mathcal{C}}\tilde{\mathds{1}} \mathbf{X}^T_{\xi \xi} \tilde{\mathds{1}} {\mathcal{C}}^{-1}=\mathbf{X}_{\xi \xi}.\end{aligned}$$ Noting that $$\begin{aligned} \tilde{\mathbf{X}}^T_{\xi \Xi}&=-\tilde{\mathbf{X}}_{\Xi \xi}, \\ \tilde{\mathbf{X}}^T_{\xi \Phi}&=\tilde{\mathbf{X}}_{\Phi \xi}, \\ \tilde{\mathbf{X}}^T_{\xi \phi}&=\tilde{\mathbf{X}}_{\phi \xi},\end{aligned}$$ the validity of follows immediately. Loop functions {#sec:loop_functions} ============== The integrals ${\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j \dots n_L} _{i j \dots 0}$ are defined as in [@Zhang:2016pja], that is $$\begin{aligned} \int \frac{{\ensuremath{\mathrm{d}}}^d q}{(2\pi)^d} \frac{q^{\mu_1} q^{\mu_2} \dots q^{\mu_{2n_c}}}{(q^2-M_i^2)^{n_i}(q^2-M_j^2)^{n_j}\dots (q^2)^{n_L}}\equiv \frac{i}{16 \pi ^2} g^{\mu_1 \mu_2 \dots \mu_{2n_c}} {\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j \dots n_L} _{i j \dots 0},\end{aligned}$$ where $g^{\mu_1 \mu_2 \dots \mu_{2n_c}}$ is the completely symmetric combination of metric tensors with $2n_c$ indices, for instance $g^{\mu \nu \rho \sigma}=g^{\mu \nu}g^{\rho \sigma}+g^{\mu \rho}g^{\nu \sigma}+g^{\mu \sigma}g^{\nu \rho}$. For $n_c=0$ we define the shorthand notation ${\tilde{\mathcal{I}}}[q^0]^{n_i n_j \dots n_L} _{i j \dots 0} \equiv {\tilde{\mathcal{I}}}^{n_i n_j \dots n_L} _{i j \dots 0}$. 
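The reduction of such integrals to a small set of basis integrals rests on simple partial-fraction identities at the integrand level; these can be checked symbolically, for instance with `sympy` (a sketch; the symbol names and propagator powers below are arbitrary illustrations, not part of the ancillary files):

```python
import sympy as sp

# q2 = loop momentum squared; Mi2, Mj2 = squared masses (illustrative symbols)
q2, Mi2, Mj2 = sp.symbols('q2 Mi2 Mj2')
ni, nj, nL = 3, 2, 2                      # example propagator powers

Di, Dj = q2 - Mi2, q2 - Mj2               # massive propagator denominators

# lowering a power of the j-type propagator via Delta_ij^2 = Mi2 - Mj2
lhs1 = 1 / (Di**ni * Dj**nj)
rhs1 = (1 / (Di**ni * Dj**(nj - 1)) - 1 / (Di**(ni - 1) * Dj**nj)) / (Mi2 - Mj2)

# trading a massless denominator (q^2)^nL against a massive one
lhs2 = 1 / (Di**ni * q2**nL)
rhs2 = (1 / (Di**ni * q2**(nL - 1)) - 1 / (Di**(ni - 1) * q2**nL)) / Mi2

ok = sp.cancel(lhs1 - rhs1) == 0 and sp.cancel(lhs2 - rhs2) == 0
```

Since these identities hold pointwise in $q^2$, the corresponding relations between the integrated ${\tilde{\mathcal{I}}}$ objects follow directly.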
The integrals can be reduced to basis integrals using the reduction relations [@Zhang:2016pja] $$\begin{aligned} {\tilde{\mathcal{I}}}[q^{2n_c}] ^{n_i n_j \dots n_L} _{i j \dots 0} &= \frac{1}{\Delta^2 _{ij}}\left({\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j-1 \dots n_L}-{\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i-1 \, n_j \dots n_L}\right), \\ {\tilde{\mathcal{I}}}[q^{2n_c}] ^{n_i n_j \dots n_L} _{i j \dots 0} &= \frac{1}{M_i^2}\left({\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j \dots n_L-1}-{\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i-1 \, n_j \dots n_L}\right),\end{aligned}$$ where $\Delta^2 _{ij}=M_i^2-M_j^2$. For convenience we have included the reduction algorithm and the basis integrals in the ancillary Mathematica file `LoopFunctions.m` in the arXiv submission of this publication with the correspondence $$\begin{aligned} {\tilde{\mathcal{I}}}[q^{2n_c}]^{n_i n_j \dots n_L} _{i j \dots 0} \equiv J[n_c, \{\{M_i,n_i\}, \{M_j,n_j\}, \ldots\}, n_L] .\end{aligned}$$ Useful relations for spinors and $SU(N)$ groups {#sec: spinor algebra} =============================================== We define the charge conjugate ${\psi^C}$ of a 4-spinor $\psi$ as $$\begin{aligned} {\psi^C} &\equiv {\mathcal{C}}\bar{\psi}^T, & \overline{{\psi^C}} &= \psi^T {\mathcal{C}},\end{aligned}$$ where ${\mathcal{C}}$ is the charge conjugation operator and $\bar{\psi}=\psi^\dagger \gamma^0$. 
It follows from this definition that $$\begin{aligned} {(\psi_R)^C} &= {\mathcal{C}}\,\overline{\psi_L}^T, & {(\psi_L)^C} &= {\mathcal{C}}\,\overline{\psi_R}^T.\end{aligned}$$ The following properties of ${\mathcal{C}}$ hold in the Dirac and Weyl representation: $$\begin{aligned} {\mathcal{C}}&= i\gamma^2\gamma^0, \\ {\mathcal{C}}&= -{\mathcal{C}}^{-1} = -{\mathcal{C}}^\dagger = -{\mathcal{C}}^T, \\ {\mathcal{C}}\gamma^\mu {\mathcal{C}}^{-1} &= - (\gamma^\mu)^T, \\ {\mathcal{C}}\gamma^5 {\mathcal{C}}^{-1} &= (\gamma^5)^T = \gamma^5, \\ {\mathcal{C}}\gamma^5 \gamma^\mu {\mathcal{C}}^{-1} &= (\gamma^5 \gamma^\mu)^T = (\gamma^\mu)^T \gamma^5, \\ {\mathcal{C}}P_L {\mathcal{C}}^{-1} &= (P_L)^T = P_L, \\ {\mathcal{C}}P_R {\mathcal{C}}^{-1} &= (P_R)^T = P_R.\end{aligned}$$ In our formalism we require that if a model contains Dirac spinors $\psi$, then the Lagrangian is expressed in terms of $\psi$ and $\bar{\psi}$. If the model contains Majorana spinors $\lambda$, we require that the Lagrangian is expressed *only* in terms of $\lambda$, but *not* in terms of $\bar{\lambda}$. Note that $\bar{\lambda}$ can always be rewritten as $$\begin{aligned} \bar{\lambda} &= ({\lambda^C})^T {\mathcal{C}}= \lambda^T {\mathcal{C}}\end{aligned}$$ because for Majorana fermions ${\lambda^C} = \lambda$. When contracting spinor indices the following identity may be used $$\begin{aligned} \psi^T \Gamma^T \bar{\psi}^T &= - \bar{\psi} \Gamma \psi.\end{aligned}$$ A useful relation for the generators $T^a$ of the fundamental representation of $SU(N)$ is $$\begin{aligned} T^a_{ij}T^a_{kl}=\frac{1}{2}\left(\delta_{il}\delta_{jk}-\frac{1}{N}\delta_{ij}\delta_{kl}\right). 
\label{eq:genrel}\end{aligned}$$ Dimensional regularization and dimensional reduction {#sec:DREG_DRED} ==================================================== Throughout this publication we have assumed that the models are regularized in dimensional regularization (DREG) [@tHooft:1972tcz], where loop calculations are performed in a quasi-$d$-dimensional space $QdS$ with the metric tensor ${\ensuremath{g}}^{\mu\nu}$ with the property $$\begin{aligned} {\ensuremath{g}}^{\mu\nu} {\ensuremath{g}}_{\mu\nu} &= d = 4 - {\epsilon}.\end{aligned}$$ Although DREG is suited for non-supersymmetric models, it is cumbersome to use in supersymmetric models, as it explicitly breaks supersymmetry [@Delbourgo:1974az]. For supersymmetric models regularization by dimensional reduction (DRED) [@Siegel:1979wq] is more suited, because it is currently known to not break supersymmetry up to the three-loop level [@Capper:1979ns; @Stockinger:2005gx; @Stockinger:2018oxe]. In DRED the quasi-$4$-dimensional space, denoted as $Q4S$, is decomposed into a quasi-$d$-dimensional space $QdS$ and a quasi-${\epsilon}$-dimensional space $Q{\epsilon}S$, as $Q4S=QdS\oplus Q{\epsilon}S$ [@Stockinger:2005gx]. 
The corresponding $4$- and ${\epsilon}$-dimensional metrics are denoted as ${\ensuremath{\mathring{g}}}^{\mu\nu}$ and ${\ensuremath{\breve{g}}}^{\mu\nu}$, respectively, and the following properties hold: $$\begin{aligned} {\ensuremath{\mathring{g}}}^{\mu\nu} &= {\ensuremath{g}}^{\mu\nu} + {\ensuremath{\breve{g}}}^{\mu\nu}, \\ {\ensuremath{\breve{g}}}^{\mu}_\sigma {\ensuremath{\mathring{g}}}^{\sigma\nu} &= {\ensuremath{\breve{g}}}^{\mu\nu}, \\ {\ensuremath{g}}^{\mu}_\sigma {\ensuremath{\mathring{g}}}^{\sigma\nu} &= {\ensuremath{g}}^{\mu\nu}, \\ {\ensuremath{\mathring{g}}}^{\mu\nu} {\ensuremath{\mathring{g}}}_{\mu\nu} &= 4, \\ {\ensuremath{g}}^{\mu\nu} {\ensuremath{g}}_{\mu\nu} &= d, \\ {\ensuremath{\breve{g}}}^{\mu\nu} {\ensuremath{\breve{g}}}_{\mu\nu} &= {\epsilon}, \\ {\ensuremath{\breve{g}}}^{\mu\nu} {\ensuremath{g}}_{\mu\nu} &= 0, \\ \operatorname{tr}(\gamma^\mu\gamma_\mu) &= 4d.\end{aligned}$$ [^1]: In principle the results obtained in this paper can also be applied to a setting where dimensional reduction is used as a regularization scheme, see [@Summ:2018oko]. [^2]: As discussed in [@Zhang:2016pja] and [section \[sec:results\_vectors\]]{}, our final expression for the UOLEA can also be used in a more general setting, including, for example, massive vector fields. [^3]: An explicit example is given in [section \[sec: gluinoOut\]]{} in the treatment of dimension 5 operators. [^4]: It was noted in [@Bagnaschi:2017xid] that the logarithmic term in the last line of eq. (D.4) in [@Drozd:2015rsp] should come with a minus sign. [^5]: In what follows we always write the whole series. In practice, however, we are only ever interested in a finite number of terms with all higher order terms being suppressed by higher powers of couplings.
--- abstract: 'We present the derivation of the normalization constant for the perturbation matrix method recently proposed. The method is tested on the problem of a binary waveguide array, for which an exact and an approximate solution are known. In our analysis, we show that to third order the normalized matrix method approximate solution gives results coinciding with the exact known solution.' author: - title: Normalization of the wavefunction obtained from perturbation theory based on a matrix method --- Introduction ============ In quantum mechanics the wavefunction plays an important role, since it contains all the relevant information about the dynamical behavior of a physical quantum system [@1; @2]. To determine the evolution of the wavefunction, one must solve the time-dependent Schrödinger equation [@3]; as a consequence, an exact analytic solution of this equation is of utmost importance to extract all the appropriate information about a system [@4; @5]. Nevertheless, the number of problems that can be solved analytically is limited [@6; @7; @8]; examples include the harmonic oscillator [@9], the finite depth potential well [@10] and the hydrogen atom [@11], among others. Under these circumstances, it is necessary to resort to methods, like perturbation theory, that allow one to obtain an approximate solution [@12]; one of these methods is the Rayleigh-Schrödinger perturbation theory, which has been widely used in many branches of physics with the main objective of finding solutions very close to the exact one [@13]. In spite of this, there are particular cases where the results obtained using the Rayleigh-Schrödinger perturbation theory show serious convergence problems [@14]; for instance, the quantum system of a charged particle hopping on an infinite linear chain driven by an electric field [@15].
This has prompted researchers to develop and implement new techniques to obtain better approximate solutions of the time-dependent Schrödinger equation.\ An alternative perturbative approach, the one we focus on throughout this work, is the Matrix Method [@16; @17]. [This scheme, based on the use of triangular matrices, allows one to solve the time-dependent Schrödinger equation approximately in an elegant and simple manner. The method has demonstrated that the corrections to the wavefunction and the energy can be contained in a single expression, unlike standard perturbation theory, where they must be calculated separately [@16; @17]. Moreover, the Matrix Method may also be used when one can find a unitary evolution operator for the unperturbed Hamiltonian but cannot find its eigenstates. ]{} [On the other hand, its approximate solutions present not only the conventional stationary terms, but also time-dependent factors, which allow us to follow the temporal evolution of the corrections. A remarkable feature is that the general expression used to compute them does not distinguish whether the Hamiltonian is degenerate or not [@16; @17]. Besides, the formalism offers an alternative way to express the Dyson series in matrix form. Furthermore, an extension of this mathematical analysis has been developed to obtain approximate solutions to the Lindblad-type master equation [@18]. Therefore, the Matrix Method possesses many attractive features that cannot be found in the conventional treatments of perturbation theory. However, it is worth noting that the perturbed solutions of the Schrödinger equation obtained with the Matrix Method are not normalized; therefore, the main goal of this work is to find the normalization factor, which gives us a complete perturbative description of the solutions.]{}\ The remainder of this work is organized as follows: In Section 2, we briefly review the Matrix Method described above.
In Section 3, making use of the formalism of the Matrix Method, we derive the general expression for the normalization constant to any order. In Section 4, we apply this approach to the particular problem of a binary waveguide array, which not only has a known exact analytic solution, but also an approximate solution given by the small rotation method. For this particular problem, we show that the field intensity distribution given by the normalized perturbative result for the first three waveguides behaves very similarly to the known exact solution and is considerably more accurate than the solution obtained by the small rotation method. Finally, conclusions are given in Section 5. The Matrix Method ================= The Matrix Method [@16; @17; @18] arises from the formal solution of the time-dependent Schrödinger equation $\left|\psi(t)\right \rangle=e^{-i t \hat{H}} \left|\psi(0) \right \rangle$, with the complete Hamiltonian $ \hat{H}= \hat H_0 +\lambda \hat{H_p}$ divided into an unperturbed part $\hat H_0$ and a perturbed part $\hat{H_p}$, where $\lambda $ is a perturbation parameter. The formal solution of the Schrödinger equation can be expanded in a Taylor series and sorted in powers of $\lambda$; for example, the expansion up to first order is $$\label{1} \left|\psi(t) \right \rangle= \left[ e^{-i \hat H_0 t} + \lambda \sum\limits_{n=1}^{\infty} \frac{(-it)^n}{n!} \sum\limits_{k=0}^{n-1} \hat H_0^{n-1-k} \hat{H_p} \hat{H_0}^{k}\right] \left| \psi(0)\right \rangle.$$ The key to simplifying the above expression and obtaining a solution is to consider the triangular matrix $$\label{2} M= \begin{pmatrix} \hat {H_0} & \hat{H_p} \\ 0 & \hat H_0 \\ \end{pmatrix},$$ whose diagonal elements are the unperturbed part of the Hamiltonian and whose upper off-diagonal element is the perturbation.
One can find that if we multiply the matrix $M$ by itself $n$ times, its upper off-diagonal element will contain exactly the same products of $\hat H_0$ and $\hat{H_p}$ defined within the summation in Eq.. In other words, the matrix element $M_{1,2}$ will give us the first-order correction; based on this consideration, we arrive at $$\label{3} \left|\psi(t) \right \rangle= \left[ e^{-i \hat H_0 t} + \lambda (e^{-i M t})_{1,2}\right] \left| \psi(0)\right \rangle,$$ where the first term corresponds to the zero-order solution and the second to the first-order correction. The above relation can be rewritten as $$\label{4} \left|\psi(t) \right \rangle= \left| \psi^{(0)} \right \rangle + \lambda \left( \left| \psi^{P} \right \rangle\right)_{1,2},$$ where $\left| \psi^{P} \right \rangle$ is a matrix defined as $$\label{5} \left| \psi^{P} \right \rangle= \begin{pmatrix} \left| \psi_{1,1} \right \rangle & \left| \psi_{1,2} \right \rangle \\ \left| \psi_{2,1} \right \rangle & \left| \psi_{2,2} \right \rangle \\ \end{pmatrix}.$$ The solution to first order can be determined if we differentiate the equations and with respect to time, equate the corresponding coefficients of $\lambda$ and perform the algebraic steps outlined in [@16; @17]; we obtain $$\label{6} \left|\psi_{1,2} \right \rangle = -i e^{-i \hat H_0 t}\left[\int\limits_0^t e^{i \hat H_0 t_1} \hat{H_{p}} e^{-i \hat H_0 t_1} dt_1 \right]\left|\psi(0) \right \rangle.$$ All the information for the second-order correction will be enclosed in the element $M_{1,3}$ of a newly defined $3 \times 3$ triangular matrix $M$, completely similar to . Thus, the Matrix Method allows us to transform the Taylor series of the formal solution of the time-dependent Schrödinger equation into a power series of the matrix $M$, which can be handled easily.
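In a finite-dimensional toy setting, the identification of $(e^{-iMt})_{1,2}$ with the first-order correction can be illustrated numerically; the sketch below uses random Hermitian matrices (sizes and parameter values chosen only for illustration, not tied to any particular system) and compares the exact evolution operator with the zero-order term plus the first-order block:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 4

def rand_herm(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

H0, Hp = rand_herm(dim), rand_herm(dim)
lam, t = 1e-3, 1.0

# block triangular matrix: H0 on the diagonal, Hp in the upper corner
M = np.block([[H0, Hp], [np.zeros((dim, dim)), H0]])
corr1 = expm(-1j * M * t)[:dim, dim:]      # the block (e^{-iMt})_{1,2}

exact = expm(-1j * (H0 + lam * Hp) * t)    # full evolution operator
approx = expm(-1j * H0 * t) + lam * corr1  # zero order + first correction

err = np.linalg.norm(exact - approx)       # residual, expected O(lam^2)
```

The residual is of order $\lambda^2$, far smaller than the error of the zero-order evolution alone, confirming that the upper-right block indeed carries the first-order correction.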
Likewise, this process allows us to find any $k$th-order correction in a simple and straightforward way through the following relation [@16; @17] $$\label{7} \left|\psi(t) \right \rangle = \left| \psi^{(0)} \right \rangle + \sum\limits_{j = 1}^{k} \lambda^j \left( \left| \psi^{P} \right \rangle\right)_{1,j+1},$$ with $$\label{8} \left| \psi^{P} \right \rangle= \begin{pmatrix} \left|\psi_{1,1} \right \rangle & \hdots & \left|\psi_{1,j+1} \right \rangle \\ \vdots & \ddots & \vdots \\ \left|\psi_{j+1,1}\right \rangle & \hdots & \left|\psi_{j+1,j+1} \right \rangle \end{pmatrix},$$ where the matrix element $\left|\psi_{1,j+1} \right \rangle$ is the relevant solution we are looking for, expressed in the form $$\label{9} \left|\psi_{1,j+1} \right \rangle = -i e^{-i \hat H_0 t}\int\limits_0^t e^{i \hat H_0 t_1} \hat{H_p} \Biggl[ -i e^{-i \hat H_0 t_1} \int\limits_0^{t_1 } e^{i \hat H_0 t_2}\hat{H_p} \Biggl[-i e^{-i \hat H_0 t_2} \int\limits_0^{t_2 } .....dt_3 \Biggr] dt_2\left|\psi(0) \right \rangle \Biggr] dt_1 .$$ This time-ordered series, restricted to the interval $ [0, t] $, is the fundamental piece used to calculate the different correction terms; furthermore, we should point out that the relationship is the mathematical representation of the Dyson series [@19; @20]. [ Although this latter expression applies only for weak perturbations, its strong perturbation counterpart ($\lambda \to \infty $) can be derived in a straightforward way by interchanging the unperturbed Hamiltonian $\hat{H_0}$ with the perturbation $\hat{H_p}$ and rescaling the time as $\tau=\lambda t$. This duality of the Matrix Method gives us the possibility of analyzing the solution of a quantum system in both regimes of the perturbative parameter.
However, in this work we restrict ourselves to weak perturbations, as in the example of Section 4.]{} Normalization constant ====================== In the previous section, we saw that the approximate solution of the Schrödinger equation can be written as a power series in the perturbative parameter $\lambda$, involving the elements $\left|\psi_{1,j+1} \right \rangle$ of the perturbed matrix. It should be mentioned that the expression is not normalized, and it is convenient to obtain a normalization factor $N_k$ that preserves its norm at any order. Let us then define the normalized solution $$\label{10} \left|\Psi(t) \right \rangle = N_k \left( \left| \psi^{(0)} \right \rangle + \sum\limits_{j = 1}^{k} \lambda^j \left| \psi_{1,j+1} \right \rangle \right),$$ where the corresponding value of $N_k$ may easily be determined from the normalization condition $\left\langle \Psi(t) | \Psi(t) \right\rangle= 1$ and can be expressed as follows $$\label{11} N_k=\left[ 1+ 2 \sum\limits_{j = 1}^{k} \lambda^j \Re \left( \left\langle \psi^{(0)} | \psi_{1,j+1} \right \rangle \right)+ \sum\limits_{m,j = 1}^{k}\lambda^{m+j} \left\langle \psi_{1,m+1} | \psi_{1,j+1} \right\rangle \right]^{-\frac{1}{2}},$$ where the first contribution is due to $ \left\langle \psi^{(0)} | \psi^{(0)} \right \rangle $, whereas the second arises from two finite sums, one involving the inner product of the zero-order term with the $j$th-order correction $\left\langle \psi^{(0)} | \psi_{1,j+1} \right \rangle$, and the other its complex conjugate $\left\langle \psi_{1,j+1} | \psi^{(0)} \right \rangle$; as a consequence, the sum of both over all $k$ yields a purely real contribution. The last part of the above equation is handled simply by running $m$ and $j$ from $1$ to $k$, as presented in Table \[tab:1\].
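As a numerical sanity check of the expression for $N_k$, one can build a perturbative state from random stand-in vectors and verify that the normalized state has unit norm (a sketch; the dimension, order and vectors below are illustrative placeholders for the actual corrections $|\psi_{1,j+1}\rangle$):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k, lam = 6, 3, 0.1

# normalized zero-order state and random stand-ins for the corrections,
# corr[j] playing the role of the order-(j+1) correction
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)
corr = [0.5 * (rng.normal(size=dim) + 1j * rng.normal(size=dim))
        for _ in range(k)]

# normalization factor built from the inner products, as in the text
cross = 2 * sum(lam**(j + 1) * np.vdot(psi0, corr[j]).real for j in range(k))
double = sum(lam**(m + j + 2) * np.vdot(corr[m], corr[j])
             for m in range(k) for j in range(k)).real
Nk = (1.0 + cross + double) ** -0.5

Psi = Nk * (psi0 + sum(lam**(j + 1) * corr[j] for j in range(k)))
norm = np.linalg.norm(Psi)                 # equals 1 up to roundoff
```

Note that the double sum is automatically real, since it equals the squared norm of $\sum_j \lambda^j |\psi_{1,j+1}\rangle$.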
It is easy to see that the double summation in can be split into two parts, one where $m=j$, which contains all the diagonal terms, and another for the off-diagonal terms, represented by a double sum of the real part of $ \left\langle \psi_{1,m+1} | \psi_{1,j+1} \right\rangle$, i.e. $$\label{12} \sum\limits_{m,j = 1}^{k}\lambda^{m+j} \left\langle \psi_{1,m+1} | \psi_{1,j+1} \right\rangle = \sum\limits_{n = 1}^{k}\lambda^{2n} \left\langle \psi_{1,n+1} | \psi_{1,n+1} \right\rangle + 2 \sum_{\substack{n=1\\ k>1}}^{k-1} \sum\limits_{m = n+1}^{k} \lambda^{n+m}\Re \left( \left\langle \psi_{1,n+1} | \psi_{1,m+1} \right\rangle \right);$$ applying the change of variable $m=p-n$ and substituting into Eq., we arrive at $$\label{13} \begin{split} N_k = \left[ 1 + 2 \sum\limits_{j = 1}^{k} \lambda^j \Re \left( \left\langle \psi^{(0)} | \psi_{1,j+1} \right \rangle \right) + \sum\limits_{n = 1}^{k}\lambda^{2n} \left\langle \psi_{1,n+1} | \psi_{1,n+1} \right\rangle \right. \\ \left. +2 \sum_{\substack{n=1\\ k>1}}^{k-1} \sum\limits_{p = 2n+1}^{n+k} \lambda^{p}\Re \left( \left\langle \psi_{1,n+1} | \psi_{1,p-n+1} \right\rangle \right) \right]^{-\frac{1}{2}}, \end{split}$$ which is the normalization constant for the approximate analytical solution of the Schrödinger equation defined in Eq.. In principle, the inclusion of the factor $N_k$ in our calculations can give a fairly good approximation to the solution without convergence difficulties. [An important remark on the proposed normalization procedure is that we have not invoked the usual intermediate normalization used in standard perturbation theory, i.e. the imposition $\left\langle \psi^{(0)} | \psi_{1,n+1} \right\rangle=0$ for all $\lambda$. Such a condition does not apply in our case, as can be seen clearly in Appendix B, where it is shown that the inner products of the zero-order term with the first two correction terms are different from zero.
In particular, these complex inner products have non-zero imaginary parts, which should not be neglected if the so-called intermediate normalization is applied; for this reason, we have adopted a different procedure to obtain the factor $ N_{k}$, one which ensures real values at any power of $\lambda$.]{} The Binary waveguide array as an example ======================================== In order to test the accuracy of our perturbative method, we apply it to a problem whose exact solution is known; in this case, a waveguide array. These optical structures have demonstrated their potential to emulate particular problems related to quantum mechanics, such as optical Bloch oscillations [@21], discrete spatial solitons [@22], quantum walks [@23], discrete Fourier transforms [@Leija] and parity-time symmetry [@24], to name a few. In particular, the linear behavior of light propagation in this kind of waveguide arrangement is usually governed by the infinite system of differential equations $$\label{14} i\frac{d \mathscr{E}_{n}}{d z}= \omega \left(-1 \right)^n \mathscr{E}_{n} + \alpha \left(\mathscr{E}_{n+1}+ \mathscr{E}_{n-1} \right), \qquad n=-\infty,...,-2,-1,0,1,2,...,\infty,$$ where $\mathscr{E}_{n}$ represents the amplitude of the light field confined in the $n$th waveguide, $z$ the longitudinal propagation distance, $2 \omega$ the propagation constant mismatch and $\alpha$ the hopping rate between two adjacent waveguides. Physically, equation describes the effective evanescent field coupling between nearest-neighbor waveguides.
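The coupled system above can also be explored numerically by truncating the infinite array to a finite number of sites; the sketch below (truncation size and propagation distance chosen arbitrarily for illustration) integrates the equations for a single-site excitation and checks that the total power $\sum_n |\mathscr{E}_n|^2$ is conserved, as it must be for a Hermitian coupling matrix:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, alpha = 0.9, 0.3
N = 41                                    # truncation of the infinite array
n = np.arange(N) - N // 2                 # site indices ..., -1, 0, 1, ...
parity = np.where(n % 2 == 0, 1.0, -1.0)  # (-1)^n

def rhs(z, E):
    # i dE_n/dz = omega (-1)^n E_n + alpha (E_{n+1} + E_{n-1})
    nb = np.roll(E, 1) + np.roll(E, -1)
    nb[0] -= E[-1]                        # undo the periodic wrap-around
    nb[-1] -= E[0]
    return -1j * (omega * parity * E + alpha * nb)

E0 = np.zeros(N, dtype=complex)
E0[N // 2] = 1.0                          # single excitation in guide n = 0
sol = solve_ivp(rhs, (0.0, 20.0), E0, rtol=1e-10, atol=1e-12)

power = np.abs(sol.y[:, -1])**2           # intensity per guide at z = 20
total = power.sum()                       # conserved total power
```

Since the initial condition and the couplings are symmetric under $n \to -n$, the integrated intensities are also mirror-symmetric about the central guide, which provides a second consistency check.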
Moreover, it has been demonstrated [@25] that this system can be associated with a Schrödinger-type equation $$\label{15} i \frac{d \left|\psi(z)\right \rangle}{d z}= \hat{H}\left|\psi(z)\right \rangle,$$ where $\hat{H}=\omega \left(-1\right)^{\hat{n}} + \alpha \left(\hat{V}+\hat{V}^\dagger \right)$, with $\hat{n}$ the number operator, $\left(-1\right)^{\hat{n}}$ the parity operator, and $\hat V$ and $\hat{V}^\dagger$ the ladder operators defined as $$\label{16} \hat V= \sum_{n=- \infty}^\infty \left|n \right \rangle \left\langle n+1 \right| , \quad \hat{V}^\dagger= \sum_{n=-\infty}^\infty \left|n+1 \right \rangle \left\langle n \right|.$$ Note that if the solution is written in terms of the orthonormal (Wannier) states [@Kenkre] as $\left|\psi(z)\right \rangle= \displaystyle\sum_{n=-\infty}^\infty \mathscr{E}_{n}(z) \left|n \right \rangle$, and if this proposal is substituted into Eq., the infinite system given by Eq. is recovered. In fact, it has been shown [@26] that the exact solution of this system is $$\label{17} \mathscr{E}_{n}(z)= \frac{1}{\pi} \int\limits_0^\pi {\cos \left( n\phi\right) \left\lbrace \cos[\Omega(\phi)z] -i[2\alpha \cos\phi + (-1)^n \omega]\frac{\sin[\Omega(\phi) z]}{\Omega(\phi)} \right\rbrace d\phi },$$ with $$\label{18} \Omega(\phi)=\sqrt{\omega^2 + 4 \alpha^2 \cos^2\phi }.$$ Although the above equation represents the exact solution for the amplitude of the light field, $\mathscr{E}_{n}= \left\langle n |\psi(z) \right \rangle $, a useful alternative approximate solution is derived in [@26]; under the condition $\alpha \ll \omega $, and performing the unitary transformation $ \tiny {\hat{R} =\exp{\left[ \frac{\alpha}{2\omega}\left( -1\right)^{\hat{n}} \left(\hat{V}+\hat{V}^\dagger \right)\right] }}$, the following solution is found, $$\label{19} \mathscr{E}_{n}=\left(-1 \right)^\frac{n(n-1)}{2} \sum\limits_{r =- \infty}^{\infty} \sum\limits_{s =- \infty}^{\infty} \left(-1 \right)^{s r} e^{-i \left(-1 \right)^s
\left(\frac{\omega^2+\alpha^2}{\omega} \right)z}i^{r} \times J_r\left(\frac{\alpha^2}{\omega} z \right) J_s\left(\frac{\alpha}{\omega} z \right) J_{n+2r+s}\left(\frac{\alpha}{\omega} z \right),$$ where $J_n(z)$ are the Bessel functions of the first kind [@25]; in fact, the unitary transformation $ \small{\hat{R}}$ constitutes the small rotation approximation [@27].\ Let us now solve the Schrödinger-type equation with the formalism presented in Section 2. In the first place, we need to regard the variable $ z $ as the time; this is intuitively reasonable if we want to describe the optical field propagation along the waveguide array in evolutionary terms. Under this assumption, and considering that $\omega \left(-1\right)^{\hat{n}} $ is the unperturbed part, $\hat{V}+\hat{V}^\dagger $ the perturbation and $\alpha $ the perturbation parameter, one readily obtains from Eq. $\eqref{9}$, $$\label{20} \left|\psi_{1,2} \right \rangle = -i \frac{\sin(\omega z) }{\omega} \left( \hat{V} + \hat{V}^\dagger \right) \left|m \right \rangle.$$ Since the problem is linear, we have considered the initial condition $\left|\psi(0)\right \rangle= \left|m\right \rangle $, which corresponds to a single excitation in the $m$th guide. The action of the ladder operators on this state leads to the first-order solution $$\label{21} \left|\Psi(z) \right \rangle = e^{-i \omega (-1)^{m} z} N_1 \left|m \right \rangle -i \frac{\alpha\sin(\omega z)}{\omega} N_1 \left( \left|m-1 \right \rangle + \left|m+1 \right \rangle \right),$$ with $$\label{22} N_1= \left\lbrace 1+ 2 \left[ \frac{\alpha \sin(\omega z)}{\omega}\right]^2 \right\rbrace ^{-1/2}.$$ All these derivations are presented in detail in Appendix A. Now, if we consider the case in which the light field is launched into the first site of the waveguide array, i.e.
$\left|m \right \rangle= \left|0 \right \rangle $ and $\mathscr{E}_{n}\left( z\right) = \left\langle n |\Psi(z) \right \rangle$, we get $$\label{23} \mathscr{E}_n(z) = e^{-i \omega z} N_1 \delta_{n,0} -i \frac{\alpha \sin(\omega z)}{\omega} N_1 \left( \delta_{n,-1} + \delta_{n,1} \right).$$ Note that this equation describes the propagation of the electromagnetic field towards either the left side or the right side of the photonic waveguide array, but since this array is symmetric and infinite, we can simplify the analysis by considering only the positive values of $n$. Therefore, the first-order solution is reduced to $$\label{24} \mathscr{E}_n(z) = e^{-i \omega z} N_1 \delta_{n,0} -i \frac{\alpha\sin(\omega z)}{\omega} N_1 \delta_{n,1},$$ and satisfies the same initial conditions as those reported in the literature [@26].\ The second-order term can be calculated using Eq. $\eqref{9}$ again, $$\label{25} \left|\psi_{1,3} \right \rangle = i \frac{(-1)^m \cos(\omega z)}{2 \omega^2} A(z)\left( \hat{V} + \hat{V}^\dagger \right)^2 \left|m \right \rangle,$$ where the function $A(z)$ is defined by $$\label{26} A(z)=\tan \left( \omega z \right) \left[1+ iz\omega (-1)^m \right]-z\omega.$$ Thus, we can write the solution up to second order as $$\begin{aligned} \label{27} \left|\Psi(z) \right \rangle =& \cos\left( {\omega z}\right) \left\{1 + i (-1)^m \left[ \frac{\alpha^2}{\omega^2} A(z)-\tan\left( {\omega z}\right) \right] \right\} N_2 \left|m \right \rangle - i \frac{\alpha\sin\left( \omega z\right) }{\omega} N_2 \left( \left|m-1 \right \rangle + \left|m+1 \right \rangle \right) \nonumber \\ & + i \frac{ \alpha^2 (-1)^m A(z) \cos\left(\omega z\right) }{2 \omega^2} N_2 \left( \left|m-2 \right \rangle + \left|m+2 \right \rangle \right),\end{aligned}$$ with its corresponding normalization constant $$\label{28} N_2=\left\{ 1+\frac{3}{2} \left[ \frac{\alpha^2 \cos\left( \omega z\right) \, || A(z)|| }{\omega^2} \right]^2 \right\}^{-1/2}.$$ Now, assuming that the light is
injected into the first guide, $m=0$, and considering the symmetry of the photonic array, we arrive at the reduced form of the field $$\begin{aligned} \label{29} \mathscr{E}_n (z) =& \cos(\omega z) \left\lbrace1 + i \left[ \frac{\alpha^2}{\omega^2} A(z)-\tan(\omega z) \right] \right\rbrace N_2 \delta_{n,0} -i \frac{\alpha\sin(\omega z)}{\omega}N_2 \delta_{n,1} \nonumber \\ & + i \frac{ \alpha^2 A(z) \cos(\omega z) }{2 \omega^2} N_2 \delta_{n,2}.\end{aligned}$$ The third-order term can be easily derived by repeating the same steps used above for the first two corrections, $$\label{30} \left|\psi_{1,4} \right \rangle =\frac{ i}{2 \omega^3} \cos(\omega z) \left[ \tan(\omega z) - z\omega \right] \left( \hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle,$$ and the complete solution is given by $$\begin{aligned} \label{31} \left|\Psi(z) \right \rangle &= \left\lbrace e^{-i\omega (-1)^m z} - \frac{\alpha^2 }{\omega^2} (-1)^m \left[ z\omega (-1)^m \sin(\omega z) -i B(z) \right] \right\rbrace N_3 \left|m \right \rangle \nonumber \\ & + i \left[ \frac{3 \alpha^3 B(z)}{2\omega^3 } -\frac{\alpha \sin(\omega z) }{\omega} \right] N_3 \left( \left|m-1 \right \rangle + \left|m+1 \right \rangle \right) \nonumber \\ & - \frac{\alpha^2 }{2 \omega^2} (-1)^m \left[ z\omega (-1)^m \sin(\omega z) - i B(z) \right] N_3 \left( \left|m-2 \right \rangle + \left|m+2 \right \rangle \right) \nonumber \\ & + \frac{ i\alpha^3}{2 \omega^3} B(z) N_3 \left( \left|m-3 \right \rangle + \left|m+3 \right \rangle \right),\end{aligned}$$ with $$\label{32} B(z)=\cos(\omega z) \left[ \tan(\omega z) -z\omega \right]$$ and $$\label{33} N_3=\left\lbrace 1+ \frac{3}{2} \left(\frac{\alpha}{\omega} \right)^4 \left[ B(z)^2 -4 B(z) \sin(\omega z) + z^2 {\omega}^2 \sin^2(\omega z) \right]+ 5 \left( \frac{\alpha}{\omega} \right)^6 B(z)^2\right\rbrace^{-1/2}.$$ In this case, the third-order solution for the amplitude of the electric field on the waveguide array is given by $$\begin{aligned} \label{34}
\mathscr{E}_n(z) =& \left\lbrace e^{-i\omega z} - \frac{\alpha^2 }{\omega^2} \left[ z\omega \sin(\omega z) -i B(z) \right] \right\rbrace N_3 \delta_{n,0} + i \left[ \frac{3 \alpha^3 B(z)}{2\omega^3 } -\frac{\alpha \sin(\omega z) }{\omega} \right] N_3 \delta_{n,1} \nonumber \\ & - \frac{\alpha^2 }{2 \omega^2} \left[ z\omega \sin(\omega z) - i B(z) \right]N_3 \delta_{n,2} + \frac{ i\alpha^3}{2 \omega^3} B(z) N_3 \delta_{n,3}.\end{aligned}$$ In order to illustrate the high degree of accuracy that can be achieved with the third-order correction for the amplitude of the electric field, a numerical comparison of this perturbative solution with the exact solution and with the small rotation solution is given in Figs. \[fig:1\] and \[fig:2\], using the parameters $\omega=0.9$ and $n=0,1,2$. In these figures, we present the intensity distribution $I(z)=|\mathscr{E}_n|^2 $ for the first three guides considering two values of the perturbation parameter, $\alpha=0.1 $ and $\alpha=0.3 $. It is noteworthy that for $\alpha=0.1 $ the approximate solution agrees with the exact solution even over large propagation distances; for $\alpha=0.3$ both solutions are very similar only for short distances, but we still obtain a good approximation. Moreover, for these two values of $\alpha$, the third-order correction proves to be more accurate than the small rotation approximation. [Fig. 1: intensity $I(z)$ in panels (a) Guide 1, (b) Guide 2, (c) Guide 3.] [Fig. 2: intensity $I(z)$ in panels (a) Guide 1, (b) Guide 2, (c) Guide 3.] Conclusions =========== In summary, we have successfully obtained the normalization constant of Eq. which complements the theoretical analysis of the Matrix Method. The perturbative solutions of the equations that describe the binary waveguide array, obtained by applying this method, are highly accurate.
Furthermore, it is shown that the third-order approximate solution closely matches the known exact solution, not only for small values of the perturbative parameter $\alpha$, but also for large values; in fact, the real measure of the perturbation is the product $\alpha z$. On the other hand, the improvement of this method with respect to the results reported for the small rotation method becomes evident. Therefore, the assessment of higher-order terms can give us a reliable solution for the system described here. Acknowledgments {#acknowledgments .unnumbered} =============== B.M. Villegas-Martínez thanks the support given by the National Council on Science and Technology (CONACYT). Appendix {#appendix .unnumbered} ======== From Eq. $\eqref{10}$, we can write the analytic solution to first-order as $$\label{35} \left|\Psi(z) \right \rangle = N_1 \left| \psi^{(0)} \right \rangle + \alpha N_1 \left| \psi_{1,2} \right \rangle.$$ The zero-order solution is trivial, $\left| \psi^{(0)} \right \rangle= e^{-i \omega (-1)^{\hat{n}} z} \left|m \right \rangle $. The first-order correction, $\left| \psi_{1,2} \right \rangle $, requires the use of Eq.$\eqref{9}$, together with the conditions established in Sec. 4, and gives $$\label{36} \left|\psi_{1,2} \right \rangle =-i e^{-i \omega (-1)^{\hat{n}} z}\int\limits_0^z e^{i \omega (-1)^{\hat{n}} z_1} \left( \hat{V} + \hat{V}^\dagger \right) e^{-i \omega (-1)^{\hat{n}} z_1} \left|m \right \rangle dz_1 .$$ We expand in a Taylor series the product of operators inside the integral as $$\label{37} e^{i \omega (-1)^{\hat{n}} z_1} \left( \hat{V} + \hat{V}^\dagger \right) e^{-i \omega (-1)^{\hat{n}} z_1} = \sum\limits_{l,r = 0}^{\infty} \frac{\left(-1\right)^r \left(i \omega z_1 \right)^{l+r}}{l!
r!} \left(-1 \right)^{l \hat{n}} \left( \hat{V} + \hat{V}^\dagger \right) \left(-1 \right)^{r \hat{n}} ,$$ but $$\begin{aligned} \left(-1 \right)^{l \hat{n}} \left(\hat{V} + \hat{V}^\dagger \right) \left(-1 \right)^{r \hat{n}}\left|m \right \rangle &= \left(-1 \right)^{r m} \left(-1 \right)^{l \hat{n}} \left( \left|m-1 \right \rangle + \left|m+1 \right \rangle \right) \\ &=\left(-1 \right)^{l} \left(-1 \right)^{(r+l)m} \left( \hat{V} + \hat{V}^\dagger \right) \left|m \right \rangle,\end{aligned}$$ then $$\begin{aligned} e^{i \omega (-1)^{\hat{n}} z_1} \left( \hat{V} + \hat{V}^\dagger \right) e^{-i \omega (-1)^{\hat{n}} z_1} & = \sum\limits_{l,r = 0}^{\infty} \frac{ \left[-i \omega \left(-1 \right)^m z_1 \right]^{l+r}}{l! r!} \left(\hat{V} + \hat{V}^\dagger \right)\\ & = e^{-2 i \omega (-1)^{m} z_1} \left(\hat{V} + \hat{V}^\dagger \right).\end{aligned}$$ The substitution of the previous equation into gives us $$\begin{aligned} \label{38} \left|\psi_{1,2} \right \rangle &=-i e^{-i \omega (-1)^{\hat{n}} z}\int\limits_0^z e^{-2 i \omega (-1)^{m} z_1} dz_1 \left( \hat{V} + \hat{V}^\dagger \right) \left|m \right \rangle \nonumber \\ &= -i \frac{ e^{-i \omega (-1)^{m} z} \sin\left[\omega z (-1)^m\right] }{ \omega (-1)^{m}} e^{-i \omega (-1)^{\hat{n}} z} \left( \hat{V} + \hat{V}^\dagger \right) \left|m \right \rangle.\end{aligned}$$ As the sine is an odd function and $(-1)^{\hat{n}} ( \hat{V} + \hat{V}^\dagger )= ( \hat{V} + \hat{V}^\dagger)(-1)^{\hat{n}+1}$, we arrive at the solution $$\label{39} \left|\psi_{1,2} \right \rangle = -i \frac{\sin(\omega z)}{ \omega} \left( \hat{V} + \hat{V}^\dagger \right) \left|m \right \rangle.$$ Thus, we can build the normalized first-order solution as $$\label{40} \left|\Psi(z) \right \rangle = e^{-i \omega (-1)^{m} z} N_1 \left|m \right \rangle -i \frac{\alpha\sin(\omega z)}{\omega} N_1 \left( \hat{V} + \hat{V}^\dagger \right)\left|m \right \rangle.$$ The normalization constant $N_1$ is obtained by considering $k=1$ in Eq.$\eqref{13}
$, $$\label{41} N_1=\left\lbrace 1+ 2 \alpha \Re \left( \left\langle \psi^{(0)} | \psi_{1,2} \right\rangle \right) + \alpha^2 \left\langle \psi_{1,2} | \psi_{1,2} \right\rangle \right\rbrace^{-1/2};$$ in this problem the odd powers of the perturbative parameter $\alpha$ do not contribute to the normalization constant since $\left\langle m \right|( \hat{V} + \hat{V}^\dagger )^{2n+1} \left|m \right \rangle=0$. Thus, the inner product of $\left|\psi_{1,2} \right \rangle$ with itself is $$\label{42} \left\langle \psi_{1,2} | \psi_{1,2} \right\rangle = \left[ \frac{\sin(\omega z)}{ \omega} \right]^2 \left\langle m \right|( \hat{V} + \hat{V}^\dagger )^2 \left|m \right \rangle= 2 \left[ \frac{\sin(\omega z)}{ \omega} \right]^2.$$ The second-order correction can be obtained again from Eq.$\eqref{10}$, $$\label{43} \left|\Psi(z) \right \rangle = N_2 \left| \psi^{(0)} \right \rangle + \alpha N_2 \left| \psi_{1,2} \right \rangle + \alpha^2 N_2 \left| \psi_{1,3} \right \rangle.$$ Through the application of Eq.$\eqref{9}$, we compute the second-order term $\left| \psi_{1,3} \right \rangle$ as $$\label{44} \left|\psi_{1,3} \right \rangle = - \frac{ e^{-i \omega (-1)^{m} z}}{\omega} \int\limits_0^z e^{ i \omega (-1)^{m} z_1} \sin(\omega z_1) d{z_1} \left( \hat{V} + \hat{V}^\dagger \right)^2 \left|m \right \rangle.$$ It is easy to see that $$\label{45} \int\limits_0^z e^{ i \omega (-1)^{m} z_1} \sin(\omega z_1) d{z_1} = \frac{1-\cos(2 \omega z)}{4\omega} + i (-1)^m \left[ \frac{z}{2}-\frac{\sin(2 \omega z)}{4\omega} \right] ;$$ substituting this last result into Eq.
and after some algebra, we get $$\label{46} \left|\psi_{1,3} \right \rangle = i \frac{ (-1)^m \cos(\omega z)}{2 \omega^2} A(z) \left( \hat{V} + \hat{V}^\dagger \right)^2 \left|m \right \rangle,$$ with $$\label{47} A(z)=\tan \left( \omega z \right) \left[1+ iz\omega (-1)^m \right]-z\omega.$$ Thus, the second-order solution of the wave function is $$\begin{aligned} \label{48} \left|\Psi(z) \right \rangle=& e^{-i \omega (-1)^m z} N_2 \left|m \right \rangle - i \frac{\alpha\sin(\omega z) }{\omega}\left( \hat{V} + \hat{V}^\dagger \right) N_2 \left|m \right \rangle \nonumber \\ & + i \frac{ \alpha^2 (-1)^m A(z) \cos(\omega z) }{2 \omega^2} \left( \hat{V} + \hat{V}^\dagger \right)^2 N_2 \left|m \right \rangle.\end{aligned}$$ Using this information and carrying out the sums in Eq. for $k=2$, the normalization constant is obtained: $$\begin{aligned} \label{49} N_2&=\left\lbrace 1+ \alpha^2 \left[ 2\Re \left( \left\langle \psi^{(0)} | \psi_{1,3} \right \rangle \right) +\left\langle \psi_{1,2} | \psi_{1,2} \right\rangle \right]+ \alpha^4 \left\langle \psi_{1,3} | \psi_{1,3} \right\rangle \right\rbrace ^{-1/2} \nonumber \\ &=\left\{ 1+\frac{3}{2} \left[ \frac{\alpha^2 \cos\left( \omega z\right) \, \left| A(z) \right| }{\omega^2} \right]^2 \right\}^{-1/2},\end{aligned}$$ where the terms that contain $\alpha^2$ sum to zero.
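Before proceeding to third order, the constants $N_1$ and $N_2$ obtained so far can be checked numerically. The sketch below (the function names, the value $m=0$, and the parameter values are our illustrative choices, not from the text) builds the state coefficients of the normalized first- and second-order solutions and verifies that each truncated state has unit norm.

```python
import cmath
import math

def first_order_state(alpha, omega, z):
    """Coefficients (c0, c1) of |0> and |+-1> for the normalized
    first-order solution with m = 0, using
    N1 = [1 + 2 (alpha sin(wz)/w)^2]^(-1/2)."""
    s = math.sin(omega * z)
    n1 = (1.0 + 2.0 * (alpha * s / omega) ** 2) ** -0.5
    c0 = cmath.exp(-1j * omega * z) * n1
    c1 = -1j * alpha * s / omega * n1
    return c0, c1

def second_order_state(alpha, omega, z):
    """Coefficients (c0, c1, c2) of |0>, |+-1>, |+-2> for the normalized
    second-order solution with m = 0, where A(z) = tan(wz)(1 + i wz) - wz."""
    wz = omega * z
    A = cmath.tan(wz) * (1.0 + 1j * wz) - wz
    n2 = (1.0 + 1.5 * (alpha ** 2 * math.cos(wz) * abs(A) / omega ** 2) ** 2) ** -0.5
    c0 = math.cos(wz) * (1.0 + 1j * (alpha ** 2 / omega ** 2 * A - math.tan(wz))) * n2
    c1 = -1j * alpha * math.sin(wz) / omega * n2
    c2 = 0.5j * alpha ** 2 * A * math.cos(wz) / omega ** 2 * n2
    return c0, c1, c2
```

Summing $|c_0|^2 + 2|c_1|^2$ (and $+\,2|c_2|^2$ at second order) over the symmetric guides gives exactly $1$, which confirms that the stated normalization constants absorb all surviving cross terms.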
For the next-order correction, we have $$\label{50} \left|\Psi(z) \right \rangle = N_3 \left| \psi^{(0)} \right \rangle + \alpha N_3 \left| \psi_{1,2} \right \rangle + \alpha^2 N_3 \left| \psi_{1,3} \right \rangle + \alpha^3 N_3 \left| \psi_{1,4} \right \rangle.$$ We get the third-order term from Eq., $$\begin{aligned} \label{51} \left|\psi_{1,4} \right \rangle & = \frac{(-1)^m}{2\omega^2} e^{-i \omega (-1)^{\hat{n}} z} \int\limits_0^z e^{ i \omega (-1)^{\hat{n}} z_1} \cos(\omega z_1) A(z_1)\left( \hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle d{z_1} \nonumber \\ &= \frac{i (-1)^m}{2\omega^2} e^{-i \omega (-1)^{m} z} \int\limits_0^z e^{-i \omega (-1)^{m} z_1} \sin(\omega z_1) d{z_1} \left( \hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle \nonumber \\ & \quad - \frac{i (-1)^m}{2\omega} e^{-i \omega (-1)^{m} z} \int\limits_0^z z_1 e^{ -2 i \omega (-1)^{m} z_1} d{z_1} \left(\hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle ,\end{aligned}$$ where we have used that $\cos(\omega z_1) A(z_1)=\sin(\omega z_1)-z_1\omega e^{-i \omega(-1)^m z_1} $ and that $(-1)^{\hat{n}} ( \hat{V} + \hat{V}^\dagger )^3= ( \hat{V} + \hat{V}^\dagger )^3 (-1)^{\hat{n}+1} $. The evaluation of the integrals via integration by parts leads to $$\begin{aligned} \label{52} \left|\psi_{1,4} \right \rangle &=\frac{i \cos(\omega z)}{2 \omega^3} \left[\tan(\omega z)-z\omega \right] \left( \hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle = \frac{i B(z)}{2 \omega^3} \left( \hat{V} + \hat{V}^\dagger \right)^3 \left|m \right \rangle;\end{aligned}$$ thus, becomes $$\begin{aligned} \label{53} \left|\Psi(z) \right \rangle& = e^{-i\omega (-1)^m z} N_3 \left|m \right \rangle - i \frac{\alpha \sin(\omega z) }{\omega} \left(\hat{V} + \hat{V}^\dagger \right) N_3 \left|m \right \rangle \nonumber \\ & - \frac{\alpha^2 }{2 \omega^2} (-1)^m \left[ z\omega (-1)^m \sin(\omega z) - i B(z)\right] \left( \hat{V} + \hat{V}^\dagger \right)^2 N_3 \left|m\right \rangle + \frac{ i\alpha^3}{2 \omega^3} B(z)\left( \hat{V} + \hat{V}^\dagger \right)^3 N_3 \left|m \right \rangle.
\end{aligned}$$ The application of the ladder operators to the initial state $\left|m \right \rangle$ gives us the result previously presented in Section 4. Finally, the normalization constant to this order is given by Eq., $$\label{54} N_3=\left\lbrace 1 + \alpha^4 \left[ 2 \Re\left( \left\langle \psi_{1,2} | \psi_{1,4} \right \rangle \right) + \left\langle \psi_{1,3} | \psi_{1,3} \right\rangle\right] + \alpha^6 \left\langle \psi_{1,4} | \psi_{1,4} \right \rangle \right\rbrace^{-1/2}.$$ An alternative procedure to obtain the same results without integration schemes consists of writing the completeness relation $\hat{I}= \sum\limits_{k} \left|k^{(0)} \right \rangle \left\langle k^{(0)} \right| $ in terms of the complete orthonormal set of eigenfunctions of the unperturbed Hamiltonian. If we insert this identity operator into Eq., together with the initial condition $\displaystyle \left| \psi(0) \right \rangle= \left|n^{(0)} \right \rangle $, we arrive at $$\begin{aligned} \label{55} \left|\psi_{1,2} \right \rangle & = -i t E_n^{(1)} e^{- i E^{(0)}_n t } \left|n^{(0)} \right \rangle \nonumber \\ &- 2i \sum\limits_{k \not = n} e^{- i \frac{t}{2} \left( E^{(0)}_{n} + E^{(0)}_k \right)} \frac{ \sin \left[ \frac{t}{2} \left( E^{(0)}_{n}- E^{(0)}_{k} \right)\right]} { {E^{(0)}_{n} - E^{(0)}_{k}}} \left\langle k^{(0)} \right|\hat{H}_{p}\left|n^{(0)} \right \rangle \left|k^{(0)} \right \rangle ,\end{aligned}$$ where the first-order correction is now expressed in terms of the eigenvalues of $\hat{H}_{0}$, including also the first-order energy correction written in the form $$\label{56} E_n^{(1)}=\left\langle n^{(0)} \right|\hat{H}_{p}\left|n^{(0)} \right \rangle.$$ The inner products in are easy to calculate, $$\begin{aligned} \label{57} \left\langle \psi_{1,2} | \psi_{1,2} \right \rangle &= t^2 E^{2(1)}_n + 4 \sum\limits_{k \not = n} \frac{\sin^2\left[ \frac{t}{2} \left( E^{(0)}_{n}- E^{(0)}_{k} \right) \right]}{\left( E^{(0)}_{n}- E^{(0)}_k \right)^2}\ \left\Vert H_{p_{kn}}
\right\Vert^2 ,\nonumber \\ \Re \left( \left\langle \psi^{(0)} | \psi_{1,2} \right \rangle \right)& =0 .\end{aligned}$$ In this case the normalization constant is given by $$\label{58} N_1=\left(1+ \lambda^2 \left\langle \psi_{1,2} | \psi_{1,2} \right\rangle \right)^{-\frac{1}{2}}.$$ The derivation for the second order is straightforward, but tedious, and gives $$\begin{aligned} \label{59} \left|\psi_{1,3} \right \rangle &= -i e^{-i\hat{H}_{0} t} \int\limits_0^{t} e^{i\hat{H}_{0} t_{1}} \hat{H}_{p} \Big\lbrace -i t_{1} E_n^{(1)} e^{- i E^{(0)}_n t_{1} }\left|n^{(0)} \right \rangle \nonumber \\ & - 2i \sum\limits_{k \not = n} e^{ - i \frac{t_{1}}{2} \left( E^{(0)}_{k} + E^{(0)}_n \right)} \frac{\sin \left[ \frac{t_{1}}{2}\left( E^{(0)}_{n}- E^{(0)}_k \right)\right]}{E^{(0)}_{n} - E^{(0)}_{k}} H_{p_{kn}} \left|k^{(0)} \right \rangle \Big\rbrace dt_{1},\end{aligned}$$ where the expression inside the curly brackets is the first-order correction. Employing again the identity operator $\hat{I}$ and after some algebraic manipulation, one gets $$\begin{aligned} \label{60} \left|\psi_{1,3} \right \rangle =& - e^{- i E^{(0)}_n t } \left( \frac{t^2}{2} E^{2(1)}_n + i t E_n^{(2)} \right) \left|n^{(0)} \right \rangle \nonumber \\ & + i t e^{- i E^{(0)}_n t } \sum\limits_{k \not = n} \frac{e^{-i \frac{t}{2} \left( E^{(0)}_{k} -E^{(0)}_n \right) } }{E^{(0)}_{n} -E^{(0)}_k} H_{p_{kk}} H_{p_{kn}} \left|k^{(0)} \right \rangle \nonumber\\ & - i t e^{- i E^{(0)}_n t } E_n^{(1)} \sum\limits_{k \not = n} \frac{ H_{p_{kn}}}{E^{(0)}_{n} -E^{(0)}_k} \left|k^{(0)} \right \rangle \nonumber \\ & - i e^{- i E^{(0)}_n t } E_n^{(1)} \sum\limits_{k \not = n} e^{-i \frac{t}{2} \left( E^{(0)}_{k} -E^{(0)}_n \right) } \frac{\sin\left[\frac{t}{2}\left( E^{(0)}_{k} -E^{(0)}_n \right)\right] }{ \left( E^{(0)}_{n} -E^{(0)}_k \right)^2} H_{p_{kn}} \left|k^{(0)} \right \rangle \nonumber\\ & - 2i e^{ -i \frac{t}{2} E^{(0)}_n } \sum\limits_{k \not = n} \sum\limits_{q \not = n} e^{-i \frac{t}{2}
E^{(0)}_{q} } \frac{\sin\left[\frac{t}{2}\left( E^{(0)}_{q} -E^{(0)}_n \right)\right] }{({ E^{(0)}_{q} -E^{(0)}_{n}}) ( {E^{(0)}_{n} -E^{(0)}_{k}})} H_{p_{qk}} H_{p_{kn}} \left|q^{(0)} \right \rangle \nonumber\\ &+ 2i \sum\limits_{k \not = n} \sum\limits_{q \not = k} e^{-i \frac{t}{2} \left( E^{(0)}_{q} + E^{(0)}_k \right) } \frac{\sin\left[\frac{t}{2}\left( E^{(0)}_{q} -E^{(0)}_k \right)\right] }{(E^{(0)}_{q} -E^{(0)}_k) (E^{(0)}_{n} -E^{(0)}_k)} H_{p_{qk}} H_{p_{kn}} \left|q^{(0)} \right \rangle\end{aligned}$$ with $$\label{61} E_n^{(2)}=\sum\limits_{k \not = n} \frac{|H_{p_{k n}}|^2}{E^{(0)}_{n} -E^{(0)}_k}.$$ Considering $ k=2 $ in Eq., $$\begin{aligned} \label{62} N_2=&\left\lbrace 1 + \lambda^2 \left[2 \Re \left( \left\langle \psi^{(0)} | \psi_{1,3} \right\rangle \right)+ \left\langle \psi_{1,2} | \psi_{1,2} \right\rangle \right] \nonumber \right. \\ & \left. +2 \lambda^3 \Re \left( \left\langle \psi_{1,2} | \psi_{1,3} \right\rangle \right) + \lambda^4 \left\langle \psi_{1,3} | \psi_{1,3} \right\rangle \right\rbrace^{-\frac{1}{2}};\end{aligned}$$ doing the inner products, one can find $$\begin{aligned} \label{63} &2 \Re \left( \left\langle \psi^{(0)} | \psi_{1,3} \right\rangle \right)= - \left\langle \psi_{1,2} | \psi_{1,2} \right\rangle, \nonumber \\ &2 \Re \left( \left\langle \psi_{1,2} | \psi_{1,3} \right\rangle \right) = 2t^2 E_n^{(1)} E_n^{(2)}-4 t E_n^{(1)} \sum\limits_{k \not = n} \frac{ \sin^2 \left[ \frac{t}{2} \left( E^{(0)}_{k}- E^{(0)}_n \right)\right]}{ \left( E^{(0)}_{n} - E^{(0)}_{k}\right)^3} |H_{p_{kn}}|^2 \nonumber\\ &\qquad +4t \sum\limits_{k \not = n} \frac{ \cos\left[ \frac{t}{2} \left( E^{(0)}_{k}- E^{(0)}_n \right)\right]\sin \left[ \frac{t}{2} \left( E^{(0)}_{k}- E^{(0)}_n \right)\right]}{ \left( E^{(0)}_{n} - E^{(0)}_{k}\right)^2} |H_{p_{kn}}|^2 H_{p_{kk}} \nonumber\\ &\qquad + 4 \sum\limits_{k \not = n} \sum\limits_{m \not = n} \frac{ \sin^2 \left[ \frac{t}{2} \left( E^{(0)}_{m}- E^{(0)}_n \right)\right]}{ \left( E^{(0)}_{n} -
E^{(0)}_{m}\right)^2 \left( E^{(0)}_{n} - E^{(0)}_{k}\right)}H_{p_{nm}} H_{p_{mk}} H_{p_{kn}} \nonumber\\ &\qquad+ 4 \sum\limits_{k \not = n} \sum\limits_{m \not = n} \frac{ \cos\left[ \frac{t}{2} \left( E^{(0)}_{n}- E^{(0)}_k \right)\right]\sin \left[ \frac{t}{2} \left( E^{(0)}_{m}- E^{(0)}_n \right)\right]}{ \left( E^{(0)}_{n} - E^{(0)}_{k}\right)\left( E^{(0)}_{n} - E^{(0)}_{m}\right)\left( E^{(0)}_{m} - E^{(0)}_{k}\right)} \nonumber \\ & \qquad \qquad \times \sin \left[ \frac{t}{2} \left( E^{(0)}_{m}- E^{(0)}_k \right)\right] H_{p_{nm}} H_{p_{mk}} H_{p_{kn}}.\end{aligned}$$ Therefore, the normalization constant is $$\label{64} N_2=\left[ 1 + 2\lambda^3 \Re \left( \left\langle \psi_{1,2} | \psi_{1,3} \right\rangle \right) + \lambda^4 \left\langle \psi_{1,3} | \psi_{1,3} \right\rangle \right]^{-\frac{1}{2}}.$$
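As an independent check of the third-order waveguide formulas, one can propagate the array numerically and compare guide intensities. The sketch below assumes the underlying coupled-mode model is $i\,\mathrm{d}\mathscr{E}_n/\mathrm{d}z = \omega(-1)^n\mathscr{E}_n + \alpha(\mathscr{E}_{n-1}+\mathscr{E}_{n+1})$, which is consistent with the zero- and first-order solutions above; the lattice truncation, function names, and parameter values are our illustrative choices.

```python
import cmath
import math
import numpy as np

def third_order_field(alpha, omega, z):
    """Normalized third-order amplitudes (c0, c1, c2, c3) on guides
    n = 0..3 for injection at m = 0, with B(z) = sin(wz) - wz cos(wz)."""
    wz = omega * z
    s = math.sin(wz)
    B = s - wz * math.cos(wz)
    r = alpha / omega
    n3 = (1.0 + 1.5 * r ** 4 * (B ** 2 - 4.0 * B * s + wz ** 2 * s ** 2)
          + 5.0 * r ** 6 * B ** 2) ** -0.5
    g = alpha ** 2 / omega ** 2 * (wz * s - 1j * B)
    c0 = (cmath.exp(-1j * wz) - g) * n3
    c1 = 1j * (1.5 * alpha ** 3 * B / omega ** 3 - alpha * s / omega) * n3
    c2 = -0.5 * g * n3
    c3 = 0.5j * alpha ** 3 * B / omega ** 3 * n3
    return c0, c1, c2, c3

def exact_field(alpha, omega, z, N=25):
    """Spectral solution of i dE_n/dz = w(-1)^n E_n + a(E_{n-1} + E_{n+1})
    on a lattice truncated to |n| <= N; light injected at n = 0."""
    n = np.arange(-N, N + 1)
    H = (np.diag(omega * (-1.0) ** n)
         + alpha * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1)))
    lam, U = np.linalg.eigh(H)          # H is real symmetric
    E0 = np.zeros(2 * N + 1)
    E0[N] = 1.0
    return U @ (np.exp(-1j * lam * z) * (U.conj().T @ E0))
```

For a moderate product $\alpha z$ (e.g. $\alpha=0.1$, $\omega=0.9$, $z=2$), the truncated third-order state has unit norm to machine precision and its guide intensities lie within about a percent of the spectral (exact, up to lattice truncation) solution.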
--- abstract: 'This paper is devoted to pulse solutions in FitzHugh–Nagumo systems that are coupled parabolic equations with rapidly periodically oscillating coefficients. In the limit of vanishing periods, there arises a two-scale FitzHugh–Nagumo system, which qualitatively and quantitatively captures the dynamics of the original system. We prove existence and stability of pulses in the limit system and show their proximity on any finite time interval to pulse-like solutions of the original system.' author: - 'Pavel Gurevich[^1] [^2] and Sina Reichelt[^3]' bibliography: - 'Bib\_Sina.bib' title: | Pulses in FitzHugh–Nagumo systems\ with rapidly oscillating coefficients --- [**MSC 2010:** 35B40 , 37C29 , 37C75 , 37N25. ]{} [**Keywords:** traveling waves, pulse solutions, FitzHugh–Nagumo system, two-scale convergence, spectral decomposition, semigroups.]{} Introduction {#sec:intro} ============ The famous FitzHugh–Nagumo equations, first mentioned in [@NAY1962], model the pulse transmission in animal nerve axons. The fast, nonlinear elevation of the membrane voltage $u$ is diminished over time by a slower, linear recovery variable $v$. The activator $u$ and the inhibitor $v$ are the solutions of a nonlinear partial differential equation (PDE) coupled with a linear ordinary differential equation (ODE) \[eq:system-orig-sub\] $$\label{eq:system-orig} \tag{\ref{eq:system-orig-sub}.OG} \begin{aligned} u_t & = u_{xx} + f(u) - \alpha v , \\ v_t & = - bv + \beta u , \end{aligned}$$ where the nonlinearity is typically given by the cubic function $f(u) = u(1 - u)(u - a)$ for $a \in (0, 1)$. The other parameters usually satisfy $\alpha=1$ and $0 < b\leq \beta \ll 1$. The existence of traveling wave solutions, such as pulses and fronts, is well known for system , see e.g. [@McK1970; @Hast1976; @Carp1977; @Hast1982; @JKP1991; @AK2015] for pulses and [@Deng1991; @Szmo1991] for fronts.
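For orientation, the qualitative pulse dynamics described above can be reproduced with a simple explicit finite-difference scheme. The parameter values, grid, and initial condition below are our illustrative choices (not taken from the cited works): a super-threshold perturbation of the activator triggers an excitation that travels along the axis while the slow inhibitor eventually shuts it off behind the front.

```python
import numpy as np

def fhn_pulse(a=0.1, alpha=1.0, b=0.004, beta=0.008,
              L=120.0, dx=0.5, dt=0.05, t_end=80.0, x_probe=40.0):
    """Explicit finite-difference integration of the FitzHugh-Nagumo
    system u_t = u_xx + u(1-u)(u-a) - alpha*v, v_t = -b*v + beta*u,
    with homogeneous Neumann boundary conditions.  Returns the final
    fields and the largest activator value seen at the probe point."""
    x = np.arange(0.0, L + dx, dx)
    u = np.where(x < 5.0, 1.0, 0.0)   # super-threshold initial excitation
    v = np.zeros_like(u)
    probe = int(round(x_probe / dx))
    peak = 0.0
    for _ in range(int(round(t_end / dt))):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        lap[0] = 2.0 * (u[1] - u[0]) / dx ** 2       # Neumann at x = 0
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx ** 2    # Neumann at x = L
        f = u * (1.0 - u) * (u - a)
        u = u + dt * (lap + f - alpha * v)
        v = v + dt * (-b * v + beta * u)
        peak = max(peak, float(u[probe]))
    return u, v, peak
```

With these values the diffusion number $\Delta t/\Delta x^2 = 0.2$ keeps the explicit scheme stable, and the excitation reaches the probe at $x=40$ well before $t_{\mathrm{end}}$.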
We are mainly interested in pulse solutions and consider the following FitzHugh–Nagumo system with rapidly oscillating coefficients in space \[eq:system-eps-sub\] $$\label{eq:system-eps} \tag{\ref{eq:system-eps-sub}.S$_{\varepsilon}$} \begin{aligned} u^{\varepsilon}_t & = u^{\varepsilon}_{xx} + f(u^{\varepsilon}) - \alpha(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}, \\ v^{\varepsilon}_t & = \left( {\varepsilon}^2 d(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}_x \right)_x - b(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}+ \beta(\tfrac{x}{{\varepsilon}}) u^{\varepsilon}, \end{aligned}$$ where $x \in {\mathbb{R}}$ and $t>0$. All coefficients belong to the space ${\mathrm{L}}^\infty({\mathbb{S}})$ with ${\mathbb{S}}= {\mathbb{R}}/\mathbb{Z}$ being the periodicity cell, which means that they are $1$-periodic on ${\mathbb{R}}$. We imagine that these oscillations model heterogeneity within an excitable medium and ${\varepsilon}> 0$ is the characteristic length scale of the periodic microstructure. Moreover, in we allow for a small (slow) diffusion of the inhibitor $v^{\varepsilon}$, as it is also done in e.g. [@Szmo1991]. In this paper we study pulse-type solutions in system , including the case $d \equiv 0$. To the best of our knowledge, there are no results in the literature on the existence of *pulses* in FitzHugh–Nagumo systems with periodic coefficients. However, there exists an extensive literature on traveling *fronts* in reaction-diffusion equations with periodic data, see e.g. [@HuZi1995; @BeHa2002] for continuous periodic media, [@GuHa2006; @CGW2008] for discrete periodic media, and [@Xin2000] for a review and further references to earlier works. The article [@Hei2001] investigates front solutions in perforated domains for single equations as well as monotone systems. Most of these results are based on the maximum principle, which fails for the FitzHugh–Nagumo system.
In [@MSU2007] reaction-diffusion systems are studied and exponential averaging is used to show that traveling wave solutions can be described by a spatially homogeneous equation and exponentially small remainders. The existence of generalized (oscillating) traveling waves $u^{\varepsilon}(t,x) = {{\mathbf u}}(x + ct, \frac{x}{{\varepsilon}})$ and their convergence to a limiting wave $U(t,x) = {{\mathbf u}}_0 (x + ct)$ is proved for parabolic equations in [@BoMa2014]. In their approach, the authors reformulate the problem as a spatial dynamical system and use a centre manifold reduction. In *all* previous results the limit equation is always “one-scale”. Our approach to find pulses in the FitzHugh–Nagumo system is, first, to derive an effective system for vanishing ${\varepsilon}$ and, secondly, to study the existence of pulses in this new system. In the limit ${\varepsilon}\to 0$, we obtain the following two-scale system \[eq:system-lim-sub\] $$\label{eq:system-lim} \tag{\ref{eq:system-lim-sub}.S$_0$} \begin{aligned} U_t & = U_{xx} + f(U) - \int_0^1 \alpha(y) V(t,x,y) {\,\mathrm{d}}y , \\ V_t & = \left( d(y) V_y \right)_y - b(y) V + \beta(y) U , \end{aligned}$$ where $(x,y) \in {\mathbb{R}}\times {\mathbb{S}}$ and $t>0$. Notice that $U(t,x)$ only depends on the macroscopic scale $x\in{\mathbb{R}}$, whereas $V(t,x,y)$ also depends on the microscopic scale $y \in {\mathbb{S}}$. We prove that this system admits two-scale pulse solutions $({{\mathbf u}}(x + ct), {{\mathbf v}}(x + ct,y))$ under certain assumptions on the parameters $(\alpha, \beta, b, d)$. The main idea of the proof is to decompose $\alpha$ into a sum of eigenfunctions of the differential operator ${{\mathcal L}}V = \left( d(y) V_y \right)_y - b(y) V$ and to project the $V$-component onto the corresponding eigenspaces. 
These projections yield a *guiding system*, which is of the form and is known to possess a stable pulse solution, and a remaining *guided* part, for which we prove the existence and stability of a pulse solution. Moreover, we show that the two-scale pulse $({{\mathbf u}}, {{\mathbf v}})$ is exponentially stable if the pulse of the corresponding guiding system is exponentially stable. Furthermore, we show that solutions of the original system satisfy $$\begin{aligned} (u^{\varepsilon}(t,x) , v^{\varepsilon}(t,x)) = \left( {{\mathbf u}}(x + ct), {{\mathbf v}}(x + ct, \tfrac{x}{{\varepsilon}}) \right) + O({\varepsilon}) \quad\text{as } {\varepsilon}\to0\end{aligned}$$ for suitable initial conditions and finite times $t \leq T$. These pulse-type solutions have a profile with a periodic microstructure. In other words, the pulse (its inhibitor component) oscillates in time via ${{\mathbf v}}(z, \tfrac{z + ct}{{\varepsilon}})$. Since our approach yields an explicit relation between two-scale pulses and pulses from the guiding system, we are able to provide numerical examples for pulses in both systems, and . Interestingly, in one example, a pulse exists, although the microscopic average over ${\mathbb{S}}$ of the inhibitor ${{\mathbf v}}$ vanishes at every macroscopic point $x\in{\mathbb{R}}$. *This paper is structured as follows.* In Section \[sec:justify\] we derive the two-scale system and prove ${\mathrm{L}}^2$-error estimates for the difference between the solutions $(u^{\varepsilon}, v^{\varepsilon})$ and $(U,V)$ of and , respectively. Section \[subsec:pulse-exist\] is devoted to the existence of two-scale pulses $({{\mathbf u}}, {{\mathbf v}})$. The stability of these pulses is studied in Section \[subsec:stability\]. Finally, we provide three numerical examples in Section \[sec:numerics\]. 
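The spectral-decomposition idea underlying the guiding system can be sketched numerically: discretizing ${{\mathcal L}}V = \left( d(y) V_y \right)_y - b(y) V$ on the periodicity cell with periodic finite differences (in flux form) yields a symmetric matrix whose orthonormal eigenvectors allow one to project a given coupling coefficient $\alpha$ onto eigenmodes. The concrete coefficients below are illustrative, not those of the numerical examples in Section \[sec:numerics\].

```python
import numpy as np

def cell_operator(d, b, N=64):
    """Symmetric finite-difference matrix (flux form) for the cell
    operator L V = (d(y) V_y)_y - b(y) V on S = R/Z with N grid points."""
    h = 1.0 / N
    y = np.arange(N) * h
    d_half = d((y + 0.5 * h) % 1.0)   # diffusivity at the cell interfaces
    A = np.zeros((N, N))
    for j in range(N):
        jp = (j + 1) % N              # periodic wrap-around
        A[j, j] -= (d_half[j] + d_half[j - 1]) / h ** 2 + b(y[j])
        A[j, jp] += d_half[j] / h ** 2
        A[jp, j] += d_half[j] / h ** 2
    return y, A

# illustrative smooth cell coefficients with d >= d_* > 0 and min b = 1/2
d = lambda y: 0.1 * (1.0 + 0.5 * np.sin(2.0 * np.pi * y))
b = lambda y: 1.0 + 0.5 * np.cos(2.0 * np.pi * y)
y, A = cell_operator(d, b)
lam, phi = np.linalg.eigh(A)          # eigenvalues satisfy lam <= -min(b)
alpha_y = 1.0 + np.cos(2.0 * np.pi * y)
coeff = phi.T @ alpha_y               # projections of alpha onto eigenmodes
```

The discrete quadratic form $v^\top A v = -\sum_j d_{j+1/2}(v_{j+1}-v_j)^2/h^2 - \sum_j b_j v_j^2$ mirrors the continuous estimate, so all eigenvalues lie below $-\min b$, and the orthonormal eigenbasis reconstructs $\alpha$ exactly from its projections.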
Justification of the two-scale system {#sec:justify} ===================================== We aim to justify the two-scale FitzHugh–Nagumo system and derive error estimates for the difference of $(u^{\varepsilon},v^{\varepsilon})$ and $(U,V)$ being the solutions of the systems and , respectively. Since we do not know whether there exist pulses for the original system, arbitrary solutions to coupled parabolic equations are considered in this section. In order to compare the two inhibitors $v^{\varepsilon}(t,x)$ and $V(t,x,y)$, which depend on different variables, the *macroscopic reconstruction ${\mathcal{R}_{\varepsilon}}$* is defined via $$\begin{aligned} {\mathcal{R}_{\varepsilon}}: {\mathrm{L}}^2({\mathbb{R}}; {\mathrm{C}}^0({\mathbb{S}})) \to {\mathrm{L}}^2({\mathbb{R}}); \quad ({\mathcal{R}_{\varepsilon}}\Phi)(x) := \Phi(x, \tfrac{x}{{\varepsilon}}) .\end{aligned}$$ We require continuity with respect to at least one of the two variables $(x,y)$ such that the function $\Phi(x,\frac{x}{{\varepsilon}})$ is measurable on the null-set $\lbrace (x,\frac{x}{{\varepsilon}}) \,|\, x\in {\mathbb{R}}\rbrace \subset {\mathbb{R}}\times {\mathbb{S}}$. The operator ${\mathcal{R}_{\varepsilon}}: {\mathrm{C}}^0({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}})) \to {\mathrm{L}}^2({\mathbb{R}})$ is also well-defined, see e.g. [@LNW02] for more details on the regularity of two-scale functions. To derive quantitative error estimates, we postulate the following assumptions here and throughout the whole text. \[assump:coeff\] 1. \[assump:coeff1\] The coefficients satisfy $\alpha, \beta, b \in {\mathrm{L}}^\infty({\mathbb{S}})$ and $d \in {\mathrm{C}}^1({\mathbb{S}})$. Moreover, either $$\begin{aligned} & \text{(a)} \quad \exists\, d_*>0: \quad d(y) \geq d_* \quad \text{for all } y \in {\mathbb{S}}, \quad\text{or} \\ & \text{(b)} \quad d(y) \equiv 0 .\end{aligned}$$ 2. 
\[assump:coeff2\] The nonlinear function $f \in {\mathrm{C}}^1({\mathbb{R}})$ admits the growth conditions $$\begin{aligned} f(u) \geq c_1 u - c_2 \quad\text{if } u \leq 0 \quad\text{and}\quad f(u) \leq c_3 u + c_4 \quad\text{if } u \geq 0\end{aligned}$$ for some constants $c_1, c_2,c_3,c_4 \geq 0$. A prototype nonlinearity $f : {\mathbb{R}}\to {\mathbb{R}}$ that we have in mind is $$\begin{aligned} \label{eq:f} f(u) = u(1 - u)(u - a) \quad\text{with}\quad a \in ( 0,1 ) .\end{aligned}$$ Of course, our theory also applies to other bistable nonlinearities $f$ with similar properties. Before we derive error estimates, we make sure that unique classical solutions exist. Therefore, the differential operators ${{\mathcal L}}_2^{\varepsilon}: D({{\mathcal L}}_{\varepsilon}) \to {\mathrm{L}}^2({\mathbb{R}})$ and ${{\mathcal L}}_2^0 : D({{\mathcal L}}_0) \to {\mathrm{L}}^2({\mathbb{R}}\times{\mathbb{S}})$ are introduced via $$\begin{aligned} \begin{array}{ll} ({{\mathcal L}}_{\varepsilon}\varphi)(x) := \left( {\varepsilon}^2 d(\tfrac{x}{{\varepsilon}}) \varphi_x \right)_x - b(\tfrac{x}{{\varepsilon}}) \varphi ,\quad & D({{\mathcal L}}_{\varepsilon}) := \lbrace \varphi \in {\mathrm{L}}^2({\mathbb{R}}) \,|\, {{\mathcal L}}_{\varepsilon}\varphi \in {\mathrm{L}}^2({\mathbb{R}}) \rbrace , \\ ({{\mathcal L}}_0 \Phi)(x,y) := \left( d(y) \Phi_y \right)_y - b(y) \Phi ,\quad & D({{\mathcal L}}_0) := \lbrace \Phi \in {\mathrm{L}}^2({\mathbb{R}}\times{\mathbb{S}}) \,|\, {{\mathcal L}}_0 \Phi \in {\mathrm{L}}^2({\mathbb{R}}\times{\mathbb{S}}) \rbrace . 
\end{array}\end{aligned}$$ Notice that in case (a) of Assumption \[assump:coeff\].\[assump:coeff1\] with microscopic diffusion of the inhibitor, we have $D({{\mathcal L}}_{\varepsilon}) = {\mathrm{H}}^2({\mathbb{R}})$ and $D({{\mathcal L}}_0) = {\mathrm{L}}^2({\mathbb{R}}; {\mathrm{H}}^2({\mathbb{S}}))$; in case (b), $D({{\mathcal L}}_{\varepsilon}) = {\mathrm{L}}^2({\mathbb{R}})$ and $D({{\mathcal L}}_0) = {\mathrm{L}}^2({\mathbb{R}}\times{\mathbb{S}})$. With a slight abuse of notation, we identify the functions $U(t) \in {\mathrm{H}}^2({\mathbb{R}})$ and $U(t,x)$, etc. \[defin:class\_sol\] 1. We call $(u^{\varepsilon},v^{\varepsilon})$ a classical solution of system , if $(u^{\varepsilon},v^{\varepsilon})$ is continuous on $[0,T]$, continuously differentiable on $(0,T)$, satisfies $u^{\varepsilon}(t) \in {\mathrm{H}}^2({\mathbb{R}})$ and $v^{\varepsilon}(t) \in D({{\mathcal L}}_{\varepsilon})$ for $0<t<T$, and solves on $[0,T]$ the equations $$\begin{aligned} \begin{aligned} & u^{\varepsilon}_t = u^{\varepsilon}_{xx} + f(u^{\varepsilon}) - \alpha(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}, & \qquad u^{\varepsilon}(0) = u^{\varepsilon}_0 ,\\ & v^{\varepsilon}_t = {{\mathcal L}}_{\varepsilon}v^{\varepsilon}+ \beta(\tfrac{x}{{\varepsilon}}) u^{\varepsilon}, & \qquad v^{\varepsilon}(0) = v^{\varepsilon}_0 . \end{aligned}\end{aligned}$$ 2. We call $(U,V)$ a classical solution of system , if $(U,V)$ is continuous on $[0,T]$, continuously differentiable on $(0,T)$, satisfies $U(t) \in {\mathrm{H}}^2({\mathbb{R}})$ and $V(t) \in D({{\mathcal L}}_0)$ for $0<t<T$, and solves on $[0,T]$ the equations $$\begin{aligned} \begin{aligned} & U_t = U_{xx} + f(U) - \int_0^1 \alpha(y) V(t)(x,y) {\,\mathrm{d}}y , & \qquad U(0) = U_0 ,\\ & V_t = {{\mathcal L}}_0 V + \beta(y) U , & \qquad V(0) = V_0 .
\end{aligned}\end{aligned}$$ We will take initial data for $V$ in the two-scale space $$\begin{aligned} \mathbb{V}_{{{\mathcal L}}_0} := \lbrace \Phi \in D({{\mathcal L}}_0) \,|\, \Phi_x, \Phi_{xx} \in D({{\mathcal L}}_0) \rbrace \cap {\mathrm{L}}^\infty({\mathbb{R}}\times{\mathbb{S}}) .\end{aligned}$$ Notice that for $d > 0$, there holds $\mathbb{V}_{{{\mathcal L}}_0} = {\mathrm{H}}^2({\mathbb{R}}; {\mathrm{H}}^2({\mathbb{S}}))$ and all functions belonging to ${\mathrm{H}}^2({\mathbb{R}}; {\mathrm{H}}^2({\mathbb{S}}))$ are essentially bounded by the Sobolev embeddings ${\mathrm{H}}^2({\mathbb{R}}) \subset {\mathrm{L}}^\infty({\mathbb{R}})$ and ${\mathrm{H}}^2({\mathbb{S}}) \subset {\mathrm{L}}^\infty({\mathbb{S}})$. In contrast, for $d = 0$, we need the additional restriction to the set of bounded functions and $\mathbb{V}_{{{\mathcal L}}_0} = {\mathrm{H}}^2({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}})) \cap {\mathrm{L}}^\infty({\mathbb{R}}\times{\mathbb{S}})$. \[assump:initial-1\] 1. The two-scale initial conditions $(U_0,V_0)$ for system satisfy $U_0 \in {\mathrm{H}}^2({\mathbb{R}})$ and $V_0 \in \mathbb{V}_{{{\mathcal L}}_0}$. 2. \[assump:initial-1b\] The one-scale initial conditions $(u^{\varepsilon}_0 , v^{\varepsilon}_0)$ for system satisfy $u^{\varepsilon}_0 \in {\mathrm{H}}^2({\mathbb{R}})$ and $v^{\varepsilon}_0 \in D({{\mathcal L}}_{\varepsilon}) \cap {\mathrm{L}}^\infty({\mathbb{R}})$, and fulfill the estimate $$\begin{aligned} \exists\,C\geq 0 : \quad \Vert u^{\varepsilon}_0 - U_0 \Vert_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert v^{\varepsilon}_0 - {\mathcal{R}_{\varepsilon}}V_0 \Vert_{{\mathrm{L}}^2({\mathbb{R}})} \leq {\varepsilon}C \quad \text{for all } {\varepsilon}\in (0,1] .\end{aligned}$$ Notice that $V_0 \in {\mathrm{C}}^1({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}}))$, thanks to the Sobolev embedding ${\mathrm{H}}^2({\mathbb{R}}) \subset {\mathrm{C}}^1({\mathbb{R}})$, so that ${\mathcal{R}_{\varepsilon}}V_0$ is indeed well defined. 
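A minimal sketch of the macroscopic reconstruction operator ${\mathcal{R}_{\varepsilon}}$ (the two-scale sample function and all names below are our illustrative choices):

```python
import math

def reconstruct(Phi, eps):
    """Macroscopic reconstruction: (R_eps Phi)(x) = Phi(x, x/eps), with the
    fast variable taken modulo 1 since Phi(x, .) is 1-periodic on S."""
    return lambda x: Phi(x, (x / eps) % 1.0)

# illustrative two-scale function Phi(x, y) = sin(x) cos(2 pi y)
Phi = lambda x, y: math.sin(x) * math.cos(2.0 * math.pi * y)
v_eps = reconstruct(Phi, eps=0.25)
```

For instance, at $x = 0.625$ with ${\varepsilon}=0.25$ the fast variable is $x/{\varepsilon} = 2.5$, i.e. $y = 0.5$ on the cell, so the reconstructed value is $\sin(0.625)\cos(\pi) = -\sin(0.625)$.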
Under the above assumptions, we obtain the existence of classical solutions via the semigroup theory. \[thm:sol-exist\] Let Assumptions \[assump:coeff\] and \[assump:initial-1\] hold. Then the following is true. \(i) For every $T>0$ and ${\varepsilon}>0$, there exists a unique classical solution $(u^{\varepsilon},v^{\varepsilon})$ of system . Moreover, $$\begin{aligned} \label{eq:bound-eps} \Vert (u^{\varepsilon}, v^{\varepsilon}) \Vert_{{\mathrm{C}}^1( [0,T]; {\mathrm{L}}^2({\mathbb{R}}))} + \Vert ( u^{\varepsilon}_x, {\varepsilon}v^{\varepsilon}_x) \Vert_{{\mathrm{L}}^2((0,T)\times{\mathbb{R}})} + \Vert ( u^{\varepsilon}, v^{\varepsilon}) \Vert_{{\mathrm{L}}^\infty((0,T)\times{\mathbb{R}})} \leq C\end{aligned}$$ for some constant $C = C(T) > 0$ independent of ${\varepsilon}$. \(ii) For every $T>0$, there exists a unique classical solution $(U,V)$ of the two-scale system . In addition, the inhibitor satisfies $V \in C^0([0,T]; \mathbb{V}_{{{\mathcal L}}_0})$. For arbitrary $M >0$, we define the function $f_M : {\mathbb{R}}\to {\mathbb{R}}$ via $$\begin{aligned} f_M(u) := \left\lbrace \begin{array}{ll} f(-M) + f'(-M)(u+M) \quad & \text{if } u < -M , \\ f(u) & \text{if } |u| \leq M , \\ f(M) + f'(M)(u-M) & \text{if } u > M . \end{array}\right.\end{aligned}$$ Notice that $f_M \in {\mathrm{C}}^1({\mathbb{R}})$ is globally Lipschitz continuous. Then for every $T>0$, the existence of unique classical solutions $(u^{\varepsilon}_M, v^{\varepsilon}_M)$ and $(U_M, V_M)$ according to Definition \[defin:class\_sol\] follows from the semigroup theory, see e.g. [@Paz83 Sec. 6.1, Thm. 1.5]. The higher regularity $x \mapsto V(t,x,y) \in {\mathrm{H}}^2({\mathbb{R}})$ follows by taking finite differences as in [@Diss-SR Prop. 2.3.17].
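The truncation $f_M$ can be implemented directly; in the sketch below the linear extensions are written with matching value and slope at $\pm M$, so that $f_M$ is $C^1$ and globally Lipschitz. The value of $a$ and the choice $M=2$ are illustrative.

```python
def truncate(f, fprime, M):
    """C^1 truncation f_M: equals f on [-M, M] and continues linearly with
    matching value and slope outside, hence globally Lipschitz."""
    def f_M(u):
        if u < -M:
            return f(-M) + fprime(-M) * (u + M)
        if u > M:
            return f(M) + fprime(M) * (u - M)
        return f(u)
    return f_M

a = 0.25  # illustrative threshold of the cubic nonlinearity
f = lambda u: u * (1.0 - u) * (u - a)
fprime = lambda u: -3.0 * u ** 2 + 2.0 * (1.0 + a) * u - a
f_M = truncate(f, fprime, M=2.0)
```

By construction $f_M$ agrees with $f$ on $[-M, M]$ and is exactly linear beyond, which is what makes the semigroup existence argument for the truncated system applicable.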
According to Lemma \[lemma:max-bound\] and Remark \[rem:A1\], the solutions $u^{\varepsilon}_M, v^{\varepsilon}_M, U_M$ are bounded in ${\mathrm{C}}^0([0,T];{\mathrm{L}}^\infty({\mathbb{R}}))$ and $V_M$ in ${\mathrm{C}}^0([0,T]; {\mathrm{L}}^\infty({\mathbb{R}}\times{\mathbb{S}}))$ uniformly with respect to ${\varepsilon}$ and $M$. Hence, choosing $M$ larger than this uniform bound, the truncation in $f_M$ is never active along the solutions, and the result also holds for the unmodified function $f$. The upper bound for $\Vert (u^{\varepsilon}, v^{\varepsilon}) \Vert_{{\mathrm{C}}^1( [0,T]; {\mathrm{L}}^2({\mathbb{R}}))} + \Vert ( u^{\varepsilon}_x, {\varepsilon}v^{\varepsilon}_x) \Vert_{{\mathrm{L}}^2((0,T)\times{\mathbb{R}})}$ follows from testing the equations with the solution itself and applying Grönwall’s Lemma, see e.g. [@Diss-SR Sec. 2.1.2] or [@MRT14 Sec. 4.1]. The upper bound for $\Vert ( u^{\varepsilon}, v^{\varepsilon}) \Vert_{{\mathrm{L}}^\infty((0,T)\times{\mathbb{R}})}$ is immediate from Lemma \[lemma:max-bound\]. Finally, we prove error estimates for the difference of the original solution $(u^{\varepsilon}, v^{\varepsilon})$ and the effective solution $(U, {\mathcal{R}_{\varepsilon}}V)$, which justifies our investigation of the two-scale system in the next section. \[thm:error-est\] Let Assumptions \[assump:coeff\] and \[assump:initial-1\] hold. Moreover, let $(u^{\varepsilon},v^{\varepsilon})$ and $(U,V)$ denote classical solutions of the original system and the two-scale system , respectively. 
Then for every $T>0$, there exists a constant $C>0$ depending on $(U,V)$ but not on ${\varepsilon}$ such that $$\begin{aligned} \label{eq:est} &\Vert u^{\varepsilon}- U \Vert_{{\mathrm{C}}^0([0,T]; {\mathrm{L}}^2({\mathbb{R}}))} + \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert_{{\mathrm{C}}^0([0,T]; {\mathrm{L}}^2({\mathbb{R}}))} \leq {\varepsilon}C , \\ \label{eq:est-grad} & \Vert u^{\varepsilon}_x - U_x \Vert_{{\mathrm{L}}^2((0,T)\times{\mathbb{R}})} + \Vert {\varepsilon}v^{\varepsilon}_x - {\mathcal{R}_{\varepsilon}}V_y \Vert_{{\mathrm{L}}^2((0,T)\times{\mathbb{R}})} \leq {\varepsilon}C .\end{aligned}$$ For brevity, we write the coefficients as $\alpha_{\varepsilon}(x) := \alpha(\tfrac{x}{{\varepsilon}})$, etc. Subtracting the equations for $u^{\varepsilon}$ and $U$, respectively $v^{\varepsilon}$ and $V$, in and , testing with $u^{\varepsilon}- U$, respectively $v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V$, and integrating over ${\mathbb{R}}$ yields for all $t \in [0,T]$ $$\begin{aligned} \label{eq:error-1a} \int_{\mathbb{R}}(u^{\varepsilon}- U)_t (u^{\varepsilon}- U) {\,\mathrm{d}}x & = \int_{\mathbb{R}}\bigg\lbrace (u^{\varepsilon}- U)_{xx} (u^{\varepsilon}- U) + [f(u^{\varepsilon}) - f(U)](u^{\varepsilon}- U) \nonumber \\ & \hspace{4em} - \bigg[ \alpha_{\varepsilon}v^{\varepsilon}- \int_0^1 \alpha(y) V(t,x,y){\,\mathrm{d}}y \bigg] (u^{\varepsilon}- U) \bigg\rbrace {\,\mathrm{d}}x\end{aligned}$$ as well as $$\begin{aligned} \label{eq:error-2a} \int_{\mathbb{R}}(v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V)_t (v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V) {\,\mathrm{d}}x & = \int_{\mathbb{R}}\bigg\lbrace \big( ( {\varepsilon}^2 d_{\varepsilon}v^{\varepsilon}_x )_x - {\mathcal{R}_{\varepsilon}}[ (d V_y)_y] \big) (v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V) \nonumber \\ & \hspace{2em} - b_{\varepsilon}|v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V |^2 + \beta_{\varepsilon}(u^{\varepsilon}- U) (v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V ) 
\bigg\rbrace {\,\mathrm{d}}x .\end{aligned}$$ In case (a) of Assumption \[assump:coeff\].\[assump:coeff1\], using the relation ${\varepsilon}({\mathcal{R}_{\varepsilon}}V)_x = {\mathcal{R}_{\varepsilon}}({\varepsilon}V_x + V_y)$, we obtain $$\begin{aligned} & \left( {\varepsilon}^2 d_{\varepsilon}({\mathcal{R}_{\varepsilon}}V)_x \right)_x = {\mathcal{R}_{\varepsilon}}[( d V_y)_y ] + \Delta^{\varepsilon}, \\ & \Delta^{\varepsilon}:= {\varepsilon}{\mathcal{R}_{\varepsilon}}[(d V_y)_x] + {\varepsilon}{\mathcal{R}_{\varepsilon}}[(d V_x)_y] + {\varepsilon}^2 {\mathcal{R}_{\varepsilon}}[(d V_x)_x]\end{aligned}$$ with $d = d(y)$ and by Theorem \[thm:sol-exist\] (ii), we find the upper bound $$\begin{aligned} \label{eq:error-Delta} \Vert \Delta^{\varepsilon}\Vert_{{\mathrm{L}}^2({\mathbb{R}})} \leq {\varepsilon}C_1(t) \quad\text{with}\quad C_1(t) := \Vert d \Vert_{{\mathrm{C}}^1({\mathbb{S}})} \Vert V(t,\cdot,\cdot)\Vert_{{\mathrm{H}}^2({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}})) \cap {\mathrm{H}}^1({\mathbb{R}}; {\mathrm{H}}^1({\mathbb{S}}))} .\end{aligned}$$ In case (b) of Assumption \[assump:coeff\].\[assump:coeff1\], we set $C_1(t) : = 0$. 
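The commutator identity defining $\Delta^{\varepsilon}$ can be verified symbolically; a sketch with sympy, where the smooth $1$-periodic coefficient $d$ and the two-scale profile $V$ are illustrative assumptions:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
eps = sp.symbols('epsilon', positive=True)

# illustrative smooth 1-periodic coefficient and two-scale profile (assumptions)
d = 2 + sp.cos(2 * sp.pi * y)
V = sp.exp(-x**2) * sp.sin(2 * sp.pi * y)

# fold: (R_eps W)(x) = W(x, x/eps) for any two-scale expression W(x, y)
fold = lambda W: W.subs(y, x / eps)

# left-hand side: (eps^2 d_eps (R_eps V)_x)_x
lhs = sp.diff(eps**2 * fold(d) * sp.diff(fold(V), x), x)
# right-hand side: R_eps[(d V_y)_y] + Delta^eps
rhs = (fold(sp.diff(d * sp.diff(V, y), y))
       + eps * fold(sp.diff(d * sp.diff(V, y), x))
       + eps * fold(sp.diff(d * sp.diff(V, x), y))
       + eps**2 * fold(sp.diff(d * sp.diff(V, x), x)))

print(sp.simplify(sp.expand(lhs - rhs)) == 0)
```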
Applying partial integration with the boundary conditions $$\begin{aligned} \lim_{x \to\pm\infty} u^{\varepsilon}_x(t,x) = 0, \quad \lim_{x \to\pm\infty} v^{\varepsilon}_x(t,x) = 0, \quad \lim_{x \to\pm\infty} U_x(t,x) = 0, \quad \lim_{x\to\pm\infty} V_x(t,x,y) = 0 ,\end{aligned}$$ for all $t \in [0,T]$ and almost all $y \in {\mathbb{S}}$, and the chain rule $\frac12\frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \Vert u \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} = \int_{{\mathbb{R}}} \dot{u} u {\,\mathrm{d}}x$, we see that the two equations and take the form $$\begin{aligned} \label{eq:error-1b} \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \Vert u^{\varepsilon}- U \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} & = \int_{\mathbb{R}}\bigg\lbrace - |u^{\varepsilon}_x - U_x|^2 + [f(u^{\varepsilon}) - f(U)](u^{\varepsilon}- U) \nonumber \\ & \hspace{6em}- \bigg[ \alpha_{\varepsilon}v^{\varepsilon}- \int_0^1 \alpha(y) V(t,x,y){\,\mathrm{d}}y \bigg] (u^{\varepsilon}- U) \bigg\rbrace {\,\mathrm{d}}x\end{aligned}$$ as well as $$\begin{aligned} \label{eq:error-2b} \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} & = \int_{\mathbb{R}}\bigg\lbrace - {\varepsilon}^2 d_{\varepsilon}|v^{\varepsilon}_x - ({\mathcal{R}_{\varepsilon}}V)_x|^2 - \Delta^{\varepsilon}(v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V) \nonumber \\ & \hspace{5em} - b_{\varepsilon}|v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V |^2 + \beta_{\varepsilon}(u^{\varepsilon}- U) (v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V) \bigg\rbrace {\,\mathrm{d}}x .\end{aligned}$$ Applying Hölder’s and Young’s inequality gives $$\begin{aligned} \label{eq:error-Delta2} \int_{\mathbb{R}}\big| \Delta^{\varepsilon}(v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V) \big| {\,\mathrm{d}}x \leq \frac12 \Vert \Delta^{\varepsilon}\Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \frac12 \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} 
.\end{aligned}$$ According to Lemma \[lemma:eck\] we have for the dual norm $$\begin{aligned} \label{eq:error-meanVal} \begin{aligned} &\Vert {\mathcal{R}_{\varepsilon}}(\alpha V) - \textstyle\int_0^1 \alpha(y) V(t,x,y) {\,\mathrm{d}}y \Vert_{{\mathrm{H}}^1({\mathbb{R}})^*} \leq {\varepsilon}C_2(t) \\ & \text{with} \quad C_2(t) := \Vert \alpha V(t, \cdot, \cdot) \Vert_{{\mathrm{H}}^1({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}}))} . \end{aligned}\end{aligned}$$ Using ${\mathcal{R}_{\varepsilon}}(\alpha V) = \alpha_{\varepsilon}\,({\mathcal{R}_{\varepsilon}}V)$ and , we obtain with Hölder’s and Young’s inequalities $$\begin{aligned} \label{eq:error-meanVal2} & \left\vert \int_{\mathbb{R}}\left[ \alpha_{\varepsilon}v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}(\alpha V) + {\mathcal{R}_{\varepsilon}}(\alpha V) - \int_0^1 \alpha(y) V(t,x,y){\,\mathrm{d}}y \right] (u^{\varepsilon}- U) {\,\mathrm{d}}x \right\vert \nonumber \\ & \leq \frac12 \Vert \alpha \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})} \left( \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert u^{\varepsilon}- U \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) + {\varepsilon}^2 \frac12 (C_2(t))^2 + \frac12 \Vert u^{\varepsilon}- U \Vert^2_{{\mathrm{H}}^1({\mathbb{R}})} .\end{aligned}$$ Using the uniform ${\mathrm{L}}^\infty({\mathbb{R}})$-bound for $u^{\varepsilon}, U$ and arguing as in the proof of Theorem \[thm:sol-exist\], we may treat $f$ as globally Lipschitz continuous. 
Adding and , recalling that $d(y) \geq d_* > 0$, and using , , and , we arrive at $$\begin{aligned} \label{eq:error-3} & \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \left\lbrace \Vert u^{\varepsilon}- U \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right\rbrace + \Vert u^{\varepsilon}_x - U_x \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + {\varepsilon}^2 d_* \Vert v^{\varepsilon}_x - ({\mathcal{R}_{\varepsilon}}V)_x \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \nonumber \\ & \leq L(t) \left\lbrace \Vert u^{\varepsilon}- U\Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + {\varepsilon}^2 \big( (C_1(t))^2 + (C_2(t))^2 \big) \right\rbrace ,\end{aligned}$$ where $L(t)>0$ depends on the Lipschitz properties of $f$, the upper bound of $\Vert u^{\varepsilon}(t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \Vert U (t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})}$ in Lemma \[lemma:max-bound\] and Remark \[rem:A1\], as well as $\max \lbrace \Vert \alpha \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert \beta \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert b \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})} \rbrace$. Applying Grönwall’s Lemma with Assumption \[assump:initial-1\].\[assump:initial-1b\] for the initial conditions yields for all $t \in [0,T]$ $$\begin{aligned} \label{eq:error-4} \Vert u^{\varepsilon}(t) - U(t) \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert v^{\varepsilon}(t) - {\mathcal{R}_{\varepsilon}}V(t) \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \leq {\varepsilon}^2 C_3(t)e^{\int_0^t L(\tau) {\,\mathrm{d}}\tau} ,\end{aligned}$$ where $C_3(t) > 0$ is bounded on $[0,T]$ and independent of ${\varepsilon}$. Hence, estimate follows by bounding the right-hand side in at $t = T$ and taking the square root. Moreover, integrating over $[0,T]$ gives the gradient estimate . 
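The Grönwall step can be illustrated on the extremal case of the differential inequality, $y' = L\,(y + {\varepsilon}^2 c)$ with constant $L$ and $c$; these constants, the time horizon, and the step size are all illustrative assumptions. A forward-Euler iterate stays below the Grönwall bound:

```python
import numpy as np

# extremal case of the Groenwall inequality: y' = L*(y + eps^2 * c),
# with constant L, c (assumptions) and y(0) = 0
L, c, eps, y0 = 2.0, 1.0, 0.05, 0.0
T, n = 1.0, 20000
dt = T / n

y = y0
for _ in range(n):
    y += dt * L * (y + eps**2 * c)

# exact solution / Groenwall bound: y(T) = (y0 + eps^2*c) e^{LT} - eps^2*c;
# forward Euler satisfies (1 + L*dt)^n <= e^{LT}, so y stays below the bound
bound = (y0 + eps**2 * c) * np.exp(L * T) - eps**2 * c
print(0.0 < y <= bound)
```

In particular the discrete iterate, like the bound, is of size $O({\varepsilon}^2)$, matching the quadratic rate in the estimate above.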
Let us introduce the periodic unfolding operator ${{\mathcal T}}_{\varepsilon}: {\mathrm{L}}^2({\mathbb{R}}) \to {\mathrm{L}}^2({\mathbb{R}}\times {\mathbb{S}})$ following [@CDG02] $$({{\mathcal T}}_{\varepsilon}v)(x,y) : = v \left( {\varepsilon}[\tfrac{x}{{\varepsilon}}] + {\varepsilon}y \right) ,$$ where $[x] \in \mathbb{Z}$ denotes the integer part of $x\in{\mathbb{R}}$. Noting that $({{\mathcal T}}_{\varepsilon}{\mathcal{R}_{\varepsilon}}V)(x,y) = V({\varepsilon}[\tfrac{x}{{\varepsilon}}] + {\varepsilon}y , y)$ and that $x \mapsto V(x,y)$ is Lipschitz continuous, we obtain the equivalence $$\Vert v^{\varepsilon}- {\mathcal{R}_{\varepsilon}}V \Vert_{{\mathrm{L}}^2({\mathbb{R}})} \leq {\varepsilon}C \qquad\Longleftrightarrow\qquad \Vert {{\mathcal T}}_{\varepsilon}v^{\varepsilon}- V \Vert_{{\mathrm{L}}^2({\mathbb{R}}\times{\mathbb{S}})} \leq {\varepsilon}C .$$ In particular, implies that the inhibitor $v^{\varepsilon}$ converges to $V$ strongly in the two-scale sense according to the definition of two-scale convergence in [@MT07]. In the same manner, yields the strong two-scale convergence of ${\varepsilon}v^{\varepsilon}_x$ to $\nabla_y V$. Pulses in the two-scale system {#sec:pulses} ============================== We seek solutions $(U ,V)$ of the two-scale system that are frame invariant with respect to the co-moving frame $z = x + {{\mathbf c}}t$ such that $$\begin{aligned} U(t,x) = {{\mathbf u}}(x + {{\mathbf c}}t) \quad\text{and}\quad V(t,x,y) = {{\mathbf v}}(x + {{\mathbf c}}t,y) ,\end{aligned}$$ where ${{\mathbf c}}\geq 0$ denotes the constant wave speed. 
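The unfolding operator ${{\mathcal T}}_{\varepsilon}$ and the folding operator ${\mathcal{R}_{\varepsilon}}$ act on functions in an elementary way; a sketch checking the relation $({{\mathcal T}}_{\varepsilon}{\mathcal{R}_{\varepsilon}}V)(x,y) = V({\varepsilon}[\tfrac{x}{{\varepsilon}}] + {\varepsilon}y, y)$ pointwise (the profile $V$ and the evaluation point are illustrative assumptions):

```python
import numpy as np

def unfold(v, x, y, eps):
    """(T_eps v)(x, y) = v(eps*[x/eps] + eps*y), [.] = integer part."""
    return v(eps * np.floor(x / eps) + eps * y)

def fold(V, x, eps):
    """(R_eps V)(x) = V(x, x/eps mod 1) for V 1-periodic in y."""
    return V(x, (x / eps) % 1.0)

# assumed smooth two-scale profile and evaluation point
V = lambda x, y: np.exp(-x**2) * np.cos(2 * np.pi * y)
eps, x, y = 0.1, 0.73, 0.4

v_folded = lambda s: fold(V, s, eps)
lhs = unfold(v_folded, x, y, eps)                 # (T_eps R_eps V)(x, y)
rhs = V(eps * np.floor(x / eps) + eps * y, y)     # V(eps*[x/eps] + eps*y, y)
print(abs(lhs - rhs) < 1e-9)
```

The equality holds because $s = {\varepsilon}[\tfrac{x}{{\varepsilon}}] + {\varepsilon}y$ satisfies $\tfrac{s}{{\varepsilon}} \bmod 1 = y$ for $y \in [0,1)$.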
Inserting this ansatz into yields the nonlocally coupled system of an ODE and a PDE \[eq:ComovingSystem-sub\] $$\label{eq:ComovingSystem} \tag{\ref{eq:ComovingSystem-sub}.Co-S$_0$} \begin{aligned} {{\mathbf c}}{{\mathbf u}}' & = {{\mathbf u}}'' + f({{\mathbf u}}) - \int_0^1 \alpha(y) {{\mathbf v}}(\cdot,y) {\,\mathrm{d}}y , \\ {{\mathbf c}}{{\mathbf v}}' & = - {{\mathcal L}}{{\mathbf v}}+ \beta(y) {{\mathbf u}}, \end{aligned}$$ where ${{\mathbf u}}' = {{\mathbf u}}_z$. The differential operator ${{\mathcal L}}: D({{\mathcal L}}) \to {\mathrm{L}}^2({\mathbb{S}})$ is given via $$({{\mathcal L}}\varphi)(y):= - (d(y) \varphi_y)_y + b(y)\varphi \quad\text{and}\quad D({{\mathcal L}}):=\{\varphi\in {\mathrm{L}}^2({\mathbb{S}}) \,|\, {{\mathcal L}}\varphi\in {\mathrm{L}}^2({\mathbb{S}}) \} .$$ We denote by $\|\cdot\|_{D({{\mathcal L}})}$ the graph norm and by ${\mathrm{spec}}({{\mathcal L}})$ the spectrum of ${{\mathcal L}}$. The unknowns of the pulse solution we seek are $$\label{eq:Unknowns} {{\mathbf c}}\geq 0,\quad {{\mathbf u}}:{\mathbb{R}}\to{\mathbb{R}},\quad {{\mathbf v}}:{\mathbb{R}}\times{\mathbb{S}}\to {\mathbb{R}}.$$ \[defin:PulseTS\] The triple $({{\mathbf c}},{{\mathbf u}}(x + {{\mathbf c}}t), {{\mathbf v}}(x + {{\mathbf c}}t,y))$ is called a two-scale pulse solution of the two-scale system  if ${{\mathbf u}}\in {\mathrm{C}}^2({\mathbb{R}})$, ${{\mathbf v}}\in {\mathrm{C}}^0 ({\mathbb{R}};D({{\mathcal L}}))\cap {\mathrm{C}}^1({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}}))$, equations  hold, and $({{\mathbf u}},{{\mathbf v}})$ is a homoclinic orbit of , i.e., $$\label{eqHomoclinicH1} \lim\limits_{z\to\pm\infty} {{\mathbf u}}(z)=0,\quad \lim\limits_{z\to\pm\infty}\|{{\mathbf v}}(z,\cdot)\|_{D({{\mathcal L}})}=0.$$ Throughout this section, we assume the following. \[assump:EllipticOrNot\] There holds $0\notin{\mathrm{spec}}({{\mathcal L}})$. If $d(y) \equiv 0$, then $b(y ) \equiv b_0$ for some $b_0 > 0$. 
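For concrete coefficients, Assumption \[assump:EllipticOrNot\] can be checked numerically by discretizing ${{\mathcal L}}$ on the periodic cell; a finite-difference sketch with constant $d(y) \equiv d_0 > 0$ and $b(y) \equiv b_0 > 0$ (the grid size and coefficient values are illustrative assumptions), for which the exact eigenvalues are $b_0 + d_0 (2\pi k)^2$, $k \in \mathbb{Z}$:

```python
import numpy as np

n, d0, b0 = 128, 0.3, 1.5          # grid size and constant coefficients (assumed)
h = 1.0 / n

# periodic central-difference matrix for (L phi)(y) = -(d0 phi')' + b0 phi
L = np.zeros((n, n))
for i in range(n):
    L[i, i] = 2 * d0 / h**2 + b0
    L[i, (i - 1) % n] = -d0 / h**2
    L[i, (i + 1) % n] = -d0 / h**2

lam = np.sort(np.linalg.eigvalsh(L))

# smallest eigenvalue is b0 (constant eigenfunction), so 0 is not in spec(L)
print(abs(lam[0] - b0) < 1e-8 and lam[0] > 0)
# next eigenvalue approximates b0 + d0*(2*pi)^2 (double, modes k = +-1)
print(abs(lam[1] - (b0 + d0 * (2 * np.pi)**2)) < 1e-1)
```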
Assumptions \[assump:coeff\].\[assump:coeff1\] and \[assump:EllipticOrNot\] together imply that the spectrum of ${{\mathcal L}}$ is discrete and we can find a spectral gap around zero. Existence of two-scale pulse solutions {#subsec:pulse-exist} -------------------------------------- In this section, we provide sufficient conditions under which pulse solutions exist and are determined by what we will call a [*guiding system*]{} of finitely many ODEs. Our main assumptions that allow us to reduce the nonlocally coupled PDE system  to a system of ODEs are as follows. \[assump:AlphaEigenfunction\] The function $\alpha(y)$ is a finite sum of eigenfunctions of the operator ${{\mathcal L}}$, i.e., there exist $m \geq 1$, $\widetilde\alpha_i \in D({{\mathcal L}})$, and $\lambda_i\in{\mathbb{R}}$ such that $\widetilde{\alpha}_1, ..., \widetilde{\alpha}_m$ are linearly independent and $$\begin{aligned} \alpha (y) = \sum_{i=1}^m \widetilde\alpha_i (y) \qquad\text{with}\qquad {{\mathcal L}}\widetilde \alpha_i=\lambda_i \widetilde\alpha_i .\end{aligned}$$ To be definite, we assume that $\lambda_i>0$. Notice that the eigenvalues $\lambda_i$ in Assumption \[assump:AlphaEigenfunction\] are not assumed to be simple. 
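For constant coefficients, the eigenfunctions of ${{\mathcal L}}$ are Fourier modes, so for instance $\alpha(y) = \cos(2\pi y)$ satisfies Assumption \[assump:AlphaEigenfunction\] with $m = 1$; a symbolic check, where the coefficient values are illustrative assumptions:

```python
import sympy as sp

y = sp.symbols('y', real=True)
d0, b0 = sp.Rational(3, 10), sp.Rational(3, 2)   # constant coefficients (assumed)
alpha = sp.cos(2 * sp.pi * y)                    # candidate eigenfunction, 1-periodic

# (L alpha)(y) = -(d0 alpha')' + b0 alpha
L_alpha = -sp.diff(d0 * sp.diff(alpha, y), y) + b0 * alpha
lam1 = b0 + d0 * (2 * sp.pi)**2                  # predicted eigenvalue

print(sp.simplify(L_alpha - lam1 * alpha) == 0)
```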
With this, we introduce the new parameters (for $i = 1,...,m$) $$\label{eq:Alpha*Beta*} \alpha_i:=\| \widetilde\alpha_i\|_{{\mathrm{L}}^2({\mathbb{S}})} \quad\text{and}\quad \beta_i:=\dfrac{(\beta, \widetilde\alpha_i)_{{\mathrm{L}}^2({\mathbb{S}})}}{\| \widetilde\alpha_i\|_{{\mathrm{L}}^2({\mathbb{S}})}}.$$ \[assump:GuidingSystem\] The ODE system \[eq:ComovingSystemGuiding-sub\] $$\label{eq:ComovingSystemGuiding} \tag{\ref{eq:ComovingSystemGuiding-sub}.GS} \begin{aligned} c u'&=u'' + f(u) - \sum_{i=1}^m \alpha_i v_i , \\ c v_i'&=-\lambda_i v_i + \beta_i u, \hspace{4em} i=1,...,m, \end{aligned}$$ where $\alpha_i,\beta_i,\lambda_i \in{\mathbb{R}}$ are given by  and Assumption \[assump:AlphaEigenfunction\], admits a homoclinic orbit $(c,u,v_1,...,v_m)$ satisfying $$\label{eq:UnknownsGuiding} c\ge 0 \quad\text{and}\quad u,v_i\in {\mathrm{C}}^\infty({\mathbb{R}}).$$ Moreover, there exists $\sigma > 0$ such that $$\label{eq:HomoclinicGuiding} \lim\limits_{z\to\pm\infty}e^{\sigma |z|} u(z)=0,\quad \lim\limits_{z\to\pm\infty}e^{\sigma |z|} u'(z)=0, \quad \lim\limits_{z\to\pm\infty} v_i(z)=0 .$$ We will refer to system  as the [*guiding system*]{}. \[rem:param\] 1. System is known to possess a homoclinic orbit, e.g., for cubic functions $f$ as in and certain parameter sets $(\alpha_i,\beta_i,\lambda_i)_{i=1}^m$, cf. [@Carp1977] on “pulses in systems with $l$ fast and $m$ slow equations”. Typically, these parameters are within the range $$\begin{aligned} \alpha_i = 1, \qquad 0 < \beta_i \ll 1, \qquad 0 \leq \lambda_i \ll 1 ,\end{aligned}$$ see also the numerical examples in Section \[sec:numerics\]. 2. \[rem:sign\] Interestingly, neither $\alpha(y)$, $\beta(y)$, nor their product $\alpha(y) \beta(y)$ needs to be sign preserving. Moreover, the case $\int_0^1\alpha(y){\,\mathrm{d}}y=0$ is [*not*]{} excluded in general, unless $\int_0^1 \alpha(y)\beta(y) {\,\mathrm{d}}y = 0$. In the latter case, $\beta_i=0$ and the system  decouples and has no homoclinics. 
The former case is illustrated in Section \[subsec:two-alpha\]. 3. Let $b(y) \equiv b_0$. Then, the two-scale system  takes the form $$\label{eq:system_b0} \begin{aligned} {{\mathbf c}}{{\mathbf u}}'& = {{\mathbf u}}'' + f({{\mathbf u}}) - \int_0^1 \alpha(y) {{\mathbf v}}(z,y){\,\mathrm{d}}y,\\ {{\mathbf c}}{{\mathbf v}}'& = -b_0 {{\mathbf v}}+ \beta(y) {{\mathbf u}}. \end{aligned} $$ Obviously, *any* $\alpha\in {\mathrm{L}}^2({\mathbb{S}})$ satisfies Assumption \[assump:AlphaEigenfunction\] with $\lambda_0=b_0$. This situation is illustrated by a numerical example in Section \[subsec:jumps\]. The main result of this paper is the following theorem. \[thm:PDEPulse\] Let Assumptions \[assump:coeff\], \[assump:EllipticOrNot\], \[assump:AlphaEigenfunction\], and \[assump:GuidingSystem\] hold. Then the two-scale system admits a pulse solution $({{\mathbf c}}, {{\mathbf u}}, {{\mathbf v}})$ such that the pair $({{\mathbf c}},{{\mathbf u}}) = (c,u)$ is the same as in  and ${{\mathbf v}}$ satisfies the estimate $$\label{eq:PDEPulseVEstimate} \|{{\mathbf v}}(z,\cdot)\|_{D({{\mathcal L}})}+\|{{\mathbf v}}_z(z,\cdot)\|_{{\mathrm{L}}^2({\mathbb{S}})} \leq C e^{-\gamma|z|} \qquad\text{for } z\in{\mathbb{R}},$$ where $C,\gamma>0$ do not depend on $z\in{\mathbb{R}}$. Moreover, if $\beta \in D({{\mathcal L}})$, then $$\label{eq:PDEPulseVEstimate2} \| {{\mathcal L}}{{\mathbf v}}(z,\cdot) \|_{D({{\mathcal L}})} + \|{{\mathbf v}}_z(z,\cdot)\|_{D({{\mathcal L}})} \leq C e^{-\gamma|z|} \qquad\text{for } z\in{\mathbb{R}}.$$ The proof is based on the spectral decomposition of the space ${\mathrm{L}}^2({\mathbb{S}})$, which recovers the guiding system, and on semigroup properties, which yield the exponential decay in –. *Step 1: spectral decomposition.* Under Assumption \[assump:EllipticOrNot\], ${{\mathcal L}}$ is a sectorial self-adjoint operator. Its spectrum is bounded from below and consists of isolated real eigenvalues, whose multiplicities are finite but possibly larger than one. 
The corresponding eigenfunctions form a basis for ${\mathrm{L}}^2({\mathbb{S}})$. We denote by $e^{-{{\mathcal L}}t}$, $t\ge 0$, the analytic semigroup in ${\mathrm{L}}^2({\mathbb{S}})$ generated by ${{\mathcal L}}$. Set $\Sigma_-:={\mathrm{spec}}({{\mathcal L}})\cap\{\lambda<0\}$ and $\Sigma_+:={\mathrm{spec}}({{\mathcal L}})\cap\{\lambda>0\}$. Let ${{\mathcal P}}_i$ be the orthogonal projector onto the eigenspace ${\mathrm{Span}}(\widetilde\alpha_i)$, $i = 1,...,m$, ${{\mathcal P}}_-$ onto the eigenspace corresponding to $\Sigma_-$ and $\sum_{i=1}^m {{\mathcal P}}_i + {{\mathcal P}}_+$ onto the eigenspace corresponding to $\Sigma_+$. Set $Y_i:={{\mathcal P}}_i({\mathrm{L}}^2({\mathbb{S}}))$, $Y_-={{\mathcal P}}_-({\mathrm{L}}^2({\mathbb{S}}))$, and $Y_+ = {{\mathcal P}}_+({\mathrm{L}}^2({\mathbb{S}}))$. The spaces $Y_i$, $Y_+$, and $Y_-$ are pairwise orthogonal and invariant under ${{\mathcal L}}$. Moreover, $Y_i$ and $Y_-$ are finite-dimensional. By Assumption \[assump:AlphaEigenfunction\], the restriction of ${{\mathcal L}}$ onto $Y_i$ is a multiplication by $\lambda_i$. Let ${{\mathcal L}}_\pm$ denote the restrictions of ${{\mathcal L}}$ onto $Y_\pm$. Then, we have (cf. [@Hen81 Sec. 1.5]) $$\begin{aligned} &{{\mathcal L}}_-:Y_-\to Y_-\ \text{is bounded, } & &{\mathrm{spec}}({{\mathcal L}}_-)=\Sigma_-,\\ &D({{\mathcal L}}_+)=D({{\mathcal L}})\cap Y_+, & &{\mathrm{spec}}({{\mathcal L}}_+) \subseteq \Sigma_+. \end{aligned}$$ Notice that eigenvalues $\lambda_i$ may but need not belong to ${\mathrm{spec}}({{\mathcal L}}_+)$. Moreover, due to Assumption \[assump:EllipticOrNot\], there exists $\sigma_\pm > 0$ such that $\Sigma_-$ is below $-\sigma_-$ and $\Sigma_+$ is above $\sigma_+$. 
Therefore, there exists $C_1>0$ such that $$\label{eq:ExponentialEstimates} \begin{aligned} \|e^{-{{\mathcal L}}_- t}\|_{Y_-}& \le C_1 e^{\sigma_- t}, & & t\leq 0,\\ \|e^{-{{\mathcal L}}_+ t}\|_{Y_+}& \le C_1 e^{-\sigma_+ t}, & & t > 0, \end{aligned}$$ as well as $$\label{eq:ExponentialEstimatesL} \begin{aligned} \|{{\mathcal L}}_+ e^{-{{\mathcal L}}_+ t}\|_{Y_+}\le C_1 t^{-1} e^{-\sigma_+ t},\quad t> 0. \end{aligned}$$ *Step 2: orthogonal projection.* Further in the proof, we assume that $c > 0$ in Assumption \[assump:GuidingSystem\], whereas the modifications for the case $c = 0$ are obvious. We will show that the pulse solution for the two-scale system  is given by $({{\mathbf c}},{{\mathbf u}},{{\mathbf v}})$, where $({{\mathbf c}},{{\mathbf u}}) = (c,u)$ are as in , and the ${{\mathbf v}}$-component is represented via $$\label{eq:VSeries} {{\mathbf v}}(z,\cdot) = \sum_{i=1}^m {{\mathbf v}}_i(z) \cdot \dfrac{\widetilde\alpha_i(\cdot)}{\alpha_i} + {{\mathbf v}}_+(z) + {{\mathbf v}}_-(z) \qquad\text{for } z \in {\mathbb{R}},$$ where ${{\mathbf v}}_i(z)\in {\mathbb{R}}$ and ${{\mathbf v}}_\pm(z) \in Y_\pm$. Exploiting the orthogonal decomposition and setting $\beta_i:={{\mathcal P}}_i(\beta)$ as well as $\beta_\pm:={{\mathcal P}}_\pm(\beta)$, we obtain that the co-moving two-scale system  is equivalent to the system $$\label{eq:PDEPulse1-4} \begin{aligned} & {{\mathbf c}}{{\mathbf u}}' = {{\mathbf u}}'' + f({{\mathbf u}}) - \sum_{i=1}^m \alpha_i {{\mathbf v}}_i ,\\ & {{\mathbf c}}{{\mathbf v}}_i' = - \lambda_i {{\mathbf v}}_i + \beta_i {{\mathbf u}}, \hspace{4em} i = 1,...,m,\\ & {{\mathbf c}}{{\mathbf v}}_\pm' = - {{\mathcal L}}_\pm {{\mathbf v}}_\pm + \beta_\pm {{\mathbf u}}. \end{aligned} $$ By Assumption \[assump:GuidingSystem\], the first $1+m$ equations admit a pulse solution with $({{\mathbf c}}, {{\mathbf u}}, {{\mathbf v}}_1,..., {{\mathbf v}}_m) = (c,u,v_1,...,v_m)$ given by . 
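Each scalar equation ${{\mathbf c}}{{\mathbf v}}_i' = -\lambda_i {{\mathbf v}}_i + \beta_i {{\mathbf u}}$ is solved by variation of constants with an exponential integrating factor; a numerical sketch checking the ODE residual of this representation, where the Gaussian profile for $u$ and all parameter values are illustrative assumptions:

```python
import numpy as np

c, lam, beta = 0.5, 1.0, 0.2          # assumed wave speed and parameters
z = np.linspace(-10.0, 10.0, 4001)
dz = z[1] - z[0]
u = np.exp(-z**2)                     # assumed bounded profile, decaying at -inf

# v(z) = (1/c) * int_{-inf}^z exp(-(lam/c)(z - xi)) * beta * u(xi) d xi,
# accumulated by an exponential-trapezoidal recursion step by step
v = np.zeros_like(z)
decay = np.exp(-lam * dz / c)
for k in range(1, len(z)):
    v[k] = decay * v[k - 1] + 0.5 * dz / c * (beta * u[k] + decay * beta * u[k - 1])

# residual of c v' = -lam v + beta u (centered differences, interior points)
vp = (v[2:] - v[:-2]) / (2 * dz)
res = c * vp + lam * v[1:-1] - beta * u[1:-1]
print(np.max(np.abs(res)) < 1e-3)
```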
Since $\lambda_i > 0$ and the ${{\mathbf v}}_i$’s are bounded, we have $$\label{eq:V0Explicit} {{\mathbf v}}_i(z) = \dfrac{1}{c}\int_{-\infty}^z e^{-\frac{\lambda_i}{c}(z - \xi)}\beta_i {{\mathbf u}}(\xi){\,\mathrm{d}}\xi.$$ Moreover, we set $$\label{eq:VjExplicit} \begin{aligned} {{\mathbf v}}_+(z) & :=\dfrac{1}{c}\int_{-\infty}^z e^{-\frac{{{\mathcal L}}_+}{c}(z - \xi)}\beta_+ {{\mathbf u}}(\xi){\,\mathrm{d}}\xi , \\ {{\mathbf v}}_-(z) & := -\dfrac{1}{c}\int_z^{+\infty} e^{-\frac{{{\mathcal L}}_-}{c}(z - \xi)}\beta_- {{\mathbf u}}(\xi){\,\mathrm{d}}\xi. \end{aligned}$$ Since ${{\mathbf u}}\in {\mathrm{C}}^1({\mathbb{R}})$, it follows from [@Paz83 Sec. 4.3, Thm. 3.5] that ${{\mathbf v}}_\pm\in {\mathrm{C}}^0({\mathbb{R}};D({{\mathcal L}}_\pm))\cap {\mathrm{C}}^1({\mathbb{R}};Y_\pm)$. Hence, ${{\mathbf v}}\in {\mathrm{C}}^0({\mathbb{R}};D({{\mathcal L}}))\cap {\mathrm{C}}^1({\mathbb{R}};{\mathrm{L}}^2({\mathbb{S}}))$. *Step 3: exponential decay.* By , there exists $C_2>0$ such that $$\begin{aligned} | {{\mathbf u}}(z)|\le C_2 e^{-\sigma|z|},\quad z\in{\mathbb{R}}.\end{aligned}$$ Then the estimate of  and  with the help of  shows that there exist $C_3>0$ and $0<\gamma<\min(\sigma,\lambda_i/c,\sigma_\pm/c)$ such that $$\label{eq:VjEstimate} | {{\mathbf v}}_i(z)| \le C_3 |\beta_i| e^{-\gamma|z|},\quad \| {{\mathbf v}}_\pm(z)\|_{Y_\pm}\le C_3 \|\beta_\pm\|_{Y_\pm} e^{-\gamma|z|}, \quad z\in{\mathbb{R}}.$$ Additionally using  and the boundedness of ${{\mathcal L}}_-$, we can find $C_4>0$ such that $$\label{eq:VjEstimate1} | {{\mathbf v}}_i'(z)|\le C_4 |\beta_i| e^{-\gamma|z|},\quad \|{{\mathcal L}}_- {{\mathbf v}}_-(z)\|_{Y_-} + \| {{\mathbf v}}_-'(z)\|_{Y_-}\le C_4 \|\beta_-\|_{Y_-} e^{-\gamma|z|},\quad z\in{\mathbb{R}}.$$ To control ${{\mathcal L}}_+ {{\mathbf v}}_+ (z)$, we represent ${{\mathbf v}}_+(z)$ as follows: $$\begin{aligned} {{\mathbf v}}_+(z) & = \dfrac{1}{c}\int_{-\infty}^z e^{-\frac{{{\mathcal L}}_+}{c}(z - \xi)}\beta_+ [ {{\mathbf u}}(\xi) - {{\mathbf u}}(z) ] {\,\mathrm{d}}\xi + 
\dfrac{1}{c}\int_{-\infty}^z e^{-\frac{{{\mathcal L}}_+}{c}(z - \xi)}\beta_+ {{\mathbf u}}(z) {\,\mathrm{d}}\xi \\ & =: \bar{{{\mathbf v}}}_1(z) + \bar{{{\mathbf v}}}_2(z) .\end{aligned}$$ According to , we have $|{{\mathbf u}}'(z)|\le C_2 e^{-\sigma|z|}$, $z\in{\mathbb{R}}$, and hence $$\begin{array}{llll} \text{(a)} & |{{\mathbf u}}(z) - {{\mathbf u}}(\xi)| \leq & \hspace{-7pt} C_2 e^{-\sigma|z|} |z - \xi | \quad & \text{for } 0 \geq z \geq \xi , \\ \text{(b)} & | {{\mathbf u}}(z) - {{\mathbf u}}(\xi)| \leq & \hspace{-7pt} C_2 e^{-\sigma|\xi|} |z - \xi | \quad & \text{for } z \geq \xi \geq 0, \\ \text{(c)} & | {{\mathbf u}}(z) - {{\mathbf u}}(\xi)| \leq & \hspace{-7pt} C_2 |z - \xi | \quad & \text{for all } z, \xi \in {\mathbb{R}}. \end{array}$$ First, let $z\leq 0$ be fixed. Exploiting relation  and (a) yields $C_5 > 0$ such that $$\begin{aligned} \Vert {{\mathcal L}}_+ \bar{{{\mathbf v}}}_1(z) \Vert_{Y_+} & \leq \dfrac{1}{c}\int_{-\infty}^z \big\Vert {{\mathcal L}}_+ e^{-\frac{{{\mathcal L}}_+}{c}(z - \xi)} \big\Vert_{Y_+} \big\Vert \beta_+ \big( {{\mathbf u}}(\xi) - {{\mathbf u}}(z) \big) \big\Vert_{Y_+} {\,\mathrm{d}}\xi \\ & \leq C_1 C_2 \Vert \beta_+\Vert_{Y_+} \int_{-\infty}^z \dfrac{1}{z - \xi} e^{-\frac{\sigma_+}{c}(z - \xi)} e^{-\sigma|z|} |z - \xi| {\,\mathrm{d}}\xi \\ & \leq C_5 \Vert \beta_+\Vert_{Y_+} e^{-\gamma|z|} .\end{aligned}$$ Secondly, fix $z > 0$. Proceeding as in the previous estimate and using (b)–(c) yields $$\begin{aligned} \Vert {{\mathcal L}}_+ \bar{{{\mathbf v}}}_1(z) \Vert_{Y_+} & \leq C_1 C_2 \Vert \beta_+\Vert_{Y_+} \left\lbrace \int_{-\infty}^0 \dfrac{1}{z - \xi} e^{-\frac{\sigma_+}{c}(z - \xi)} |z - \xi| {\,\mathrm{d}}\xi \right. \\ & \hspace{8em} + \left. \int_{0}^z \dfrac{1}{z - \xi} e^{-\frac{\sigma_+}{c}(z - \xi)} e^{-\sigma|\xi|} |z - \xi| {\,\mathrm{d}}\xi \right\rbrace \\ & \leq C_5 \Vert \beta_+\Vert_{Y_+} e^{-\gamma|z|} .\end{aligned}$$ Next, we obtain similarly to [@Paz83 Sec. 1.2, Thm. 
2.4(b)] $$\begin{aligned} {{\mathcal L}}_+ \bar{{{\mathbf v}}}_2(z) = \dfrac{1}{c} {{\mathcal L}}_+ \left( \int_0^\infty e^{-\frac{{{\mathcal L}}_+}{c} \xi} \beta_+ {{\mathbf u}}(z) {\,\mathrm{d}}\xi \right) = \beta_+ {{\mathbf u}}(z) .\end{aligned}$$ Hence, $\Vert {{\mathcal L}}_+ \bar{{{\mathbf v}}}_2(z) \Vert_{Y_+} \leq C_5 \Vert \beta_+ \Vert_{Y_+} e^{-\sigma |z|}$ for all $z\in{\mathbb{R}}$. Combining the estimates for ${{\mathcal L}}_+ \bar{{{\mathbf v}}}_1$ and ${{\mathcal L}}_+ \bar{{{\mathbf v}}}_2$, and using once more gives $$\label{eq:VjEstimate2} \|{{\mathcal L}}_+ {{\mathbf v}}_+(z)\|_{Y_+}+\| {{\mathbf v}}_+'(z)\|_{Y_+}\le C_4 \|\beta_+\|_{Y_+} e^{-\gamma|z|},\quad z\in{\mathbb{R}}.$$ Overall, relations  as well as – imply estimate . If $\beta \in D({{\mathcal L}})$, then we have due to $${{\mathcal L}}_+ {{\mathbf v}}_+(z) =\dfrac{1}{c}\int_{-\infty}^z e^{-\frac{{{\mathcal L}}_+}{c}(z - \xi)} {{\mathcal L}}_+ \beta_+ {{\mathbf u}}(\xi){\,\mathrm{d}}\xi , \quad {{\mathcal L}}_- {{\mathbf v}}_-(z) = -\dfrac{1}{c}\int_z^{+\infty} e^{-\frac{{{\mathcal L}}_-}{c}(z - \xi)} {{\mathcal L}}_- \beta_- {{\mathbf u}}(\xi){\,\mathrm{d}}\xi.$$ Using the relation ${{\mathbf c}}{{\mathcal L}}_\pm {{\mathbf v}}_\pm' = - {{\mathcal L}}_\pm ({{\mathcal L}}_\pm {{\mathbf v}}_\pm) + ({{\mathcal L}}_\pm \beta_\pm) {{\mathbf u}}$ together with and , we obtain the improved estimate . \[rem:zero\_spectrum\] 1. \[rem:v\_vanish\] Let $b(y) \equiv b_0$ and $\int_0^1 \beta (y) {\,\mathrm{d}}y =0$. Then the two-scale inhibitor ${{\mathbf v}}$ is macroscopically vanishing, i.e., $\int_0^1 {{\mathbf v}}(z,y) {\,\mathrm{d}}y \equiv 0$ for all $z\in {\mathbb{R}}$. This is immediate from integrating the ${{\mathbf v}}$-equation in over ${\mathbb{S}}$. The example in Section \[subsec:two-alpha\] illustrates this phenomenon. 2. 
\[rem:exact\_sol\] Let the parameters $(\alpha, \beta, b, d)$ satisfy $$\begin{aligned} |\alpha(y)|^2 \equiv 1 , \quad \beta(y) = \beta_1 \alpha(y), \quad b(y) \equiv \lambda_1, \quad d(y) \equiv 0 .\end{aligned}$$ Then the original system indeed admits a *generalized pulse solution* $(u^{\varepsilon}, v^{\varepsilon})$ of the form $$\begin{aligned} \label{eq:genetal_pulse} u^{\varepsilon}(t,x) = u(x + ct) \qquad\text{and}\qquad v^{\varepsilon}(t,x) = \alpha(\tfrac{x}{{\varepsilon}}) v_1(x + ct) ,\end{aligned}$$ where $u^{\varepsilon}$ is independent of ${\varepsilon}$, whenever $(c,u,v_1)$ is a homoclinic orbit for the guiding system $$\begin{aligned} \label{eq:num_guide} c u' = u'' + f(u) - v_1 , \qquad c v_1' = - \lambda_1 v_1 + \beta_1 u .\end{aligned}$$ Section \[subsec:jumps\] provides one example for such a generalized pulse solution. 3. The case of not exactly periodic coefficients such as $\alpha(x,\frac{x}{{\varepsilon}})$ with $\alpha \in{\mathrm{C}}^\infty ({\mathbb{R}}\times{\mathbb{S}})$ is in principle also manageable with our approach; however, the existence of homoclinic orbits for guiding systems with heterogeneous coefficients is beyond the scope of the present paper. 4. In the case where $\beta$ is orthogonal to all eigenfunctions $\widetilde{\alpha}_i$, $i=1,...,m$, all coefficients $\beta_i$ vanish and the equations for $v_i$ decouple from the activator $u$ in the guiding system . Then the remaining $u$-equation is of Nagumo type and it is known to possess heteroclinic orbits corresponding to traveling fronts, which can also be found in the two-scale system. 5. The guiding system may admit homoclinic orbits corresponding to multiple pulse solutions, in the sense of [@EFF1982]. Since they all satisfy , our two-scale system admits multiple pulse solutions as well. Stability of two-scale pulse solutions {#subsec:stability} -------------------------------------- Let us turn our attention back to the full two-scale system . 
By Theorem \[thm:PDEPulse\], it admits the family of pulse solutions $$\begin{aligned} \label{eq:fam_orig} ( {{\mathbf u}}, {{\mathbf v}})_{z_0\in{\mathbb{R}}} := \left\lbrace \big( {{\mathbf u}}_{z_0} (x + {{\mathbf c}}t) , {{\mathbf v}}_{z_0} (x + {{\mathbf c}}t,y) \big) \,|\, z_0 \in {\mathbb{R}}\right\rbrace ,\end{aligned}$$ where ${{\mathbf u}}_{z_0} (z) := {{\mathbf u}}(z + z_0)$ denotes the shifted function for any shift $z_0 \in {\mathbb{R}}$. Following [@EvaI1972; @AK2015], we define exponential stability with respect to the supremum norm for the $z$-variable. For the microscopic variable $y \in {\mathbb{S}}$, we distinguish between *weak* exponential stability in ${\mathrm{L}}^2({\mathbb{S}})$ and *strong* exponential stability in $D({{\mathcal L}})$. \[defin:Stability\] 1. \[defin:StabilityCond\] Let $(U,V)$ denote a solution of the two-scale system with initial condition $(U_0, V_0)$ and $\mathbb{X}$ denotes a real-valued Hilbert space. We say that the exponential stability condition holds if there exist constants $K_1, K_2, K_3, \kappa >0$ such that for any $$\begin{aligned} 0 \leq \delta \leq K_1 , \; z_0 \in {\mathbb{R}}: \quad \Vert U_0 - {{\mathbf u}}_{z_0} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \Vert V_0 - {{\mathbf v}}_{z_0} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}; \mathbb{X})} \leq \delta ,\end{aligned}$$ there exists a shift $z_1$ with $|z_0 - z_1| \leq \delta K_2$ such that for all $t\geq 0$ $$\begin{aligned} \Vert U(t, \cdot) - {{\mathbf u}}_{z_1}(\cdot + {{\mathbf c}}t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \Vert V(t, \cdot) - {{\mathbf v}}_{z_1}(\cdot + {{\mathbf c}}t, \cdot) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}; \mathbb{X})} \leq \delta K_3 e^{-\kappa t} .\end{aligned}$$ 2. 
\[defin:StabilityWeak\] The family of pulse solutions $({{\mathbf u}}, {{\mathbf v}})_{z_0\in{\mathbb{R}}}$ in is weakly (strongly) exponentially stable, if the exponential stability condition holds with $\mathbb{X} = {\mathrm{L}}^2({\mathbb{S}})$ (with $\mathbb{X} = D({{\mathcal L}})$). We emphasize that our solutions are bounded according to Theorem \[thm:sol-exist\], which justifies the supremum norm in Definition \[defin:Stability\]. In the case $d(y) \equiv 0$ (no microscopic diffusion), the notions of weak and strong exponential stability coincide. Furthermore, notice that $$\begin{aligned} \label{eq:fam_guide} (u, v_1, ..., v_m)_{z_0\in{\mathbb{R}}} := \left\lbrace \big( u_{z_0} (x + ct), v_{1,z_0} (x + ct) , ..., v_{m,z_0}(x + ct) \big) \, |\, z_0 \in {\mathbb{R}}\right\rbrace\end{aligned}$$ with $u$ and $(v_1, ..., v_m)$ given by Assumption \[assump:GuidingSystem\] is a family of pulse solutions for the standard reaction-diffusion FitzHugh–Nagumo-type system \[eq:ComovingSystemGuidingPDE-sub\] $$\label{eq:ComovingSystemGuidingPDE} \tag{\ref{eq:ComovingSystemGuidingPDE-sub}.GS-PDE} \begin{aligned} U_t(t,x) & = U_{xx}(t,x) + f(U) - \sum_{i=1}^m \alpha_i V_i(t,x) , \\ (V_i)_t(t,x) & = - \lambda_i V_i(t,x) + \beta_i U(t,x), \quad i=1,...,m. \end{aligned}$$ We will refer to system  as the [*guiding PDE system*]{}. \[assump:GuidingSystemPDE\] Let $(u, v_1, ..., v_m)_{z_0\in{\mathbb{R}}}$ be an exponentially stable family of pulse solutions for the guiding PDE system , i.e., the exponential stability condition in Definition \[defin:Stability\].\[defin:StabilityCond\] holds with $\mathbb{X} = {\mathbb{R}}^m$. For $m=1$, it is well known that the pulses of system are stable, see e.g. [@Jon1984] for asymptotic stability and [@Yana1985; @AK2015] for exponential stability. We expect a similar result to hold true for $m > 1$; however, this is beyond the scope of the present paper. 
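The guiding PDE system can be explored numerically by the method of lines; a sketch for $m = 1$ with an explicit Euler step, where the cubic bistable $f$, the parameter values (chosen in the range $\alpha_1 = 1$, $0 < \beta_1 \ll 1$, $0 \leq \lambda_1 \ll 1$ quoted earlier), and the localized initial bump are all illustrative assumptions:

```python
import numpy as np

# guiding PDE system, m = 1: U_t = U_xx + f(U) - alpha1*V, V_t = -lam1*V + beta1*U
a, alpha1, beta1, lam1 = 0.1, 1.0, 0.01, 0.02    # assumed parameters
f = lambda u: u * (1 - u) * (u - a)              # assumed cubic bistable nonlinearity

n, Lx = 800, 100.0
x = np.linspace(-Lx / 2, Lx / 2, n)
dx = x[1] - x[0]
U = 0.9 * np.exp(-x**2)                          # localized initial bump (assumption)
V = np.zeros(n)

dt = 0.2 * dx**2                                 # explicit-Euler diffusion stability
for _ in range(2000):
    Uxx = (np.roll(U, 1) - 2 * U + np.roll(U, -1)) / dx**2   # periodic Laplacian
    U = U + dt * (Uxx + f(U) - alpha1 * V)
    V = V + dt * (-lam1 * V + beta1 * U)

# the solution stays bounded on this horizon, in line with the maximum bounds
print(np.all(np.isfinite(U)) and np.all(np.isfinite(V)) and np.max(np.abs(U)) < 2.0)
```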
\[thm:stability\] Let Assumptions \[assump:coeff\], \[assump:EllipticOrNot\], \[assump:AlphaEigenfunction\], \[assump:GuidingSystem\], \[assump:GuidingSystemPDE\] hold, and let ${\mathrm{spec}}({{\mathcal L}})\subset\{\lambda > 0 \}$. Then the family of pulse solutions $({{\mathbf u}},{{\mathbf v}})_{z_0\in{\mathbb{R}}}$ in for the two-scale system  is weakly exponentially stable. If $\beta \in D({{\mathcal L}})$, then $({{\mathbf u}},{{\mathbf v}})_{z_0\in{\mathbb{R}}}$ is also strongly exponentially stable. *Step 1: reduction to guiding system.* Since ${\mathrm{spec}}({{\mathcal L}}) \subset \{\lambda > 0 \}$, it follows that ${{\mathcal P}}_-=0$ and ${{\mathcal L}}_-=0$. Therefore, ${{\mathbf v}}(z,y)$ is given via the sum in , where ${{\mathbf v}}_-(z,y) \equiv 0$, and $({{\mathbf u}}, {{\mathbf v}}_1, ..., {{\mathbf v}}_m)_{z_0\in{\mathbb{R}}}$ is identical to the family of pulse solutions $(u,v_1, ..., v_m)_{z_0\in{\mathbb{R}}}$ in for the guiding PDE system . Given the initial conditions $U(0,x) = U_0(x)$ and $V(0,x,y) = V_0(x,y)$, we can decompose the two-scale system . Again, the $V$-component is given via the sum $$\label{eq:VSeriesTime} V(t,x,\cdot) = \sum_{i=1}^m V_i(t,x) \cdot \frac{\widetilde{\alpha}_i(\cdot)}{\alpha_i} + V_+(t,x,\cdot),$$ where $V_i(t,x)\in {\mathbb{R}}$ and $V_+(t,x):={{\mathcal P}}_+ V(t,x) \in Y_+$. 
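The decomposition above can be illustrated numerically. The following sketch (Python/NumPy; not part of the paper) discretizes the periodicity cell with a midpoint grid, takes two orthonormal sine modes (so that $\alpha_i = 1$), projects an arbitrary test function onto them, and checks that the remainder $V_+$ is $\mathrm{L}^2({\mathbb{S}})$-orthogonal to both modes. The test function, grid size, and coefficients are illustrative choices.

```python
import numpy as np

M = 4096
y = (np.arange(M) + 0.5) / M                 # midpoint grid on the cell S
a1 = np.sqrt(2) * np.sin(2 * np.pi * y)      # orthonormal modes (alpha_i = 1)
a2 = np.sqrt(2) * np.sin(4 * np.pi * y)

V = np.cos(2 * np.pi * y) + 0.7 * a1 - 0.2 * a2   # arbitrary test function
V1 = np.mean(V * a1)                         # discrete L^2(S) inner products
V2 = np.mean(V * a2)
V_plus = V - V1 * a1 - V2 * a2               # remainder, analogous to P_+ V

assert abs(V1 - 0.7) < 1e-10 and abs(V2 + 0.2) < 1e-10
assert abs(np.mean(V_plus * a1)) < 1e-10     # V_+ orthogonal to both modes
assert abs(np.mean(V_plus * a2)) < 1e-10
```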
With this, the full two-scale system reduces to the guiding part $$\label{eq:ComovingSystemGuidingPDEwithInitialData} \begin{aligned} & U_t(t,x) = U_{xx}(t,x) + f(U) - \sum_{i=1}^m \alpha_i V_i(t,x),\\ & (V_i)_t(t,x) = -\lambda_i V_i(t,x) + \beta_i U (t,x), \qquad i=1,...,m, \\ & U|_{t=0} = U_0(x),\quad V_i|_{t=0} = V_{0,i} (x):= {{\mathcal P}}_i V_0 (x,\cdot) = \frac{( V_0 (x,\cdot), \widetilde\alpha_i)_{L_2}}{\alpha_i} , \end{aligned}$$ and the guided part $$\label{eq:ComovingSystemGuidingPDEwithInitialDataPlus} \begin{aligned} & (V_+)_t(t,x) = -{{\mathcal L}}_+ V_+(t,x) + \beta_+ U(t,x),\\ & V_+|_{t=0} = {{\mathcal P}}_+ V_0 (x,\cdot). \end{aligned}$$ By Assumption \[assump:GuidingSystemPDE\], there exist constants $K_1, K_2, K_3, \kappa > 0$ such that for any $$\begin{aligned} 0 \leq \delta \leq K_1, \; z_0 \in {\mathbb{R}}: \quad \Vert U_0 - {{\mathbf u}}_{z_0} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \sum_{i=1}^m \Vert V_{0,i} -{{\mathbf v}}_{i,z_0} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} \leq \delta ,\end{aligned}$$ there exists a shift $z_1$ with $|z_0 - z_1| \leq \delta K_2$ such that for all $t\geq 0$ $$\begin{aligned} \label{eq:stability} \Vert U(t) - {{\mathbf u}}_{z_1} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \sum_{i=1}^m \Vert V_i(t) - {{\mathbf v}}_{i,z_1} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} \leq \delta K_3 e^{-\kappa t} .\end{aligned}$$ It remains to prove that $$\begin{aligned} \label{eq:stability2} \Vert {{\mathcal P}}_+ (V_0 - {{\mathbf v}}_{z_0}) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}; \mathbb{X})} \leq \delta\end{aligned}$$ implies for some $K_*, \kappa_* > 0$ and all $t \geq 0$ $$\begin{aligned} \label{eq:Implication} \Vert V_+(t) - {{\mathcal P}}_+ {{\mathbf v}}_{z_1} \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}; \mathbb{X})} \leq \delta K_* e^{-\kappa_* t} ,\end{aligned}$$ where $\mathbb{X} = {\mathrm{L}}^2({\mathbb{S}})$ (and if $\beta\in D({{\mathcal L}})$, then $\mathbb{X} = D({{\mathcal L}})$) according to Definition 
\[defin:Stability\]. *Step 2: exponential decay of guided part.* System  is linear and $V_+$ is given via $$\begin{aligned} \label{eq:formula_Vplus} V_+(t,x) = e^{-{{\mathcal L}}_+ t} \big( {{\mathcal P}}_+ V_0(x) \big) + \int_0^t e^{-{{\mathcal L}}_+ (t - s)} \beta_+ U(s, x) {\,\mathrm{d}}s .\end{aligned}$$ Notice that ${{\mathcal P}}_+ {{\mathbf v}}= {{\mathbf v}}_+$ with ${{\mathbf v}}_+$ given in . Since ${{\mathbf v}}_+$ solves the ${{\mathbf v}}_+$-equation in , we have for all $t \geq 0$ the identity $$\begin{aligned} \label{eq:identity_Vplus} {{\mathbf v}}_+(x + ct + z_1) = e^{-{{\mathcal L}}_+ t} \big( {{\mathcal P}}_+ {{\mathbf v}}_{z_1} (x) \big) + \int_0^t e^{-{{\mathcal L}}_+ (t - s)} \beta_+ {{\mathbf u}}_{z_1}(x + cs) {\,\mathrm{d}}s .\end{aligned}$$ Subtracting the equations in and as well as using yields $$\begin{aligned} & \sup_{x\in{\mathbb{R}}} \left\Vert V_+(t,x) - {{\mathbf v}}_+ (x + ct + z_1) \right\Vert_{Y_+} \nonumber \\ & = \sup_{x\in{\mathbb{R}}} \Big\Vert e^{-{{\mathcal L}}_+ t} \big( {{\mathcal P}}_+ [ V_0(x) - {{\mathbf v}}_{z_1} (x) ] \big) + \int_0^t e^{-{{\mathcal L}}_+ (t-s)} \beta_+ \big[ U (s,x) - {{\mathbf u}}_{z_1} (x + cs) \big] {\,\mathrm{d}}s \Big\Vert_{Y_+} \nonumber \\ & \leq C_1 e^{-\sigma_+ t} \left\lbrace \sup_{x\in{\mathbb{R}}} \Vert {{\mathcal P}}_+ [ V_0(x) - {{\mathbf v}}_{z_0} (x) ] \Vert_{Y_+} + \sup_{x\in{\mathbb{R}}} \Vert {{\mathcal P}}_+ [ {{\mathbf v}}_{z_0} (x) - {{\mathbf v}}_{z_1} (x) ] \Vert _{Y_+} \right\rbrace \label{eq:estimate_1} \\ & \quad + C_1 \Vert \beta_+ \Vert_{Y_+} \int_0^t e^{-\sigma_+(t-s)} \sup_{x \in {\mathbb{R}}} \vert U(s, x) - {{\mathbf u}}_{z_1}(x + cs) \vert {\,\mathrm{d}}s . \label{eq:estimate_2}\end{aligned}$$ We estimate the first term in by and by . For the second term in , we exploit the Lipschitz continuity $\Vert {{\mathbf v}}_+(z_0) - {{\mathbf v}}_+(z_1) \Vert_{Y_+} \leq L |z_0 - z_1| \leq \delta L K_2$ for ${{\mathbf v}}_+ \in {\mathrm{C}}^1({\mathbb{R}}; Y_+)$. 
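The variation-of-constants formula can be sanity-checked in a scalar toy setting, replacing ${{\mathcal L}}_+$ by a positive number and taking an exponentially decaying forcing. All parameter values below are illustrative and not taken from the paper; the closed form follows by direct integration.

```python
import numpy as np

# Scalar analogue of the Duhamel formula: v'(t) = -L v + beta * u(t) with
# u(s) = e^{-0.2 s} has the solution
# v(t) = e^{-L t} v0 + beta * (e^{-0.2 t} - e^{-L t}) / (L - 0.2).
L, beta, v0 = 0.5, 0.3, 1.0

def v_duhamel(t, n=200_000):
    """Evaluate the Duhamel integral with the trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    integrand = np.exp(-L * (t - s)) * beta * np.exp(-0.2 * s)
    trap = np.sum((integrand[1:] + integrand[:-1]) * 0.5 * np.diff(s))
    return np.exp(-L * t) * v0 + trap

def v_exact(t):
    """Closed form obtained by integrating the Duhamel integral."""
    return np.exp(-L * t) * v0 + beta * (np.exp(-0.2 * t) - np.exp(-L * t)) / (L - 0.2)

for t in (1.0, 5.0, 10.0):
    assert abs(v_duhamel(t) - v_exact(t)) < 1e-8
```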
The Lipschitz constant $L := \sup_{z\in{\mathbb{R}}} \Vert ({{\mathbf v}}_+)_z (z, \cdot) \Vert_{Y_+}$ is bounded according to estimate . Choosing $\kappa_* = \min \lbrace \sigma_+, \kappa \rbrace $, we arrive at $$\begin{aligned} \label{eq:estimate_3} \sup_{x\in{\mathbb{R}}} \left\Vert V_+(t,x) - {{\mathbf v}}_+ (x + ct + z_1) \right\Vert_{Y_+} \leq \delta C_1 \big( 1 + L K_2 + K_3 \Vert \beta_+ \Vert_{Y_+} \big) e^{-\kappa_* t} .\end{aligned}$$ Hence, estimate follows immediately and the weak exponential stability of the family of pulse solutions $({{\mathbf u}},{{\mathbf v}})_{z_0\in{\mathbb{R}}}$ in is proven. If $\beta \in D({{\mathcal L}})$, then ${{\mathbf v}}_+$ belongs according to to the space ${\mathrm{C}}^1({\mathbb{R}}; D({{\mathcal L}}_+))$. With this higher regularity, the estimates , , and also hold with $D({{\mathcal L}}_+)$ instead of $Y_+$. Hence, the family of pulse solutions $({{\mathbf u}},{{\mathbf v}})_{z_0\in{\mathbb{R}}}$ in is also strongly exponentially stable. We point out that the constants $K_3$ and $\kappa$ in Definition \[defin:Stability\] are in general not the same for the guiding pulse $(u,v_1,..,v_m)_{z_0\in{\mathbb{R}}}$ and the two-scale pulse $({{\mathbf u}}, {{\mathbf v}})_{z_0\in{\mathbb{R}}}$. Numerical simulations {#sec:numerics} ===================== We provide numerical examples for three different parameter settings $(\alpha, \beta, b,d)$ and compare the solutions of the original system with those of the two-scale system . In the first two examples the spectrum of ${{\mathcal L}}$ is discrete and we know that stable two-scale pulses exist according to Section \[sec:pulses\]. In the third example ${{\mathcal L}}$ has only a continuous spectrum and our *guiding system* approach fails, because the two-scale system does not reduce to finitely many ODEs. However, we observe stable pulse solutions in our simulations. 
We numerically solve the FitzHugh–Nagumo equations on the bounded interval $x \in [-300, 300]$ with periodic boundary conditions. We emphasize at this point that the values of ${\varepsilon}$ chosen in the numerical simulations lie in the range ${\varepsilon}\in [2, 30]$. At first glance, these are not “small” numbers; however, recall that the characteristic length scale of the microstructure ${\varepsilon}_\mathrm{char}$ is given by the ratio of the microscopic length scale to the macroscopic length scale. The role of the macroscopic length scale of our system is played by the width of the activator spike, which is about $60$, cf. Figure \[fig:guide\]. With this, the characteristic ratio ${\varepsilon}_\mathrm{char} \in [0.03,0.5]$ is indeed small. To calculate the solutions, we implement a semi-implicit discretization scheme in MATLAB. Therein, the diffusion parts are solved via fast Fourier transform and the reaction terms are treated with the explicit Euler method. Throughout, we use the time step ${\,\mathrm{d}}t = 0.01$. For the spatial discretization we use for the ${\varepsilon}$-system the step size ${\,\mathrm{d}}x \approx 0.0366$, and for the two-scale system ${\,\mathrm{d}}x \approx 1.1742$ and ${\,\mathrm{d}}y \approx 0.0020$. Macroscopically vanishing inhibitor ${{\mathbf v}}$ {#subsec:two-alpha} --------------------------------------------------- We consider the case of a differential operator ${{\mathcal L}}$ with constant coefficients $b(y) \equiv d(y) \equiv \delta$ for $0 < \delta \ll 1$, i.e., $$\begin{aligned} ({{\mathcal L}}\varphi) (y) = -\delta (\varphi_{yy} - \varphi) \quad\text{and}\quad D({{\mathcal L}}) = {\mathrm{H}}^2({\mathbb{S}}) .\end{aligned}$$ The eigenfunctions of ${{\mathcal L}}$ are given via $\varphi^s_n(y) = \sin(2\pi n y)$, $\varphi^c_n(y) = \cos(2\pi n y)$, for $n \geq 1$, and $\varphi_0 (y) \equiv 1$. Therefore, Assumption \[assump:EllipticOrNot\] is satisfied. 
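As a quick numerical sanity check (Python/NumPy, with an illustrative grid size), one can verify that these modes are indeed eigenfunctions of ${{\mathcal L}}$: acting with $-\delta(\partial_{yy} - \mathrm{id})$ on $\sin(2\pi n y)$ returns $\delta(1 + (2\pi n)^2)\sin(2\pi n y)$.

```python
import numpy as np

delta, M = 0.0001, 512
y = np.linspace(0.0, 1.0, M, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(M, d=1.0 / M)   # angular wavenumbers

def L_apply(phi):
    """Apply (L phi)(y) = -delta*(phi_yy - phi) via spectral differentiation."""
    phi_yy = np.real(np.fft.ifft(-k**2 * np.fft.fft(phi)))
    return -delta * (phi_yy - phi)

for n in (1, 2, 4):
    phi = np.sin(2.0 * np.pi * n * y)
    lam = delta * (1.0 + (2.0 * np.pi * n) ** 2)  # predicted eigenvalue
    assert np.allclose(L_apply(phi), lam * phi, atol=1e-10)
```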
The corresponding eigenvalues $\lambda_n = \delta(1 + (2\pi n)^2)$ are isolated, real, positive, and have double geometric multiplicity for all $n \geq 1$, whereas $\lambda_0 = \delta$ is simple. In this example, $\alpha$ is the sum of two eigenfunctions, namely, $$\begin{aligned} \label{eq:param} \begin{array}{l} \alpha = \widetilde\alpha_1 +\widetilde\alpha_2 \quad\text{with}\quad \widetilde\alpha_1(y) = \sqrt{2} \sin(2\pi y) ,\quad \widetilde\alpha_2(y) = \sqrt{2} \sin(4\pi y) , \\ \beta(y) = 0.001 (\alpha(y) + \varphi(y)) , \quad \varphi(y) = \sqrt{2} \sin(8\pi y) , \quad\text{and}\quad \delta = 0.0001 . \end{array}\end{aligned}$$ Notice that $\varphi$ is orthogonal to $\alpha$ in ${\mathrm{L}}^2({\mathbb{S}})$. We emphasize that $\beta$ is not orthogonal to $\alpha$ but the signs of $\alpha$, $\beta$, and the product $\alpha(y) \beta(y)$ are not constant, cf. Remark \[rem:param\].\[rem:sign\]. ![Solution $(u,v_1,v_2,w)$ of the guiding system –. Here and in what follows, the pulse always propagates from the right to the left.[]{data-label="fig:guide"}](Ex-1_ii_guide.png){width="50.00000%"} For the choice of parameters in , the fully decomposed two-scale system of finitely many coupled ODEs as in reads $$\begin{aligned} \label{eq:guide_time} & \left\lbrace \begin{array}{ll} u_t & \hspace{-5pt} = u_{xx} + u(1 {-} u)(u {-} 0.15) - v_1 - v_2 , \\ (v_1)_t & \hspace{-5pt} = - \lambda_1 v_1 + 0.001 \!\cdot\! u , \\ (v_2)_t & \hspace{-5pt} = - \lambda_2 v_2 + 0.001 \!\cdot\! u , \end{array} \right. \\ \label{eq:guide_time_add} & \left\lbrace \begin{array}{ll} w_t & \hspace{-5pt} = - \lambda_3 w + \beta_+ u , \\ (v_+)_t & \hspace{-5pt} = - {{\mathcal L}}_+ v_+ . 
\end{array} \right.\end{aligned}$$ The three-component system is the guiding system, the $w$-equation in corresponds to the projection onto the eigenfunction $\varphi$, and the $v_+$-equation captures the remaining projections onto the complement of ${\mathrm{Span}}(\widetilde{\alpha}_1, \widetilde{\alpha}_2, \varphi)$. In view of , the parameters in the guiding system satisfy $\alpha_1 = \alpha_2 = 1$ and $\beta_1 = \beta_2 = \beta_+ = 0.001$. Recall that $\lambda_1 = 0.0001 (1 + 4\pi^2)$, $\lambda_2 = 0.0001 (1 + 16\pi^2)$, and $\lambda_3 = 0.0001 (1 + 64\pi^2)$. First, we solve the guiding system –, see Figure \[fig:guide\], so that we can use the pulse $(u,v_1,v_2)$ and the additional decoupled component $w$ to compute the initial conditions for the original system and the two-scale system . Second, we solve the original system for various ${\varepsilon}>0$, see Figure \[fig:eps\_1\], $$\label{eq:num_eps} u^{\varepsilon}_t = u^{\varepsilon}_{xx} + u^{\varepsilon}(1 - u^{\varepsilon})(u^{\varepsilon}- 0.15) - \alpha(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}, \qquad v^{\varepsilon}_t = \delta \left( {\varepsilon}^2 v^{\varepsilon}_{xx} - v^{\varepsilon}\right) + \beta (\tfrac{x}{{\varepsilon}}) u^{\varepsilon},$$ supplemented with the initial condition $u^{\varepsilon}_0(x) = u(x)$ and $v^{\varepsilon}_0(x) = \widetilde\alpha_1(\frac{x}{{\varepsilon}}) v_1(x) + \widetilde\alpha_2(\frac{x}{{\varepsilon}}) v_2(x) + \varphi(\tfrac{x}{{\varepsilon}}) w(x)$. According to the homogenization results in Section \[sec:justify\], the solutions behave asymptotically like $u^{\varepsilon}(t,x) = U(t,x) + O({\varepsilon})$ and $v^{\varepsilon}(t,x) = V(t,x,\frac{x}{{\varepsilon}}) + O({\varepsilon})$. One can observe in Figure \[fig:eps\_1\] that the amplitude of the oscillations of $u^{\varepsilon}$ decreases as ${\varepsilon}$ decreases. However, the amplitude of the oscillations of $v^{\varepsilon}$ does not vanish; smaller ${\varepsilon}$ merely leads to higher frequencies. 
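The semi-implicit scheme described at the beginning of this section (explicit Euler for the reaction, exact inversion of the diffusion in Fourier space) can be sketched as follows. This is an illustrative Python/NumPy rendering of the splitting idea for the scalar model problem $u_t = u_{xx} + f(u)$ only; the paper's computations use MATLAB and the production step sizes, whereas the grid, resolution, and initial data below are toy choices.

```python
import numpy as np

Lx, N, dt = 600.0, 1024, 0.01                  # interval [-300, 300], toy resolution
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)  # angular wavenumbers

def f(u):
    """FitzHugh-Nagumo nonlinearity used throughout this section."""
    return u * (1.0 - u) * (u - 0.15)

def step(u):
    """One semi-implicit step: explicit Euler for the reaction, then a
    backward Euler diffusion solve, inverted exactly via the FFT."""
    rhs = u + dt * f(u)
    return np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + dt * k**2)))

u = 0.5 * np.exp(-x**2 / 50.0)                 # smooth initial bump
for _ in range(100):                           # advance to t = 1
    u = step(u)

assert np.all(np.isfinite(u)) and u.max() < 1.0
```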
In Figure \[fig:eps\_1\], we also observe oscillations of the inhibitor $v^{\varepsilon}$, which correspond to the different modes $\widetilde{\alpha}_1$, $\widetilde{\alpha}_2$, and $\varphi$. ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system with parameters . Left: ${\varepsilon}= 10$. Right: ${\varepsilon}= 2$.[]{data-label="fig:eps_1"}](Ex-1_ii_eps_10.png "fig:"){width="50.00000%"} ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system with parameters . Left: ${\varepsilon}= 10$. Right: ${\varepsilon}= 2$.[]{data-label="fig:eps_1"}](Ex-1_ii_eps_2.png "fig:"){width="50.00000%"} ![Solution $(U,V)$ of the two-scale system with $\alpha$ and $\beta$ as in and $b = d = \delta$. Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane (rotated by $180^\circ$) and its average.[]{data-label="fig:limit"}](Ex-1_ii_limit_av.png "fig:"){width="50.00000%"} ![Solution $(U,V)$ of the two-scale system with $\alpha$ and $\beta$ as in and $b = d = \delta$. Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane (rotated by $180^\circ$) and its average.[]{data-label="fig:limit"}](Ex-1_ii_limit_ts.png "fig:"){width="50.00000%"} Finally, we compare our results to the solution of the two-scale system , see Figure \[fig:limit\]. We choose the initial conditions $U_0(x) = u(x)$ and $V_0(x,y) = \widetilde\alpha_1(y) v_1(x) + \widetilde\alpha_2(y) v_2(x) + \varphi(y) w(x)$. In order to plot the one-scale component $U(t,x)$ in one diagram with the two-scale component $V(t,x,y)$, see Figure \[fig:limit\] (left), we average the solution $V$ over the periodicity cell ${\mathbb{S}}$. In our case $$\begin{aligned} \int_0^1 {{\mathbf v}}(x + ct ,y) {\,\mathrm{d}}y = 0 \qquad \text{for all } x \in{\mathbb{R}}, \, t \geq 0,\end{aligned}$$ since $\int_0^1 \beta (y) {\,\mathrm{d}}y = 0$, cf.  Remark \[rem:zero\_spectrum\].\[rem:v\_vanish\]. 
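The vanishing cell average can be confirmed directly from the parameters in : with plain midpoint quadrature one checks $\int_0^1 \beta \,{\mathrm{d}}y = 0$, the orthogonality $(\alpha, \varphi)_{{\mathrm{L}}^2} = 0$, and the non-orthogonality $(\alpha, \beta)_{{\mathrm{L}}^2} = 0.002 \neq 0$ (the value $0.002$ is computed here, not quoted from the paper).

```python
import numpy as np

y = (np.arange(100_000) + 0.5) / 100_000       # midpoint grid on the unit cell
alpha = np.sqrt(2) * (np.sin(2 * np.pi * y) + np.sin(4 * np.pi * y))
phi = np.sqrt(2) * np.sin(8 * np.pi * y)
beta = 0.001 * (alpha + phi)

assert abs(np.mean(beta)) < 1e-12              # int_0^1 beta dy = 0
assert abs(np.mean(alpha * phi)) < 1e-12       # phi orthogonal to alpha
assert abs(np.mean(alpha * beta) - 0.002) < 1e-9   # beta NOT orthogonal to alpha
```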
In this sense, we have indeed found an example of a pulse solution with a macroscopically vanishing inhibitor. Generalized pulse solution for the original system {#subsec:jumps} --------------------------------------------------- In this example there is no inhibitor diffusion, $d(y) \equiv 0 $, and $b(y) \equiv b_0 > 0$ is constant such that Assumption \[assump:EllipticOrNot\] is satisfied and the spectrum of ${{\mathcal L}}= b_0 \, \mathrm{Id}$ consists of the single eigenvalue $b_0$. With this, any $\alpha \in {\mathrm{L}}^2({\mathbb{S}})$ is an eigenfunction of ${{\mathcal L}}$ and we choose $$\begin{aligned} \label{eq:param_2} b_0 = 0.00001, \qquad \alpha(y) = \left\lbrace \begin{array}{ll} +1 & \text{ if } y \in [0, 0.7) , \\ -1 & \text{ if } y \in [0.7 , 1) , \end{array} \right. \qquad \beta(y) = 0.003 \, \alpha(y) .\end{aligned}$$ According to Remark \[rem:zero\_spectrum\].\[rem:exact\_sol\], the inhibitor $v^{\varepsilon}$ of the generalized pulse solution $(u^{\varepsilon},v^{\varepsilon})$ of the original system $$\begin{aligned} \label{eq:num_eps_2} u^{\varepsilon}_t = u^{\varepsilon}_{xx} + u^{\varepsilon}(1 {-} u^{\varepsilon}) (u^{\varepsilon}{-} 0.15) - \alpha(\tfrac{x}{{\varepsilon}}) v^{\varepsilon}, \qquad v^{\varepsilon}_t = - b_0 v^{\varepsilon}+ \beta(\tfrac{x}{{\varepsilon}}) u^{\varepsilon}\end{aligned}$$ exhibits oscillations, whereas the activator $u^{\varepsilon}$ is independent of ${\varepsilon}$, see Figure \[fig:eps\_b0\]. ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system with parameters . Left: ${\varepsilon}= 25$. Right: ${\varepsilon}= 5$.[]{data-label="fig:eps_b0"}](Ex-2_ii_eps_25.png "fig:"){width="50.00000%"} ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system with parameters . Left: ${\varepsilon}= 25$. Right: ${\varepsilon}= 5$.[]{data-label="fig:eps_b0"}](Ex-2_ii_eps_5.png "fig:"){width="50.00000%"} ![Solution $(U, V)$ of the two-scale system with parameters and $d=0$. 
Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane (rotated by $180^\circ$) and its average.[]{data-label="fig:limit_b0"}](Ex-2_ii_limit_av.png "fig:"){width="50.00000%"} ![Solution $(U, V)$ of the two-scale system with parameters and $d=0$. Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane (rotated by $180^\circ$) and its average.[]{data-label="fig:limit_b0"}](Ex-2_ii_limit_ts.png "fig:"){width="50.00000%"} Again, we observe a nice agreement with the two-scale pulse solution of the limit system , see Figure \[fig:limit\_b0\]. In this case the average of $V$ does not vanish, since $\int_0^1 \beta (y) {\,\mathrm{d}}y \neq 0$. Due to and the relation $\int_0^1 \alpha(y) {\,\mathrm{d}}y = 0.4$, we have $$\begin{aligned} \int_0^1 V (t,x,y) {\,\mathrm{d}}y = 0.4 \, v_1 (x + ct) \qquad \text{for all } x \in {\mathbb{R}}, \, t \geq 0 ,\end{aligned}$$ where $v_1$ is given via the guiding system . Continuous spectrum of ${{\mathcal L}}$ {#subsec:contin_spec} --------------------------------------- Let us consider the case where ${{\mathcal L}}$ has only a continuous spectrum, which does not fit into the scope of our assumptions in Section \[sec:pulses\]. In this case Theorem \[thm:error-est\] still holds, but our method for the proof of two-scale pulses fails. However, we are able to present a numerical example which indicates that stable pulses also exist in this situation. Let us study the operator $({{\mathcal L}}\varphi) (y) = b(y) \varphi$, where $b(y)$ is a positive and bounded non-constant function. 
The data are $$\begin{aligned} \label{eq:param_3} b(y) = 0.001 (5 {+} 3\sin(2\pi y)), \qquad \alpha(y) \equiv 1, \qquad \beta(y) \equiv 0.003 .\end{aligned}$$ We solve the original system for various ${\varepsilon}$, see Figure \[fig:eps\_b\], $$\begin{aligned} \label{eq:num_eps_3} u^{\varepsilon}_t = u^{\varepsilon}_{xx} + u^{\varepsilon}(1 {-} u^{\varepsilon}) (u^{\varepsilon}{-} 0.15) - v^{\varepsilon}, \quad v^{\varepsilon}_t = - 0.001 \left( 5 {+} 3\sin(2\pi \tfrac{x}{{\varepsilon}}) \right) v^{\varepsilon}+ 0.003 \!\cdot\! u^{\varepsilon}.\end{aligned}$$ ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system . Left: ${\varepsilon}= 30$. Right: ${\varepsilon}= 3$.[]{data-label="fig:eps_b"}](Ex-3_ii_eps_30_b0_001.png "fig:"){width="50.00000%"} ![Solution $(u^{\varepsilon}, v^{\varepsilon})$ of the original system . Left: ${\varepsilon}= 30$. Right: ${\varepsilon}= 3$.[]{data-label="fig:eps_b"}](Ex-3_ii_eps_3_b0_001.png "fig:"){width="50.00000%"} ![Solution $(U,V)$ of the two-scale system with parameters . Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane and its average.[]{data-label="fig:limit_b"}](Ex-3_ii_limit_av.png "fig:"){width="50.00000%"} ![Solution $(U,V)$ of the two-scale system with parameters . Left: the components $U$ and $\int_0^1 V(t,x,y) {\,\mathrm{d}}y$. Right: the $V$-component in $xy$-plane and its average.[]{data-label="fig:limit_b"}](Ex-3_ii_limit_ts.png "fig:"){width="50.00000%"} The solution $(U,V)$ of the two-scale system reproduces the effective behavior of the pulse $(u^{\varepsilon}, v^{\varepsilon})$, see Figure \[fig:limit\_b\]. In this case we do not have a suitable guiding system at hand; however, we choose as the initial condition the pulse solution of the guiding system with the parameters $\beta_1 = 0.003$ and $\lambda_1 = 0.005$. 
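For a multiplication operator the spectrum is the essential range of the multiplier, so here ${\mathrm{spec}}({{\mathcal L}}) = [0.002, 0.008]$; this interval is not stated explicitly in the text but follows directly from  and can be checked numerically.

```python
import numpy as np

# Range of the multiplier b(y) = 0.001*(5 + 3 sin(2 pi y)) on the unit cell;
# the spectrum of the multiplication operator is this (purely continuous)
# interval, with no L^2 eigenfunctions since b is non-constant.
y = np.linspace(0.0, 1.0, 1_000_001)
b = 0.001 * (5.0 + 3.0 * np.sin(2.0 * np.pi * y))

assert abs(b.min() - 0.002) < 1e-9    # inf of the multiplier
assert abs(b.max() - 0.008) < 1e-9    # sup of the multiplier
assert b.min() > 0.0                  # b is positive and bounded
```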
Since the pulse has to evolve from the non-matching initial condition, we solve this example on the larger interval $x \in [-700,700]$. The step sizes are ${\,\mathrm{d}}x \approx 0.0427$ for the ${\varepsilon}$-system and ${\,\mathrm{d}}x \approx 1.3685$ for the limit system. Auxiliary estimates {#app:estimates} =================== The following lemma gives a standard proof of ${\mathrm{L}}^\infty$-boundedness for solutions of parabolic equations. \[lemma:max-bound\] Let Assumptions \[assump:coeff\] and \[assump:initial-1\] hold. Any solution $(u^{\varepsilon},v^{\varepsilon})$ of satisfies $$\begin{aligned} \Vert u^{\varepsilon}(t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \Vert v^{\varepsilon}(t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} \leq C e^{\kappa t} \qquad \text{for } t \geq 0 ,\end{aligned}$$ where the constants $C, \kappa \geq 0$ are independent of ${\varepsilon}$ and $t$. Indeed, $C$ depends on $\Vert u^{\varepsilon}_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})}$ and $\Vert v^{\varepsilon}_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})}$, and $\kappa$ depends on $\max \lbrace \Vert \alpha \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert \beta \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert b \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})} \rbrace$ as well as the growth conditions of $f$ in Assumption \[assump:coeff\].\[assump:coeff2\]. For brevity we set $\alpha_{\varepsilon}(x) : = \alpha (\tfrac{x}{{\varepsilon}})$, etc., and define $$\begin{aligned} M(t):= \max \lbrace 1, \Vert u^{\varepsilon}_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})}, \Vert v^{\varepsilon}_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} \rbrace e^{2\kappa t} ,\end{aligned}$$ where $\kappa \in {\mathbb{R}}$ is to be determined later. We prove the lower bound $\min \lbrace u^{\varepsilon}(t,x) , v^{\varepsilon}(t,x) \rbrace \geq - M(t)$ and the upper bound $\max \lbrace u^{\varepsilon}(t,x) , v^{\varepsilon}(t,x) \rbrace \leq M(t)$ simultaneously. 
First, we introduce the negative part for $\varphi \in C^0([0,T]; {\mathrm{L}}^2({\mathbb{R}}))$ $$\begin{aligned} (\varphi + M)_- (t,x) := \left\lbrace \begin{array}{ll} - (\varphi (t,x) + M(t)) & \text{ if } \varphi (t,x) \leq - M(t) \text{ for a.a.\ } x \in {\mathbb{R}}, \\ 0 & \text{ else} \end{array} \right.\end{aligned}$$ and test the $u^{\varepsilon}$- and $v^{\varepsilon}$-equations in with $- (u^{\varepsilon}+ M)_-$ and $-(v^{\varepsilon}+ M)_-$, respectively. Using $M_t = 2\kappa M$ and $M_x = 0$, integrating over ${\mathbb{R}}$, and applying partial integration gives $$\begin{aligned} \label{eq:bound-1} & \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \left( \Vert (u^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) \nonumber \\ & \leq \int_{{\mathbb{R}}} \big\lbrace - \left( f(u^{\varepsilon}) + \kappa M \right) (u^{\varepsilon}+ M)_- - \left( -\alpha_{\varepsilon}v^{\varepsilon}+ \kappa M \right) (u^{\varepsilon}+ M)_- \nonumber \\ & \hspace{48pt} - \left( - b_{\varepsilon}v^{\varepsilon}+ \kappa M \right) (v^{\varepsilon}+ M)_- - \left( \beta_{\varepsilon}u^{\varepsilon}+ \kappa M \right) (v^{\varepsilon}+ M)_- \big\rbrace {\,\mathrm{d}}x .\end{aligned}$$ Secondly, we introduce the positive part $$\begin{aligned} (\varphi - M)_+ (t,x) := \left\lbrace \begin{array}{ll} \varphi(t,x) - M(t) & \text{ if } \varphi (t,x) \geq M(t) \text{ for a.a.\ } x \in {\mathbb{R}}, \\ 0 & \text{ else} \end{array} \right.\end{aligned}$$ and note that $(\varphi + M)_- \geq 0$ and $(\varphi - M)_+ \geq 0$ for all functions $\varphi \in C^0([0,T]; {\mathrm{L}}^2({\mathbb{R}}))$. 
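The truncations $(\varphi + M)_-$ and $(\varphi - M)_+$ are ordinary clipping operations; the snippet below (illustrative, with an arbitrary threshold and sample function, not from the paper) verifies the properties just noted: both parts are nonnegative, both vanish wherever $|\varphi| \leq M$, and removing them clips $\varphi$ to $[-M, M]$.

```python
import numpy as np

M = 1.5                                      # arbitrary threshold with M >= 1
phi = np.linspace(-4.0, 4.0, 1001)           # sample "function" values
neg_part = np.maximum(-(phi + M), 0.0)       # (phi + M)_-
pos_part = np.maximum(phi - M, 0.0)          # (phi - M)_+

assert np.all(neg_part >= 0.0) and np.all(pos_part >= 0.0)
inside = np.abs(phi) <= M                    # region where |phi| <= M
assert np.all(neg_part[inside] == 0.0) and np.all(pos_part[inside] == 0.0)
# Removing both parts is exactly clipping phi to the interval [-M, M]:
assert np.allclose(phi - pos_part + neg_part, np.clip(phi, -M, M))
```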
Testing with $(u^{\varepsilon}- M)_+$ and $(v^{\varepsilon}- M)_+$ yields $$\begin{aligned} \label{eq:bound-2} & \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \left( \Vert (u^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) \nonumber \\ & \leq \int_{{\mathbb{R}}} \big\lbrace \left( f(u^{\varepsilon}) - \kappa M \right) (u^{\varepsilon}- M)_+ + \left( -\alpha_{\varepsilon}v^{\varepsilon}- \kappa M \right) (u^{\varepsilon}- M)_+ \nonumber \\ & \hspace{35pt} + \left( - b_{\varepsilon}v^{\varepsilon}- \kappa M \right) (v^{\varepsilon}- M)_+ + \left( \beta_{\varepsilon}u^{\varepsilon}- \kappa M \right) (v^{\varepsilon}- M)_+ \big\rbrace {\,\mathrm{d}}x .\end{aligned}$$ Adding the estimates in and gives $$\begin{aligned} & \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \left( \Vert (u^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (u^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) \nonumber \\ & \leq \int_{{\mathbb{R}}} \big\lbrace - \left( f(u^{\varepsilon}) + \kappa M \right) (u^{\varepsilon}+ M)_- + \left( f(u^{\varepsilon}) - \kappa M \right) (u^{\varepsilon}- M)_+ \label{eq:bound-3a} \\ & \hspace{37pt} - \left( -\alpha_{\varepsilon}v^{\varepsilon}+ \kappa M \right) (u^{\varepsilon}+ M)_- + \left( -\alpha_{\varepsilon}v^{\varepsilon}- \kappa M \right) (u^{\varepsilon}- M)_+ \label{eq:bound-3b} \\ & \hspace{37pt} - \left( - b_{\varepsilon}v^{\varepsilon}+ \kappa M \right) (v^{\varepsilon}+ M)_- + \left( - b_{\varepsilon}v^{\varepsilon}- \kappa M \right) (v^{\varepsilon}- M)_+ \label{eq:bound-3c} \\ & \hspace{37pt} - \left( \beta_{\varepsilon}u^{\varepsilon}+ \kappa M \right) (v^{\varepsilon}+ M)_- + \left( \beta_{\varepsilon}u^{\varepsilon}- \kappa M \right) (v^{\varepsilon}- M)_+ \big\rbrace 
{\,\mathrm{d}}x . \label{eq:bound-3d}\end{aligned}$$ The first term in is controlled as follows: $(u^{\varepsilon}+ M)_- > 0$ implies $u^{\varepsilon}< 0$ and, according to Assumption \[assump:coeff\].\[assump:coeff2\], $f(u^{\varepsilon}) \geq c_1 u^{\varepsilon}- c_2$ with $c_1,c_2 \geq 0$. If $\kappa \geq \max \lbrace 2c_1, 2c_2 \rbrace$, then we have $$\begin{aligned} - \left( f(u^{\varepsilon}) + \kappa M \right) (u^{\varepsilon}+ M)_- & \leq - (c_1 u^{\varepsilon}+ \tfrac{\kappa}{2} M) (u^{\varepsilon}+ M)_- + (c_2 - \tfrac{\kappa}{2}M)(u^{\varepsilon}+ M)_- \\ & \leq \kappa |(u^{\varepsilon}+ M)_-|^2 .\end{aligned}$$ Analogously, the second term in is bounded by $\kappa |(u^{\varepsilon}- M)_+|^2 $ for $\kappa \geq \max \lbrace 2c_3, 2c_4 \rbrace$. In the same manner we obtain that, if $\kappa \geq \Vert b \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}$, then the sum of both terms in is bounded by $\kappa |(v^{\varepsilon}+ M)_-|^2 + \kappa |(v^{\varepsilon}- M)_+|^2$. The mixed terms in can be controlled for $\kappa \geq \Vert \alpha \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}$ via $$\begin{aligned} & - \left( -\alpha_{\varepsilon}v^{\varepsilon}+ \kappa M \right) (u^{\varepsilon}+ M)_- + \left( -\alpha_{\varepsilon}v^{\varepsilon}- \kappa M \right) (u^{\varepsilon}- M)_+ \\ & \leq \kappa (|v^{\varepsilon}| - M) \big( (u^{\varepsilon}+ M)_- + (u^{\varepsilon}- M)_+ \big) \\ & \leq \left\lbrace \begin{array}{ll} 0 & \text{ if } |v^{\varepsilon}| < M ,\\ \kappa (v^{\varepsilon}+ M)_- \big( (u^{\varepsilon}+ M)_- + (u^{\varepsilon}- M)_+ \big) & \text{ if } v^{\varepsilon}\leq -M ,\\ \kappa (v^{\varepsilon}- M)_+ \big( (u^{\varepsilon}+ M)_- + (u^{\varepsilon}- M)_+ \big) & \text{ if } v^{\varepsilon}\geq M \\ \end{array} \right. \\ & \leq \kappa \left( |(v^{\varepsilon}+ M)_-|^2 + |(v^{\varepsilon}- M)_+|^2 + |(u^{\varepsilon}+ M)_-|^2 + |(u^{\varepsilon}- M)_+|^2 \right) .\end{aligned}$$ The mixed terms in are treated analogously. 
Overall, choosing $\kappa = \max \lbrace 2c_1, 2c_2, 2c_3, 2c_4, \Vert \alpha \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert \beta \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})}, \Vert b \Vert_{{\mathrm{L}}^\infty({\mathbb{S}})} \rbrace$ gives $$\begin{aligned} & \frac12 \frac{{\,\mathrm{d}}}{{\,\mathrm{d}}t} \left( \Vert (u^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (u^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) \\ & \leq 3 \kappa \left( \Vert (u^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}+ M)_- \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (u^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} + \Vert (v^{\varepsilon}- M)_+ \Vert^2_{{\mathrm{L}}^2({\mathbb{R}})} \right) .\end{aligned}$$ By construction, the initial conditions satisfy $(u^{\varepsilon}+ M)_-(0,x) = (u^{\varepsilon}- M)_+(0,x) = 0$ and $(v^{\varepsilon}+ M)_-(0,x) = (v^{\varepsilon}- M)_+(0,x) = 0$ almost everywhere in ${\mathbb{R}}$. Therefore, the application of Grönwall’s lemma implies $(u^{\varepsilon}+M)_-(t,x) = (u^{\varepsilon}- M)_+(t,x) = 0$ and $(v^{\varepsilon}+ M)_-(t,x) = (v^{\varepsilon}- M)_+(t,x) = 0$ for all $t\geq 0$ and almost all $x\in{\mathbb{R}}$. Hence, the desired ${\mathrm{L}}^\infty({\mathbb{R}})$-bound holds uniformly with respect to ${\varepsilon}$. 
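The final Grönwall step can be illustrated with a scalar toy ODE: any $E \geq 0$ with $E' \leq 3\kappa E$ satisfies $E(t) \leq E(0) e^{3\kappa t}$, so $E(0) = 0$ forces $E \equiv 0$. The snippet below (all parameter values illustrative) integrates one such differential inequality with explicit Euler and checks both conclusions.

```python
import numpy as np

kappa, dt, T = 0.7, 1e-4, 5.0
a = 3.0 * kappa                      # the factor 3*kappa from the estimate

def integrate(E0):
    """Explicit Euler for E' = a*cos(t)^2 * E; for E >= 0 the right-hand
    side satisfies the differential inequality E' <= a*E."""
    E = E0
    for i in range(round(T / dt)):
        E += dt * a * np.cos(i * dt) ** 2 * E
    return E

assert integrate(0.0) == 0.0                   # zero initial data stays zero
assert integrate(1.0) <= np.exp(a * T)         # Groenwall bound E(0)*e^{aT}
```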
\[rem:A1\] By the same argument as in the proof of Lemma \[lemma:max-bound\], we obtain that any solution $(U,V)$ of satisfies $$\begin{aligned} \Vert U(t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})} + \Vert V(t) \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}\times{\mathbb{S}})} \leq C e^{\kappa t} \qquad \text{for } t \geq 0 ,\end{aligned}$$ where $C$ depends on $\Vert U_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}})}$ and $\Vert V_0 \Vert_{{\mathrm{L}}^\infty({\mathbb{R}}\times{\mathbb{S}})}$, and $\kappa$ is as in Lemma \[lemma:max-bound\]. For completeness, we give the proof of the next lemma, which follows along the lines of [@Eck04 Lem. 4.1]. \[lemma:eck\] For every $g \in {\mathrm{H}}^1({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}}))$, we set $\bar g(x) : = \int_0^1 g(x,y) {\,\mathrm{d}}y $. Then, the dual norm of ${\mathcal{R}_{\varepsilon}}g - \bar{g}$ is bounded via $$\begin{aligned} \Vert {\mathcal{R}_{\varepsilon}}g - \bar g \Vert_{{\mathrm{H}}^1({\mathbb{R}})^*} \leq {\varepsilon}\Vert g \Vert_{{\mathrm{H}}^1({\mathbb{R}}; {\mathrm{L}}^2({\mathbb{S}}))} .\end{aligned}$$ We consider for arbitrary $\varphi \in {\mathrm{C}}^\infty_\mathrm{c} ({\mathbb{R}})$ $$\begin{aligned} \int_{\mathbb{R}}({\mathcal{R}_{\varepsilon}}g - \bar g) \varphi {\,\mathrm{d}}x = \sum_{n \in \mathbb{Z}} \int_{{\varepsilon}n}^{{\varepsilon}(n+1)} ({\mathcal{R}_{\varepsilon}}g - \bar g) \varphi {\,\mathrm{d}}x .\end{aligned}$$ Without loss of generality we set $n = 0$. 
Using the variable substitutions $x = {\varepsilon}y$ and $x = {\varepsilon}\tilde y$ gives $$\begin{aligned} & \int_0^{\varepsilon}g(x,\tfrac{x}{{\varepsilon}}) \varphi(x) {\,\mathrm{d}}x = {\varepsilon}\int_0^1 g({\varepsilon}y, y) \varphi({\varepsilon}y) {\,\mathrm{d}}y = {\varepsilon}\int_0^1 \int_0^1 g({\varepsilon}y, y) \varphi({\varepsilon}y) {\,\mathrm{d}}y {\,\mathrm{d}}\tilde y ,\\ & \int_0^{\varepsilon}\bar g (x) \varphi(x) {\,\mathrm{d}}x = \int_0^{\varepsilon}\int_0^1 g (x,y) \varphi(x) {\,\mathrm{d}}y {\,\mathrm{d}}x = {\varepsilon}\int_0^1 \int_0^1 g ({\varepsilon}\tilde y,y) \varphi({\varepsilon}\tilde y) {\,\mathrm{d}}y {\,\mathrm{d}}\tilde y .\end{aligned}$$ Subtracting both integrals and rearranging the integrands yields $$\begin{aligned} \int_{0}^{{\varepsilon}} ({\mathcal{R}_{\varepsilon}}g - \bar g) \varphi {\,\mathrm{d}}x & = {\varepsilon}\int_{(0,1)^2} \left( g({\varepsilon}y , y) - g({\varepsilon}\tilde y , y)\right) \varphi({\varepsilon}y) + g({\varepsilon}\tilde y, y)\left( \varphi({\varepsilon}y) - \varphi({\varepsilon}\tilde y) \right) {\,\mathrm{d}}y{\,\mathrm{d}}\tilde y .\end{aligned}$$ Exploiting the fundamental theorem of calculus $$\begin{aligned} g({\varepsilon}y , y) - g({\varepsilon}\tilde y , y) = {\varepsilon}\int_0^1 g_x({\varepsilon}y t + (1 - t) {\varepsilon}\tilde y , y) (y - \tilde y) {\,\mathrm{d}}t\end{aligned}$$ as well as the variable transform $$\begin{aligned} (t, \xi, \eta) = (t, t y + (1 - t) \tilde y , y - \tilde y) \quad\text{with}\quad \left\vert \mathrm{det} \left( \frac{\partial(t,\xi,\eta)}{\partial(t,y,\tilde y)} \right) \right\vert = 1 ,\end{aligned}$$ where $(t,\xi) \in (0,1)^2$ and $\eta \in (-1,1)$, yields with the Cauchy–Bunyakovsky–Schwarz inequality $$\begin{aligned} \left\vert \int_{0}^{{\varepsilon}} ({\mathcal{R}_{\varepsilon}}g - \bar g) \varphi {\,\mathrm{d}}x \right\vert & \leq {\varepsilon}^2 \left( \int_{(0,1)^2} \int_{-1}^1 |g_x({\varepsilon}\xi , \xi + (1 - t) \eta) \eta|^2 {\,\mathrm{d}}t 
{\,\mathrm{d}}\xi {\,\mathrm{d}}\eta \right)^{\frac12} \left( \int_0^1 |\varphi({\varepsilon}y)|^2 {\,\mathrm{d}}y \right)^{\frac12} \\ & \quad + {\varepsilon}^2 \left( \int_0^1 \int_{-1}^1 |\varphi_x({\varepsilon}\xi ) \eta|^2 {\,\mathrm{d}}\xi {\,\mathrm{d}}\eta \right)^{\frac12} \left( \int_{(0,1)^2} |g({\varepsilon}\tilde y , y)|^2 {\,\mathrm{d}}\tilde y {\,\mathrm{d}}y \right)^{\frac12} \\ & \leq {\varepsilon}^2 4 \Vert g \Vert_{{\mathrm{H}}^1((0,{\varepsilon}); {\mathrm{L}}^2({\mathbb{S}}))} \Vert \varphi \Vert_{{\mathrm{H}}^1(0,{\varepsilon})} .\end{aligned}$$ Summing up over all $n \in \mathbb{Z}$ and recalling the dense embedding of ${\mathrm{C}}^\infty_\mathrm{c} ({\mathbb{R}})$ into ${\mathrm{H}}^1({\mathbb{R}})$ gives the desired estimate. **Acknowledgment.** The authors thank Shalva Amiranashvili, Annegret Glitzky, Christian Kühn, and Alexander Mielke for helpful discussions and comments. The research of S.R. was supported by *Deutsche Forschungsgemeinschaft* within SFB 910 *Control of self-organizing nonlinear systems: Theoretical methods and concepts of application* via the project A5 *Pattern formation in systems with multiple scales*. The research of P.G. was supported by the DFG Heisenberg Programme, DFG project SFB 910, and the Ministry of Education and Science of Russian Federation (agreement 02.a03.21.0008). [^1]: Freie Universität Berlin, Institute for Mathematics, Arnimallee 3, 14195 Berlin, Germany [^2]: RUDN University, Miklukho-Maklaya 6, 117198, Moscow, Russia [^3]: Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, 10117 Berlin, Germany
--- abstract: 'Combining conservation of energy throughout nearly-spherical collapse of galaxy clusters with the virial theorem, we derive the mass-temperature relation for X-ray clusters of galaxies $T=CM^{2/3}$. The normalization factor $C$ and the scatter of the relation are determined from first principles with the additional assumption of an initial Gaussian random field. We are also able to reproduce the recently observed break in the M-T relation at $T \sim 3 \keV$, based on the scatter in the underlying density field for a low density $\Lambda$CDM cosmology. Finally, by combining observational data of high redshift clusters with our theoretical formalism, we find a semi-empirical temperature-mass relation which is expected to hold at redshifts up to unity with less than $20\%$ error.' author: - Niayesh Afshordi and Renyue Cen title: 'Mass-Temperature Relation of Galaxy Clusters: A Theoretical Study' --- Introduction ============ The abundance of clusters of galaxies provides one of the strongest constraints on cosmological models (Peebles, Daly, & Juszkiewicz 1989; Bahcall & Cen 1992; White, Efstathiou, & Frenk 1993; Viana & Liddle 1996; Eke, Cole, & Frenk 1996; Oukbir, Bartlett, & Blanchard 1997; Bahcall, Fan, & Cen 1997; Pen 1998; Cen 1998; Henry 2000; Wu 2000) with an uncertainty on the amplitude of density fluctuations of about 10% on $\sim 10h^{-1}$Mpc scale. Theoretically it is often desirable to translate the mass of a cluster, which is predicted by either analytic theories, such as the Press-Schechter (1974) theory, or N-body simulations, into the temperature of the cluster, which is directly observed. Simple arguments based on virialization density suggest that $T\propto M^{2/3}$, where $T$ is the temperature of a cluster within a certain radius (e.g., the virial radius) and $M$ is the mass within the same radius. 
However, the proportionality coefficient has not been self-consistently determined from first principles, although numerical simulations have frequently been used to calibrate the relation (e.g., Evrard, Metzler, & Navarro 1996, hereafter EMN; Bryan & Norman 1998; Thomas et al. 2001). It is noted that the results from different observational methods of mass measurements are not consistent with one another or with the simulation results (e.g., Horner, Mushotzky, & Scharf 1999, hereafter HMS; Neumann & Arnaud 1999; Nevalainen, Markevitch, & Forman 2000; Finoguenov, Reiprich, & Bohringer 2001, hereafter FRB). In general, X-ray mass estimates are about $80\%$ lower than the predictions of hydro-simulations. Fig 1 compares X-ray cluster observational data with the best-fit line to the EMN simulation results (FRB). On the other hand, mass estimates from galaxy velocity dispersion seem to be consistent with simulation results (HMS). The error in the gravitational lensing mass measurements is still too large to distinguish between these two (Hjorth, Oukbir & van Kampen 1998). Another recent observational finding is the possible existence of a break in the $T-M$ relation. Using the resolved temperature profiles of X-ray clusters observed by ASCA, FRB have investigated the $T-M$ relation at the low-mass end and find that $M\propto T^{\sim 2}$, compared to $M\propto T^{\sim 3/2}$ at the high-mass end. Suggestions have been made to explain this behavior by attributing it to the effects of formation redshift (FRB), cooling (Muanwong 2001) and heating (Bialek, Evrard & Mohr 2000) processes. In this paper we use conservation of energy for an almost spherically collapsing region to derive the M-T relation. In §2 we find the initial and final energy of the cluster. §3 constrains various factors which enter the normalization of the M-T relation, via statistical methods, simulation and observational input. §4 considers predictions of our model and comparison with observational and simulation results. 
In §5, we discuss the limitations of our approach and justify some of the approximations. §6 concludes the paper. Conservation of Energy ====================== Initial Kinetic and Potential Energy of a Proto-Cluster ------------------------------------------------------- We begin by deriving the kinetic energy of the proto-cluster. We can write the velocity as a function of the gravitational potential $\phi_i$ (e.g. Padmanabhan 1993): $${\mathbf v} = H_i {\mathbf x}-\frac{2}{3H_i}{\mathbf \nabla}\phi_i,$$ at the initial time in the linear regime, where $H_i$ is the Hubble constant at the initial time. There is a small dependence on the initial density parameter $\Omega_i$ in equation (1) which we neglect, since $\Omega_i$ is initially very close to unity and the difference enters only through second-order terms. Then the kinetic energy is given by: $$K_i = \frac{1}{2}\int\rho v^2 d^3x = \frac{1}{2}\rho_{i}\int(1+\delta_i)|H_i{\mathbf x}-\frac{2}{3H_i}{\mathbf \nabla}\phi_i|^2 d^3x.$$ Keeping the terms up to the linear order we obtain $$K_i = \frac{1}{2}\rho_i \int(H_i^2 x^2 +\frac{2}{3}\nabla^2\phi_i x^2 - \frac{4}{3} {\mathbf x}.{\mathbf \nabla}\phi_i) d^3x.$$ In deriving equation (3), we have used the Poisson equation: $$\nabla^2 \phi_i = 4\pi G \rho_i \delta_i,$$ to substitute for $\delta_i$, where $\rho_i$ is the initial mean density of the universe. 
For equation (3) we then use Gauss's theorem to make the third term similar to the second, at the expense of a surface term: $$K_i = \frac{1}{2}\rho_i\int(H_i^2 x^2+\frac{4}{3}x^2\nabla^2\phi_i)d^3x -\frac{1}{3}\rho_i\oint x^2 {\mathbf \nabla}\phi_i.d{\mathbf a}.$$ Assuming that the deviations from spherical symmetry are not important at the boundary of the proto-cluster, we find $\nabla\phi_i$ in the second term in equation (5) as a function of $\delta_i$: $${\mathbf \nabla}\phi_i= \hat{{\mathbf r}}\frac{G \delta M}{R_i^2} = \hat{{\mathbf r}}\frac{G \rho_i}{R_i^2}\int \delta_i d^3x,$$ where $G$ is the gravitational constant and $R_i$ is the boundary radius of the initial proto-cluster, which leads to $$K_i = \frac{4\pi G \rho_i^2}{3\Omega_i}\int [ (1+2\delta_i)x^2-R_i^2 \delta_i] d^3x,$$ where we have used equation (4) to substitute back for $\phi_i$ and also the definition of $\Omega_i \equiv 8\pi G \rho_i/(3 H_i^2)$. Let us now find an expression for the gravitational potential energy of the proto-cluster. Using its definition we have $$U_i = -\frac{G\rho_i^2}{2}\int\int[\frac{(1+\delta_i(x_1))(1+\delta_i(x_2))}{|x_1-x_2|}]d^3x_1 d^3x_2.$$ Keeping the terms to the first order and using the symmetry under interchange of $x_1$ and $x_2$, we arrive at $$U_i = -\frac{G\rho_i^2}{2}\int(1+2\delta_i(x_1))d^3x_1\int\frac{d^3x_2}{|x_1-x_2|},$$ which, taking the second integral in a spherical volume, gives $$U_i = -\frac{4\pi G}{3}\rho_i^2\int (1+2\delta_i)(\frac{3R_i^2-x^2}{4})d^3x.$$ Adding equations (7) and (10) gives the total initial energy[^1] $$E_i = \frac{4\pi G}{3}\rho_i^2[\frac{4\pi}{5}(1-\Omega_i)R_i^5-\frac{5}{2}\int\delta_i(x)(R_i^2-x^2)d^3x],$$ to the first order. 
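The interior integral used in passing from equation (9) to equation (10), $\int_{|x_2|<R} d^3x_2/|x_1-x_2| = \frac{2\pi}{3}(3R^2-x_1^2)$, can be checked with a quick Monte Carlo estimate; the sketch below (in Python, with an arbitrarily chosen interior point and unit radius) is illustrative only:

```python
# Monte Carlo check of the interior integral of equation (9)-(10):
# for |x1| = r inside a uniform unit sphere,
#   I(r) = \int_{|x2|<1} d^3x2 / |x1 - x2| = (2*pi/3) * (3 - r^2).
import numpy as np

rng = np.random.default_rng(0)
n = 400_000

# Sample x2 uniformly inside the unit sphere (rejection from the cube).
pts = rng.uniform(-1.0, 1.0, size=(2 * n, 3))
pts = pts[np.einsum('ij,ij->i', pts, pts) < 1.0][:n]

x1 = np.array([0.5, 0.0, 0.0])            # interior test point, r = 0.5
d = np.linalg.norm(pts - x1, axis=1)
volume = 4.0 * np.pi / 3.0
I_mc = volume * np.mean(1.0 / d)          # volume * <1/|x1 - x2|>

I_exact = (2.0 * np.pi / 3.0) * (3.0 - 0.25)
print(I_mc, I_exact)                      # should agree to better than 1%
```

The singularity at $x_2 = x_1$ is integrable, so the sample mean converges without special treatment.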
Defining $\tilde{x}$, $\tilde{\delta}_i$ and $B$ as: $$\tilde{x}\equiv\frac{x}{R_i},\quad \tilde{\delta}_i \equiv \delta_i + \frac{3}{5}(\Omega_i-1),\quad B \equiv \int_0^1\tilde{\delta}_i(\tilde{x}) (1-\tilde{x}^2)d^3\tilde{x},$$ equation (11) is simplified: $$E_i = -\frac{10\pi G}{3} \rho_i^2 R_i^5 B.$$ The integral in the definition of $B$ is in fact a three dimensional integral and the limits denote that the integration domain is the unit sphere. Note that $\tilde{\delta}_i$ can be considered as the density perturbation to a flat (i.e. $\Omega_{tot} = 1$) universe for which the energy vanishes. Another aspect of this statement is that, in a curved universe, both terms in the definition of $\tilde{\delta}_i$ scale as $a$ in the early (matter dominated) universe, and so does $\tilde{\delta}_i$ itself. For a flat universe with a cosmological constant, the first term dominates at high redshift since the second term scales as $a^3$. Energy of a Virialized Cluster ------------------------------ According to the virial theorem, the sum of the total energy $E_f$ of a virialized cluster and its kinetic energy $K_f$ vanishes. However, non-vanishing pressure at the boundary of the cluster can significantly modify the virial relation. Integrating the equation of hydrostatic equilibrium, we have $$K_f+E_f = 3P_{ext}V,$$ where $P_{ext}$ is the pressure on the outer boundary of the virialized region (i.e. the virial radius) and $V$ is the volume. For now we assume that the surface term is related to the final potential energy $U_f$ by $$3P_{ext}V = -\nu U_f.$$ We will consider the coefficient $\nu$ and its possible mass dependence in §3.4. 
For a system of fully ionized gas plus dark matter, the virial relation (14) with equation (15) leads to $$-(\frac{1+\nu}{1-\nu})E_f = K_f = \frac{3}{2} M_{DM} \sigma_v^2+ \frac{3M_{gas} k T}{2\mu m_p},$$ where $\sigma_v$ is the mass-weighted mean one-dimensional velocity dispersion of dark matter particles, $M_{DM}$ is the total dark matter mass, $k$ is the Boltzmann constant, $\mu = 0.59$ is the mean molecular weight and $m_p$ is the proton mass. Assuming that the ratio of gas to dark matter mass in the cluster is the same as that of the universe as a whole and $f$ is the fraction of the baryonic matter in the hot gas, we get $$K_f = \frac{3\beta_{spec} M k T}{2\mu m_p}[1 + (f \beta_{spec}^{-1}-1)\frac{\Omega_b}{\Omega_m}],$$ with $ \beta_{spec} \equiv \sigma_v^2/(kT/\mu m_p)$. Hydrodynamic simulations indicate that $\beta_{spec} \simeq 1$. For simplicity we define $\tilde{\beta}_{spec}$ as $$\tilde{\beta}_{spec} = \beta_{spec} [1 + (f \beta_{spec}^{-1}-1)\frac{\Omega_b}{\Omega_m}].$$ So equation (16) reduces to: $$K_f = \frac{3\tilde{\beta}_{spec} M k T}{2\mu m_p}.$$ Assuming energy conservation (i.e., $E_i=E_f$) and combining this result with equations (13, 16) leads to the temperature as a function of the initial density distribution: $$k T = \frac{5 \mu m_p}{8 \pi \tilde{\beta}_{spec}} (\frac{1+\nu}{1-\nu}) \, H_i^2 R_i^2 B.$$ In the next subsection §2.3 we will find an expression for $H_i^2 R_i^2$ in terms of the cluster mass $M$ and the initial density fluctuation spectrum. Virialization Time ------------------ Let us define $e$ as the energy of a test particle of unit mass, initially at the boundary $R_i$ of the cluster of mass $M$: $$e = \frac{\mathbf{v}^2_i}{2}-\frac{GM}{R_i}.$$ The collapse time $t$ can be written as $$t = \frac{2\pi G M}{(-2e)^{\frac{3}{2}}}.$$ Following the top-hat model, we assume that the collapse time of the particle is approximately the same as the time necessary for the particle to be virialized. 
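The collapse-time formula of equation (22) is the full radial Kepler period of the test particle; it can be verified numerically by integrating the radial equation of motion. The sketch below (in units $G = M = 1$, with an arbitrarily chosen energy) is illustrative only:

```python
# Numerical check of equation (22): for a radial orbit of specific energy
# e < 0 around a point mass M, the time from r ~ 0 out to turnaround and
# back is t = 2*pi*G*M / (-2e)^(3/2).
import numpy as np
from scipy.integrate import solve_ivp

G = M = 1.0
e = -0.5                                   # specific energy; -2e = 1
r0 = 1e-4                                  # start just off the singularity

def rhs(t, y):
    r, v = y
    return [v, -G * M / r**2]

v0 = np.sqrt(2.0 * (e + G * M / r0))       # outward radial velocity at r0

def back_at_r0(t, y):                      # event: particle falls back to r0
    return y[0] - r0
back_at_r0.direction = -1
back_at_r0.terminal = True

sol = solve_ivp(rhs, (0.0, 20.0), [r0, v0], events=back_at_r0,
                rtol=1e-10, atol=1e-12)
t_num = sol.t_events[0][0]
t_formula = 2.0 * np.pi * G * M / (-2.0 * e)**1.5
print(t_num, t_formula)                    # both close to 2*pi here
```

The time spent below $r_0$ is of order $r_0^{3/2}$ and is negligible for the tolerance used.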
Choosing $t$ to be the time of observation, assuming the mass $M$ interior to the test particle is virialized at $t$, and combining equations (1, 6, 22), we find $e$ as a function of the initial density distribution and relate it to the collapse time: $$-2e = \frac{5}{4 \pi}H_i^2 R_i^2\int_0^1 \tilde{\delta}_i(\tilde{x}) d^3\tilde{x} = (\frac{2\pi G M}{t})^{\frac{2}{3}}.$$ Using $M = \frac{4}{3} \pi R_i^3 \rho_i$ and the Friedmann equations we obtain $$A \equiv \int_0^1 \tilde{\delta}_i(\tilde{x}) d^3\tilde{x} = \frac{2}{5}(\frac{3\pi^4}{t^2 G \rho_i})^{\frac{1}{3}}.$$ M-T Relation ------------ Combining equations (20) and (13,24), we arrive at the cluster temperature-mass relation: $$kT = (\frac{\mu m_p}{2\tilde{\beta}_{spec}})(\frac{1+\nu}{1-\nu})(\frac{2\pi G M}{t})^{\frac{2}{3}}(\frac{B}{A}).$$ Notice that although $B$ and $A$ are both functions of the initial time $t_i$, since both are proportional to the scale factor $a$, the ratio is a constant; the derived T-M relation (equation 25) does not depend on the adopted initial time, as expected. As a specific example, in the spherical top-hat model in which the density contrast is assumed to be constant, this ratio $B/A$ is $\frac{2}{5}$. Let us gather all the unknown dimensionless factors in $\tilde{Q}$: $$\tilde{Q} \equiv (\frac{\tilde{\beta}_{spec}}{0.9})^{-1}(\frac{1+\nu}{1-\nu})(\frac{B}{A}) (Ht)^{-2/3}.$$ Then, inserting the numerical values, equation (25) reduces to: $$kT = (6.62 \, \keV)\tilde{Q}(\frac{M}{10^{15} h^{-1} M_{\odot}})^{2/3},$$ or equivalently: $$M = 5.88\times 10^{13} \tilde{Q}^{-1.5}(\frac{kT}{1 \,\, \keV})^{1.5} h^{-1} M_{\odot},$$ where $H = 100 h~$km/s/Mpc is the Hubble constant. 
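The numerical normalization in equations (27)-(28) follows directly from physical constants, and the top-hat value $B/A = 2/5$ from the definitions in equations (12) and (24); a short numerical check (SI units, with $\tilde{Q} = 1$ and $h = 1$):

```python
# Reproducing the numbers in equations (27)-(28), together with the
# top-hat ratio B/A = 2/5 quoted above.
import numpy as np
from scipy.integrate import quad

# Top-hat: constant delta~ inside the unit sphere; the angular parts
# cancel in the ratio, leaving radial integrals with weight 4*pi*x^2.
B, _ = quad(lambda x: (1.0 - x**2) * 4.0 * np.pi * x**2, 0.0, 1.0)
A, _ = quad(lambda x: 4.0 * np.pi * x**2, 0.0, 1.0)
print(B / A)                               # 0.4

# kT = (mu m_p / (2 * 0.9)) * (2 pi G H M)^(2/3) * Q~  with Q~ = 1.
G   = 6.674e-11            # m^3 kg^-1 s^-2
m_p = 1.6726e-27           # kg
mu  = 0.59
H   = 3.2408e-18           # (100 km/s/Mpc) in s^-1, i.e. h = 1
M   = 1.0e15 * 1.989e30    # 10^15 h^-1 Msun in kg (h = 1)
keV = 1.602e-16            # J

kT = (mu * m_p / (2.0 * 0.9)) * (2.0 * np.pi * G * H * M)**(2.0 / 3.0) / keV
print(kT)                  # ~6.6 keV, matching equation (27)

M_norm = 1.0e15 / kT**1.5  # inverse relation, cf. equation (28)
print(M_norm)              # ~5.9e13 h^-1 Msun per keV^1.5
```

Small differences at the percent level come only from the rounding of the physical constants.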
This result can be compared with the EMN simulation results: $$M_{200} = (4.42 \pm 0.56)\times 10^{13} (\frac{kT}{1\,\, \keV})^{1.5} h^{-1} M_{\odot}.$$ To convert the $M_{500}$ masses of EMN to $M_{200}$ we have used the observed scaling of the mass with density contrast $M_{\delta} \propto \delta ^{-0.266}$ (HMS), which is consistent with the NFW profile (Navarro, Frenk, & White 1997) for simulated dark matter halos as well as observations (e.g., Tyson, Kochanski, & dell’Antonio 1998), in the relevant range of radius. Numerical Factors, Scatter and Uncertainties ============================================ In this section we try to use different analytical methods as well as the results of dark matter simulations and observed gas properties, available in the literature, to constrain the numerical factors which appear in $\tilde{Q}$ (equation 26). $\beta_{spec}$ and the gas fraction ----------------------------------- So far we have made no particular assumption about the gas dynamics or its history,[^2] and so we are going to rely on the available observational results to constrain gas properties. $\beta_{spec}$ is defined as the ratio of kinetic energy per unit mass of dark matter to the thermal energy of gas particles. This ratio is typically of the order of unity, though different observational and theoretical methods lead to different values. The hydrodynamic simulation results usually point to a larger value of $\beta_{spec}$. For example, Thomas et al. (2001) find $\beta_{spec} = 0.94 \pm 0.03$. On the other hand, observational data point to a slightly lower value of $\beta_{spec}$. Observationally, there is as yet no direct way of accurately measuring the velocity dispersion of dark matter particles in the cluster, and one is required to assume that the velocity distribution of galaxies follows that of dark matter or to adopt a velocity bias. Under the assumption of no velocity bias, Girardi (1998) find it to be $0.88 \pm 0.04$. 
Girardi (2000) study $\beta_{spec}$ for a sample of high redshift clusters and do not find any evidence for redshift dependence. From the theoretical point of view, the actual value of $\beta_{spec}$ might be substantially different from the observed number, because both the velocity and density of galaxies do not necessarily follow those of the dark matter, which could have resulted in some non-negligible selection effects. Unknown sources of heating, such as gravitational energy on small scales (often substantially underestimated in simulations due to limited resolution) or baryonic processes like supernova feedback, may affect the value of $\beta_{spec}$ as well. Hydrodynamic simulations show that only a small fraction of the baryons contribute to galaxy formation in large clusters (e.g., Blanton 2000) and so $f$ is close to one. Observationally, we quote Bryan (2000), who compiled different observations of the cluster mass fraction in the gas and galactic components: $$f = 1- 0.26(T/10\keV)^{-0.35},$$ albeit with a large scatter in the relation. Inserting equation (30) into equation (18), we see that the correction to $\beta_{spec}\sim 0.9$ is less than $5\%$ for all feasible cosmological models which are dominated by non-baryonic dark matter. In what follows, unless mentioned otherwise, we adopt the value $\tilde{\beta}_{spec} = 0.9$ and absorb any correction into the overall normalization of the T-M relation. $B/A$: Single Central Peak Approximation and Freeze-Out time ------------------------------------------------------------ In the original top-hat approximation (Gunn & Gott 1972), which has been extensively used in the literature, the initial density distribution is assumed to be constant inside $R_i$, which leads to $B/A=2/5$. 
Shapiro, Iliev, & Raga (1999) and Iliev & Shapiro (2001) have extended the original treatment of the top-hat spherical perturbation to a more self-consistent case with a truncated isothermal sphere final density distribution including the surface pressure term (see §3.4). Here we consider the general case with an arbitrary density profile of a single density peak. But before going further, let us separate out the term due to space curvature in the definition of $B/A$. Using the definitions of $B$ and $A$ in equations (12) and (24), and the Friedmann equations to insert for $\rho_i$ in terms of the present day cosmological parameters, we get: $$\frac{B}{A} = \frac{b}{a}+(\frac{b}{a}-\frac{2}{5})(1-\Omega_m-\Omega_{\Lambda})(\frac{Ht}{\Omega_m \pi})^{\frac{2}{3}},$$ where $\Omega_m$ and $\Omega_\Lambda$ are the density parameters due to non-relativistic matter and the cosmological constant, respectively, at the time of observation, $a$ and $b$ are[^3] the same as $A$ and $B$ with $\tilde{\delta_i}$ replaced by $\delta_i$ in their definitions, equations (24) and (12). Assuming that the initial density profile has a single, spherically symmetric peak and assuming a power law for the initial linear correlation function at the cluster scale, we can replace $\delta(x)$ by $\frac{\xi(x)}{\delta(0)}$, $$\xi(x)= (\frac{r_{0i}}{x})^{3+n},$$ where $n$ is the index of the density power spectrum (Peebles 1981) and $r_{0i}$ is the correlation length. This gives $$\frac{B}{A} = (\frac{1}{1-\frac{n}{2}})[1+ \frac{ (3+n)(1-\Omega_m-\Omega_{\Lambda})}{5}(\frac{Ht}{\pi \Omega_{m}})^{\frac{2}{3}}],$$ where all the quantities are evaluated at the age of the observed cluster. Note that for models of interest the physically plausible range for $n$ is $(-3,0)$. One can see that the second term in equation (33) is indeed proportional to $t^{\frac{2}{3}}$ and so for an open universe it dominates at late times (after curvature domination). 
Note that in equation (25) the temperature is proportional to $t^{-\frac{2}{3}}\frac{B}{A}$; thus, when the second term dominates, the T-M relation no longer evolves with time. This indicates that in an open universe cluster formation freezes out after a certain time. The presence of a freeze-out time is independent of the central peak approximation since the ratio $b/a$ only depends on the statistics of the initial fluctuations at high redshifts where there is very little dependence on cosmology. It is interesting to note that in the case $n=-3$, the ratio has no dependence on cosmology and there is no freeze-out even in low density universes. This is an interesting case where linear theory does not apply, because all scales become nonlinear at the same time and [*the universe is inhomogeneous on all scales*]{}. Voit (2000) uses a different method to obtain exactly the same result. As we argue next, both treatments ignore cluster mergers. $B/A$: Multiple Peaks and Scatter --------------------------------- The single peak approximation discussed in §3.2 ignores the presence of other peaks in the initial density distribution. In hierarchical structure formation models, the mass of a cluster grows with time through mergers as well as accretion. This means that multiple peaks may be present within $R_i$ and suppress the effect of the central peak. Assuming Gaussian statistics for initial density fluctuations, we can find the statistics of $b/a$. Note that using equation (24), we can fix the value of $A$ (and hence $a$) for a given mass and virialization time. So the problem reduces to finding the statistics of $b$ (or $B$) for fixed $a$. 
Under the assumption of a power law spectrum (see Appendix A) calculations give $$\frac{<b>}{a} = \frac{4(1-n)}{(n-5)(n-2)},$$ with $$\Delta{b} = \frac{16 \pi 2^{-n/2}}{(5-n)(2-n)}[\frac{n+3}{n(7-n)(n-3)}]^{\frac{1}{2}}(\frac{r_{0i}}{R_i})^{\frac{n+3}{2}},$$ which, inserting into equation (31), yields $$\begin{aligned} <\frac{B}{A}> &=& \frac{4(1-n)}{(n-5)(n-2)}[1-\frac{n(n+3)}{10(1-n)}(1-\Omega_m-\Omega_{\Lambda}) (\frac{Ht}{\pi\Omega_m})^{\frac{2}{3}}], \\ \frac{\Delta B}{A} &=& \tau(n)(\frac{M}{M_{0L}})^{-\frac{n+3}{6}} D^{-1}(t) (\frac{\sqrt{\Omega_m} Ht}{\pi})^{\frac{2}{3}}, \end{aligned}$$ where $D(t)$ is the growth factor of linear perturbations, normalized to $(1+z)^{-1}$ for large redshift, and $$\begin{aligned} M_{0L} = \frac{4}{3} \pi \rho_0 r^3_{0L}, \\ \xi_{L}(r) = (\frac{r_{0L}}{r})^{n+3}, \nonumber \end{aligned}$$ where $\xi_L(r)$ is the linearly evolved correlation function at the present time with $r_{0L}$ being the correlation length, and $$\tau(n) \equiv \frac{20 \times 2^{-n/2}}{(5-n)(2-n)}[\frac{n+3}{n(7-n)(n-3)}]^{\frac{1}{2}}.$$ Fig 2 shows the dependence of $B/A$ on $n$. The upper dotted curve shows the result of the single peak approximation (equation 33), while the three lower curves show the multiple peak calculation described above (equation 36) and its $\pm 1\sigma$ dispersion (equation 37). All the curves are for an Einstein-de Sitter universe and the dispersion is calculated for mass $10 M_{0L}$. Numerically, for $r_{0L}=5h^{-1}$Mpc we have $M_{0L}=1.4\times 10^{14}\Omega_mh^{-1}\msun$, resulting in $10M_{0L}=4.3\times 10^{14}h^{-1}\msun$ for $\Omega_m=0.3$. We note that for $n \sim -3$ the density distribution is dominated by the central peak and corresponds to the top-hat case, and so the two methods give similar results. Interestingly, as $n$ approaches zero, small peaks dominate and the distribution becomes close to homogeneous (top hat) on large scales. This implies that clusters undergo a large number of mergers for large values of $n$. 
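The limiting behaviour of the multiple-peak mean (equation 34) can be checked directly against the single-peak expression (the flat-universe term of equation 33):

```python
# Limits of the multiple-peak mean <b>/a of equation (34), compared with
# the single-peak value 1/(1 - n/2) of equation (33).
def multi_peak(n):
    return 4.0 * (1.0 - n) / ((n - 5.0) * (n - 2.0))

def single_peak(n):
    return 1.0 / (1.0 - n / 2.0)

for n in (-3.0, -1.5, -1e-6):
    print(n, multi_peak(n), single_peak(n))
```

At $n = -3$ both expressions give the top-hat value $2/5$, while as $n \to 0$ the multiple-peak mean returns to $2/5$ but the single-peak value goes to $1$, illustrating the role of mergers in this regime.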
Interestingly, in this case the ratio $B/A$ again approaches $2/5$, the value for the top-hat case. We will use the multiple-peak approximation in our subsequent calculations. It is worth mentioning that the ratio of the cosmology dependent term in the average value of $<B/A>$ to the constant term, in the multiple-peak calculation, is small. For example, for $\Omega_m = 0.3$ and $n = -1.5$, this ratio is about $0.07$. This implies that the freeze-out time is large compared with the current age of the universe for feasible open cosmological models, and consequently $<B/A>$ is largely determined by the spectral index of the underlying linear power spectrum $n$. $\nu$: Surface Term and its Dependence on the Final Equilibrium Density Profile ------------------------------------------------------------------------------- As discussed in §2, corrections to the virial relation due to finite surface pressure change the M-T relation (equation 25). Shapiro, Iliev, & Raga (1999) and Iliev & Shapiro (2001) have previously also taken into account the surface pressure term in their treatment of the truncated isothermal sphere equilibrium structure with a top-hat initial density perturbation and have found results in good agreement with simulations. Voit (2000) uses the NFW profile for the final density distribution to constrain the extra factor and finds that for typical concentration parameters $c \equiv r_{200}/r_{s}\sim 5$, $(1+\nu)/(1-\nu)$ is $\sim 2$. We will investigate this correction, $\nu$, for a given concentration parameter $c$. Let us assume that the density profile is given by: $$\rho(r) = \rho_s f(r/r_s),$$ where $\rho_s$ is a characteristic density, $r_s$ the scale radius, and $f$ is the density profile. For the NFW profile $$f_{NFW}(x) = \frac{1}{x(1+x)^2},$$ Moore et al. 
(2000) have used simulations with higher resolutions to show that the central density profile is steeper than the one already probed by low-resolution simulations such as those used by NFW, yielding the Moore profile $$f_{M}(x) = \frac{1}{x^{1.5}(1+x^{1.5})}.$$ For a given $f(x)$, the mass of the cluster is: $$M = 4\pi \rho_s r^3_s g(c),\qquad g(x) = \int_0^x f(t) t^2 dt.$$ The gravitational energy of the cluster is given by: $$U = -\int \frac{G M dM}{r} = -16 \pi^2 G \rho^2_s r^5_s \int_0^c f(x) g(x) x dx.$$ To find the surface pressure, we integrate the equation of hydrostatic equilibrium[^4] $${\mathbf \nabla}P = \rho {\mathbf g} = -\frac{GM\rho {\mathbf r}}{r^3}.$$ This leads to $$P_{ext} = 4 \pi G \rho^2_s r^2_s \int_c^{\infty} f(x)g(x)x^{-2}dx,$$ which, by the definition of $\nu$ (equation 15), gives: $$\nu(c,f) \equiv -\frac{3P_{ext}V}{U}= \frac{c^3\int_c^{\infty} f(x)g(x)x^{-2}dx}{\int_0^c f(x) g(x) x dx}.$$ Note that $\nu(c,f)$ is a function of both $c$ and the density profile $f$. If we define $Q(c,f)$ as $$Q(c,f) \equiv (\frac{1+\nu}{1-\nu}) y = (\frac{1+\nu}{1-\nu}) \frac{B}{A(Ht)^{\frac{2}{3}}},$$ equation (25) can be written as: $$kT = (\frac{\mu m_p}{2\tilde{\beta}_{spec}})(2\pi G H M)^{\frac{2}{3}}Q(c,f),$$ or in numerical terms: $$kT = (6.62\,\keV) Q(c,f) (\frac{M}{10^{15} h^{-1} M_{\odot}})^{2/3}$$ for $\tilde{\beta}_{spec} = 0.9$. Note that with this choice of $\tilde{\beta}_{spec}$, the definition of $Q$ is equivalent to that of $\tilde{Q}$ in equation (26). Fig (3) shows the average value of $y$ for different cosmologies, using equations 36 and 51, for $n =-1.5$. Since all three parameters, $\nu(c,f)$, $y(c,f)$ and $Q(c,f)$, are functions of both $c$ and $f$, one can express any one of them as a function of another, for fixed $f$. Fig (4) shows the normalization factor $Q$ as a function of $y$. The dashed curves are for the NFW profile and the solid curves for the Moore et al. (2000) profile. 
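The surface-pressure factor $\nu(c,f)$ of equation (47) is straightforward to evaluate numerically; a sketch for the NFW profile, using its closed-form $g(x)$ and the representative value $c = 5$ quoted above:

```python
# Evaluating nu(c, f) of equation (47) for the NFW profile,
# f(x) = 1 / (x (1+x)^2), for which g(x) = ln(1+x) - x/(1+x).
import numpy as np
from scipy.integrate import quad

def g(x):
    return np.log(1.0 + x) - x / (1.0 + x)

def nu(c):
    # numerator integrand: f(x) g(x) x^-2 = g(x) / (x^3 (1+x)^2)
    num, _ = quad(lambda x: g(x) / (x**3 * (1.0 + x)**2), c, np.inf)
    # denominator integrand: f(x) g(x) x = g(x) / (1+x)^2
    den, _ = quad(lambda x: g(x) / (1.0 + x)**2, 0.0, c)
    return c**3 * num / den

c = 5.0
n5 = nu(c)
print(n5, (1.0 + n5) / (1.0 - n5))
```

For $c = 5$ this gives $\nu \approx 0.2$, i.e. an enhancement factor $(1+\nu)/(1-\nu)$ of order $1.5$-$2$, comparable to the magnitude quoted above.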
The x’s with error bars show various simulation and observational results (see Table 1). Note that even if we relax the conservation of energy, one can still use $Q(y)$ to find the T-M relation, using the value $y$ obtained from its definition, equation (49), with the corrected energy. An important feature of the behavior of $Q(y)$ is the presence of a minimum in or close to the region of physical interest. As a result, $Q(y)$ has a very weak dependence on the history of the cluster; for example, the largest variation in $Q$ is about $3\%$. This is probably why simulations do not show significant cosmology dependence ($\lesssim 5 \%$; e.g. EMN, Mathiesen 2000). Another way of stating this property is that the heat capacity of the cluster is very small. It is well known that the heat capacity of gravitationally bound systems like stars is negative. Yet we know that if non-gravitating gas is bound by an external pressure, its heat capacity is positive. In the case of clusters, the interplay of external accretion pressure and gravitational binding energy causes it to vanish. It is only after the freeze-out time in an extreme open universe (low $\Omega_m$, no cosmological constant) that the heat capacity becomes negative, similar to an ordinary gravitationally bound system. Concentration Parameter $c$ --------------------------- We point out that, by using the conservation of energy, one can constrain the concentration parameter and subsequently the surface correction for a given density profile. A typical density profile is specified by two parameters: a characteristic density $\rho_s$ and a scale radius $r_s$. If we know the mass (e.g. $M_{200}$) and the total energy of the cluster, we can fix these two parameters. The concentration parameter is then fixed by $\rho_s$ and the critical density of the universe. To show the procedure, let us re-derive the T-M relation for a known density profile. 
Combining equations (16) and (19) gives: $$kT = -\frac{2\mu m_p}{3\tilde{\beta}_{spec}M}(\frac{1+\nu}{1-\nu})E_f.$$ Note that equation (51) only depends on the properties of the virialized cluster and is independent of its history. Defining $y$ as $$y \equiv \frac{4E}{3M}(2\pi G M H_0)^{-2/3},$$ equation (51) reduces to: $$kT = \frac{\mu m_p}{2\tilde{\beta}_{spec}}(\frac{1+\nu}{1-\nu})(\frac{2\pi G M}{t_0})^{2/3}[(H_0t_0)^{2/3}y].$$ Comparing this result with equation (25), we see that $$y = \frac{B}{A(Ht)^{2/3}},$$ [*only if the energy is conserved*]{} \[i.e., assuming $E_f$ in equation (49) is equal to $E_i$\]. On the other hand, by combining equation (52) with equations (43), (44) and (47) and the virial theorem (14), $y$ can be written as a function of $c$, for a fixed density profile $f$: $$y(c,f) = \frac{\Delta_c^{1/3}(1-\nu)c \int_0^c f(x) g(x) x dx}{3\pi^{2/3} g^2(c)},$$ where we have assumed the boundary of the virialized region to be the radius at which the average density is $\Delta_c$ times the critical density of the universe (which is usually chosen to be 200), and $\nu$ is a function of $c$ and $f$ in equation (47). Equation (55) fixes the concentration parameter $c$ as a function of $y$ for a fixed density profile $f$, which in turn is determined by equation (54). The concentration parameter is fixed by the cosmology ($y$ parameter) as shown in Fig.(5). This relation can be well fit by: $$\log_{10}c = -0.17+ 1.2 \, y,$$ for the NFW profile, accurate to $5\%$ in the range $0 < y < 1$. Let us now consider the evolution of $c$. We know that in an expanding universe $\Omega_m$ decreases with time. Comparing with Fig.(4) we see that in a flat $\Lambda$CDM universe $y$ is decreasing with time, while in an open/Einstein-de Sitter universe, it is almost constant. 
Then equation (56) implies that $c$ is a decreasing function of time (increasing function of redshift) in a $\Lambda$CDM universe, while it does not significantly evolve in an OCDM universe. As an example, the concentration parameter in an Einstein-de Sitter universe is about $40\%$ larger than that of a flat $\Lambda$CDM universe with $\Omega_m = 0.25$. This is consistent with the NFW results, which find an increase of about $35\%$ for $c$ as a function of mass in units of the non-linear mass scale. NFW simulations show a weak dependence of the concentration parameter on mass, $c \propto M^{-0.1}$. We see that our concentration parameter does not depend on mass. However, its scatter is larger for small masses and so is marginally consistent with the simulation results (see Fig 6). Assuming that this discrepancy is only a consequence of the non-spherical shape of the original proto-cluster, in the next section we attempt to modify the value of $y$ to match the simulation results. Corrections for Initial Non-Sphericity -------------------------------------- In this section we try to incorporate the effects due to the non-spherical shape of the initial proto-cluster into our formalism. Unlike previous sections, the calculations of this section are not very rigorous and should be considered as an estimate of the actual corrections. In particular, these approximations lose accuracy if there are large deviations from sphericity which, as we will see, is the case at the low-mass end of the M-T diagram. We are going to assume that non-sphericity comes in through a modifying factor $1+\cal{N}$ that only depends on the initial geometry of the collapsing domain, $$y_{\cal{N}} = y (1+\cal{N}),$$ where $y_{\cal{N}}$ is the modified value of $y$. Next, let us assume that $R_i(\theta, \varphi)$ is the distance of the surface of our collapsing domain from its center. 
We can expand its deviation from the average in terms of spherical harmonics $Y_{lm}(\theta,\varphi)$, $$\delta R_i(\theta, \varphi) = \sum_{l,m} a_{lm} Y_{lm}(\theta,\varphi).$$ If we try to write down a perturbative expansion for $\cal{N}$, the lowest order terms will be quadratic, since there is no rotationally invariant first order term. Moreover, having in mind that the gravitational dynamics is dominated by the large scale structure of the object, as an approximation we are going to keep only the lowest $l$ value. Since $l=1$ is only a translation of the sphere and does not change its geometry, the lowest non-vanishing multipoles are those with $l=2$, and the only rotationally invariant expression is $${\cal N} \approx \sum^{2}_{m=-2} |a_{2m}|^2,$$ where we have absorbed any constant factors into the definition of the $a_{2m}$’s. The next simplifying assumption is that the $a_{2m}$’s are Gaussian variables, with amplitudes proportional to the amplitude of the density fluctuations at the cluster mass scale. Mathematically, this is motivated by the fact that the concentration parameter predicted by simulations is closer to our prediction for spherical proto-clusters at the large-mass end, where the amplitude is smaller. The physical motivation is that, since the density fluctuations decrease with scale, more massive clusters tend to deviate less from sphericity. Choosing Gaussian statistics for the $a_{2m}$’s is only a simplifying assumption to carry out the calculations. Then it is easy to see that $$\Delta {\cal N}^2 = \frac{6}{25}<{\cal N}>^2,$$ and then, using the definition of $\cal{N}$ and assuming that it is a small correction, we get $$(\frac{\Delta y_{\cal N}}{y_{\cal N}})^2 \approx \frac{6}{25}(1-\frac{y}{y_{\cal N}})^2+(\frac{\Delta y}{y})^2.$$ Note that, from equation (54), $\Delta y = (Ht)^{-2/3}(\Delta B/A)$, where $\Delta B/A$ is given in equation (37). We have also assumed that $\cal{N}$ and $y$ are statistically independent variables.
In the next step, we define the amplitude of $\cal{N}$: $$<{\cal N}> \approx \omega (\frac{\Delta B}{A})^2, \quad \omega \approx 64.$$ The numerical value of $\omega$ is fixed by plugging $y_{\cal{N}}$ into equation (56) to get the modified concentration parameter and comparing this result with the simulations of Thomas et al. (Fig 6). Fig 7 shows the modified concentration parameter as a function of mass in an Einstein-de Sitter universe. We see that the introduction of non-sphericity results in the cluster concentration parameter being a decreasing function of cluster mass, with a scatter that also decreases with mass, in accord with the simulations. Fig 8 shows the same comparison for a $\Lambda$CDM cosmology with the Eke et al. (2001) fitting formula. We see that, although $\omega$ was obtained by fitting the Einstein-de Sitter simulations, our prediction is marginally consistent with the $\Lambda$CDM simulations as well.[^5]

Scatter in M-T relation
-----------------------

Since $y$ is linear in $B/A$, it also has a Gaussian probability distribution function (PDF): $$P(y)dy = \frac{dy}{\sqrt{2\pi \Delta y^2}} \exp[-\frac{(y-\bar{y})^2}{2\Delta y^2}],$$ where $\bar{y}$ and $\Delta y$ are related to equations (36) and (37) through equation (54). As we mentioned above, the variation of $Q$ for the average values of $y$ is negligible. However, the scatter in the value of $y$ can be large, especially at the low-mass end (see equation 37), and so the scatter in $Q$ might become significant. In what follows we only consider the NFW profile, since it is the one most extensively considered in the literature. We find that the behavior of $Q$ can be fitted by: $$Q(y)^2 = Q^2_0+N(y-y_0)^2,$$ where $$Q_0 = 1.11,\, N = 1.8,\, y_0 = 0.538.$$ The error in this fitting formula is less than $3\%$ in the range $-1<y<2$.
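As a quick numerical sketch (not part of the original analysis), the fit above can be evaluated directly; moreover, since $y$ is Gaussian, the closed-form average $<Q^2> = Q^2_0 + N[(\bar{y}-y_0)^2+\Delta y^2]$ quoted later in this subsection follows from plain Gaussian moments and can be checked by Monte Carlo. The values of $\bar{y}$ and $\Delta y$ below are purely illustrative.

```python
import math, random

# Fit parameters quoted in the text: Q(y)^2 = Q0^2 + NFIT*(y - y0)^2
Q0, NFIT, Y0 = 1.11, 1.8, 0.538

def Q_fit(y):
    return math.sqrt(Q0**2 + NFIT * (y - Y0)**2)

# The fit attains its minimum Q = Q0 at y = y0
print(round(Q_fit(Y0), 6))   # 1.11

# Monte Carlo check of <Q^2> = Q0^2 + NFIT*((ybar - Y0)^2 + dy^2)
# for Gaussian y; ybar and dy here are illustrative, not from data.
def mean_Q2_mc(ybar, dy, n=200_000, seed=1):
    rng = random.Random(seed)
    return sum(Q_fit(rng.gauss(ybar, dy))**2 for _ in range(n)) / n

ybar, dy = 0.6, 0.5
analytic = Q0**2 + NFIT * ((ybar - Y0)**2 + dy**2)
print(abs(mean_Q2_mc(ybar, dy) - analytic) < 0.02)   # True
```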
Inserting this fit into equation (63) leads to the PDF of $Q$: $$\begin{aligned} P(Q)dQ &= \frac{2 Q dQ}{\sqrt{2\pi \Delta y^2 N (Q^2 -Q^2_0)}} \nonumber \\ &\times\exp[-\frac{(y_0-\bar{y})^2 +(Q^2-Q^2_0)/N}{2\Delta y^2}]\cosh[\frac{y_0-\bar{y}}{\Delta y^2} \sqrt{\frac{Q^2-Q^2_0}{N}}].\end{aligned}$$ Fig 9 shows three different examples of the PDF obtained here. It is clear that the scatter in $Q$ is asymmetric. In fact, since the average value of $y$ is close to the minimum of equation (57), the scatter in $y$ shifts the average value of $Q$ upwards systematically. In the limit of large $\Delta y$, where this shift is significant, $P(Q)$ is approximately: $$P(Q)dQ = \frac{2 Q dQ}{\sqrt{2\pi \Delta y^2 N (Q^2 -Q^2_0)}}\exp[-\frac{(Q^2-Q^2_0)}{2N\Delta y^2}]\, \Theta(Q-Q_0),$$ where $\Theta$ is the Heaviside step function. Although the assumption of Gaussianity is not strictly valid for $y_{\cal{N}}$, which includes non-spherical corrections, we can still, as an approximation, use the above expressions for the PDF by replacing $\bar{y}$ and $\Delta{y}$ by $\bar{y}_{\cal{N}}$ and $\Delta y_{\cal{N}}$. It is also easy to find the average of $Q^2$: $$<Q^2> = Q^2_0 + N[(\bar{y}-y_0)^2+\Delta y^2].$$ As an example, for $\bar{y} \sim y_0$, while $\Delta y = 0.5$ gives only about a $16 \%$ systematic increase in $Q$, $\Delta y = 1.0$ leads to a $\sim 60 \%$ increase.

Observed and Average Temperatures
---------------------------------

The temperature found in §2.4 is the density-weighted temperature of the cluster, averaged over the entire cluster. However, the observed temperature, $T_f$, can be considered as a flux-weighted spectral temperature averaged over a smaller, central region of the cluster. The two temperatures may be different, due to the presence of temperature inhomogeneities.
We use the simulation results of Mathiesen & Evrard (2001) to relate these two temperatures, and refer the reader to their paper for the exact definitions and a discussion of the effects that lead to this difference: $$T = T_f [ 1+ (0.22\pm 0.05)\log_{10} T_f(\keV) - (0.11\pm 0.03)].$$ This correction changes the mass-temperature relation from $M \propto T^{1.5}$ to $M \propto T^{1.64}$ for the observed X-ray temperatures. We use this correction in converting the observed X-ray temperature to virial temperature in Figures 1 and 10-12.

Predictions vs. Observations
============================

Power Index
-----------

It is clear from equation (49) that we arrive at the usual $M \propto T^{1.5}$ relation, which is expected from simple scaling arguments and is consistent with the numerical simulations (e.g. EMN or Bryan & Norman 1998). On the other hand, the observational $\beta$-model mass estimates lead to a steeper power index, in the range $1.7-1.8$. Although originally interpreted as an artifact of the $\beta$-model (HMS), the same behavior was seen for masses estimated from resolved temperature profiles (FRB). FRB carefully analyzed the data and interpreted this behavior as a bend in the M-T relation at low temperatures. This was confirmed by Xu, Jin & Wu (2001), who found the break at $T_X = 3-4\,\keV$. As discussed in §3.7, the asymmetric scatter in $Q$ introduces a systematic shift in the M-T relation. For large values of $\Delta y$, all of the temperatures are larger than the value given by the scaling relation (54) for the average value of $y$. This scatter increases for smaller masses (see equation 37), hence smaller clusters are hotter than the scaling prediction. As a result, the M-T relation becomes steeper in the low mass range as $Q$ increases, while the intrinsic scatter of the data also grows.
Indeed, increased scatter is also observed in the FRB data (see Fig 1 and Figs 10-12 for a comparison with our prediction), but they interpret it as the effect of different formation redshifts. We will address this interpretation in §5.

Normalization
-------------

As we discussed in §3.4, our normalization (i.e., $Q$ in equation 50) is rather stable with respect to variations of cosmology and the equilibrium density profile. Table 1 compares this value with various observational and simulation results.

  $Q$               Method                         Reference
  ----------------- ------------------------------ ------------------------------
  $1.12 \pm 0.02$   Analytic                       This paper
  $1.21 \pm 0.06$   Hydro-simulation               Thomas et al. 2001
  $1.05 \pm 0.13$   Hydro-simulation               Bryan & Norman 1998 (BN)
  $1.22 \pm 0.11$   Hydro-simulation               Evrard et al. 1996 (EMN)
  $1.32 \pm 0.17$   Optical mass estimate          Horner et al. 1999 (HMS1)
  $1.70 \pm 0.20$   Resolved temperature profile   Horner et al. 1999 (HMS2)
  $1.70 \pm 0.18$   Resolved temperature profile   Finoguenov et al. 2001 (FRB)

Table 1. Comparison of normalizations from different methods. The last two rows only include clusters hotter than 3 keV.

We see that our analytical method is consistent with the hydro-simulation results, indicating the validity of our method, since both have a similar physics input and the value of $\beta_{spec}$ used here was obtained from simulations. On the other hand, X-ray mass estimates lead to normalizations about $50\%$ higher than our result and the simulations. The quoted result of the optical mass estimates is marginally consistent with our result. Taking it at face value would imply that optical masses are systematically higher than X-ray masses by $\sim 80\%$. Aaron (1999) has compared optical and X-ray masses for a sample of 14 clusters and found, on the contrary, a systematic difference of less than $10\%$. The optical masses used in the work of Aaron (1999) were all derived by the CNOC group (Carlberg et al. 1996), while the Girardi et al.
1998 data, used in the HMS analysis quoted above, were compiled from different sources and have larger scatter and unknown systematic errors. In fact, HMS excluded a number of outliers to get the correct slope, and the original data had even larger scatter (see their Fig 1). Therefore the systematic error in the optical result of Table 1 may be much larger, bringing it into agreement with the other observations. As discussed in §3.1, one possible source for the difference between the theoretical and observational normalizations is that the values of $\beta_{spec}$ are different in the two cases due to systematic selection effects. Also, intriguingly, Bryan & Norman (1998) show that there is a systematic increase in the obtained value of $\beta_{spec}$ with increasing resolution of the simulations. Whether this is a significant and/or real effect at even higher resolutions is not clear to us. However, the fact that the slope is unchanged indicates that the missing process probably happens at small scales, and so relates the intermediate-scale temperature to the small-scale flux-weighted spectral temperature by a constant factor, independent of the large-scale structure of the cluster. In this case the actual value of $\beta_{spec}$ must be $\sim 0.6$. Figures 10-12 show the prediction of our model, shifted downwards to fit the observational data at the massive end, versus the observational data of FRB, using resolved temperature profiles and corrected as discussed in §3.8. The correction due to initial non-sphericity (§3.6) is included in the theoretical plot. The value of $\sigma_8$, which enters $\Delta B/A$ (equation 37) through $M_{0L}$, is fixed by cluster abundance observations (e.g. Bahcall & Fan 1998). We see that while an Einstein-de Sitter cosmology underestimates the scatter at the low-mass end, a typical low density OCDM cosmology overestimates it. On the other hand, a typical $\Lambda$CDM cosmology is consistent with the observed scatter.
Interestingly, this is consistent with various other methods, in particular the CMB+SNe Ia results, which point to a low-density flat universe (de Bernardis 2000; Balbi 2000; Riess 1998).

Evolution of M-T Relation
-------------------------

As we discussed in §3.4, the value of our normalization has a weak dependence on cosmology. Going back to equation (49), we see that the time dependence of the M-T relation is simply $M \propto H^{-1} T^{1.5}$. Assuming a constant value of $\beta_{spec}$, this formula can potentially be used to measure the value of $H$ at high redshifts and so constrain the cosmology. Schindler (1999) has compiled a sample of 11 high redshift clusters ($0.3 < z < 1.1 $) from the literature with measured isothermal $\beta$-model masses. In these estimates, the gas is assumed to be isothermal and to have the density profile: $$\rho_g(r)= \rho_g(0) (1+(\frac{r}{r_c})^2)^{-3\beta_{fit}/2}.$$ Then, the mass within overdensity $\Delta_c$ is given by: $$M \simeq (\frac{3 H^2 \Delta_c}{2 G})^{-1/2}(\frac{3 \beta_{fit} k T}{G \mu m_p})^{3/2}.$$ Comparing this with equation (49), and neglecting the difference between virial and X-ray temperatures, we get: $$Q \simeq (3 \pi)^{-2/3}(\tilde{\beta}_{spec}/\beta_{fit}) \Delta_c^{1/3}.$$ In the last two equations, we have ignored $r_c$ with respect to the radius of the virialized region, which introduces less than $3 \%$ error. This allows us to find the normalization $Q$ from the value of $\beta_{fit}$ (independently of cosmology in this case). Assuming $\tilde{\beta}_{spec} = 0.9$, equation (71) gives the value of $Q$ for a given $\beta_{fit}$. Fig 13 shows the value of $Q$ versus redshift for the Schindler (1999) sample and also for the FRB resolved-temperature method for low redshift clusters.
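For orientation, the isothermal $\beta$-model mass, equation (70), can be evaluated numerically. The sketch below uses illustrative values ($T = 6$ keV, $\beta_{fit} = 0.6$, $\Delta_c = 200$, $H = 70$ km/s/Mpc, $\mu = 0.59$) rather than entries from the Schindler sample.

```python
# Physical constants (SI)
G    = 6.674e-11        # m^3 kg^-1 s^-2
m_p  = 1.6726e-27       # kg
keV  = 1.602e-16        # J
Mpc  = 3.086e22         # m
Msun = 1.989e30         # kg

def beta_model_mass(T_keV, beta_fit=0.6, Delta_c=200.0,
                    H=70e3 / Mpc, mu=0.59):
    """Isothermal beta-model mass, equation (70):
    M = (3 H^2 Delta_c / 2G)^(-1/2) * (3 beta_fit kT / (G mu m_p))^(3/2)."""
    pref = (3.0 * H**2 * Delta_c / (2.0 * G)) ** -0.5
    core = (3.0 * beta_fit * T_keV * keV / (G * mu * m_p)) ** 1.5
    return pref * core / Msun      # in solar masses

# A 6 keV cluster comes out at a few 10^14 solar masses
print(f"{beta_model_mass(6.0):.2e}")
```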
The result shown in Fig 13 is consistent with no redshift dependence, and the best fit is: $$\log Q = 0.23 \pm 0.01\,(\mathrm{systematic}) \pm 0.04\,(\mathrm{random}) + (0.09 \pm 0.04)z.$$ The combination of this result and equation (50) gives $$kT = (11.2 \pm 1.1 \keV)\, e^{(0.21\pm 0.09)z}(\frac{M}{10^{15} h^{-1} M_{\odot}})^{2/3},$$ which relates the X-ray temperature of galaxy clusters to their masses and does not depend on the theoretical uncertainties in the normalization coefficient of the M-T relation, in this range of redshifts. The systematic error in this result is less than $5\%$, while the random scatter can be as large as $20\%$. This result is valid for $ T_X > 4 \keV$, since below this temperature the systematic shift due to random scatter becomes significant (§3.7). It is easy to see that this threshold moves to lower temperatures at high redshifts in our formalism. Note that the more realistic interpretation of a possible evolution in the observed $Q$, obtained above, is that in equation (71) $Q$ remains constant (Fig 5) and, instead, $\tilde{\beta}_{spec}$ varies with time. However, there would be virtually no difference with respect to the M-T relation and, moreover, the weak redshift dependence in equation (73) shows that $\tilde{\beta}_{spec}$ is indeed almost constant.

Discussion
==========

In this section we discuss the validity of the different approximations adopted throughout this paper. In the calculation of the initial and final energy of the cluster, we ignored any contribution from the vacuum energy. In fact, we know that in the Newtonian limit the cosmological constant can be considered as ordinary matter with a constant density. If the cosmological constant does not change with time, then its effect can be considered as a conservative force, and so energy is conserved. However, in both the initial state and the final equilibrium state of the cluster, the density of the cosmological constant is much smaller than the density of matter, and hence its contribution is negligible.
This does not hold for quintessential models of the vacuum energy, since $\Lambda$ changes with time and energy is not conserved. This may be used as a potential method to distinguish these models from a simple cosmological constant. Let us make a simple estimate of the importance of this effect. We expect the relative contribution of a varying cosmological constant to be maximal when the density of the proto-cluster is minimal. This happens at the turn-over radius, which is almost twice $r_v$, the virial radius, in the top-hat approximation. Then the contribution to the energy due to the vacuum energy would be: $$\frac{\delta E}{E} \sim (\frac{4 \pi}{3} \Lambda (t_0/2) (2 r_v)^3)/M = (\frac{8}{200})(\frac{\Lambda (t_0/2)}{\rho_c(t_0)}) = (\frac{8}{200})(\frac{H^2(t_0/2)}{H^2(t_0)}) \Omega_{\Lambda}(t_0/2) \sim 0.1.$$ This gives about a $10 \%$ correction to $y$, or about a $15 \%$ correction to $c$. We see that the effect is still small but, in principle, observable if we have a large sample of clusters with measured concentration parameters. A systematic error in our results might have been introduced by replacing the density distribution by an averaged radial profile, which underestimates the magnitude of the gravitational energy and so the temperature. Also, as shown by Thomas et al. (2000), most of the clusters in simulations are either steeper or shallower than the NFW profile at large radii. Since our normalization is consistent with the simulation results, we think that these effects do not significantly alter our predictions. Let us now compare our results with those of Voit (2000), who made the first such analytic calculation (of which we were not aware until the current work was near completion); it is not based on a top-hat initial density perturbation and uses the same ingredients to obtain the M-T relation. First of all, as noted in §3.3, his result, which is equivalent to the central peak approximation for finding $B/A$, ignores the possibility of mergers.
As we see in Fig (2), the value of $B/A$, and so $y$, is about $50\%$ larger in the single-peak case than in the multiple-peak case. Although this does not change the normalization very much, it overestimates the concentration parameter (Fig (5)) in our formalism. However, as pointed out by the referee, our formalism for finding the value of $c$ is based on the assumption of an isotropic velocity dispersion (equation (45)) and hence may not be directly compared to Voit (2000), who does not make such an assumption. Also, Voit (2000) has neglected the cosmology dependence of the surface correction, $\frac{1+\nu}{1-\nu}$, which gives a false cosmology dependence to the normalization. This is inconsistent with hydro-simulations (e.g. EMN, Mathiesen 2000) and does not give the systematic shift at the lower mass end, if we assume it does not have a non-gravitational origin. Finally, we comment on the interpretation of the scatter/bend at the lower mass end as being merely due to different formation redshifts, as suggested by FRB. Although different formation redshifts can certainly produce this scatter/bend, it is not possible, in our formalism, to distinguish it from scatter in the initial energy of the cluster or its initial non-sphericity. On the other hand, the FRB prescription, which assumes constant temperature after the formation time, is not strictly true because of the on-going accretion of matter even after the cluster is formed. So, as argued by Mathiesen (2000) using simulation results, the formation time might not be an important factor, whereas the effect can be produced by scatter in the initial conditions of the proto-cluster.

Conclusion
==========

We combine conservation of energy with the virial theorem to derive the mass-temperature relation of clusters of galaxies and obtain the following results:

- The simple spherical model gives the usual relation $T=CM^{2/3}$, with the normalization factor $C$ in excellent agreement with hydro simulations.
However, both our normalization and that from hydro simulations are about $50 \%$ higher than the X-ray mass estimate results. This is probably due to our poor understanding of the history of the cluster gas.

- Non-sphericity introduces an asymmetric, mass-dependent scatter (the lower the mass, the larger the scatter) in the $M-T$ relation, thus altering the slope at the low-mass end ($T \sim 3 \keV$). We can reproduce the recently observed scatter/bend in the M-T relation at the lower mass end for a low density $\Lambda$CDM cosmology, while Einstein-de Sitter/OCDM cosmologies under/overestimate this scatter/bend. We conclude that the behavior at the low-mass end of the M-T diagram can be used to constrain cosmological models.

- We point out that the concentration parameter of the cluster and its scatter can be determined by our formalism. The concentration parameter determined using this method is marginally consistent with simulation results, which provides a way to find non-spherical corrections to the M-T relation by fitting our concentration parameter to the simulation results.

- Our normalization has a very weak dependence on cosmology and formation history. This is consistent with simulation results.

- We find the mass-temperature relation (73) for clusters of galaxies, based on observations calibrated by our formalism, which can be used to find the masses of galaxy clusters from their X-ray temperatures in the redshift range $ 0< z < 1.1$ with an accuracy of $20\%$. This is a powerful tool to find the evolution of the mass function of clusters, using their temperature function.

This research is supported in part by grant NAG5-8365. N.A. wishes to thank Ian dell’Antonio, Licia Verde and Eiichiro Komatsu for useful discussions.

Aaron, L.D., Ellingson, E., Morris, S.L., & Carlberg, R.G. 1999, ApJ, 517, 2, 587
Bahcall, N.A., & Cen, R. 1992, ApJ, 398, L81
Bahcall, N.A., Fan, X., & Cen, R. 1997, ApJ, 485, L53
Bahcall, N.A., & Fan, X.
1998, ApJ, 504, 1
Balbi, A. 2000, ApJ, 545, L1
Bialek, J.J., Evrard, A.E., & Mohr, J.J., astro-ph/0010584
Blanton, M., Cen, R., Ostriker, J.P., Strauss, M.A., & Tegmark, M. 2000, ApJ, 531, 1
Bryan, G.L., & Norman, M.L. 1998, ApJ, 495, 80
Bryan, G.L., astro-ph/0009286
Carlberg, R.G., et al. 1996, ApJ, 462, 32
Cen, R. 1998, ApJ, 509, 494
de Bernardis, P. 2000, Nature, 404, 955
Eke, V.R., Cole, S., & Frenk, C.S. 1996, MNRAS, 282, 263
Eke, V.R., Navarro, J.F., & Steinmetz, M. 2001, ApJ, 554, 114
Evrard, A.E., Metzler, C.A., & Navarro, J.F. 1996, ApJ, 469, 494 (EMN)
Finoguenov, A., Reiprich, T.H., & Bohringer, H., astro-ph/0010190 (FRB)
Girardi, M., Giuricin, G., Mardirossian, F., Mezzetti, M., & Boschin, W. 1998, ApJ, 505, 74
Girardi, M., & Mezzetti, M. 2000, ApJ, 548, 79
Gunn, J., & Gott, J. 1972, ApJ, 176, 1
Henry, J.P. 2000, ApJ, 565, 580
Hjorth, J., Oukbir, J., & van Kampen, E. 1998, MNRAS, 298, L1
Horner, D.J., Mushotzky, R.F., & Scharf, C.A. 1999, ApJ, 520, 78 (HMS)
Iliev, I.T., & Shapiro, P.R. 2001, MNRAS, in press (astro-ph/0101067)
Mathiesen, B.F., astro-ph/0012117
Mathiesen, B.F., & Evrard, A.E. 2001, ApJ, 546, 1, 100
Muanwong et al., astro-ph/0102048
Navarro, J.F., Frenk, C.S., & White, S.D.M. 1997, ApJ, 490, 493 (NFW)
Nevalainen, J., Markevitch, M., & Forman, W. 2000, ApJ, 532, 694
Neumann, D.M., & Arnaud, M. 1999, A&A, 348, 711
Oukbir, J., Bartlett, J.G., & Blanchard, A. 1997, A&A, 320, 365
Padmanabhan, T. 1993, Structure Formation in the Universe, Cambridge University Press
Peebles, P.J.E., Daly, R.A., & Juszkiewicz, R. 1989, ApJ, 347, 563
Pen, U. 1998, ApJ, 498, 60
Riess, A.G. 1998, AJ, 116, 1009
Schindler, S. 1999, A&A, 349, 435
Shapiro, P.R., Iliev, I.T., & Raga, A.C. 1999, MNRAS, 307, 203
Thomas, P.A., et al., astro-ph/0007348
Tyson, J.A., Kochanski, G.P., & dell’Antonio, I.P. 1998, ApJ, 498, L107
Voit, M. 2000, ApJ, 543, 1, 113
Viana, P.T.P., & Liddle, A.R. 1996, MNRAS, 281, 323
White, S.D.M., Efstathiou, G., & Frenk, C.S. 1993, MNRAS, 262, 102
Wu, J.-H. P.
2000, astro-ph/0012207
Xu, H., Jin, G., & Wu, X.-P., astro-ph/0101564

Statistics of $b/a$
===================

$a$ and $b$ are defined as: $$a = \int_0^1\delta_i(\tilde{x}) d^3x,\, b = \int_0^1 (1-x^2) \delta_i(\tilde{x}) d^3x.$$ Assuming Gaussian statistics for the linear density field, the probability distribution function for $a$ and $b$ takes a Gaussian form: $$\begin{aligned} P(a,b)\, da\, db &=& \frac{1}{2\pi\sqrt{L}}\exp[-\frac{1}{2L}(<b^2>a^2+<a^2>b^2-2<ab>ab ) ]\, da\, db, \nonumber \\ L &=& <a^2><b^2>-<ab>^2.\end{aligned}$$ Then, for fixed $a$, we have: $$\begin{aligned} \frac{<b>}{a} &=& \frac{<ab>}{<a^2>} \\ \Delta b &=& \sqrt{\frac{L}{<a^2>}}.\end{aligned}$$ To find the quadratic moments of $a$ and $b$, we assume a power-law linear correlation function: $$\xi_i({\mathbf r}) = <\delta_i({\mathbf x})\delta_i({\mathbf x + r})> = (\frac{r_{0i}}{r})^{3+n}.$$ The moments become: $$\begin{aligned} <a^2> &=& 8 \pi^2 (\frac{r_{0i}}{R_i})^{3+n} F_{00}(n), \\ <ab> &=& 8 \pi^2 (\frac{r_{0i}}{R_i})^{3+n} (F_{00}(n)-F_{02}(n)), \\ <b^2> &=& 8 \pi^2 (\frac{r_{0i}}{R_i})^{3+n}(F_{00}(n)-2F_{02}(n)+F_{22}(n)),\end{aligned}$$ where $F_{ml}(n)$ is defined as: $$F_{ml}(n) \equiv \frac{1}{8 \pi^2}\int x^m_1 x^l_2 |{\mathbf x_1 -x_2}|^{-(n+3)} d^3 x_1 d^3 x_2,$$ with the integral taken inside the unit sphere. Carrying out the angular parts of the integral, this reduces to: $$F_{ml} = \frac{1}{n+1}\int_0^1 \int_0^1 dx_1 dx_2 x_1^{m+1} x_2^{l+1} (|x_1-x_2|^{-1-n}-|x_1+x_2|^{-1-n}).$$ Then, evaluating this integral for the relevant values of $m$ and $l$, and inserting the result into (A6-A8) and subsequently into (A3-A4), gives: $$\frac{<b>}{a} = \frac{4(1-n)}{(n-5)(n-2)},$$ with $$\Delta{b} = \frac{16 \pi 2^{-n/2}}{(5-n)(2-n)}[\frac{n+3}{n(7-n)(n-3)}]^{\frac{1}{2}}(\frac{r_{0i}}{R_i})^{\frac{n+3}{2}}.$$

[^1]: It is easy to see that $1-\Omega_i = O(\delta_i)$, and so we have neglected it in the first-order terms.
[^2]: With the exception of any heating/cooling of the gas, which is assumed to be negligible with respect to the gravitational energy of the cluster.

[^3]: $a$ should not be confused with the cosmological scale factor.

[^4]: This is of course valid in the case of an isotropic velocity dispersion profile. As an approximation, we are going to neglect any correction due to a possible anisotropy.

[^5]: The Eke et al. (2001) fitting formula was made for a lower mass range, and its systematic error at the cluster mass range is indeed comparable to its difference with our prediction.
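The closed-form Appendix results for $<b>/a$ and $\Delta b$ as functions of the spectral index $n$ can be evaluated directly; the values of $n$ and $r_{0i}/R_i$ used below are illustrative only.

```python
import math

def b_over_a(n):
    """Appendix result <b>/a = 4(1-n)/((n-5)(n-2)) for a power-law
    correlation function with spectral index n."""
    return 4.0 * (1.0 - n) / ((n - 5.0) * (n - 2.0))

def delta_b(n, r0_over_R):
    """Appendix result for the scatter Delta b, same notation."""
    return (16.0 * math.pi * 2.0 ** (-n / 2.0)
            / ((5.0 - n) * (2.0 - n))
            * math.sqrt((n + 3.0) / (n * (7.0 - n) * (n - 3.0)))
            * r0_over_R ** ((n + 3.0) / 2.0))

# Illustrative: for n = -1, <b>/a = 8/18 = 4/9
print(round(b_over_a(-1.0), 4))   # 0.4444
print(round(delta_b(-1.0, 0.1), 4))
```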
--- abstract: 'Differential equations with piecewise constant argument (DEPCAs, for short) are a class of hybrid dynamical systems (combining continuous and discrete dynamics). In this paper, under the assumption that the nonlinear term is partially unbounded, we study the bounded solutions and global topological linearisation of a class of DEPCAs of general type. One purpose of this paper is to obtain a new criterion for the existence of a unique bounded solution, which improves the previous results. The other aim of this paper is to establish a generalized Grobman-Hartman result for the topological conjugacy between a nonlinear perturbation system and its linear system. The method is based on the newly obtained criterion for bounded solutions. The results obtained generalize and improve those of some previous papers. Some novel techniques are employed.' author: - | Changwu Zou$^{1}$[^1], Yonghui Xia$^{2,3}$ [^2], Manuel Pinto$^{4}$[^3], Jinlin Shi$^{1}$, Yuzhen Bai$^5$\ [1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou, 350108, China]{}\ [*zouchw@126.com (C. Zou)*]{}\ [2. School of Mathematical Sciences, Huaqiao University, 362021, Quanzhou, Fujian, China.]{}\ [*xiadoc@163.com; xiadoc@hqu.edu.cn (Y.H.Xia)*]{}\ [3. Department of Mathematics, Zhejiang Normal University, Jinhua, 321004, China]{}\ [4. Departamento de Matematica, Universidad de Chile, Santiago, Chile]{}\ [*pintoj.uchile@gmail.com (M. Pinto)* ]{}\ [5.
School of Mathematical Sciences, Qufu Normal University, Qufu, 273165, P.R. China]{}\ title: Boundedness and Linearisation of a class of differential equations with piecewise constant argument --- [**Keywords:**]{} differential equation; bounded solution; piecewise constant argument [**2000 Mathematics Subject Classification:**]{} 34D09; 93B18; 39A12; 34D30; 37C60

**Introduction and Motivation**
===============================

In this paper, we study the boundedness and linearisation of differential equations with piecewise constant argument of generalized type (DEPCAGs, for short). They take the form $$\begin{aligned} \label{sysnlz} z'(t)=M(t)z(t)+M_0(t)z(\gamma(t))+h(t,z(t),z(\gamma(t))),\end{aligned}$$ where $t\in \mathbb{R}, z(t)\in \mathbb{R}^{n}$, $M(t)$ and ${M_0}(t)$ are $n\times n$ matrices, $h:\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^n$ and $\gamma(t): \mathbb{R}\rightarrow\mathbb{R}$. Usually, authors have studied the bounded solutions of perturbed nonlinear systems under the assumption that the nonlinear perturbation term is [*bounded*]{}. When the perturbation term is [*unbounded*]{}, the problem becomes difficult. In this paper, under the assumption that the nonlinear term $h(t,z(t),z(\gamma(t)))$ is partially unbounded, we study the bounded solutions of (\[sysnlz\]). One purpose of this paper is to obtain a new criterion for the existence of a unique bounded solution, which improves the previous results (Theorem 5.3 in [@Coronel15]). Based on this new criterion for bounded solutions, we prove a generalized Grobman-Hartman theorem guaranteeing the conjugacy between the nonlinear system (\[sysnlz\]) and its linear system. This is the other main purpose of this paper. The results obtained generalize and improve those of some previous papers. In fact, some novel techniques are employed in the proofs.
Throughout this paper, we assume that condition **(A)** holds: there exist two constant sequences $\{t_i\}_{i\in\mathbb{Z}}$ and $\{\zeta_i\}_{i\in\mathbb{Z}}$ such that\ **(A1)** $t_i<t_{i+1}$ and $t_i\leq\zeta_i\leq t_{i+1}$, $\forall i\in\mathbb{Z}$,\ **(A2)** $t_i\rightarrow\pm\infty$ as $i\rightarrow\pm\infty,$\ **(A3)** $\gamma(t)=\zeta_i$ for $t\in[t_i,t_{i+1})$,\ **(A4)** there exists a constant $\theta>0$ such that $t_{i+1}-t_i\leq \theta, \forall i\in\mathbb{Z}.$ In particular, when $\gamma(t)=[t]$ or $\gamma(t)=2[\frac{(t+1)}{2}]$, system (\[sysnlz\]) is called a differential equation with piecewise constant argument (DEPCA). For DEPCAs and DEPCAGs, many scholars have studied the continuity, boundedness, stability, and existence of periodic or almost periodic solutions. One can refer to [@Aftabizadeh87; @Akhmet08; @Dai08; @CastilloPinto15; @ChiuPinto14; @Pinto-Robledo15; @VelozPinto15; @Yuan02; @Yuan97]. In particular, the bounded solutions of DEPCAs and DEPCAGs were obtained in [@Akhmet07; @ChiuPinto10; @Coronel15; @PintoRobledo15; @PapaschinopoulosJMAA96]. Among these works, Akhmet [@Akhmet07] obtained a set of sufficient conditions guaranteeing the existence of a unique bounded solution by assuming that the linear part $z'(t)=M(t)z(t)$ of system (\[sysnlz\]) has an exponential dichotomy. But if $M(t)=0$, then $z'(t)=M(t)z(t)$ cannot admit an exponential dichotomy. It is possible that $z'(t)=M_0(t)z(\gamma(t))$ admits an exponential dichotomy even if $M(t)=0$. In this case, the result in [@Akhmet07] is invalid. Later, Akhmet [@Akhmet12; @Akhmet14] introduced the condition that the linear system with piecewise constant argument $$\begin{aligned} \label{syslz} z'(t)=M(t)z(t)+M_0(t)z(\gamma(t))\end{aligned}$$ admits an exponential dichotomy.
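To make the hybrid continuous/discrete character of a DEPCA concrete, consider the scalar toy model $z'(t) = a z(t) + b z([t])$ (an illustrative example, not a system from the references above): on each interval $[i, i+1)$ it is a linear ODE with constant forcing $b z(i)$, so $z(t) = z(i)\,[(1+b/a)e^{a(t-i)} - b/a]$. The sketch below checks this closed form against a crude Euler integration.

```python
import math

def depca_exact(a, b, z0, t_end):
    """z'(t) = a z(t) + b z([t]) with gamma(t) = [t]:
    on [i, i+1), z(t) = z(i) * ((1 + b/a) e^{a(t-i)} - b/a)."""
    z = z0
    for _ in range(int(t_end)):           # step over whole intervals
        z = z * ((1 + b / a) * math.exp(a) - b / a)
    frac = t_end - int(t_end)
    return z * ((1 + b / a) * math.exp(a * frac) - b / a)

def depca_euler(a, b, z0, t_end, dt=1e-4):
    """Naive Euler scheme, updating z([t]) each time an integer is crossed."""
    z = z0
    z_floor = z0                          # current value of z([t])
    n = int(round(t_end / dt))
    for k in range(n):
        t = k * dt
        if t > 0 and abs(t - round(t)) < dt / 2:
            z_floor = z                   # just crossed an integer time
        z += dt * (a * z + b * z_floor)
    return z

exact = depca_exact(-1.0, 0.5, 1.0, 3.0)
approx = depca_euler(-1.0, 0.5, 1.0, 3.0)
print(abs(exact - approx) < 1e-2)         # True
```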
Under the assumption that the linear system (\[syslz\]) admits an exponential dichotomy and the nonlinear term $h(t,z(t),z(\gamma(t)))$ is bounded, Coronel et al. [@Coronel15] proved that there exists a unique bounded solution of system (\[sysnlz\]) (see [@Coronel15 Th. 5.3]). What happens if the nonlinear perturbation term $h(t,z(t),z(\gamma(t)))$ is unbounded? Does there still exist a unique bounded solution? This paper is devoted to answering this question. We prove that even if $h(t,z(t),z(\gamma(t)))$ is unbounded, system (\[sysnlz\]) has a unique bounded solution under some suitable conditions. We briefly summarize our result on bounded solutions as follows:

[**Result 1**]{} [*Assume that system (\[syslz\]) admits an exponential dichotomy and the nonlinear term $h(t,z(t),z(\gamma(t)))$ is Lipschitzian. If we further assume that there exist constants $r>0$ and $\mu>0$ such that $$|h(t,z(t),z(\gamma(t)))|\leq r(|z(t)|+ |z(\gamma(t))|)+\mu,$$ then system (\[sysnlz\]) has a unique bounded solution under some conditions*]{}.

[**Remark 1**]{} We point out that $h(t,z(t),z(\gamma(t)))$ can be a polynomial of order one in $z(t)$ and $z(\gamma(t))$, which can be unbounded. For example, take $h(t,z(t),z(\gamma(t)))= z(t)\sin t + z(\gamma(t))\cos t$. Thus our result improves Theorem 5.3 in [@Coronel15]. Certainly, we also generalize the results of Akhmet [@Akhmet07]. Another purpose of this paper is to apply Result 1 to study the linearization of system (\[sysnlz\]) when the nonlinear term $h(t,z(t),z(\gamma(t)))$ is unbounded. Topological linearization is one of the most important research topics in ordinary differential equations. A brief survey on topological linearization is presented as follows. Since the Grobman-Hartman theorem was established by Hartman and Grobman [@Grobman62; @Hartman63] in the 1960s, many mathematicians have made contributions to this topic, and great progress has been made. Most of the works focused on autonomous systems. On the other hand, some mathematicians emphasized non-autonomous systems.
Palmer [@Palmer73] proposed a version of the Grobman-Hartman theorem for non-autonomous ordinary differential equations in 1973. For ordinary differential equations, Barreira and Valls [@Barreira06; @Barreira11], Jiang [@Jiang06; @Jiang07], and Shi and Xiong [@Shi95] extended Palmer's result in various directions. For example, Shi and Xiong [@Shi95] weakened the conditions by assuming that the linear system only partially admits an exponential dichotomy. Jiang [@Jiang06; @Jiang07] weakened the condition by assuming that the linear system admits a generalized dichotomy. Barreira and Valls [@Barreira06; @Barreira11] weakened the condition by assuming that the linear system admits a nonuniform exponential dichotomy. In addition, the topological linearization of difference equations, functional differential equations and scalar reaction-diffusion equations has been extensively studied; see, for example, [@Castaneda15; @Farkas; @Kurzweil91; @Lopez99; @Lu; @Papaschinopoulos94; @Potzche08]. Another important line of work is the smooth linearization of $C^1$ hyperbolic mappings; one can refer to [@Bel2; @ElBialy; @Pugh; @Sell; @Sternberg1; @Sternberg2; @RS-JDE; @RS-JDDE; @ZWN-JDE; @ZWN-JFA; @ZWN-MA]. In this paper, we focus on the topological ($C^0$) linearization of non-autonomous systems. As mentioned above, topological linearization has been extensively studied; however, the linearization problem for DEPCAGs has seldom been investigated. In 1996, Papaschinopoulos [@Papaschinopoulos96] generalized the topological linearization theorem to DEPCAs, and nineteen years later Pinto and Robledo [@PintoRobledo15] generalized the work of Papaschinopoulos to DEPCAGs. Under suitable conditions, they proved that the above nonlinear system is topologically conjugated to its linear system . Their study of the linearization problem relies on the assumption that the nonlinear terms in the systems are bounded.
More specifically, the results in [@Papaschinopoulos96] and [@PintoRobledo15] require that $h(t,z(t),z(\gamma(t)))$ is bounded, i.e., that there exists a constant $\mu>0$ such that $$|h(t,z(t),z(\gamma(t)))|\leqslant \mu.$$ However, in general, $h(t,z(t),z(\gamma(t)))$ can be unbounded; for example, take $h(t,z(t),z(\gamma(t)))= z(t)\sin t + z(\gamma(t))\cos t$. In this case, the results in [@Papaschinopoulos96] and [@PintoRobledo15] are not valid. In this paper, we prove that even if $h(t,z(t),z(\gamma(t)))$ is unbounded, system can still be topologically conjugated to system as long as it has a proper structure. More precisely, we consider the following system with such a structure $$\label{sysnlxy-phi} \left\{\ \begin{array}{c} x'(t)=A(t)x(t)+A_0(t)x(\gamma(t))+f(t,x(t),x(\gamma(t)))+\phi(t,y(t),y(\gamma(t))),\\ y'(t)=B(t)y(t)+B_0(t)y(\gamma(t))+g(t,x(t),x(\gamma(t)))+\psi(t,y(t),y(\gamma(t))), \end{array} \right.$$ where $t\in \mathbb{R}, x(t)\in \mathbb{R}^{n_1}, y(t)\in \mathbb{R}^{n_2}$, $n_1+n_2=n$, $A(t)$, $A_0(t)$ are $n_1\times n_1$ matrices, $B(t)$, $B_0(t)$ are $n_2\times n_2$ matrices, $f:\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_1}\rightarrow\mathbb{R}^{n_1}$, $g:\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_1}\rightarrow\mathbb{R}^{n_2}$, $\phi:\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2}\rightarrow\mathbb{R}^{n_1}$, and $\psi:\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2}\rightarrow\mathbb{R}^{n_2}$. In this paper, we assume that the nonlinear term is only partially unbounded. We briefly summarize our second result on the global topological linearization of the class of DEPCAGs (\[sysnlxy-phi\]) as follows: [**Result 2**]{} [ *Suppose that the linear system $$\label{syslxy} \left\{\ \begin{array}{c} x'(t)=A(t)x(t)+A_0(t)x(\gamma(t)),\\ y'(t)=B(t)y(t)+B_0(t)y(\gamma(t)), \end{array} \right.$$ admits an exponential dichotomy. Assume that the nonlinear terms $f(t,x(t),x(\gamma(t))),\\ \,g(t,x(t),x(\gamma(t)))$ are Lipschitzian.
If we further assume that there exist constants $\lambda>0$ and $\delta>0$ such that $$|f(t,x(t),x(\gamma(t)))|\leq\lambda(|x(t)|+|x(\gamma(t))|), \quad |g(t,x(t),x(\gamma(t)))|\leq\lambda(|x(t)|+|x(\gamma(t))|),$$ $$|\phi(t,y(t),y(\gamma(t)))|\leq \delta, \quad |\psi(t,y(t),y(\gamma(t)))|\leq \delta,$$ then system is topologically conjugated to system under proper conditions.* ]{} [**Remark 2**]{} As we will see, the nonlinear terms $f(t,x(t),x(\gamma(t)))$ and $g(t,x(t),x(\gamma(t)))$ can be unbounded. For example, $f(t,x(t),x(\gamma(t)))$ and $g(t,x(t),x(\gamma(t)))$ can be polynomials of degree one in $x(t)$, in which case the nonlinear term of system is unbounded in $x(t)$; take, for instance, $f(t,x(t),x(\gamma(t)))=x(t)\sin t+x(\gamma(t))\cos t$. We see that the topological linearization can still be realized, whereas the results in [@Papaschinopoulos96] and [@PintoRobledo15] cannot be applied to this case. In this sense, we extend the results in [@Papaschinopoulos96] and [@PintoRobledo15]. [**Remark 3**]{} It should be noted that if $\theta=0$, $A_0(t)=0$ and $B_0(t)=0$, then system reduces to the following ODE: $$\label{sysnlxy-cor} \left\{\ \begin{array}{c} x'(t)=A(t)x(t)+f(t,x(t))+\phi(t,y(t)),\\ y'(t)=B(t)y(t)+g(t,x(t))+\psi(t,y(t)). \end{array} \right.$$ Its linear system is $$\label{syslxy-cor} \left\{\ \begin{array}{c} x'(t)=A(t)x(t),\\ y'(t)=B(t)y(t). \end{array} \right.$$ Notice that in the case $\theta=0$, we can prove that if $|f(t,x(t))|\leq\lambda|x(t)|, \quad |g(t,x(t))|\leq\lambda|x(t)|$, $|\phi(t,y(t))|\leq \delta, \quad |\psi(t,y(t))|\leq \delta$, then system is topologically conjugated to system . Palmer [@Palmer73] proved that system is topologically conjugated to its linear part under the assumption that the nonlinear term is bounded. As we will see, the nonlinear terms $f$ and $g$ in system can be unbounded. In this sense, we generalize and improve the main results of Palmer [@Palmer73]. [**Remark 4**]{} Some novel techniques are employed to prove our main result.
Due to the unboundedness of the nonlinear terms, it is difficult to directly prove that the nonlinear system (\[sysnlxy-phi\]) is topologically conjugated to the linear system (\[syslxy\]). To overcome this difficulty, we introduce the following auxiliary system $$\label{sysnlxy-fg} \left\{\ \begin{array}{c} x'(t)=A(t)x(t)+A_0(t)x(\gamma(t))+f(t,x(t),x(\gamma(t))),\\ y'(t)=B(t)y(t)+B_0(t)y(\gamma(t))+g(t,x(t),x(\gamma(t))). \end{array} \right.$$ We first prove that system is topologically conjugated to system . Secondly, we prove that system and system are topologically conjugated. Then, by the transitivity of topological conjugacy, system and system are topologically conjugated. The rest of this paper is organized as follows. In Section 2, we give some definitions, notation and preliminary lemmas. Our main results, Theorem 1 and Theorem 2, are stated in Section 3. The proof of Theorem 1 is given in Section 4. The proof of Theorem 2 is rather long, so we divide it into several sections (see Sections 5-8). Preliminaries ============= General assumptions ------------------- We introduce two groups of assumptions for Theorem 1 and Theorem 2, respectively. The first group is conditions **(B, C)** and the second group is conditions **($\mathfrak{B}, \mathfrak{C}$)**. In this paper, $|\cdot|$ denotes a vector norm or matrix norm. We assume that systems and satisfy the following conditions.
Condition **(B)**: **(B1)** The functions $M(t)$, $M_0(t)$ and $h(t,z(t),z(\gamma(t)))$ are locally integrable in $\mathbb{R}$.\
**(B2)** There exist constants $r>0$, $\mu>0$ and $\ell>0$ such that for any $t\in \mathbb{R}$, $(t,z(t),z(\gamma(t)))$ and $(t,\hat{z}(t),\hat{z}(\gamma(t)))$ $\in\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}$, $$|h(t,z(t),z(\gamma(t)))|\leq r(|z(t)|+|z(\gamma(t))|)+\mu,$$ and $$|h(t,z(t),z(\gamma(t)))-h(t,\hat{z}(t),\hat{z}(\gamma(t)))|\leq \ell\Big{(}|z(t)-\hat{z}(t)|+|z(\gamma(t))-\hat{z}(\gamma(t))|\Big{)}.$$ We remark that if we further assume that $\ell\leq r$ and $|h(t,0,0)|\leq \mu$, then the Lipschitz condition in **(B2)** implies the first estimate $|h(t,z,y)|\leq r(|z|+|y|)+\mu$ in **(B2)**. Moreover, we introduce the following notation and condition **(C)**. (i) : We define $I_i=[t_i,t_{i+1})$ for any $i\in\mathbb{Z}.$ (ii) : For any $i\in\mathbb{Z}$ and $k\times k$ matrix $Q(t),$ we define $$\rho_i^+(Q)=\exp(\int_{t_i}^{\zeta_i}|Q(s)|ds) \quad \text{and} \quad \rho_i^-(Q)=\exp(\int_{\zeta_i}^{t_{i+1}}|Q(s)|ds).$$ Condition **(C)**: There exist $0<\nu^+<1$ and $0<\nu^-<1$ such that the matrices $M(t)$ and $M_0(t)$ satisfy the following properties: $$\sup\limits_{i\in\mathbb{Z}}\rho_i^+(M)\ln\rho_i^+(M_0)\leq\nu^+, \quad \sup\limits_{i\in\mathbb{Z}}\rho_i^-(M)\ln\rho_i^-(M_0)\leq\nu^-,$$ and $$\label{defrho} \begin{array}{c} 1\leq\rho(M)\triangleq\sup\limits_{i\in\mathbb{Z}}\rho_i^+(M)\rho_i^-(M)<+\infty. \end{array}$$ Therefore, $$\label{defalpha_0} \begin{array}{c} \rho_0(M)\triangleq\rho(M)^2(\frac{1+\nu^-}{1-\nu^+})>1. \end{array}$$ Now, we introduce conditions **($\mathfrak{ B}, \mathfrak{C}$)** for systems and .
Condition **($\mathfrak{B}$)**: **($\mathfrak{B}$1)** There exist constants $\beta>0$ and $\beta_0>0$ such that $$\sup\limits_{t\in\mathbb{R}}|A(t)|\leq \beta, \quad \sup\limits_{t\in\mathbb{R}}|B(t)|\leq \beta,$$ $$\sup\limits_{t\in\mathbb{R}}|A_0(t)|\leq \beta_0, \quad \sup\limits_{t\in\mathbb{R}}|B_0(t)|\leq \beta_0.$$ **($\mathfrak{B}$2)** There exist constants $\delta>0$ and $\lambda>0$ such that for any $(t,x(t),x(\gamma(t)))\in\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_1}$ and $(t,y(t),y(\gamma(t)))\in\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2},$ $$|f(t,x(t),x(\gamma(t)))|\leq \lambda(|x(t)|+|x(\gamma(t))|),$$ $$|g(t,x(t),x(\gamma(t)))|\leq \lambda(|x(t)|+|x(\gamma(t))|),$$ $$|\phi(t,y(t),y(\gamma(t)))|\leq\delta,$$ $$|\psi(t,y(t),y(\gamma(t)))|\leq\delta.$$ **($\mathfrak{B}$3)** There exists a constant $\omega>0$ such that for any $(t,x_1(t),x_1(\gamma(t)))$, $(t,x_2(t),x_2(\gamma(t)))$ $\in\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_1}$ and $(t,y_1(t),y_1(\gamma(t)))$, $(t,y_2(t),y_2(\gamma(t)))$ $\in\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2}$, $$\begin{array}{ccc} & &|f(t,x_{1}(t),x_{1}(\gamma(t)))-f(t,x_{2}(t),x_{2}(\gamma(t)))|\\ &\leq& \omega\Big{(}|x_{1}(t)-x_{2}(t)|+|x_{1}(\gamma(t))-x_{2}(\gamma(t))|\Big{)}, \end{array}$$ $$\begin{array}{ccc} & &|g(t,x_{1}(t),x_{1}(\gamma(t)))-g(t,x_{2}(t),x_{2}(\gamma(t)))|\\ &\leq& \omega\Big{(}|x_{1}(t)-x_{2}(t)|+|x_{1}(\gamma(t))-x_{2}(\gamma(t))|\Big{)}, \end{array}$$ $$\begin{array}{ccc} & &|\phi(t,y_{1}(t),y_{1}(\gamma(t)))-\phi(t,y_{2}(t),y_{2}(\gamma(t)))|\\ &\leq& \omega\Big{(}|y_{1}(t)-y_{2}(t)|+|y_{1}(\gamma(t))-y_{2}(\gamma(t))|\Big{)}, \end{array}$$ and $$\begin{array}{ccc} & &|\psi(t,y_{1}(t),y_{1}(\gamma(t)))-\psi(t,y_{2}(t),y_{2}(\gamma(t)))|\\ &\leq& \omega\Big{(}|y_{1}(t)-y_{2}(t)|+|y_{1}(\gamma(t))-y_{2}(\gamma(t))|\Big{)}.
\end{array}$$ Condition **($\mathfrak{C}$)**: There exist $0<\nu^+<1$ and $0<\nu^-<1$ such that the matrices $A(t)$, $A_0(t)$, $B(t)$ and $B_0(t)$ satisfy the following properties: $$\sup\limits_{i\in\mathbb{Z}}\rho_i^+(A)\ln\rho_i^+(A_0)\leq\nu^+, \quad \sup\limits_{i\in\mathbb{Z}}\rho_i^-(A)\ln\rho_i^-(A_0)\leq\nu^-,$$ $$\sup\limits_{i\in\mathbb{Z}}\rho_i^+(B)\ln\rho_i^+(B_0)\leq\nu^+, \quad \sup\limits_{i\in\mathbb{Z}}\rho_i^-(B)\ln\rho_i^-(B_0)\leq\nu^-.$$ Note that **($\mathfrak{B}$1)** and **(A4)** imply that $$\label{defrho_AB} \begin{array}{c} 1\leq\rho(A)\triangleq\sup\limits_{i\in\mathbb{Z}}\rho_i^+(A)\rho_i^-(A)<+\infty \quad \textrm{and} \quad 1\leq\rho(B)\triangleq\sup\limits_{i\in\mathbb{Z}}\rho_i^+(B)\rho_i^-(B)<+\infty. \end{array}$$ Thus, $$\label{defalpha_0AB} \begin{array}{c} \rho_0(A)\triangleq\rho^2(A)(\frac{1+\nu^-}{1-\nu^+})>1 \quad \textrm{and}\quad \rho_0(B)\triangleq\rho^2(B)(\frac{1+\nu^-}{1-\nu^+})>1. \end{array}$$ Throughout the rest of the paper, we assume that conditions **(A, B, C, $\mathfrak{B}, \mathfrak{C}$)** hold. Notion of solutions for DEPCAGs --------------------------------- The notion of solutions for DEPCAGs was introduced in [@Aftabizadeh87; @Akhmet14; @ChiuPinto10; @CookeWiener89; @Coronel15; @Wiener93]. A continuous function $z(t)$ is a solution of system or system on $\mathbb{R}$ if: (i) : The derivative $z'(t)$ exists at each point $t\in\mathbb{R}$ with the possible exception of the points $t_i,i\in\mathbb{Z}$, where the one-sided derivatives exist; (ii) : The equation is satisfied for $z(t)$ on each interval $(t_i,t_{i+1})$ and it holds for the right derivative of $z(t)$ at $t_i$. Transition matrices ------------------- In this subsection, we introduce some notation associated with solutions of a class of DEPCAGs. Let $\Phi(t)$ be the fundamental matrix of system $x'=M(t)x$ with $\Phi(0)=I$.
For any $t\in I_j$, $\tau\in I_i$, $s\in \mathbb{R}$, we introduce the following notation, adopted from [@Coronel15; @PintoJDEQ11; @PintoRobledo15]: $$\Phi(t,s)=\Phi(t)\Phi^{-1}(s),$$ $$J(t,\tau)=I+\int_\tau^t\Phi(\tau,s)M_0(s)ds,$$ $$E(t,\tau)=\Phi(t,\tau)+\int_\tau^t\Phi(t,s)M_0(s)ds=\Phi(t,\tau)J(t,\tau).$$ We define backward and forward products of a set of $k\times k$ matrices $\mathcal{Q}_i (i=1,\ldots,m)$ as follows: $$\prod\limits_{i=1}^{\leftarrow m}\mathcal{Q}_i=\left\{ \begin{array}{cc} \mathcal{Q}_m\cdots\mathcal{Q}_2\mathcal{Q}_1, &\quad\text{if} \quad m\geqslant 1,\\ I, &\quad \text{if} \quad m<1, \end{array} \right.$$ and $$\prod\limits_{i=1}^{\rightarrow m}\mathcal{Q}_i=\left\{ \begin{array}{cc} \mathcal{Q}_1\mathcal{Q}_2\cdots\mathcal{Q}_m, &\quad\text{if} \quad m\geqslant 1,\\ I, &\quad \text{if} \quad m<1. \end{array} \right.$$ If $J(t,s)$ is nonsingular, we can define the transition matrix $Z(t,s)$ of system as follows:\ if $t>\tau$, $$\begin{aligned} &&Z(t,\tau)\\ &=&E(t,\zeta_j)E(t_j,\zeta_j)^{-1}\prod\limits_{r=i+2}^{\leftarrow j}\Big{(}E(t_r,\gamma(t_{r-1}))E(t_{r-1},\gamma(t_{r-1}))^{-1}\Big{)}E(t_{i+1},\gamma(\tau))E(\tau,\gamma(\tau))^{-1},\nonumber\end{aligned}$$ if $t<\tau$, $$\begin{aligned} &&Z(t,\tau)\\ &=&E(t,\zeta_j)E(t_{j+1},\zeta_j)^{-1}\prod\limits_{r=j+1}^{\rightarrow i-1}\Big{(}E(t_{r},\gamma(t_{r}))E(t_{r+1},\gamma(t_{r}))^{-1}\Big{)}E(t_{i},\gamma(\tau))E(\tau,\gamma(\tau))^{-1}.\nonumber\end{aligned}$$ Through simple calculations, we obtain $Z(t,\tau)Z(\tau,s)=Z(t,s)$ and $Z(t,s)=Z(s,t)^{-1}$. Since $E(\tau,\tau)=I$ and $\frac{\partial E(t,\tau)}{\partial t}=M(t)E(t,\tau)+M_0(t)$, we have $$\frac{\partial Z(t,\tau)}{\partial t}=M(t)Z(t,\tau)+M_0(t)Z(\gamma(t),\tau).$$ Thus, $Z(t,\tau)$ is a solution of system . Formulas of solutions for DEPCAGs --------------------------------- To introduce the formulas of solutions, we first state the following important lemma.
*If conditions **(A,B,C)** are fulfilled, then $J(t,s)$ is nonsingular for any $t,s\in \bar{I}_r$ and the matrices $Z(t,s)$ and $Z(t,s)^{-1}$ are well defined for any $t, s\in\mathbb{R}$. If $t,s\in \bar{I}_r$, then $$|\Phi(t,s)|\leq\rho(M),$$ $$|Z(t,s)|\leq\rho_0(M),$$ where $\rho(\cdot)$ is defined in and $\rho_0(\cdot)$ is defined in .* We remark that Lemma 2.1 ensures the continuity of solutions of system on $\mathbb{R}$. We introduce the following formulas for DEPCAGs. *For any $t\in I_j$, $\tau\in I_i,$ the solution of system with $z(\tau)=\xi$ is defined on $\mathbb{R}$ and is given by $$\label{sollz} z(t)=Z(t,\tau)\xi.$$* *For any $t\in I_j$, $\tau\in I_i$ and $t>\tau$, the solution of system with $z(\tau)=\xi$ is defined on $\mathbb{R}$ and is given by $$\begin{aligned} \label{solnlz} z(t)&=&Z(t,\tau)\xi+\int_\tau^{\zeta_i}Z(t,\tau)\Phi(\tau,s)h(s)ds +\sum\limits_{r=i+1}^j\int_{t_r}^{\zeta_r}Z(t,t_r)\Phi(t_r,s)h(s)ds \nonumber\\ & & +\sum\limits_{r=i}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z(t,t_{r+1})\Phi(t_{r+1},s)h(s)ds +\int_{\zeta_j}^{t}\Phi(t,s)h(s)ds,\end{aligned}$$ where $h(s)=h(s,z(s),z(\gamma(s)))$.* If $t<\tau$, one can obtain the solution formula by replacing $\sum\limits_{r=i+1}^j$ and $\sum\limits_{r=i}^{j-1}$ by $\sum\limits_{r=j+1}^i$ and $\sum\limits_{r=j}^{i-1}$, respectively. Subsystems of System --------------------- For convenience, consider the following subsystems of system : $$\label{sysnlx} x'(t)=A(t)x(t)+A_0(t)x(\gamma(t))+f(t,x(t),x(\gamma(t)))+\phi(t,y(t),y(\gamma(t))),$$ $$\label{sysnly} y'(t)=B(t)y(t)+B_0(t)y(\gamma(t))+g(t,x(t),x(\gamma(t)))+\psi(t,y(t),y(\gamma(t))),$$ and subsystems of system : $$\label{syslx} x'(t)=A(t)x(t)+A_{0}(t)x(\gamma(t)),$$ $$\label{sysly} y'(t)=B(t)y(t)+B_{0}(t)y(\gamma(t)).$$ Let $\Phi_1(t)$ be the fundamental matrix of system $x'=A(t)x$ with $\Phi_1(0)=I$, and $\Phi_2(t)$ be the fundamental matrix of system $y'=B(t)y$ with $\Phi_2(0)=I$.
For any $t\in I_j$, $\tau\in I_i$, $s\in \mathbb{R}$, similarly to $\Phi(t,s), J(t,\tau)$, and $E(t,\tau)$ in subsection 2.3, we can define $$\Phi_k(t,s), \quad J_k(t,\tau)\quad \text{and} \quad E_k(t,\tau),\quad k=1,2.$$ If $J_k(t,s)$ $(k=1,2)$ is nonsingular, we can define the transition matrices $Z_1(t,s)$ and $Z_2(t,s)$ of subsystems and , respectively. Moreover, we can verify that $Z_1(t,\tau)$ and $Z_2(t,\tau)$ are solutions of subsystems and , respectively. $\alpha$-exponential dichotomy and Green function ------------------------------------------------- Now we introduce the definition of exponential dichotomy for a DEPCAG. In this paper, we adopt the following definition from Akhmet [@Akhmet12; @Akhmet14]. The linear system has an $\alpha$-exponential dichotomy on $\mathbb{R}$ if there exist a projection $P$ and constants $K\geqslant 1$ and $\alpha>0$ such that the transition matrix $Z(t,s)$ of system satisfies $$|Z_P(t,s)|\leq Ke^{-\alpha|t-s|},$$ where $Z_P(t,s)$ is defined by $$Z_P(t,s)=\left\{ \begin{array}{cc} Z(t,0)PZ(0,s), & t\geqslant s,\\ -Z(t,0)(I-P)Z(0,s), & s>t. \end{array} \right.$$ For convenience, we define the Green function corresponding to system , which was introduced in [@PintoRobledo15; @Coronel15].
Given $t\in(\zeta_j,t_{j+1})$, $$\tilde{G}(t,s)= \left\{ \begin{array}{ccc} Z_P(t,t_r)\Phi(t_r,s), & \textrm{ if } & s\in[t_r,\zeta_r) \textrm{ for any } r\in\mathbb{Z},\\ Z_P(t,t_{r+1})\Phi(t_{r+1},s), & \textrm{ if } & s\in[\zeta_r,t_{r+1}) \textrm{ for any } r\in\mathbb{Z}\setminus\{j\},\\ \Phi(t,s), & \textrm{ if } & s\in[\zeta_j,t),\\ 0, & \textrm{ if } & s\in[t,t_{j+1}], \end{array} \right.$$ and if $t\in[t_j,\zeta_j],$ $$\tilde{G}(t,s)= \left\{ \begin{array}{ccc} Z_P(t,t_r)\Phi(t_r,s), & \textrm{ if } & s\in[t_r,\zeta_r) \textrm{ for any } r\in\mathbb{Z}\setminus\{j\},\\ Z_P(t,t_{r+1})\Phi(t_{r+1},s), & \textrm{ if } & s\in[\zeta_r,t_{r+1}) \textrm{ for any } r\in\mathbb{Z},\\ 0, & \textrm{ if } & s\in[t_j,t),\\ -\Phi(t,s), & \textrm{ if } & s\in[t,\zeta_j). \end{array} \right.$$ We denote $\tilde{G}_1(t,s)=\tilde{G}(t,s)$ for $t\geqslant s$ and $\tilde{G}_2(t,s)=-\tilde{G}(t,s)$ for $t<s$. Condition **($\mathfrak{D}$)** ------------------------------ For convenience, in our second result we apply the following condition in place of the condition that system has an $\alpha$-exponential dichotomy. Condition **($\mathfrak{D}$)**: There exist constants $K\geqslant 1$ and $\alpha>0$ such that $$|Z_1(t,s)|\leq e^{-\alpha(t-s)}, \quad t\geqslant s \quad \textrm{and} \quad |Z_2(t,s)|\leq Ke^{\alpha(t-s)}, \quad s>t.$$ It is clear that condition **($\mathfrak{D}$)** is equivalent to assuming that system has an $\alpha$-exponential dichotomy with $K=1$ in the first inequality. We point out that this assumption is natural; in fact, the first inequality can be obtained by passing to an equivalent norm, or by supposing the following conditions: $$\frac{d|x(t)|^2}{dt}\Big|_{\eqref{syslxy}}\leq -2\alpha|x(t)|^2, \quad |f(t,x(t),x(\gamma(t)))|\leq \frac{\alpha}{2}|x(t)|.$$ Topological conjugacy --------------------- The notion of topological equivalence and topological conjugacy can be found in [@Palmer73; @Palmer75; @PintoRobledo15; @XiaLi13].
A continuous function $H:\mathbb{R}\times\mathbb{R}^n\rightarrow\mathbb{R}^n$ is a topological equivalence between systems and if the following conditions hold: (i) : for each $t\in\mathbb{R}$, $H(t,z)$ is a homeomorphism of $\mathbb{R}^n$; (ii) : $H(t,z) \rightarrow \infty$ as $z \rightarrow \infty$ uniformly with respect to $t$; (iii) : if $z(t)$ is a solution of system , then $H(t,z(t))$ is a solution of system . In addition, the function $L(t,z)=H^{-1}(t,z)$ also has properties (i)-(iii). If such a map $H$ exists, then systems and are said to be topologically conjugated. Some lemmas ----------- *If system has an $\alpha$-exponential dichotomy on $\mathbb{R}$, then $\tilde{G}$ satisfies $$|\tilde{G}(t,s)|\leq K\rho^\ast(M) e^{-\alpha|t-s|},$$ where $\rho^\ast(M)=\rho(M)e^{\alpha\theta}$, $\rho(M)$ is defined in and $\theta$ is in **(A4)**.* From Lemma 2.2, we have $$\label{tilG12} |\tilde{G}_1(t,s)|\leq K\rho^\ast(M) e^{-\alpha(t-s)} \quad \text{for} \quad t\geqslant s,\quad |\tilde{G}_2(t,s)|\leq K\rho^\ast(M) e^{-\alpha(s-t)}\quad \text{for} \quad t< s.$$ *If system has an $\alpha$-exponential dichotomy on $\mathbb{R}$, then the only solution of system bounded on $\mathbb{R}$ is the null solution.* Main Results ============ Now we are in a position to state our main results. *Assume that conditions **(A,B,C)** hold and system has an $\alpha$-exponential dichotomy with constants $K\geqslant1$ and $\alpha>0$. If, in addition, $$\label{eq3.1} 8K\ell\tilde{\rho}(M)\alpha^{-1}\leq 1, \quad 4Kr\tilde{\rho}(M)\alpha^{-1}< 1,$$ where $\tilde{\rho}(M)\triangleq\max\big(\rho(M)\rho_0(M), \rho(M)e^{\alpha\theta}\big)\geqslant\rho^{\ast}(M)$ and $\rho^{\ast}(M)$ is defined in Lemma 2.2, then system has a unique solution bounded on $\mathbb{R}$, which can be represented as $$z(t) = \int_{-\infty}^t\tilde{G}_1(t,s)h(s,z(s),z(\gamma(s)))ds - \int_t^{+\infty}\tilde{G}_2(t,s)h(s,z(s),z(\gamma(s)))ds,$$ and satisfies $$|z(t)| \leq 2 K\mu\tilde{\rho}(M)(\alpha-4rK\tilde{\rho}(M))^{-1} \triangleq \sigma.$$* We remark that if system reduces to an ODE, that is, $$z'(t)=M(t)z(t) +h(t,z(t)),$$ Theorem 1 remains valid.
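For concreteness, consider the unbounded perturbation of Remark 1 with a small coupling parameter; this instantiation is our illustration and is not part of the theorem. Taking $h(t,z(t),z(\gamma(t)))=\varepsilon\big(z(t)\sin t+z(\gamma(t))\cos t\big)+\mu$, condition **(B2)** holds with $r=\ell=\varepsilon$, and since $\tilde{\rho}(M)\geqslant\rho^{\ast}(M)$, a sufficient form of the smallness conditions and the resulting a priori bound read:

```latex
% Instantiating Theorem 1 with h(t,z,w) = eps (z sin t + w cos t) + mu,
% so that r = ell = eps in condition (B2):
8K\varepsilon\tilde{\rho}(M)\alpha^{-1}\leq 1
\quad\Longleftrightarrow\quad
\varepsilon\leq\frac{\alpha}{8K\tilde{\rho}(M)},
\qquad\text{and then}\qquad
|z(t)|\leq\frac{2K\mu\tilde{\rho}(M)}{\alpha-4\varepsilon K\tilde{\rho}(M)}.
```

In other words, for this linearly growing perturbation the conclusion of Theorem 1 is governed by a single explicit threshold on the coupling strength $\varepsilon$, and the bound degenerates, as expected, when $4\varepsilon K\tilde{\rho}(M)$ approaches $\alpha$.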
*Assume that conditions **(A,$\mathfrak{B}$,$\mathfrak{C}$,$\mathfrak{D}$)** hold and, in addition, that $$\label{eq3.2} 8K\tilde{\rho}(A)\omega\alpha^{-1} < 1, \quad 8K\tilde{\rho}(B)\omega\alpha^{-1} <1,$$ $$\label{eq3.2+1} 16K\tilde{\rho}(A)\lambda\alpha^{-1} < 1, \quad 16K\tilde{\rho}(B)\lambda\alpha^{-1} <1,$$ $$\label{eq3.2+2} \alpha_0=\alpha-2\omega\tilde{\rho}(A)e^{\alpha\theta}>0,$$ $$\label{eq3.3} F(\ell,\theta)(\beta_0+\ell)\theta = \upsilon < 1,$$ where $F(\ell,\theta) = \frac{e^{(\beta+\ell)\theta}-1}{(\beta+\ell)\theta}$, $\tilde{\rho}(\cdot)=\max(\rho(\cdot)\rho_0(\cdot), \rho(\cdot)e^{\alpha\theta})$, $\rho(\cdot)$ is defined in and $\rho_0(\cdot)$ is defined in . Then system is topologically conjugated to system .* If system reduces to an ODE, that is, $$\label{ODE} z'(t)=M(t)z(t) +h(t,z(t)),$$ then system reduces to the ODE system , whose linear system is system . Notice that for the ODE system $\theta=0$, and hence $\rho^{\ast}(A)=1$, $\rho^{\ast}(B)=1$. Thus, Theorem 2 reduces to the following. *Assume that system has an $\alpha$-exponential dichotomy with constants $K\geqslant1$ and $\alpha>0$. If $|f(t,x(t))|\leq\lambda|x(t)|, \quad |g(t,x(t))|\leq\lambda|x(t)|$, $|\phi(t,y(t))|\leq \delta, \quad |\psi(t,y(t))|\leq \delta$, and if further $$8K\omega\alpha^{-1} < 1, \quad 8K\lambda\alpha^{-1} <1,$$ then system is topologically conjugated to system .* The Proof of Theorem 1 ======================= To prove Theorem 1, we first introduce the following lemma.
*If $t>\zeta_i$ and $z(t)$ is a bounded solution of system , then $$\begin{aligned} I&\triangleq&\sum_{r=-\infty}^{i}\int_{t_r}^{\zeta_r}Z(t,0)P Z(0,t_r)\Phi(t_r,s)h(s,z(s),z(\gamma(s)))ds\end{aligned}$$ is convergent.* From $t_r\leq\zeta_r$, $t>\zeta_i$, **(B)** and , we have $$\begin{aligned} |I|&\leq&\int_{-\infty}^{t}|Z(t,0)P Z(0,t_r)\Phi(t_r,s)h(s,z(s),z(\gamma(s)))|ds \\ &\leq&\int_{-\infty}^{t}|\tilde{G}_1(t,s)|\Big(r(|z(s)|+|z(\gamma(s))|)+\mu\Big)ds \\ &\leq& \int_{-\infty}^{t}K\rho^\ast(M) e^{-\alpha(t-s)} (2r \|z\|+\mu)ds \\ &=& K\rho^\ast(M)\alpha^{-1}(2r \|z\|+\mu).\end{aligned}$$ Since $z(s)$ is a bounded solution, $I$ is convergent. $\Box$ For $\sigma$ defined in Theorem 1, denote $$\Omega=\{\varphi(t)|\varphi:\mathbb{R}\rightarrow \mathbb{R}^n \text{ is continuous and } |\varphi(t)|\leq\sigma\},$$ and $$W=\{\varphi(t)|\varphi:\mathbb{R}\rightarrow\mathbb{R}^n\text{ is continuous and }\|\varphi\| < \infty\},$$ where $\|\varphi\|=\sup\limits_{t\in\mathbb{R}}|\varphi(t)|$. It is easy to see that $W$ is a Banach space and $\Omega$ is a closed subset of $W$. Suppose that $t\in{[\zeta_j,t_{j+1})}, 0\in{[t_i,\zeta_i]},j>i.$ For any $\varphi(t)\in\Omega$, define the map $T:\Omega\rightarrow W$ as follows $$\begin{aligned} T\varphi(t) \ &= \sum_{r=-\infty}^{j}\int_{t_r}^{\zeta_r}Z(t,0)PZ(0,t_r)\Phi(t_r,s)h(s,\varphi(s),\varphi(\gamma(s)))ds \notag\\ &+ \sum_{r=-\infty}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z(t,0)PZ(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi(s),\varphi(\gamma(s)))ds \notag\\ &- \sum_{r=j+1}^{+\infty}\int_{t_r}^{\zeta_r}Z(t,0)(I-P)Z(0,t_r)\Phi(t_r,s)h(s,\varphi(s),\varphi(\gamma(s)))ds \notag\\ &- \sum_{r=j}^{+\infty}\int_{\zeta_r}^{t_{r+1}}Z(t,0)(I-P)Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi(s),\varphi(\gamma(s)))ds \notag\\ &+ \int_{\zeta_j}^{t}\Phi(t,s)h(s,\varphi(s),\varphi(\gamma(s)))ds \notag \\ &=\int_{-\infty}^t\tilde{G}_1(t,s)h(s,\varphi(s),\varphi(\gamma(s)))ds - \int_t^{+\infty}\tilde{G}_2(t,s)h(s,\varphi(s),\varphi(\gamma(s)))ds.
\notag\end{aligned}$$ To prove the existence and uniqueness of the bounded solution, we proceed in two steps. [**Step 1**]{} We prove that the map $T$ has a unique fixed point by the contraction principle. Due to and **(B2)**, we get $$\begin{aligned} |T\varphi(t)| \ &\leq \int_{-\infty}^{t}Ke^{-\alpha(t-s)}\tilde{\rho}(M)(r|\varphi(s)|+r|\varphi(\gamma(s))|+\mu)ds \notag\\ &\quad + \int_{t}^{+\infty}Ke^{\alpha(t-s)}\tilde{\rho}(M)(r|\varphi(s)|+r|\varphi(\gamma(s))|+\mu)ds \notag\\ &\leq [K\tilde{\rho}(M)(\mu+2r\sigma)+K\tilde{\rho}(M)(\mu+2r\sigma)]\alpha^{-1} \notag\\ &= 2K\tilde{\rho}(M)\alpha^{-1}(\mu+2r\sigma) \notag\\ &=\sigma. \notag\end{aligned}$$ Therefore $T\varphi\in\Omega$ and $T$ maps $\Omega$ into $\Omega$. For any $\varphi_1(t),\varphi_2(t)\in\Omega$, from and **(B2)** we have $$\begin{aligned} |T\varphi_1(t)-T\varphi_2(t)| \ &=\Big{|}\int_{-\infty}^{t}\tilde{G}_1(t,s)[h(s,\varphi_1(s),\varphi_1(\gamma(s)))-h(s,\varphi_2(s),\varphi_2(\gamma(s)))]ds\notag\\ &\quad-\int_{t}^{+\infty}\tilde{G}_2(t,s)[h(s,\varphi_1(s),\varphi_1(\gamma(s)))-h(s,\varphi_2(s),\varphi_2(\gamma(s)))]ds \Big{|}\notag\\ &\leq \int_{-\infty}^{t}K\tilde{\rho}(M)e^{-\alpha(t-s)}\ell(|\varphi_1(s)-\varphi_2(s)|+|\varphi_1(\gamma(s))-\varphi_2(\gamma(s))|)ds\notag\\ &\quad+\int_{t}^{+\infty}K\tilde{\rho}(M)e^{\alpha(t-s)}\ell(|\varphi_1(s)-\varphi_2(s)|+|\varphi_1(\gamma(s))-\varphi_2(\gamma(s))|)ds\notag\\ &\leq 2K\ell\tilde{\rho}(M)\alpha^{-1}\|\varphi_1-\varphi_2\| + 2K\ell\tilde{\rho}(M)\alpha^{-1}\|\varphi_1-\varphi_2\| \notag\\ &\leq \frac{1}{2}\|\varphi_1-\varphi_2\|. \notag\end{aligned}$$ Thus $T$ is a contraction map on $\Omega$.
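Although the argument above is purely analytic, the contraction mechanism behind the unique bounded solution can be observed numerically on a scalar toy example. The following sketch is our illustration, not part of the proof: we make the ad hoc choices $M(t)=-a$, $\gamma(t)=\lfloor t\rfloor$ and $h(t,z,w)=\varepsilon w\sin t+\mu$, so that the linear part $z'=-az$ has an exponential dichotomy with $K=1$, $\alpha=a$ and the smallness conditions hold for small $\varepsilon$; two solutions started far apart are then attracted to the same bounded solution.

```python
import math

def simulate(z0, eps=0.1, mu=0.5, a=1.0, T=40, steps_per_unit=1000):
    """Forward Euler for the scalar toy DEPCA
        z'(t) = -a z(t) + eps * z(floor(t)) * sin t + mu,
    i.e. M(t) = -a and gamma(t) = floor(t); the linear part z' = -a z
    admits an exponential dichotomy with K = 1, alpha = a."""
    dt = 1.0 / steps_per_unit
    z = z0
    z_at_int = [z0]                          # values z(i) at integer times, for z(gamma(t))
    for k in range(T * steps_per_unit):
        t = k * dt
        zg = z_at_int[k // steps_per_unit]   # z(gamma(t)) = z(floor(t))
        z += dt * (-a * z + eps * zg * math.sin(t) + mu)
        if (k + 1) % steps_per_unit == 0:
            z_at_int.append(z)               # record the value at the next integer
    return z

# Two solutions started far apart collapse onto the unique bounded solution:
za = simulate(5.0)
zb = simulate(-5.0)
```

Since here $r=\ell=\varepsilon=0.1$ and $K=\alpha=1$, the contraction constant in the fixed-point argument is small, and numerically the two trajectories agree to high accuracy after a few units of time while remaining within the a priori bound $\sigma$.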
By the contraction map principle, there exists a unique $\varphi_0(t) \in \Omega$ such that $$\begin{aligned} \varphi_0(t)&= T\varphi_0(t) \notag\\ &= \sum_{r=-\infty}^{j}\int_{t_r}^{\zeta_r}Z(t,0)PZ(0,t_r)\Phi(t_r,s)h(s,\varphi_0(s),\varphi_0(\gamma(s)))ds \notag\\ &+ \sum_{r=-\infty}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z(t,0)PZ(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_0(s),\varphi_0(\gamma(s)))ds \notag\\ &- \sum_{r=j+1}^{+\infty}\int_{t_r}^{\zeta_r}Z(t,0)(I-P)Z(0,t_r)\Phi(t_r,s)h(s,\varphi_0(s),\varphi_0(\gamma(s)))ds \notag\\ &- \sum_{r=j}^{+\infty}\int_{\zeta_r}^{t_{r+1}}Z(t,0)(I-P)Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_0(s),\varphi_0(\gamma(s)))ds \notag\\ &+ \int_{\zeta_j}^{t}\Phi(t,s)h(s,\varphi_0(s),\varphi_0(\gamma(s)))ds. \notag\end{aligned}$$ Furthermore, it is easy to check that $\varphi_0(t)$ is a solution of system . [**Step 2**]{} We prove the uniqueness of the bounded solution. That is, we prove that $\varphi_0(t)$ is the unique bounded solution of system . In fact, suppose that $\varphi_1(t)$ is another bounded solution of system . By Proposition 2.2, we get $$\begin{aligned} \varphi_1(t) \ &=Z(t,0)\varphi_1(0)+\int_{0}^{\zeta_i}Z(t,0)\Phi(0,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\sum_{r=i+1}^{j}\int_{t_r}^{\zeta_r}Z(t,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\sum_{r=i}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z(t,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\int_{\zeta_j}^{t}\Phi(t,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &=Z(t,0)\{\varphi_1(0)+\int_{0}^{\zeta_i}\Phi(0,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\sum_{r=i+1}^{j}P\int_{t_r}^{\zeta_r}Z(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\sum_{r=i}^{j-1}P\int_{\zeta_r}^{t_{r+1}}Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\}\notag\\ &\quad+\int_{\zeta_j}^{t}\Phi(t,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\
&\quad+Z(t,0)\{\sum_{r=i+1}^{j}(I-P)\int_{t_r}^{\zeta_r}Z(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\notag\\ &\quad+\sum_{r=i}^{j-1}(I-P)\int_{\zeta_r}^{t_{r+1}}Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds\}. \notag\end{aligned}$$ By Lemma 4.1, we have that $$\begin{aligned} \varphi_1(t)&= Z(t,0)(\varphi_1(0)+c_0) \notag\\ &+ \sum_{r=-\infty}^{j}\int_{t_r}^{\zeta_r}Z(t,0)PZ(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &+ \sum_{r=-\infty}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z(t,0)PZ(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &- \sum_{r=j+1}^{+\infty}\int_{t_r}^{\zeta_r}Z(t,0)(I-P)Z(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &- \sum_{r=j}^{+\infty}\int_{\zeta_r}^{t_{r+1}}Z(t,0)(I-P)Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &+ \int_{\zeta_j}^{t}\Phi(t,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag \\ &\triangleq Z(t,0)(\varphi_1(0)+c_0)+J, \notag\end{aligned}$$ where $$\begin{aligned} c_0 &=\int_{0}^{\zeta_i}\Phi(0,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &- \sum_{r=-\infty}^{i}\int_{t_r}^{\zeta_r}PZ(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &- \sum_{r=-\infty}^{i-1}\int_{\zeta_r}^{t_{r+1}}PZ(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag \\ &+\sum_{r=i+1}^{+\infty}\int_{t_r}^{\zeta_r}(I-P)Z(0,t_r)\Phi(t_r,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds \notag\\ &+ \sum_{r=i}^{+\infty}\int_{\zeta_r}^{t_{r+1}}(I-P)Z(0,t_{r+1})\Phi(t_{r+1},s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds. \notag\end{aligned}$$ Similar to the computation of $|T\varphi(t)|$, we can prove that $J$ is bounded. Thus $Z(t,0)(\varphi_1(0)+c_0)$ is a bounded solution of system .
From Lemma 2.3, we have $$\varphi_1(0)+c_0=0.$$ Thus $$\varphi_1(t)=\int_{-\infty}^{t}\tilde{G}_1(t,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds-\int_{t}^{+\infty}\tilde{G}_2(t,s)h(s,\varphi_1(s),\varphi_1(\gamma(s)))ds.$$ Furthermore, $$\begin{aligned} &|\varphi_1(t)-\varphi_0(t)| \notag \\ &\leq|\int_{-\infty}^{t}\tilde{G}_1(t,s)[h(s,\varphi_1(s),\varphi_1(\gamma(s)))-h(s,\varphi_0(s),\varphi_0(\gamma(s)))]ds| \notag \\ &+|\int_{t}^{+\infty}\tilde{G}_2(t,s)[h(s,\varphi_1(s),\varphi_1(\gamma(s)))-h(s,\varphi_0(s),\varphi_0(\gamma(s)))]ds| \notag\\ &\leq\int_{-\infty}^{t}K \ell\tilde{\rho}(M)e^{-\alpha(t-s)}(|\varphi_1(s)-\varphi_0(s)|+|\varphi_1(\gamma(s))-\varphi_0(\gamma(s))|)ds\notag\\ &+\int_{t}^{+\infty}K\ell\tilde{\rho}(M)e^{\alpha(t-s)}(|\varphi_1(s)-\varphi_0(s)|+|\varphi_1(\gamma(s))-\varphi_0(\gamma(s))|)ds\notag\\ &\leq 4K\ell\tilde{\rho}(M)\alpha^{-1}\|\varphi_1-\varphi_0\| \notag\\ &\leq \frac{1}{2}\|\varphi_1-\varphi_0\|.\notag\end{aligned}$$ Therefore $$\|\varphi_1-\varphi_0\| \leq \frac{1}{2}\|\varphi_1-\varphi_0\|,$$ which implies that $\varphi_1(t)=\varphi_0(t)$. This completes the proof. $\Box$ The preliminaries for the proof of Theorem 2 ============================================ In this section, we give some preliminaries for the proof of Theorem 2. The solutions of subsystems ---------------------------- From Lemma 2.1, we have the following lemma. *If conditions **(A, $\mathfrak{B}$, $\mathfrak{C}$)** are fulfilled, then $J_k(t,s)(k=1,2)$ is nonsingular for any $t,s\in \bar{I}_r$ and the matrices $Z_k(t,s)$ and $Z_k(t,s)^{-1}(k=1,2)$ are well defined for any $t, s\in\mathbb{R}$. If $t,s\in \bar{I}_r$, then $$|\Phi_1(t,s)|\leq\rho(A),\quad |\Phi_2(t,s)|\leq\rho(B),$$ $$|Z_1(t,s)|\leq\rho_0(A), \quad |Z_2(t,s)|\leq\rho_0(B),$$ where $\rho(\cdot)$ is defined in and $\rho_0(\cdot)$ is defined in .* Lemma 5.1 ensures the continuity of solutions of subsystems and on $\mathbb{R}$. Moreover, we give the following remark.
The fundamental matrix $\Phi(t)$ of system $\left(\begin{array}{c}x'(t)\\ y'(t)\end{array}\right)=\left(\begin{array}{c}A(t)x(t)\\ B(t)y(t)\end{array}\right)$ with $\Phi(0)=I$, and the transition matrix $Z(t,s)$ of system have the following block-diagonal form $$\Phi(t,s)=\left(\begin{array}{cc}\Phi_1(t,s) & 0\\ 0 & \Phi_2(t,s)\end{array}\right),\quad Z(t,s)=\left(\begin{array}{cc}Z_1(t,s) & 0\\ 0 & Z_2(t,s)\end{array}\right).$$ From Proposition 2.1, for any $t\in I_j$, $\tau\in I_i$, the solution of subsystem with $x(\tau)=\xi$ is defined on $\mathbb{R}$ and is given by $$\label{sollx} x(t)=Z_1(t,\tau)\xi,$$ and the solution of subsystem with $y(\tau)=\eta$ can be represented as $$\label{solly} y(t)=Z_2(t,\tau)\eta.$$ From Proposition 2.2, for any $t\in I_j$, $\tau\in I_i$ and $t>\tau$, the solution of subsystem with $x(\tau)=\xi$ is defined on $\mathbb{R}$ and is given by $$\begin{aligned} \label{solnlx} x(t)&=&Z_1(t,\tau)\xi+\int_\tau^{\zeta_i}Z_1(t,\tau)\Phi_1(\tau,s)(f(s)+\phi(s))ds +\sum\limits_{r=i+1}^j\int_{t_r}^{\zeta_r}Z_1(t,t_r)\Phi_1(t_r,s)(f(s)+\phi(s))ds \nonumber\\ & & +\sum\limits_{r=i}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z_1(t,t_{r+1})\Phi_1(t_{r+1},s)(f(s)+\phi(s))ds +\int_{\zeta_j}^{t}\Phi_1(t,s)(f(s)+\phi(s))ds \nonumber\\ &\triangleq&Z_1(t,\tau)\xi+\int_\tau^{t}G_1(t,s)(f(s)+\phi(s))ds,\end{aligned}$$ where $f(s)=f(s,x(s),x(\gamma(s)))$, $\phi(s)=\phi(s,y(s),y(\gamma(s)))$ and, suppressing the dependence on $\tau$, $$G_1(t,s)= \left\{ \begin{array}{ccc} Z_1(t,\tau)\Phi_1(\tau,s), & \textrm{ if } & s\in[\tau,\zeta_i] \textrm{ or } s\in[\zeta_i,\tau],\\ Z_1(t,t_r)\Phi_1(t_r,s), & \textrm{ if } & s\in[t_r,\zeta_r) \textrm{ for } r=i+1,\cdots,j,\\ Z_1(t,t_{r+1})\Phi_1(t_{r+1},s), & \textrm{ if } & s\in[\zeta_r,t_{r+1}) \textrm{ for } r=i,\cdots,j-1,\\ \Phi_1(t,s), & \textrm{ if } & s\in[\zeta_j,t] \textrm{ or } s\in[t,\zeta_j]. \end{array} \right.$$
Similarly, if $t>\tau$, the solution of subsystem with $y(\tau)=\eta$ can be represented as $$\begin{aligned} \label{solnly} y(t)&=&Z_2(t,\tau)\eta+\int_\tau^{\zeta_i}Z_2(t,\tau)\Phi_2(\tau,s)(g(s)+\psi(s))ds +\sum\limits_{r=i+1}^j\int_{t_r}^{\zeta_r}Z_2(t,t_r)\Phi_2(t_r,s)(g(s)+\psi(s))ds\nonumber\\ & & +\sum\limits_{r=i}^{j-1}\int_{\zeta_r}^{t_{r+1}}Z_2(t,t_{r+1})\Phi_2(t_{r+1},s)(g(s)+\psi(s))ds +\int_{\zeta_j}^{t}\Phi_2(t,s)(g(s)+\psi(s))ds\nonumber\\ &=&Z_2(t,\tau)\eta+\int_\tau^{t}G_2(t,s)(g(s)+\psi(s))ds,\end{aligned}$$ where $g(s)=g(s,x(s),x(\gamma(s)))$, $\psi(s)=\psi(s,y(s),y(\gamma(s)))$ and $G_2(t,s)$ is defined in the same way as $G_1(t,s)$. We can obtain $G_k(t,s)$ $(k=1,2)$ for $t<\tau$ by replacing $r=i+1,\cdots,j,$ and $r=i,\cdots,j-1,$ with $r=j+1,\cdots,i,$ and $r=j,\cdots,i-1$, in the definitions of $G_k(t,s)$ $(t>s, k=1,2)$, respectively. From Remark 2.1, one can obtain the solution formulas of subsystems and for the case $t<\tau$. Some lemmas ----------- *If condition **($\mathfrak{D}$)** holds, then for $t, s\in \mathbb{R}$, $$|G_1(t,s)|\leq K\tilde{\rho}(A) e^{-\alpha(t-s)}, \quad t\geqslant s, \qquad |G_2(t,s)|\leq K\tilde{\rho}(B) e^{\alpha(t-s)},\quad t<s,$$ where $\tilde{\rho}(\cdot)$ is defined in Theorem 1, $\alpha$ is in **($\mathfrak{D}$)** and $\theta$ is in **(A4)**.* We just prove the first inequality. Suppose that $t\in I_j$, $\tau\in I_i$ and $t\geqslant s$. [**Case $1$.**]{} $t\geqslant\tau$. Without loss of generality, we assume that $t_i\leq\tau\leq\zeta_i\leq t_{i+1}\leq\cdots\leq t_j\leq\zeta_j\leq t$. If $s \in [\tau, \zeta_i]$, due to **(A4)**, we have $s-\tau\leq\theta$. It follows from **($\mathfrak{D}$)** and Lemma 5.1 that $$|G_1(t,s)|=|Z_1(t,\tau)\Phi_1(\tau,s)|\leq Ke^{-\alpha(t-\tau)}\rho(A)\leq Ke^{-\alpha(t-s)}e^{\alpha\theta}\rho(A).$$ If $s \in [t_r,\zeta_r]$ ($r=i+1,\cdots,j$), then $s-t_r\leq\theta$.
In view of **($\mathfrak{D}$)** and Lemma 5.1, we have $$|G_1(t,s)|=|Z_1(t,t_r)\Phi_1(t_r,s)|\leq Ke^{-\alpha(t-t_r)}\rho(A)\leq Ke^{-\alpha(t-s)}e^{\alpha\theta}\rho(A).$$ If $s \in [\zeta_r, t_{r+1}]$ ($r=i,\cdots,j-1$), similar to the above inequality, we have the same conclusion. If $s \in [\zeta_j, t]$, owing to **(A4)**, we have $t-s\leq\theta$. It follows from Lemma 5.1 and $K\geqslant 1$ that $$\label{temp5.2-1} |G_1(t,s)|=|\Phi_1(t,s)|\leq\rho(A)\leq Ke^{-\alpha(t-s)}e^{\alpha\theta}\rho(A).$$ [**Case $2$.**]{} $t\leq\tau$. By the definition of $G_1(t,s)$ we have $s\in [\min(t,\zeta_j), \max(\tau, \zeta_i)]$. If $t\leq\zeta_j$, then $t<s$, which contradicts our assumption that $t\geqslant s$. Thus, we only consider the case $\zeta_j\leq t$. We divide the discussion into two subcases. [**Subcase $2.1$.**]{} $\zeta_j\leq t\leq t_{j+1}\leq\tau$. For $t\geqslant s$, the only possibility is that $s\in [\zeta_j,t]$. Similar to , we have $$|G_1(t,s)|=|\Phi_1(t,s)|\leq Ke^{-\alpha(t-s)}e^{\alpha\theta}\rho(A).$$ [**Subcase $2.2$.**]{} $\zeta_j\leq t\leq\tau\leq t_{j+1}$. If $t\geqslant s$, then $s\in [\zeta_j, t]$ or $s \in [\zeta_j, \tau]$. When $s\in [\zeta_j, t]$, similar to , we get $$|G_1(t,s)|\leq Ke^{-\alpha(t-s)}e^{\alpha\theta}\rho(A).$$ When $s\in [\zeta_j, \tau]$, we have $s\in \bar{I}_j$. Since $t\geqslant s$, by **($\mathfrak{D}$)** and Lemma 5.1, we obtain $$|G_1(t,s)|=|Z_1(t,\tau)\Phi_1(\tau,s)|=|Z_1(t,s)Z_1(s,\tau)\Phi_1(\tau,s)|\leq Ke^{-\alpha(t-s)}\rho_0(A)\rho(A).$$ Noting that $\tilde{\rho}(A)=\max(\rho(A)\rho_0(A), \rho(A)e^{\alpha\theta})$, we complete the proof.
$\Box$ Similar to Lemma 2.3, we have the following: *Assume that condition **($\mathfrak{D}$)** holds. Then $$\lim_{t\rightarrow -\infty}|Z_1(t,\tau)|=+\infty, \quad \lim_{t\rightarrow +\infty}|Z_2(t,\tau)|=+\infty, \quad \forall \tau\in \mathbb{R}.$$ Moreover, the unique bounded solution in $\mathbb{R}$ of subsystem (subsystem ) is trivial.* The proof is similar to that of Lemma 2.3 and so it is omitted. $\Box$ *Let $t \mapsto z(t,\tau,\xi)$ and $t \mapsto z(t,\tau,\xi')$ be the solutions of system passing respectively through $\xi$ and $\xi'$ at $t = \tau$. If is valid, then it follows that $$|z(t,\tau,\xi')-z(t,\tau,\xi)| \leq |\xi-\xi'|e^{p(\ell)|t-\tau|},$$ where $z(t,\cdot)=(x(t,\cdot), y(t,\cdot))^T$ and $p(\ell)$ is defined by $$p(\ell)= \eta_1 + \frac{\eta_2e^{\eta_1\theta}}{1-\upsilon} \quad \text{with} \quad \eta_1 = M + \ell, \quad \eta_2 = M_0 + \ell,$$ and $\upsilon \in [0,1)$ is defined by .* If $h(t,z(t),z(\gamma(t)))\equiv 0$, then taking $\ell=0$, Lemma 5.4 reduces to Lemma 5.2 in [@PintoRobledo15]. Moreover, since $p(\ell)>p(0)$ and $F(\ell,\theta)\geqslant F(0,\theta)$ in , Lemma 5.4 is also valid for system .
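The interval-by-interval solution formulas of this section can be made concrete in a scalar toy case. The sketch below assumes illustrative data not taken from the paper: constant coefficients $a=-1$, $a_0=0.3$, knots $t_i=i-1/2$ and $\zeta_i=i$, so $\gamma(t)$ rounds to the nearest integer. Continuity at each switch point $t_{i+1}$ determines the next knot value, and the nonvanishing of a $J$-type factor (cf. Lemma 5.1) is exactly what makes this step solvable.

```python
import math

# Toy scalar case (assumed, not from the paper): x'(t) = a x(t) + a0 x(gamma(t)),
# gamma(t) = nearest integer, i.e. t_i = i - 1/2 and zeta_i = i.
a, a0 = -1.0, 0.3

def value(t, i, xi):
    """Solution on I_i = [i-1/2, i+1/2) given the knot value x(i) = xi.
    On I_i the equation is linear with constant forcing a0*xi."""
    return xi * ((1 + a0 / a) * math.exp(a * (t - i)) - a0 / a)

# The J-type factor whose nonvanishing lets us solve for the next knot
# value from continuity at the switch point t_{i+1} = i + 1/2.
J = (1 + a0 / a) * math.exp(-a * 0.5) - a0 / a
assert abs(J) > 1e-12

def knots(x0, n):
    """Propagate x(0) = x0 through n intervals, matching values at switch points."""
    xs = [x0]
    for i in range(n):
        left_limit = value(i + 0.5, i, xs[-1])   # limit from I_i at t_{i+1}
        xs.append(left_limit / J)                # so value(t_{i+1}, i+1, ...) matches
    return xs

xs = knots(1.0, 10)
# Exponential decay of Z_1(t,0): each knot step contracts by a fixed ratio < 1.
ratio = xs[1] / xs[0]
assert 0 < ratio < 1 and abs(xs[10] - ratio ** 10) < 1e-12
```

The per-step contraction ratio plays the role of the exponential-dichotomy bound on $Z_1$ in this scalar setting.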
*Let $\varrho,\eta: \mathbb{R} \rightarrow [0, \infty)$ be two functions such that $\varrho$ is continuous and $\eta$ is locally integrable, satisfying $$\bar{\theta} = \sup_{i\in\mathbb{Z}}\theta_i < 1, \quad \text{where} \quad \theta_i:=2\int_{I_i}\eta(s)ds.$$ Suppose that for $\tau \leq t$ or $t \leq \tau$, we have the inequality $$\varrho(t) \leq \varrho(\tau) + \left|\int_{\tau}^{t}\eta(s)[\varrho(s)+\varrho(\gamma(s))]ds\right|.$$ Then $$\varrho(t) \leq \varrho(\tau)\exp\left\{\tilde{\theta}\left|\int_{\tau}^{t}\eta(s)ds\right|\right\},$$ $$\varrho(\gamma(t)) \leq (1-\bar{\theta})^{-1}\varrho(\tau)\exp\left\{\tilde{\theta}\left|\int_{\tau}^{t}\eta(s)ds\right|\right\},$$ where $\tilde{\theta}=\frac{2-\bar{\theta}}{1-\bar{\theta}}$.* System is topologically conjugate to system ============================================ Suppose that $ {\left( \begin{array}{c} X(t,t_0,x_0) \\ Y(t,t_0,x_0,y_0) \end{array} \right)} $ is the solution of system satisfying that $ {\left( \begin{array}{c} X(t_0) \\ Y(t_0) \end{array} \right)} = {\left( \begin{array}{c} x_0 \\ y_0 \end{array} \right)} $ and $ {\left( \begin{array}{c} u(t,t_0,\xi) \\ v(t,t_0,\eta) \end{array} \right)} $ is the solution of system satisfying that $ {\left( \begin{array}{c} u(t_0) \\ v(t_0) \end{array} \right)} = {\left( \begin{array}{c} \xi \\ \eta \end{array} \right)}, $ where $t_0\in\mathbb{R}$, $x_0,\xi\in\mathbb{R}^{n_1}$, $y_0, \eta\in\mathbb{R}^{n_2}.$ *For any $t\geq t_0$, the following inequalities hold: $$|X(t,t_0,x_0)|\leq |x_0|e^{-\alpha_0(t-t_0)},$$ $$|X(\gamma(t),t_0,x_0)|\leq (1-\bar{\theta})^{-1}e^{\alpha_0\theta}|x_0|e^{-\alpha_0(t-t_0)},$$ where $\alpha_0$ is defined in .* From we get $$X(t,t_0,x_0) =Z_1(t,t_0)x_0+\int_{t_0}^{t}G_1(t,s)f(s,X(s,t_0,x_0), X(\gamma(s),t_0,x_0))ds.$$ It follows from condition **($\mathfrak{D}$)** and Lemma 5.2 that $$|X(t,t_0,x_0)|\leq e^{-\alpha(t-t_0)}|x_0|+l\tilde{\rho}(A)\int_{t_0}^{t}e^{-\alpha(t-s)}(|X(s)|+|X(\gamma(s))|)ds.$$ Thus $$\begin{aligned} &\quad e^{\alpha t}|X(t,t_0,x_0)| \notag \\ &\leq
e^{\alpha t_0}|x_0|+ l\tilde{\rho}(A)\int_{t_0}^{t}(e^{\alpha s}|X(s)|+e^{\alpha\theta}e^{\alpha \gamma(s)}|X(\gamma(s))|)ds \notag \\ &\leq e^{\alpha t_0}|x_0|+ l\tilde{\rho}(A)e^{\alpha\theta}\int_{t_0}^{t}(e^{\alpha s}|X(s)|+e^{\alpha \gamma(s)}|X(\gamma(s))|)ds. \notag\end{aligned}$$ Applying Lemma 5.5 to $\varrho(t)=e^{\alpha t}|X(t,t_0,x_0)|$ and $\eta(t)=l\tilde{\rho}(A)e^{\alpha\theta}$, we obtain that $$|X(t,t_0,x_0)| \leq |x_0|e^{-\alpha(t-t_0)+\tilde{\theta} l\tilde{\rho}(A)e^{\alpha\theta}(t-t_0)}$$ and $$|X(\gamma(t),t_0,x_0)| \leq (1-\bar{\theta})^{-1}|x_0|e^{-\alpha(\gamma(t)-t_0)+\tilde{\theta} l\tilde{\rho}(A)e^{\alpha\theta}(\gamma(t)-t_0)}.$$ Thus $$|X(t,t_0,x_0)| \leq e^{-\alpha_0(t-t_0)}|x_0|,$$ and $$|X(\gamma(t),t_0,x_0)| \leq (1-\bar{\theta})^{-1}e^{-\alpha_0(\gamma(t)-t_0)}|x_0| \leq (1-\bar{\theta})^{-1}e^{\alpha_0\theta}e^{-\alpha_0(t-t_0)}|x_0|.\Box$$ *For any fixed $t_0\in\mathbb{R}$ and nonzero $x_0, \xi\in\mathbb{R}^{n_1}$, there exist unique $T(t_0,x_0)$ and $S(t_0,\xi)\in\mathbb{R}$ such that $$|X(T(t_0,x_0),t_0,x_0)|=1, \quad T(t_0,x_0) \rightarrow -\infty \quad \text{when} \quad x_0 \rightarrow 0,$$ $$|u(S(t_0,\xi),t_0,\xi)|=1, \quad S(t_0,\xi) \rightarrow -\infty \quad \text{when} \quad \xi \rightarrow 0.$$* From Lemma 6.1, we have that $|X(t,t_0,x_0)| \leq |x_0|e^{-\alpha_0 (t-t_0)}$ when $t\geqslant t_0$, where $\alpha_0$ is defined in Lemma 6.1. If $x_0 \neq 0$ and $t \rightarrow +\infty,$ then $$|X(t,t_0,x_0)| \rightarrow 0.$$ If $t\geqslant\tau$, $$\label{Lemma4.1-temp1} |X(t,t_0,x_0)|=|X(t,\tau,X(\tau,t_0,x_0))| \leq |X(\tau,t_0,x_0)|e^{-\alpha_0(t-\tau)}.$$ Thus, for the fixed $t_0$ and $x_0$, $|X(t,t_0,x_0)|$ is a strictly decreasing function of $t$. If $t$ is fixed and $\tau \rightarrow -\infty$, then $$e^{-\alpha_0(t-\tau)} \rightarrow 0.$$ Thus $$|X(\tau,t_0,x_0)| \rightarrow +\infty \quad \text{when} \quad \tau \rightarrow -\infty.$$ Therefore, there exists a unique time $T(t_0,x_0)$ such that $|X(T(t_0,x_0),t_0,x_0)|=1$.
Moreover, when $x_0 \rightarrow 0$, $T(t_0,x_0) \rightarrow -\infty.$ By condition **($\mathfrak{D}$)**, for $t>t_0$, we have $$|u(t,t_0,\xi)|=|Z_1(t,t_0)\xi| \leq e^{-\alpha (t-t_0)}|\xi|.$$ Thus when $t \rightarrow +\infty$, $$|Z_1(t,t_0)\xi| \rightarrow 0.$$ Similar to , we can obtain that for fixed $t_0$ and $\xi$, $|Z_1(t,t_0)\xi|$ is a strictly decreasing function of $t$. Moreover, when $t \rightarrow -\infty$, $$|Z_1(t,t_0)\xi| \rightarrow +\infty.$$ Therefore, for a fixed $\xi\in\mathbb{R}^{n_1},\xi \neq 0$, there exists a unique time $S(t_0,\xi)$ such that $$|Z_1(S(t_0,\xi),t_0,\xi)|=1,$$ and $$S(t_0,\xi) \rightarrow -\infty \quad \text{when} \quad \xi \rightarrow 0. \quad \Box$$ *For any $x_0 \neq 0$, $\xi \neq 0$ and $t\in\mathbb{R}$, we have $$T(t,X(t,t_0,x_0)) = T(t_0,x_0),$$ $$S(t,u(t,\tau,\xi)) = S(\tau,\xi).$$* It follows from Lemma 6.2 that $$1=|X(T(t,X(t,t_0,x_0)),t,X(t,t_0,x_0))|=|X(T(t,X(t,t_0,x_0)),t_0,x_0)|.$$ From $|X(T(t_0,x_0),t_0,x_0)|=1$ and Lemma 6.2, we get $$T(t,X(t,t_0,x_0)) = T(t_0,x_0).$$ The second equality can be proved in a similar way. $\Box$ *For any $t_0\in\mathbb{R}$, $x_0\in\mathbb{R}^{n_1}$, the following inequality holds. $$|\int_{t_0}^{+\infty}G_2(t_0,s)g(s,X(s,t_0,x_0),X(\gamma(s),t_0,x_0))ds|\leq K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|x_0|.$$* From Lemmas 5.2 and 6.1, we get $$\begin{aligned} &|\int_{t_0}^{+\infty}G_2(t_0,s)g(s,X(s,t_0,x_0),X(\gamma(s),t_0,x_0))ds| \notag \\ & \leq \int_{t_0}^{+\infty}K\lambda\tilde{\rho}(B)e^{\alpha (t_0-s)}(|X(s,t_0,x_0)|+|X(\gamma(s),t_0,x_0)|) ds \notag \\ &\leq \int_{t_0}^{+\infty}K\lambda\tilde{\rho}(B)e^{\alpha (t_0-s)}e^{-\alpha_0(s-t_0)}(|x_0|+ (1-\bar{\theta})^{-1}e^{\alpha_0\theta}|x_0|) ds \notag \\ &\leq K\lambda(\alpha+\alpha_0)^{-1}\tilde{\rho}(B)(1+(1-\bar{\theta})^{-1}e^{\alpha_0\theta})|x_0|.
\notag\end{aligned}$$ For any $t \in \mathbb{R}$, $\xi \in \mathbb{R}^{n_1}$ and $\eta \in \mathbb{R}^{n_2}$, we define $L_1:\mathbb{R}\times\mathbb{R}^{n_1} \rightarrow \mathbb{R}^{n_1}$, $L_2:\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_2} \rightarrow \mathbb{R}^{n_2}$ and $L:\mathbb{R}\times\mathbb{R}^{n_1}\times\mathbb{R}^{n_2} \rightarrow \mathbb{R}^{n}$ as follows: $$L_1(t,\xi)= \left\{ \begin{array}{c} X(t,S(t,\xi),u(S(t,\xi),t,\xi)) \quad \xi\neq 0, \\ 0 \qquad \qquad \qquad \qquad \qquad \qquad \xi=0, \end{array} \right.$$ $$L_2(t,\xi,\eta) = \eta - \int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,t,L_1(t,\xi)),X(\gamma(s),t,L_1(t,\xi))\Big{)}ds,$$ and $$L(t,\xi,\eta)=\ {\left( \begin{array}{c} L_1(t,\xi) \\ L_2(t,\xi,\eta) \end{array} \right)}.$$ *$L_1(t,\xi)$ is a continuous function of $\xi$ and $L_1(t,u(t,\tau,\xi))=X(t,\tau,L_1(\tau,\xi)).$* By Lemma 6.2, we have $$S(t,\xi) \rightarrow -\infty \quad \text{when} \quad \xi \rightarrow 0.$$ When $\xi \rightarrow 0$, it follows from Lemma 6.1 that $$|X(t,S(t,\xi),u(S(t,\xi),t,\xi))| \leq |u(S(t,\xi),t,\xi))|e^{-\alpha_0(t-S(t,\xi))}=e^{-\alpha_0(t-S(t,\xi))} \rightarrow 0.$$ Hence, $L_1(t,\xi)$ is a continuous function of $\xi$. Furthermore, from Lemma 6.3, we have that $$\begin{aligned} L_1(t,u(t,\tau,\xi)) \ &=X(t,S(t,u(t,\tau,\xi)),u(S(t,u(t,\tau,\xi)),t,u(t,\tau,\xi))) \notag\\ &=X(t,S(\tau,\xi),u(S(\tau,\xi),\tau,\xi)) \notag\\ &=X(t,\tau,X(\tau,S(\tau,\xi),u(S(\tau,\xi),\tau,\xi))) \notag\\ &=X(t,\tau,L_1(\tau,\xi)). 
\quad \Box \notag\end{aligned}$$ *$ {\left( \begin{array}{c} L_1(t,u(t,\tau,\xi)) \\ L_2(t,u(t,\tau,\xi),v(t,\tau,\eta)) \end{array} \right)} = \ {\left( \begin{array}{c} X(t,\tau,L_1(\tau,\xi)) \\ Y(t,\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta)) \end{array} \right).} $* Due to Lemma 6.5, we get $$L_1(t,u(t,\tau,\xi))=X(t,\tau,L_1(\tau,\xi)).$$ $$\begin{aligned} \label{le6.5-temp} &L_2(t,u(t,\tau,\xi),v(t,\tau,\eta)) \notag\\ &= v(t,\tau,\eta) - \int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,t,L_1(t,u(t,\tau,\xi))),X(\gamma(s),t,L_1(t,u(t,\tau,\xi)))\Big{)}ds \notag\\ &= v(t,\tau,\eta) - \int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,t,X(t,\tau,L_1(\tau,\xi))),X(\gamma(s),t,X(t,\tau,L_1(\tau,\xi)))\Big{)}ds \notag\\ &= v(t,\tau,\eta) - \int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,\tau,L_1(\tau,\xi)),X(\gamma(s),\tau,L_1(\tau,\xi))\Big{)}ds.\end{aligned}$$ Denote $J(t)=- \int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,\tau,L_1(\tau,\xi)),X(\gamma(s),\tau,L_1(\tau,\xi))\Big{)}ds.$ Suppose $t\in I_j$; we obtain $$\begin{aligned} J'(t)&= - B(t)\int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,\tau,L_1(\tau,\xi)),X(\gamma(s),\tau,L_1(\tau,\xi))\Big{)}ds \notag\\ & - B_0(t)\int_{\gamma(t)}^{+\infty}G_2(t,s)g\Big{(}s,X(s,\tau,L_1(\tau,\xi)),X(\gamma(s),\tau,L_1(\tau,\xi))\Big{)}ds \notag\\ &+ g(t,X(t,\tau,L_1(\tau,\xi)),X(\gamma(t),\tau,L_1(\tau,\xi))). \notag\end{aligned}$$ Furthermore, from , we have $$\begin{aligned} &L_2'(t,u(t,\tau,\xi),v(t,\tau,\eta))\notag\\ =& B(t)L_2(t,u(t,\tau,\xi),v(t,\tau,\eta)) + B_0(t)L_2(\gamma(t),u(t,\tau,\xi),v(t,\tau,\eta)) \notag \\ &+ g(t,X(t,\tau,L_1(\tau,\xi)),X(\gamma(t),\tau,L_1(\tau,\xi))). \notag\end{aligned}$$ Thus $ {\left( \begin{array}{c} L_1(t,u(t,\tau,\xi)) \\ L_2(t,u(t,\tau,\xi),v(t,\tau,\eta)) \end{array} \right)} $ is a solution of system .
From $$\begin{aligned} &{\left( \begin{array}{c} L_1(t,u(t,\tau,\xi)) \\ L_2(t,u(t,\tau,\xi),v(t,\tau,\eta)) \end{array} \right)}|_{t=\tau} = \ {\left( \begin{array}{c} L_1(\tau,\xi) \\ L_2(\tau,\xi,\eta) \end{array} \right)} \notag\end{aligned}$$ and $${\left( \begin{array}{c} X(t,\tau,L_1(\tau,\xi)) \\ Y(t,\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta)) \end{array} \right)}|_{t=\tau} = \ {\left( \begin{array}{c} L_1(\tau,\xi) \\ L_2(\tau,\xi,\eta) \end{array} \right),}$$ we get the conclusion of the lemma. $\Box$ For any $t\in \mathbb{R}$, $x\in \mathbb{R}^{n_1}$ and $y \in \mathbb{R}^{n_2}$, we denote $ H(t,x,y) = \ {\left( \begin{array}{c} H_1(t,x) \\ H_2(t,x,y) \end{array} \right),} $ where $H_1(t,x)$ and $H_2(t,x,y)$ are defined as $$H_1(t,x) = \ {\left\{ \begin{array}{c} u(t,T(t,x),X(T(t,x),t,x)) \quad x \neq 0, \\ 0 \qquad \qquad \qquad \qquad \qquad \qquad x = 0, \end{array} \right.}$$ and $$H_2(t,x,y) = y + \int_{t}^{+\infty}G_2(t,s)g(s,X(s,t,x),X(\gamma(s),t,x))ds.$$ *$H_1(t,x)$ is a continuous function of $x$.* From , we get $$u(t,T(t,x),X(T(t,x),t,x))=Z_1(t,T(t,x))X(T(t,x),t,x),$$ which together with condition **($\mathfrak{D}$)** implies that $$|u(t,T(t,x),X(T(t,x),t,x))|\leq e^{-\alpha(t-T(t,x))}|X(T(t,x),t,x)|\leq e^{-\alpha(t-T(t,x))}, \quad t\geqslant T(t,x).$$ From Lemma 6.2, we have that $$T(t,x)\rightarrow -\infty, \quad \text{when} \quad x \rightarrow 0.$$ Thus $H_1(t,x)$ is a continuous function of $x$. $\Box$ *$${\left( \begin{array}{c} H_1(t,X(t,t_0,x_0)) \\ H_2(t,X(t,t_0,x_0),Y(t,t_0,x_0,y_0)) \end{array} \right)} = \ {\left( \begin{array}{c} u(t,t_0,H_1(t_0,x_0)) \\ v(t,t_0,H_2(t_0,x_0,y_0)) \end{array} \right).}$$* From Lemma 6.3, we have $$\begin{aligned} H_1(t,X(t,t_0,x_0)) \ &= u(t, T(t, X(t,t_0,x_0)), X(T(t,X(t,t_0,x_0)), t, X(t,t_0,x_0))) \notag\\ &= u(t, T(t_0,x_0), X(T(t_0,x_0),t_0,x_0)) \notag\\ &= u(t, t_0, u(t_0, T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))) \notag\\ &= u(t, t_0, H_1(t_0,x_0)).
\notag\end{aligned}$$ $$\begin{aligned} &H_2(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0)) \notag \\ &= Y(t,t_0,x_0,y_0) + \int_{t}^{+\infty}G_2(t,s)g\Big{(}s, X(s, t, X(t,t_0,x_0)), X(\gamma(s), t, X(t,t_0,x_0))\Big{)}ds \notag\\ &= Y(t,t_0,x_0,y_0) + \int_{t}^{+\infty}G_2(t,s)g(s, X(s,t_0,x_0), X(\gamma(s),t_0,x_0))ds. \notag\end{aligned}$$ Since $$\begin{aligned} &H_2'(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0)) \notag \\ =& B(t)H_2(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0)) + B_0(t)H_2(\gamma(t), X(\gamma(t),t_0,x_0), Y(\gamma(t),t_0,x_0,y_0)), \notag\end{aligned}$$ $H_2(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0))$ is a solution of system . Moreover, $$H_2(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0))|_{t=t_0} = H_2(t_0,x_0,y_0).$$ Thus $H_2(t, X(t,t_0,x_0), Y(t,t_0,x_0,y_0))=v(t,t_0,H_2(t_0,x_0,y_0)).$ $\Box$ *For any $t_0 \in \mathbb{R}$, $x_0 \in \mathbb{R}^{n_1}$, $\tau \in \mathbb{R}$ and $\xi \in \mathbb{R}^{n_1}$, we have $$S(t_0, H_1(t_0,x_0)) = T(t_0,x_0), \quad T(\tau, L_1(\tau,\xi)) = S(\tau,\xi).$$* From the definition of $H_1$, we have $$\begin{aligned} &1=|u(S(t_0, H_1(t_0,x_0)), t_0, H_1(t_0,x_0))| \notag \\ &=|u(S(t_0, H_1(t_0,x_0)), t_0, u(t_0, T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))) | \notag \\ &=|u(S(t_0, H_1(t_0,x_0)), T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))|, \notag\end{aligned}$$ which implies that $$S(t_0, H_1(t_0,x_0)) =S(T(t_0,x_0), X(T(t_0,x_0), t_0, x_0)).$$ From $$|u(T(t_0,x_0), T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))| = |X(T(t_0,x_0), t_0, x_0)| = 1,$$ we obtain that $$S(T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))= T(t_0,x_0).$$ Thus $$S(t_0, H_1(t_0,x_0)) = T(t_0,x_0).$$ Similarly, we can prove that $T(\tau, L_1(\tau,\xi)) = S(\tau,\xi).$ $\Box$ *For any $t_0 \in \mathbb{R}$, $x_0 \in \mathbb{R}^{n_1}$ and $y_0 \in \mathbb{R}^{n_2}$, we have $$L(t_0,H(t_0,x_0,y_0))=(x_0,y_0)^T.$$* If $x_0=0$, it is easy to see that $L_1(t_0, H_1(t_0,x_0)) = x_0.$ If $x_0 \neq 0$, from Lemma 6.9 and the definitions of $L_1$ and $H_1$, we get $$\begin{aligned} L_1(t_0, H_1(t_0,x_0)) \ &= X\Big{(}t_0, S(t_0, H_1(t_0,x_0)),
u\big{(}S(t_0, H_1(t_0,x_0)), t_0, H_1(t_0,x_0)\big{)}\Big{)} \notag\\ &= X\Big{(}t_0, T(t_0,x_0), u\big{(}T(t_0,x_0), t_0, u(t_0, T(t_0,x_0), X(T(t_0,x_0), t_0, x_0))\big{)}\Big{)} \notag\\ &= X\Big{(}t_0, T(t_0,x_0), u\big{(}T(t_0,x_0), T(t_0,x_0), X(T(t_0,x_0), t_0, x_0)\big{)}\Big{)} \notag\\ &= X(t_0, T(t_0,x_0), X(T(t_0,x_0), t_0, x_0)) \notag\\ &= x_0, \notag\end{aligned}$$ which together with the definitions of $L_2$ and $H_2$ implies that $$\begin{aligned} &\quad L_2(t_0, H_1(t_0,x_0), H_2(t_0,x_0,y_0)) \notag\\ &= H_2(t_0,x_0,y_0) - \int_{t_0}^{+\infty}G_2(t_0,s)g\Big{(}s, X(s, t_0, L_1(t_0, H_1(t_0,x_0))),X(\gamma(s), t_0, L_1(t_0, H_1(t_0,x_0)))\Big{)}ds \notag\\ &= y_0 + \int_{t_0}^{+\infty}G_2(t_0,s)g(s, X(s,t_0,x_0), X(\gamma(s),t_0,x_0))ds \notag\\ &\qquad - \int_{t_0}^{+\infty}G_2(t_0,s)g(s,X(s,t_0,x_0),X(\gamma(s),t_0,x_0))ds \notag\\ &= y_0. \notag \quad \Box\end{aligned}$$ *For any $\tau \in \mathbb{R}$, $\xi \in \mathbb{R}^{n_1}$ and $\eta \in \mathbb{R}^{n_2}$, we have $$H(\tau,L(\tau,\xi,\eta))=(\xi,\eta)^T.$$* If $\xi=0$, it is obvious that $H_1(\tau,L_1(\tau,\xi))=\xi.$ If $\xi \neq 0$, by Lemma 6.9 and the definitions of $H_1$ and $L_1$, we obtain $$\begin{aligned} H_1(\tau,L_1(\tau,\xi)) \ &= u\Big{(}\tau, T(\tau,L_1(\tau,\xi)), X\big{(}T(\tau,L_1(\tau,\xi)), \tau, L_1(\tau,\xi)\big{)}\Big{)} \notag\\ &= u\Big{(}\tau, S(\tau,\xi), X\big{(}S(\tau,\xi), \tau, L_1(\tau,\xi)\big{)}\Big{)} \notag\\ &= u\Big{(}\tau, S(\tau,\xi), X\big{(}S(\tau,\xi), \tau, X(\tau, S(\tau,\xi), u(S(\tau,\xi), \tau, \xi))\big{)}\Big{)} \notag\\ &= u(\tau, S(\tau,\xi), u(S(\tau,\xi), \tau, \xi)) \notag\\ &= \xi. 
\notag\end{aligned}$$ In what follows, we prove that $H_2(\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta))=\eta.$ For any $t \in \mathbb{R}, x \in \mathbb{R}^{n_1}$ and $y \in \mathbb{R}^{n_2}$, due to Lemma 6.4, we have $$\begin{aligned} |H_2(t,x,y)-y| \ &= |\int_{t}^{+\infty}G_2(t,s)g(s,X(s,t,x),X(\gamma(s),t,x))ds| \notag\\ &\leq K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|x|. \notag\end{aligned}$$ From Lemma 6.4 and the definition of $L_2$, we obtain $$\begin{aligned} |L_2(t,\xi,\eta)-\eta| \ &\leq |\int_{t}^{+\infty}G_2(t,s)g\Big{(}s,X(s,t,L_1(t,\xi)),X(\gamma(s),t,L_1(t,\xi))\Big{)}ds| \notag\\ &\leq K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|L_1(t,\xi)|. \notag\end{aligned}$$ Thus, by the above two estimates and Lemma 6.5 we get $$\begin{aligned} &J\triangleq|H_2(t, L_1(t,u(t,\tau,\xi)), L_2(t, u(t,\tau,\xi), v(t,\tau,\eta))) - v(t,\tau,\eta)| \notag\\ &\leq |H_2(t, L_1(t,u(t,\tau,\xi)), L_2(t, u(t,\tau,\xi), v(t,\tau,\eta))) - L_2(t, u(t,\tau,\xi), v(t,\tau,\eta))| \notag\\ &\quad + |L_2(t, u(t,\tau,\xi), v(t,\tau,\eta)) - v(t,\tau,\eta)| \notag\\ &\leq 2K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|L_1(t,u(t,\tau,\xi))|\notag\\ &\leq 2K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|X(t,\tau,L_1(\tau,\xi))|. \notag\end{aligned}$$ It follows from Lemma 6.1 that $$\label{Lemma6.11-temp1} J\leq 2K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|L_1(\tau,\xi)|e^{-\alpha_0(t-\tau)}, \quad t\geqslant\tau.$$ From , Lemmas 6.6 and 6.8, we have $$\begin{aligned} &\quad H_2(t, L_1(t,u(t,\tau,\xi)), L_2(t,u(t,\tau,\xi),v(t,\tau,\eta))) \notag\\ &= H_2(t, X(t,\tau,L_1(\tau,\xi)), Y(t,\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta))) \notag\\ &= v(t, \tau, H_2(\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta))) \notag \\ &= Z_2(t,\tau)H_2(\tau, L_1(\tau,\xi), L_2(\tau,\xi,\eta)).
\notag\end{aligned}$$ By and $v(t,\tau,\eta)=Z_2(t,\tau)\eta$, we get $$\begin{aligned} &\quad|Z_2(t,\tau)\cdot\Big{(}H_2(\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta))-\eta\Big{)}| \notag\\ &= |H_2(t, L_1(t,u(t,\tau,\xi)), L_2(t, u(t,\tau,\xi), v(t,\tau,\eta))) - v(t,\tau,\eta)| \notag\\ &\leq 2K\lambda \tilde{\rho}(B)((\alpha+\rho_0)^{-1}+\alpha^{-1})|L_1(\tau,\xi)|e^{-\alpha_0(t-\tau)}, \quad t\geqslant\tau. \notag\end{aligned}$$ For fixed $\tau$ and $\xi$, $L_1(\tau,\xi)$ is a fixed value. Thus the left-hand side of the above inequality is bounded when $t\geq \tau$. Moreover, it follows from condition **($\mathfrak{D}$)** that it is also bounded when $t\leq \tau$. Therefore, $Z_2(t,\tau)\cdot\Big{(}H_2(\tau,L_1(\tau,\xi),L_2(\tau,\xi,\eta))-\eta\Big{)}$ is a bounded solution of system . Since system has an $\alpha$-exponential dichotomy, for fixed $\tau$, $\xi$ and $\eta$, it has a unique bounded solution, namely the zero solution. Thus $$H_2(\tau, L_1(\tau,\xi), L_2(\tau,\xi,\eta)) - \eta = 0,$$ that is, $H_2(\tau, L_1(\tau,\xi), L_2(\tau,\xi,\eta)) = \eta.$ $\Box$ *System is topologically conjugate to system .* It follows from Lemmas 6.10 and 6.11 that for a fixed $t$, $H(t,x,y): \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \rightarrow \mathbb{R}^{n}$ is a bijection and $H^{-1}(t,x,y)=L(t,x,y).$ According to Lemma 5.4 and Remark 5.3, solutions of systems and are continuous with respect to initial values. By the definitions of $H(t,\cdot)$ and $L(t,\cdot)$, and Lemmas 6.5 and 6.7, we get that both $H(t,\cdot)$ and $L(t,\cdot)$ are continuous. Thus $H(t,\cdot)$ and $L(t,\cdot)$ are homeomorphisms of $\mathbb{R}^n$. Moreover, Lemmas 6.6 and 6.8 imply that $H(t,\cdot)$ sends the solutions of system onto those of system and $L(t,\cdot)$ sends the solutions of system onto those of system . Therefore, system and system are topologically conjugate.
$\Box$ System is topologically conjugate to system ============================================ First we introduce a new system $$\label{sysnlxy-pq} \left\{\begin{array}{lc} x' = A(t)x(t) + A_0(t)x(\gamma(t)) +f(t,x(t),x(\gamma(t))) + p(t,y(t),y(\gamma(t))) \\ y' = B(t)y(t) + B_0(t)y(\gamma(t))+ g(t,x(t),x(\gamma(t))) + q(t,y(t),y(\gamma(t))), \end{array}\right.$$ where $f(t,\cdot)$ and $g(t,\cdot)$ are defined in system , and $p:\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2}\rightarrow\mathbb{R}^{n_1}$ and $q:\mathbb{R}\times\mathbb{R}^{n_2}\times\mathbb{R}^{n_2}\rightarrow\mathbb{R}^{n_2}$ satisfy, for the $\delta$ and $\omega$ in **($\mathfrak{B}_2$)** and any $ t \in \mathbb{R}$, $y_1, y_2, \bar{y}_1,\bar{y}_2 \in \mathbb{R}^{n_2}$, $$|p(t,y_1, y_2)| \leq \delta, \quad |q(t,y_1, y_2)| \leq \delta,$$ $$|p(t,y_1, y_2)-p(t,\bar{y}_1,\bar{y}_2)| \leq \omega(|y_1-\bar{y}_1|+|y_2-\bar{y}_2|),$$ $$|q(t,y_1,y_2)-q(t,\bar{y}_1,\bar{y}_2)| \leq \omega(|y_1-\bar{y}_1|+|y_2-\bar{y}_2|).$$ *If holds, then there exists a unique function $\bar{H}(t,x,y): \mathbb{R} \times \mathbb{R}^{n_1+n_2} \rightarrow \mathbb{R}^{n_1+n_2}$ satisfying that* (i) : *There exists a constant $\bar{\sigma}>0$ such that $$|\bar{H}(t,x,y)- (x, y)^T| \leq \bar{\sigma}.$$* (ii) : *If $ {\left( \begin{array}{c} x(t) \\ y(t) \end{array} \right)} $ is a solution of system , then $\bar{H}(t,x(t),y(t))$ is a solution of system .* For any fixed $\tau \in \mathbb{R}$, $\xi \in \mathbb{R}^{n_1}$ and $\eta \in \mathbb{R}^{n_2}$, suppose that $ {\left( \begin{array}{c} x(t,\tau,\xi,\eta) \\ y(t,\tau,\xi,\eta) \end{array} \right)} $ is a solution of system satisfying $ {\left( \begin{array}{c} x(\tau,\tau,\xi,\eta) \\ y(\tau,\tau,\xi,\eta) \end{array} \right)} = {\left( \begin{array}{c} \xi \\ \eta \end{array} \right).} $ Denote $ z(t)= {\left( \begin{array}{c} z_1(t) \\ z_2(t) \end{array} \right)} $ where $z_1(t) \in \mathbb{R}^{n_1}$ and $z_2(t) \in \mathbb{R}^{n_2},$ $ W(t)= {\left[
\begin{array}{cc} A(t) & \\ & B(t) \end{array} \right],} $ $ W_0(t)= {\left[ \begin{array}{cc} A_0(t) & \\ & B_0(t) \end{array} \right]} $ and $$\begin{aligned} &\quad \quad\bar{ h}(t,z(t),z(\gamma(t)),(\tau,\xi,\eta))\notag\\ &= {\left( \begin{array}{c} \bar{h}_1(t,z(t),z(\gamma(t)),(\tau,\xi,\eta)) \\ \bar{h}_2(t,z(t),z(\gamma(t)),(\tau,\xi,\eta)) \end{array} \right)} \notag\\ &= {\left( \begin{array}{c} f(t,x(t,\tau,\xi,\eta)+z_1(t),x(\gamma(t),\tau,\xi,\eta)+z_1(\gamma(t))) \\ g(t,x(t,\tau,\xi,\eta)+z_1(t),x(\gamma(t),\tau,\xi,\eta)+z_1(\gamma(t))) \end{array} \right)} \notag \\ &\quad\quad+ {\left( \begin{array}{c} p(t,y(t,\tau,\xi,\eta)+z_2(t),y(\gamma(t),\tau,\xi,\eta)+z_2(\gamma(t))) \\ q(t,y(t,\tau,\xi,\eta)+z_2(t),y(\gamma(t),\tau,\xi,\eta)+z_2(\gamma(t))) \end{array} \right)} \notag \\ &\quad\quad+ {\left( \begin{array}{c} - f(t,x(t,\tau,\xi,\eta),x(\gamma(t),\tau,\xi,\eta)) - \phi(t,y(t,\tau,\xi,\eta),y(\gamma(t),\tau,\xi,\eta)) \\ - g(t,x(t,\tau,\xi,\eta),x(\gamma(t),\tau,\xi,\eta)) - \psi(t,y(t,\tau,\xi,\eta),y(\gamma(t),\tau,\xi,\eta)) \end{array} \right).} \notag\end{aligned}$$ From $$|\bar{h}(t,z(t),z(\gamma(t)),(\tau,\xi,\eta))| \leq 2\lambda|z(t)| + 2\lambda|z(\gamma(t))|+4\delta,$$ $$\begin{aligned} &\quad |\bar{h}(t,z(t),z(\gamma(t)),(\tau,\xi,\eta)) - \bar{h}(t,\bar{z}(t),\bar{z}(\gamma(t)),(\tau,\xi,\eta))| \notag\\ &\leq 2\omega |z(t)-\bar{z}(t)|+2\omega |z(\gamma(t))-\bar{z}(\gamma(t))|, \notag\end{aligned}$$ and Theorem 1, we get that system $$\label{Lemma7.1-temp1} z'(t)= W(t)z(t) +W_0(t)z(\gamma(t))+ \bar{h}(t,z(t),z(\gamma(t)),(\tau,\xi,\eta))$$ has a unique bounded solution for fixed $\tau$, $\xi$ and $\eta$. 
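The contraction argument behind this unique bounded solution can be imitated numerically: one iterates the integral operator whose fixed point is the bounded solution. The sketch below is a toy scalar instance with all data assumed for illustration (stable coefficient $-1$, forcing $\delta\sin s$, Lipschitz constant $\ell$, and the retarded special case $\gamma(s)=\lfloor s\rfloor$), with the lower integration limit truncated.

```python
import math

# Assumed toy instance (scalar, retarded special case gamma(s) = floor(s)):
#   z(t) = int_{-inf}^t e^{-(t-s)} [ delta*sin(s) + ell*( z(s) + z(floor(s)) ) ] ds.
# delta and ell are chosen so the Picard operator is a contraction (2*ell < 1).
delta, ell = 0.3, 0.05
t0, t1, dt = -30.0, 30.0, 0.01
N = int(round((t1 - t0) / dt)) + 1
grid = [t0 + k * dt for k in range(N)]

def floor_idx(s):
    """Grid index of the deviated argument floor(s)."""
    return int(round((math.floor(s) - t0) / dt))

def picard(z):
    """One application of the integral operator, lower limit truncated at t0."""
    out = [0.0] * N
    acc = 0.0
    decay = math.exp(-dt)
    for k in range(1, N):
        s = grid[k - 1]
        f = delta * math.sin(s) + ell * (z[k - 1] + z[floor_idx(s)])
        acc = acc * decay + f * dt      # exponential-Euler update of the integral
        out[k] = acc
    return out

z = [0.0] * N
for _ in range(40):
    z_new = picard(z)
    diff = max(abs(u - v) for u, v in zip(z, z_new))
    z = z_new
assert diff < 1e-10                      # the iteration has converged
assert max(abs(v) for v in z) < 0.5      # and the limit is bounded
```

Successive iterates contract at rate roughly $2\ell/\alpha=0.1$ here, mirroring the smallness condition that makes the fixed-point operator a contraction in the sup norm.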
We denote this unique bounded solution by $$\chi(t,(\tau,\xi,\eta)) = {\left( \begin{array}{c} \chi_1(t,(\tau,\xi,\eta)) \\ \chi_2(t,(\tau,\xi,\eta)) \end{array} \right)}, \quad \text{which satisfies} \quad |\chi(t,(\tau,\xi,\eta))| \leq \bar{\sigma},$$ where $\chi_1(t,(\tau,\xi,\eta))\in \mathbb{R}^{n_1}$ and $\chi_2(t,(\tau,\xi,\eta))\in \mathbb{R}^{n_2}.$ For any $t \in \mathbb{R}$, $\xi \in \mathbb{R}^{n_1}$ and $\eta \in \mathbb{R}^{n_2}$, define $$\bar{H}(t,\xi,\eta) ={\left( \begin{array}{c} \bar{H}_1(t,\xi,\eta) \\ \bar{H}_2(t,\xi,\eta) \end{array} \right)}= {\left( \begin{array}{c} \xi + \chi_1(t,(t,\xi,\eta)) \\ \eta + \chi_2(t,(t,\xi,\eta)) \end{array} \right).}$$ Thus $\bar{H}(t,\xi,\eta)$ is continuous on $\mathbb{R} \times \mathbb{R}^{n_1+n_2}$ and $$\left| \bar{H}(t,\xi,\eta) - {\left( \begin{array}{c} \xi \\ \eta \end{array} \right)} \right| \leq \bar{\sigma}.$$ Moreover, $$\bar{H}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta)) = {\left( \begin{array}{c} x(t,\tau,\xi,\eta) + \chi_1(t,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta)))\\ y(t,\tau,\xi,\eta) + \chi_2(t,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) \end{array} \right),}$$ where $ \chi(s,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) $ is the unique bounded solution of system $$\frac{dz}{ds} = W(s)z(s) +W_0(s)z(\gamma(s))+ \bar{h}(s,z(s),z(\gamma(s)),(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))).$$ From $$x(s,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) = x(s,\tau,\xi,\eta),$$ $$y(s,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) = y(s,\tau,\xi,\eta),$$ we have $$\bar{h}(s,z(s),z(\gamma(s)),(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) = \bar{h}(s,z(s),z(\gamma(s)),(\tau,\xi,\eta)).$$ Thus $$\chi(s,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) = \chi(s,(\tau,\xi,\eta)), \quad \forall s \in \mathbb{R}.$$ Taking $s=t$, we get $$\chi(t,(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))) = \chi(t,(\tau,\xi,\eta)).$$ Therefore, $ \bar{H}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta)) = {\left( \begin{array}{c} x(t,\tau,\xi,\eta)+\chi_1(t,(\tau,\xi,\eta))\\
y(t,\tau,\xi,\eta)+\chi_2(t,(\tau,\xi,\eta)) \end{array} \right).} $ One can check that $\bar{H}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))$ is a solution of system and\ $|\bar{H}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))-(x(t,\tau,\xi,\eta), y(t,\tau,\xi,\eta))^T|$ is bounded. Therefore $\bar{H}(t,x,y)$ satisfies (i) and (ii). Assume that $\bar{K}(t,x,y)={\left( \begin{array}{c} \bar{K}_1(t,x,y) \\ \bar{K}_2(t,x,y) \end{array} \right)}$ satisfies (i) and (ii), too, where $\bar{K}_1(t,x,y)\in \mathbb{R}^{n_1}$ and $\bar{K}_2(t,x,y)\in \mathbb{R}^{n_2}$. Since $ {\left( \begin{array}{c} x(t,\tau,\xi,\eta) \\ y(t,\tau,\xi,\eta) \end{array} \right)} $ is the solution of system , $\bar{K}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))$ is a solution of system . Denote $ w(t) = {\left( \begin{array}{c} w_1(t) \\ w_2(t) \end{array} \right)} = {\left( \begin{array}{c} \bar{K}_1(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta)) -x(t,\tau,\xi,\eta) \\ \bar{K}_2(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta))-y(t,\tau,\xi,\eta) \end{array} \right).} $ From $w'(t)=W(t)w(t) +W_0(t)w(\gamma(t))+ \bar{h}(t,w(t),w(\gamma(t)),(\tau,\xi,\eta))$, we have that $w(t)$ is a bounded solution of system . Therefore $$w(t)=\chi(t,(\tau,\xi,\eta)).$$ Thus $ \bar{K}(t,x(t,\tau,\xi,\eta),y(t,\tau,\xi,\eta)) = {\left( \begin{array}{c} x(t,\tau,\xi,\eta)+ \chi_1(t,(\tau,\xi,\eta)) \\ y(t,\tau,\xi,\eta)+ \chi_2(t,(\tau,\xi,\eta)) \end{array} \right).} $ Taking $ t = \tau$, we have $$\bar{K}(\tau,\xi,\eta) = {\left( \begin{array}{c} \xi+ \chi_1(\tau, (\tau,\xi,\eta)) \\ \eta+ \chi_2(\tau, (\tau,\xi,\eta)) \end{array} \right)} = \bar{H}(\tau,\xi,\eta).$$ Thus $\bar{H}(t,x,y)$ is the unique function satisfying conditions (i) and (ii). We complete the proof.
$\Box$ *System is topologically conjugate to system .* From Lemma 7.1, for any $t \in \mathbb{R}$, $x, \tilde{x} \in \mathbb{R}^{n_1}$ and $y, \tilde{y} \in \mathbb{R}^{n_2}$, there exists a unique function $\tilde{H}(t,x,y)$ satisfying (i) : There exists a constant $\sigma_1>0$ such that $$|\tilde{H}(t,x,y) - (x,y)^T|\leq \sigma_1.$$ (ii) : If $ {\left( \begin{array}{c} x(t) \\ y(t) \end{array} \right)} $ is a solution of system , then $\tilde{H}(t,x(t),y(t))$ is a solution of system . Similarly, there exists a unique function $\tilde{L}(t,\tilde{x},\tilde{y})$ satisfying (i) : There exists a constant $\sigma_2>0$ such that $$|\tilde{L}(t,\tilde{x},\tilde{y}) -(\tilde{x},\tilde{y})^T|\leq \sigma_2.$$ (ii) : If $ {\left( \begin{array}{c} \tilde{x}(t) \\ \tilde{y}(t) \end{array} \right)} $ is a solution of system , then $\tilde{L}(t,\tilde{x}(t),\tilde{y}(t))$ is a solution of system . In what follows, we prove that $ \tilde{L}(t,\tilde{H}(t,x,y)) =(x,y)^T$ and $\tilde{H}(t,\tilde{L}(t,x,y)) =(x, y)^T$. Denote $\tilde{J}(t,x,y)=\tilde{L}(t,\tilde{H}(t,x,y))$. If $ {\left( \begin{array}{c} x(t) \\ y(t) \end{array} \right)} $ is a solution of system , then $\tilde{H}(t,x(t),y(t))$ is a solution of system . Thus $\tilde{L}(t,\tilde{H}(t,x(t),y(t)))$ is a solution of system . By a simple calculation, we get $$|\tilde{J}(t,x,y) -(x, y)^T|\leq |\tilde{L}(t,\tilde{H}(t,x,y))-\tilde{H}(t,x,y)| +|\tilde{H}(t,x,y) -(x,y)^T|\leq \sigma_1+\sigma_2.$$ Therefore $\tilde{J}(t,x,y)$ satisfies the conditions (i) and (ii) in Lemma 7.1 and transforms the solutions of system into those of itself. In particular, taking $p=\phi$ and $q=\psi$ in system , system becomes system . From system to itself, for any $t \in \mathbb{R}, x \in \mathbb{R}^{n_1}, y \in \mathbb{R}^{n_2}$, the function $\bar{H}(t,x,y)= {\left( \begin{array}{c} x \\ y \end{array} \right)} $ satisfies the conditions (i) and (ii) in Lemma 7.1.
Thus, for any $t \in \mathbb{R}$, $x \in \mathbb{R}^{n_1}$ and $y \in \mathbb{R}^{n_2},$ $$\tilde{J}(t,x,y)=\bar{H}(t,x,y)= {\left( \begin{array}{c} x \\ y \end{array} \right)}.$$ That is, $$\tilde{L}(t,\tilde{H}(t,x,y))= {\left( \begin{array}{c} x \\ y \end{array} \right),} \quad \forall t \in \mathbb{R}, x \in \mathbb{R}^{n_1}, y \in \mathbb{R}^{n_2}.$$ Applying Lemma 7.1 to system with $p=0$ and $q=0$, we can prove that $$\tilde{H}(t,\tilde{L}(t,\tilde{x},\tilde{y}))= {\left( \begin{array}{c} \tilde{x} \\ \tilde{y} \end{array} \right),} \quad \forall t \in \mathbb{R}, \tilde{x} \in \mathbb{R}^{n_1}, \tilde{y}\in \mathbb{R}^{n_2}.$$ Therefore, for fixed $t$, $\tilde{H}^{-1}(t,\cdot,\cdot)=\tilde{L}(t,\cdot,\cdot)$. According to Lemma 5.4 and Remark 5.3, solutions of systems and are continuous with respect to initial values. Since both $\tilde{H}(t,\cdot)$ and $\tilde{L}(t,\cdot)$ are continuous, $\tilde{H}(t,\cdot)$ and $\tilde{L}(t,\cdot)$ are homeomorphisms of $\mathbb{R}^n$. Thus system is topologically conjugate to system . The proof is complete. $\Box$

The proof of Theorem 2
======================

From Lemmas 6.12 and 7.2, we have that $H(t,\cdot)\circ\tilde{H}(t,\cdot)$ and $\tilde{L}(t,\cdot)\circ L(t,\cdot)$ are homeomorphisms of $\mathbb{R}^n$ and $\big{(}H(t,\cdot)\circ\tilde{H}(t,\cdot)\big{)}^{-1}=\tilde{L}(t,\cdot)\circ L(t,\cdot)$. Moreover, $H(t,\cdot)\circ\tilde{H}(t,\cdot)$ sends the solutions of system onto those of system and $\tilde{L}(t,\cdot)\circ L(t,\cdot)$ sends the solutions of system onto those of system . It is easy to see that $|H(t,\cdot)\circ\tilde{H}(t,(x,y)^T)-(x,y)^T|$ and $|\tilde{L}(t,\cdot)\circ L(t,(x,y)^T)-(x,y)^T|$ are bounded. Therefore system and system are topologically conjugate. $\Box$

Conflict of Interests
=====================

The authors declare that there is no conflict of interests regarding the publication of this article.

[^1]: Changwu Zou was supported by the National Natural Science Foundation of China under Grant (No. 11471027) and the Foundation of Fujian Province Education Department under Grant (No. JAT160082).

[^2]: Corresponding author. Yonghui Xia was supported by the National Natural Science Foundation of China under Grants (No. 11671176 and No. 11271333), the Natural Science Foundation of Zhejiang Province under Grant (No. Y15A010022), a Marie Curie Individual Fellowship within the European Community Framework Programme (MSCA-IF-2014-EF), the Scientific Research Funds of Huaqiao University, and the China Postdoctoral Science Foundation (No. 2014M562320).

[^3]: Manuel Pinto was supported by FONDECYT Grants (No. 1120709 and No. 1170466).
--- author: - Tobias Hartung and Karl Jansen title: 'Integrating Gauge Fields in the $\zeta$-formulation of Feynman’s path integral' --- Introduction ============ Feynman’s path integral is a fundamental building block of modern quantum field theory. For instance, the time evolution semigroup $(U(t,s))_{t,s\in{\ensuremath{\mathbb{R}}}_{\ge0}}$ of a quantum field theory is a semigroup of integral operators whose kernels are given by the path integral. In terms of the Hamiltonian $H$ of a given quantum field theory, $U$ is the semigroup generated by $-\frac{i}{\hbar}H$, i.e., $U$ formally satisfies $U(t,s)={\ensuremath{\mathrm{Texp}}}{\ensuremath{\left}}(-\frac{i}{\hbar}\int_s^t H(\tau)d\tau{\ensuremath{\right}})$ where ${\ensuremath{\mathrm{Texp}}}$ is the time-ordered exponential for unbounded operators as to be understood in terms of the time-dependent Hille-Yosida Theorem (e.g., Theorem 5.3.1 in [@pazy]). Furthermore, the path integral is intimately connected to vacuum expectation values which play two very crucial roles. On one hand, vacuum expectation values are physical and allow for experimental verification and thus to test theories. On the other hand, vacuum expectation values of $n$ field operators (so called $n$-point functions) uniquely determine the quantum field theory by Wightman’s Reconstruction Theorem (Theorem 3-7 in [@streater-wightman]). Let us consider a quantum field theory with Hilbert space ${\ensuremath{\mathcal{H}}}$ and time evolution semigroup $U$. Then, the vacuum expectation value $\langle A\rangle$ of an observable $A$ can be expressed as $$\begin{aligned} \tag{$*$}\label{eq:vev-formal} \langle A\rangle=\lim_{T\to\infty+i0^+}\frac{{\ensuremath{\operatorname{tr}}}{\ensuremath{\left}}(U(T,0)A{\ensuremath{\right}})}{{\ensuremath{\operatorname{tr}}}U(T,0)}\end{aligned}$$ where the denominator ${\ensuremath{\operatorname{tr}}}U(T,0)$ is also known as the partition function. 
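To see why the trace ratio in ($*$) is the natural object, it helps to look at a finite-dimensional toy model where all traces exist: continuing the time evolution to imaginary time, the ratio $\operatorname{tr}(e^{-\beta H}A)/\operatorname{tr}(e^{-\beta H})$ converges to the ground-state expectation $\langle 0|A|0\rangle$ as $\beta\to\infty$, since all excited states are suppressed by $e^{-\beta(E_n-E_0)}$. The following is a minimal numerical sketch, using a random $6\times 6$ Hermitian toy Hamiltonian and imaginary time as a surrogate for the contour $T\to\infty+i0^+$; it is purely illustrative and not the Fourier integral operator setting discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Hermitian toy Hamiltonian with known spectrum 0, 1, ..., 5 (nondegenerate
# ground state), conjugated by a random unitary so nothing is diagonal.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
E = np.arange(n, dtype=float)
H = Q @ np.diag(E) @ Q.conj().T
print(np.allclose(H @ Q, Q @ np.diag(E)))        # spectral decomposition check

C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (C + C.conj().T) / 2                         # Hermitian "observable"

ground = Q[:, 0]                                 # eigenvector of the lowest eigenvalue
vev_exact = (ground.conj() @ A @ ground).real

def trace_ratio(beta):
    """tr(e^{-beta*H} A) / tr(e^{-beta*H}), the imaginary-time analogue
    of the trace ratio tr(U(T,0) A) / tr(U(T,0))."""
    U = Q @ np.diag(np.exp(-beta * E)) @ Q.conj().T
    return (np.trace(U @ A) / np.trace(U)).real

for beta in (1.0, 5.0, 40.0):
    print(f"beta={beta:5.1f}  ratio={trace_ratio(beta):+.6f}")
print(f"ground-state expectation  {vev_exact:+.6f}")
```

Already at moderate $\beta$ the ratio agrees with the ground-state expectation to high precision; the obstruction described next is that no such trace exists for the continuum operators themselves.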
Upon closer inspection, however, ($*$) reveals one of the major mathematical obstacles. The traces on the right hand side of ($*$) should be the canonical trace on trace-class operators ${\ensuremath{\mathcal{S}}}_1({\ensuremath{\mathcal{H}}})$, but for a continuum theory $U(T,0)$ is a bounded, non-compact operator and $U(T,0)A$ is in general an unbounded operator on ${\ensuremath{\mathcal{H}}}$. Vacuum expectation values are thus generally understood only in terms of discretized quantum field theories. This is the starting point of lattice quantum field theory, for instance, and great computational effort is necessary to extrapolate the continuum limit from these discretized vacuum expectation values. If we wish to understand ($*$) in the continuum, however, the traces need to be constructed in such a way that they coincide with the canonical trace on ${\ensuremath{\mathcal{S}}}_1({\ensuremath{\mathcal{H}}})$ provided $U(T,0),U(T,0)A\in{\ensuremath{\mathcal{S}}}_1({\ensuremath{\mathcal{H}}})$. One such trace construction technique is based on operator $\zeta$-functions. They were introduced by Ray and Singer [@ray; @ray-singer] for pseudo-differential operators and first proposed as a regularization method for path integrals in perturbation theory by Hawking [@hawking]. The Fourier integral operator $\zeta$-function approach generalizes the pseudo-differential framework to non-perturbative settings with general metrics (Euclidean and Lorentzian) and includes special cases like lattice discretizations in a Lorentzian background. Given an operator $A$ and a trace $\tau$ for which we want to define $\tau(A)$, we construct a holomorphic family ${\ensuremath{\mathfrak{A}}}$ such that ${\ensuremath{\mathfrak{A}}}(0)=A$ and there exists a maximal open and connected subset $\Omega$ of ${\ensuremath{\mathbb{C}}}$ for which ${\ensuremath{\mathfrak{A}}}$ maps $\Omega$ into the domain of $\tau$.
In general, we construct ${\ensuremath{\mathfrak{A}}}$ such that $\Omega$ contains a half-space ${\ensuremath{\mathbb{C}}}_{\Re(\cdot)<R}:=\{z\in{\ensuremath{\mathbb{C}}};\ \Re(z)<R\}$ for some $R\in{\ensuremath{\mathbb{R}}}$. Then, we define the $\zeta$-function $\zeta({\ensuremath{\mathfrak{A}}})$ to be the meromorphic extension of $\Omega\ni z\mapsto\tau({\ensuremath{\mathfrak{A}}}(z))\in{\ensuremath{\mathbb{C}}}$ to an open, connected neighborhood of $0$ (provided it exists). If $\zeta({\ensuremath{\mathfrak{A}}})$ is holomorphic in a neighborhood of $0$ and $\zeta({\ensuremath{\mathfrak{A}}})(0)$ depends only on $A$ and not the explicit choice of ${\ensuremath{\mathfrak{A}}}$ (that is, if ${\ensuremath{\mathfrak{B}}}$ is another admissible choice of holomorphic family with ${\ensuremath{\mathfrak{B}}}(0)={\ensuremath{\mathfrak{A}}}(0)$, then $\zeta({\ensuremath{\mathfrak{A}}})(0)=\zeta({\ensuremath{\mathfrak{B}}})(0)$), then we can define $\tau(A)$ as $\zeta({\ensuremath{\mathfrak{A}}})(0)$. 
For example, if $A$ is a positive operator whose spectrum $\sigma(A)$ is discrete and free from accumulation points, then we could define ${\ensuremath{\mathfrak{A}}}(z):=A^{1+z}$ and $\zeta({\ensuremath{\mathfrak{A}}})$ is given by the meromorphic extension of $z\mapsto\sum_{\lambda\in\sigma(A)\setminus\{0\}}\lambda^{1+z}$ (counting multiplicities); hence, giving rise to the name “operator $\zeta$-function.” This is precisely how Hawking [@hawking] employed $\zeta$-regularization; it has been used successfully in many physical settings (e.g., the Casimir effect, defining one-loop functional determinants, the stress-energy tensor, conformal field theory, and string theory [@beneventano-santangelo; @blau-visser-wipf; @bordag-elizalde-kirsten; @bytsenko-et-al; @culumovic-et-al; @dowker-critchley; @elizalde2001; @elizalde; @elizalde-et-al; @elizalde-vanzo-zerbini; @fermi-pizzocchero; @hawking; @iso-murayama; @marcolli-connes; @mckeon-sherry; @moretti97; @moretti99; @moretti00; @moretti11; @robles; @shiekh; @tong-strings]), and is related to Hadamard parametrix renormalization [@hack-moretti]. This approach has been fundamental for many subsequent developments as it allows for an effective Lagrangian to be defined [@blau-visser-wipf] as well as heat kernel coefficients to be computed easily [@bordag-elizalde-kirsten], and implies non-trivial extensions of the Chowla-Selberg formula [@elizalde2001]. Furthermore, the residues have been studied extensively because they give rise to the multiplicative anomaly which appears in perturbation theory [@elizalde-vanzo-zerbini] and contributes a substantial part to the energy momentum tensor of a black hole for instance [@hawking]. Kontsevich and Vishik [@kontsevich-vishik; @kontsevich-vishik-geometry] showed that this construction gives rise to a well-defined (unbounded) trace for pseudo-differential operators. Their approach was later extended to Fourier integral operators [@hartung-phd; @hartung-scott].
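In the simplest instance of this example one can watch the analytic continuation at work: for the spectrum $\{1,2,3,\ldots\}$ and the gauged family $\mathfrak{A}(z)=A^{1+z}$ (one admissible choice with $\mathfrak{A}(0)=A$), the sum $\sum_k k^{1+z}$ converges for $\Re(z)<-2$ and extends meromorphically to $\zeta_R(-1-z)$, so the $\zeta$-trace at $z=0$ is $\zeta_R(-1)=-\frac{1}{12}$. A brief check, using the `mpmath` library as a tooling choice of this sketch:

```python
from mpmath import mp, mpf, zeta, nsum, inf

mp.dps = 30

# In the half-plane of convergence Re(z) < -2, the direct sum and the
# meromorphic extension zeta_R(-1-z) must agree.
z = mpf(-3)
direct = nsum(lambda k: k**(1 + z), [1, inf])   # sum_k k^(-2) = pi^2/6
continued = zeta(-1 - z)                        # zeta_R(2)
print(direct, continued)

# The zeta-regularized trace is the value of the extension at z = 0.
trace_A = zeta(-1)                              # zeta_R(-1) = -1/12
print(trace_A)
```

The agreement inside the half-plane of convergence is what justifies reading the value of the extension at $z=0$ as "the" trace of $A$.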
Since Radzikowski [@radzikowski92; @radzikowski96] showed that the operators $U(T,0)A$ and $U(T,0)$ are pseudo-differential operators (Euclidean spacetimes) or more generally Fourier integral operators (Lorentzian spacetimes), we can apply this framework of operator $\zeta$-functions to the definition of vacuum expectation values as it was first done in [@hartung; @hartung-iwota] and define a $\zeta$-regularized vacuum expectation value of $A$ to be $$\begin{aligned} \langle A\rangle_\zeta:=\lim_{z\to0}\lim_{T\to\infty+i0^+}\frac{\zeta(U(T,0){\ensuremath{\mathfrak{G}}}A)}{\zeta(U(T,0){\ensuremath{\mathfrak{G}}})}(z)\end{aligned}$$ where ${\ensuremath{\mathfrak{G}}}$ is a suitable family of Fourier integral operators (usually pseudo-differential) with ${\ensuremath{\mathfrak{G}}}(0)=1$ such that $U(T,0){\ensuremath{\mathfrak{G}}}A$ and $U(T,0){\ensuremath{\mathfrak{G}}}$ satisfy the assumptions on the construction of the corresponding operator $\zeta$-functions. If we consider ${\ensuremath{\left}}(U(t,s){\ensuremath{\mathfrak{G}}}(z){\ensuremath{\right}})_{s,t\in{\ensuremath{\mathbb{R}}}_{\ge0}}$ to be the time evolution semigroup of a quantum field theory $QFT(z)$, then this essentially means that we construct a “holomorphic family of quantum field theories $QFT$” such that the vacuum expectation value of $A$ in $QFT(z)$ is well-defined in Feynman’s sense for $z$ in some open subset $\Omega$ of ${\ensuremath{\mathbb{C}}}$ and the vacuum expectation value of $A$ in the quantum field theory $QFT(0)$, that we wish to study, is defined via analytic continuation. Furthermore, it was recently shown [@hartung-jansen] that this construction of $\zeta$-regularized vacuum expectation values can be understood in terms of a continuum limit of discretized quantum field theories which is accessible using quantum computing. This discretization can be constructed directly in the continuum on general metrics, including Riemannian and Lorentzian spacetimes. 
Alternatively, the discretization can be constructed from spacetime lattices. Given the universal applicability result [@hartung-jansen] of the Fourier integral operator $\zeta$-function approach to $\zeta$-regularized vacuum expectation values, many examples have been considered in this framework on a mathematically fundamental level [@hartung; @hartung-iwota; @hartung-jansen]. However, applications of $\zeta$-regularization in the physical literature [@beneventano-santangelo; @blau-visser-wipf; @bordag-elizalde-kirsten; @bytsenko-et-al; @culumovic-et-al; @dowker-critchley; @elizalde2001; @elizalde; @elizalde-et-al; @elizalde-vanzo-zerbini; @fermi-pizzocchero; @hawking; @iso-murayama; @marcolli-connes; @mckeon-sherry; @moretti97; @moretti99; @moretti00; @moretti11; @robles; @shiekh; @tong-strings] have focused on different aspects which leaves a wide gap to demonstrate the practicability of treating quantum field theories with the Fourier integral operator $\zeta$-function regularization in a non-perturbative fashion. We therefore want to start filling this gap with some fundamental examples of quantum fields which are underlying many gauge field theories. In particular, we will consider free real and complex scalar quantum fields (Sections \[sec:free-real-scalar\] and \[sec:free-complex-scalar\] respectively) and the free Dirac field (Section \[sec:free-dirac\]). Finally, we will consider light coupled to a fermion (Section \[sec:fermion-light\]) where we ignore self-interaction of the radiation field for simplicity (the free radiation field has already been discussed in [@hartung-jansen]). The example of light coupling to matter is of particular interest as it is one of the well-known examples of $\zeta$-regularization from the physical literature [@iso-murayama] which we can now understand in terms of the Fourier integral operator approach to $\zeta$-regularized vacuum expectation values. 
The free real scalar quantum field {#sec:free-real-scalar} ================================== The first example we would like to consider is the free scalar quantum field in $1+1$ dimensions. Its Lagrangian density is given by $$\begin{aligned} {\ensuremath{\mathcal{L}}}=\frac12({\partial}_0{\varphi})^2-\frac12({\partial}_1{\varphi})^2.\end{aligned}$$ Hence, the generalized momentum is $$\begin{aligned} \hat{\varphi}={\partial}_{{\partial}_0{\varphi}}{\ensuremath{\mathcal{L}}}={\partial}_0{\varphi}\end{aligned}$$ and thus we obtain the Hamiltonian density $$\begin{aligned} h=\hat{\varphi}{\partial}_0{\varphi}-{\ensuremath{\mathcal{L}}}=\frac12\hat{\varphi}^2+\frac12({\partial}_1{\varphi})^2.\end{aligned}$$ Considering the spatial torus ${\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}}$, the momenta of the quantum field take values in $\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}$ and the dispersion relation $E_p^2=p^2$ yields the energy $E_p={{\left\lvert}{p}{\right\lvert}}$ of a particle with momentum $p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}$. Hence, using the canonical quantization of free fields (cf. e.g. [@tong] chapter 2) we obtain the quantized field $\Phi$ and momentum $\Pi$ $$\begin{aligned} \Phi(x)=&\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}\frac{1}{\sqrt{2XE_p}}{\ensuremath{\left}}(a_pe^{ipx}+a_p^\dagger e^{-ipx}{\ensuremath{\right}})\\ \Pi(x)=&\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}(-i)\sqrt{\frac{E_p}{2X}}{\ensuremath{\left}}(a_pe^{ipx}-a_p^\dagger e^{-ipx}{\ensuremath{\right}})\end{aligned}$$ where $a_p$ and $a_p^\dagger$ are the normalized annihilation and creation operators for a particle of momentum $p$. In other words, they satisfy the canonical commutation relations $[a_p,a_q]=[a_p^\dagger,a_q^\dagger]=0$ and $[a_p,a_q^\dagger]=\delta_{p,q}$. 
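These relations can be verified directly with truncated ladder matrices, one mode at a time. The sketch below uses a plain occupation-number cutoff (the defect in the last diagonal entry of the commutator is an artifact of the truncation, not of the algebra) and checks $[a_p,a_p^\dagger]=1$ below the cutoff as well as $[a_p,a_q^\dagger]=0$ for distinct modes acting on different tensor factors:

```python
import numpy as np

def annihilation(n_max):
    """Truncated bosonic annihilation operator on span{|0>, ..., |n_max>}."""
    a = np.zeros((n_max + 1, n_max + 1))
    for n in range(1, n_max + 1):
        a[n - 1, n] = np.sqrt(n)   # a|n> = sqrt(n) |n-1>
    return a

n_max = 8
a = annihilation(n_max)
ad = a.T
I = np.eye(n_max + 1)

# [a_p, a_p^dagger] = 1 everywhere except in the cutoff state:
comm_same = a @ ad - ad @ a
print(np.diag(comm_same))            # 1 below the cutoff, edge defect at the top

# Distinct modes p != q live on different tensor factors, so the
# mixed commutator vanishes identically:
a1 = np.kron(a, I)                   # a_p on the first mode
ad2 = np.kron(I, ad)                 # a_q^dagger on the second mode
comm_mixed = a1 @ ad2 - ad2 @ a1
print(np.allclose(comm_mixed, 0.0))  # True
```

The identity $a_pa_p^\dagger=a_p^\dagger a_p+1$ used in the normal-ordering computation below is exactly the diagonal of `comm_same` away from the truncation edge.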
Plugging these expressions into the Hamiltonian density (${\varphi}\rightsquigarrow\Phi$ and $\hat{\varphi}\rightsquigarrow\Pi$) and integrating over ${\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}}$ then yields the Hamiltonian $$\begin{aligned} H=&\frac12\sum_{p,q\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}\Bigg(\frac{-\sqrt{E_pE_q}}{2}{\ensuremath{\left}}(a_pa_q\delta_{p,-q}-a_pa_q^\dagger\delta_{p,q}-a_p^\dagger a_q\delta_{p,q}+a_p^\dagger a_q^\dagger\delta_{p,-q}{\ensuremath{\right}})\\ &+\frac{1}{2\sqrt{E_pE_q}}{\ensuremath{\left}}(-pqa_pa_q\delta_{p,-q}+pqa_pa_q^\dagger\delta_{p,q}+pqa_p^\dagger a_q\delta_{p,q}-pqa_p^\dagger a_q^\dagger\delta_{p,-q}{\ensuremath{\right}})\Bigg)\\ =&\frac12\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}\frac{1}{2E_p}{\ensuremath{\left}}((-E_p^2+p^2)(a_pa_{-p}+a_p^\dagger a_{-p}^\dagger)+(E_p^2+p^2)(a_pa_p^\dagger+a_p^\dagger a_p){\ensuremath{\right}})\\ =&\frac12\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}E_p(2a_p^\dagger a_p+1)\\ =&\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}E_p{\ensuremath{\left}}(a_p^\dagger a_p+\frac12{\ensuremath{\right}})\end{aligned}$$ since $a_pa_p^\dagger=a_p^\dagger a_p+1$ and $E_p^2=p^2$. Here, the term $\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}E_pa_p^\dagger a_p$ is precisely what we expect to see since $a_p^\dagger a_p$ counts the number of particles with momentum $p$. The term $\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}\frac{E_p}{2}$ on the other hand diverges. In the physics literature, you usually encounter a renormalization argument at this point or the Hamiltonian is directly redefined to be normally ordered, and the term is dropped. 
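A smooth cutoff makes the treatment of this divergent zero-point sum plausible: regulating $\sum_{k\ge1}k$ as $\sum_{k\ge1}k\,e^{-\varepsilon k}=e^{-\varepsilon}/(1-e^{-\varepsilon})^2=\varepsilon^{-2}-\frac{1}{12}+O(\varepsilon^2)$, the finite part left after removing the $\varepsilon^{-2}$ divergence is exactly $\zeta_R(-1)=-\frac{1}{12}$, so that $\sum_p E_p/2=(2\pi/X)\sum_k k$ is assigned the value $-\pi/(6X)$. A numerical sketch using `mpmath`, with the torus size $X=10$ as an arbitrary illustrative value:

```python
from mpmath import mp, mpf, exp, pi, zeta

mp.dps = 30
X = mpf(10)                                   # illustrative torus size

# zeta-regularized zero-point sum: (2*pi/X) * zeta_R(-1) = -pi/(6X).
zeta_value = (2 * pi / X) * zeta(-1)
print(zeta_value, -pi / (6 * X))              # identical

# Smooth cutoff: sum_k k*exp(-eps*k) = exp(-eps)/(1 - exp(-eps))^2
#              = 1/eps^2 - 1/12 + O(eps^2).
for eps in (mpf(1)/10, mpf(1)/100, mpf(1)/1000):
    regulated = exp(-eps) / (1 - exp(-eps))**2
    finite_part = regulated - 1 / eps**2      # strip the divergence
    print(eps, finite_part)                   # approaches -1/12 as eps -> 0
```

That the cutoff-independent finite part coincides with the $\zeta$-value is the usual consistency check behind assignments of the type $\sum_k k$ “$=$” $\zeta_R(-1)$.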
Therefore, we define the normally ordered Hamiltonian to be $$\begin{aligned} H_n:=\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}}E_pa_p^\dagger a_p\end{aligned}$$ where we artificially added the $p=0$ term which corresponds to the “there are no particles” case. On the other hand, we are looking to use a $\zeta$-regularized framework and this additional term $$\begin{aligned} \sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}}\frac{E_p}{2}=&\frac{2\pi}{X}\sum_{k\in{\ensuremath{\mathbb{N}}}}k\text{ ``$=$'' }\frac{2\pi}{X}\zeta_R(-1)=-\frac{\pi}{6X}\end{aligned}$$ can be interpreted as such where $\zeta_R$ denotes the Riemann $\zeta$-function. In other words, we can define a $\zeta$-regularized Hamiltonian $$\begin{aligned} H_\zeta:=H_n-\frac{\pi}{6X}.\end{aligned}$$ It is interesting to note that $H_\zeta$ and $H_n$ coincide in the limit $X\to\infty$ which eventually we need to perform if we want to obtain vacuum expectation values in $1+1$ Minkowski space. However, physically this constant has no impact at all since it is not an observable. This relies on the fact that we cannot measure “absolute” energies but only differences in energy. The choice between $H_\zeta$ and $H_n$ is therefore similar to the choice between measuring temperature in Kelvin ($H_n$) or Celsius ($H_\zeta$). In order to use the $\zeta$-formalism, we need to find Fourier integral operator representations of $H_\zeta$ and $H_n$. Since the two only differ by a constant, we will only consider $H_n$ for the moment. Let $H_n^1$ be the restriction of $H_n$ to the space generated by at most single particle states. 
Calling the vacuum state ${\ensuremath{\left|{0}\right\rangle}}$, we can obtain all single particle states ${\ensuremath{\left|{p}\right\rangle}}=a_p^\dagger{\ensuremath{\left|{0}\right\rangle}}$ using the corresponding creation operator and, since $a_q^\dagger a_q{\ensuremath{\left|{p}\right\rangle}}=\delta_{p,q}{\ensuremath{\left|{p}\right\rangle}}$, we directly obtain $H_n^1{\ensuremath{\left|{p}\right\rangle}}={{\left\lvert}{p}{\right\lvert}}{\ensuremath{\left|{p}\right\rangle}}$. In other words, ${\ensuremath{\left}}({\ensuremath{\left|{p}\right\rangle}}{\ensuremath{\right}})_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}}$ is an orthonormal basis of the Hilbert space spanned by all at most single particle states which we can thus identify with $\ell_2{\ensuremath{\left}}(\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}{\ensuremath{\right}})$ and therefore with $L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})$ as well. In particular, we have the correspondence $$\begin{aligned} \ell_2{\ensuremath{\left}}(\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}{\ensuremath{\right}})\ni{\ensuremath{\left|{p}\right\rangle}}\longleftrightarrow{\ensuremath{\left}}(x\mapsto e^{ipx}{\ensuremath{\right}})\in L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})\end{aligned}$$ and obtain the $L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})$ representation $$\begin{aligned} H_n^1={{\left\lvert}{{\partial}}{\right\lvert}}.\end{aligned}$$ In order to allow multiple particles to exist, suppose we have the $N$ particle state ${\ensuremath{\left|{P}\right\rangle}}={\ensuremath{\left|{P_0,P_1,\ldots,P_{N-1}}\right\rangle}}={\ensuremath{\left}}(\prod_{j=0}^{N-1}a_{P_j}^\dagger{\ensuremath{\right}}){\ensuremath{\left|{0}\right\rangle}}$. 
This state can be represented as a sum of permutations of tensor products ${\ensuremath{\left|{P}\right\rangle}}=\frac{1}{N!}\sum_{\pi\in S_N}\bigotimes_{j=0}^{N-1}{\ensuremath{\left|{P_{\pi(j)}}\right\rangle}}$, where $S_N$ denotes the symmetric group on $\{0,\ldots,N-1\}$, in the symmetric tensor product $S\bigotimes_{j=1}^{N}{\ensuremath{\left}}(L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})\ominus{\ensuremath{\mathbb{C}}}{\ensuremath{\right}})$. The Hilbert space ${\ensuremath{\mathcal{H}}}$ is then the Fock space given by the Hilbert space completion of $\bigoplus_{N\in{\ensuremath{\mathbb{N}}}_0}S\bigotimes_{j=1}^N{\ensuremath{\left}}(L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})\ominus{\ensuremath{\mathbb{C}}}{\ensuremath{\right}})$. Thus, states in ${\ensuremath{\mathcal{H}}}$ are of the form $$\begin{aligned} {\ensuremath{\left|{\Psi}\right\rangle}} =& a_0{\ensuremath{\left|{0}\right\rangle}}\oplus\bigoplus_{N\in{\ensuremath{\mathbb{N}}}}\sum_{j_0^N,\ldots,j_{N-1}^N}a_{j_0^N,\ldots,j_{N-1}^N}{\ensuremath{\left|{p_{j_0^N},\ldots,p_{j_{N-1}^N}}\right\rangle}}\\ {\ensuremath{\left|{\Phi}\right\rangle}} =& b_0{\ensuremath{\left|{0}\right\rangle}}\oplus\bigoplus_{N\in{\ensuremath{\mathbb{N}}}}\sum_{k_0^N,\ldots,k_{N-1}^N}b_{k_0^N,\ldots,k_{N-1}^N}{\ensuremath{\left|{p_{k_0^N},\ldots,p_{k_{N-1}^N}}\right\rangle}}\end{aligned}$$ and the inner product is given by $$\begin{aligned} \langle\Psi,\Phi\rangle=a_0^*b_0+\sum_{N\in{\ensuremath{\mathbb{N}}}}\sum_{j_0^N,\ldots,j_{N-1}^N}\sum_{k_0^N,\ldots,k_{N-1}^N}a_{j_0^N,\ldots,j_{N-1}^N}^*b_{k_0^N,\ldots,k_{N-1}^N}\prod_{m=0}^{N-1}\langle p_{j_m^N},p_{k_m^N}\rangle.\end{aligned}$$ Given a pure $N$ particle state, we deduce that the restriction $H_n^N$ of $H_n$ to the $N$ particle Hilbert space $S\bigotimes_{j=1}^{N}{\ensuremath{\left}}(L_2({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})\ominus{\ensuremath{\mathbb{C}}}{\ensuremath{\right}})$ is given by $$\begin{aligned}
H_n^N=\sum_{j=1}^N{\ensuremath{\left}}(\bigotimes_{k=1}^{j-1}{\ensuremath{\mathrm{id}}}{\ensuremath{\right}})\otimes H_n^1\otimes{\ensuremath{\left}}(\bigotimes_{k=j+1}^{N}{\ensuremath{\mathrm{id}}}{\ensuremath{\right}})=\sum_{j=1}^N\frac{2\pi}{X}{{\left\lvert}{{\partial}_j}{\right\lvert}}\end{aligned}$$ and, finally, we can represent $H_n$ on the Fock space ${\ensuremath{\mathcal{H}}}$ as $$\begin{aligned} H_n=\bigoplus_{N\in{\ensuremath{\mathbb{N}}}}H_n^N={\ensuremath{\mathrm{diag}}}{\ensuremath{\left}}({\ensuremath{\left}}(\sum_{j=1}^N\frac{2\pi}{X}{{\left\lvert}{{\partial}_j}{\right\lvert}}{\ensuremath{\right}})_{N\in{\ensuremath{\mathbb{N}}}_0}{\ensuremath{\right}}).\end{aligned}$$ In this case, the energy of the state ${\ensuremath{\left|{\Psi}\right\rangle}}$ is given by $$\begin{aligned} \langle\Psi,H_n\Psi\rangle_{{\ensuremath{\mathcal{H}}}}=&\sum_{N\in{\ensuremath{\mathbb{N}}}}\sum_{j_0^N,\ldots,j_{N-1}^N}{{\left\lvert}{a_{j_0^N,\ldots,j_{N-1}^N}}{\right\lvert}}^2\sum_{k=0}^{N-1}E_{p_{j_k^N}}\end{aligned}$$ i.e., precisely the expression we were looking for. In particular, this expression is minimal if and only if each summand is zero which implies ${\ensuremath{\left|{\Psi}\right\rangle}}={\ensuremath{\left|{0}\right\rangle}}$. In other words, the vacuum expectation of $H_n$ is $$\begin{aligned} \langle H_n\rangle={\ensuremath{\left\langle{0}\right|}}H_n{\ensuremath{\left|{0}\right\rangle}}=0.\end{aligned}$$ Of course, this directly implies $$\begin{aligned} \langle H_\zeta\rangle={\ensuremath{\left\langle{0}\right|}}H_n{\ensuremath{\left|{0}\right\rangle}}-\frac{\pi}{6X}\langle0|0\rangle=-\frac{\pi}{6X}\end{aligned}$$ as expected. 
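As a numerical sanity check (not part of the derivation above), note that the normal-ordering constant $-\frac{\pi}{6X}$ is, formally, $\frac{1}{2}\sum_{p}E_p=\frac{2\pi}{X}\zeta_R(-1)=-\frac{\pi}{6X}$. The sketch below evaluates $\zeta_R$ at $s=0,-1$ via Hasse's globally convergent series (stdlib only; the choice $X=1$ is illustrative):

```python
import math

def zeta_hasse(s, terms=40):
    """Riemann zeta via Hasse's globally convergent series (valid for s != 1)."""
    total = 0.0
    for n in range(terms):
        inner = sum((-1)**k * math.comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

X = 1.0
vacuum_shift = (2 * math.pi / X) * zeta_hasse(-1)   # = -pi/(6X)
print(zeta_hasse(-1), vacuum_shift)
```

For $s=0$ and $s=-1$ the inner alternating sums vanish identically beyond the first few $n$, so the series terminates and reproduces $\zeta_R(0)=-\frac{1}{2}$ and $\zeta_R(-1)=-\frac{1}{12}$ exactly.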
The $\zeta$-regularized vacuum expectation values of $H_n$ and $H_\zeta$ ------------------------------------------------------------------------ Let us now compare the true vacuum expectations $\langle H_n\rangle=0$ and $\langle H_\zeta\rangle=-\frac{\pi}{6X}$ to the $\zeta$-regularized vacuum expectation values $\langle H_n\rangle_\zeta$ and $\langle H_\zeta\rangle_\zeta$. Again, we will start with $H_n$. However, if we try to naïvely ignore that we have a Fock space here, $$\begin{aligned} &\langle H_n\rangle_\zeta\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\int_{\bigtimes_{N\in{\ensuremath{\mathbb{N}}}}{\ensuremath{\mathbb{R}}}^N}\sum_{N\in{\ensuremath{\mathbb{N}}}}e^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}}\frac{2\pi}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}\prod_{m=1}^N{{\left\lVert}{\xi_{N,m}}{\right\lVert}}^zd\xi}{\int_{\bigtimes_{N\in{\ensuremath{\mathbb{N}}}}{\ensuremath{\mathbb{R}}}^N}\sum_{N\in{\ensuremath{\mathbb{N}}}}e^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}}\prod_{m=1}^N{{\left\lVert}{\xi_{N,m}}{\right\lVert}}^zd\xi}\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\sum_{N\in{\ensuremath{\mathbb{N}}}}\frac{2\pi}{X}\sum_{n=1}^N\prod_{m=1}^N\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z+\delta_{m,n}}d\xi}{\sum_{N\in{\ensuremath{\mathbb{N}}}}\sum_{n=1}^N\prod_{m=1}^N\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi}\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\sum_{N\in{\ensuremath{\mathbb{N}}}}\frac{2\pi N}{X}\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z+1}d\xi{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi 
T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi{\ensuremath{\right}})^{N-1}}{\sum_{N\in{\ensuremath{\mathbb{N}}}}N{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi{\ensuremath{\right}})^N}\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\sum_{N\in{\ensuremath{\mathbb{N}}}}\frac{2\pi N{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^N}{X}\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N}dr{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N-1}dr{\ensuremath{\right}})^{N-1}}{\sum_{N\in{\ensuremath{\mathbb{N}}}}N{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^N{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N-1}dr{\ensuremath{\right}})^N}\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\sum_{N\in{\ensuremath{\mathbb{N}}}}N{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^N\Gamma(z+N+1)\Gamma(z+N)^{N-1}{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-Nz-N^2-1}}{\sum_{N\in{\ensuremath{\mathbb{N}}}}\frac{2\pi N{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^N}{X}\Gamma(z+N)^N{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-Nz-N^2}}\end{aligned}$$ shows that we have not completely $\zeta$-regularized since the series might not be convergent for sufficiently small $\Re(z)$. Instead we need to introduce a regularization for the summation over $N$ as well. 
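The radial integrals above are instances of the Gamma-function identity $\int_0^\infty e^{-\lambda r}r^{s-1}dr=\Gamma(s)\lambda^{-s}$, analytically continued to the oscillatory case $\lambda\rightsquigarrow -i\frac{2\pi T}{X}$. A small sketch checking the identity for real $\lambda>0$ by direct quadrature (the values $s=2.5$, $\lambda=3$ are illustrative; stdlib only):

```python
import math

def laplace_moment(s, lam, R=20.0, h=1e-4):
    """Trapezoid-rule evaluation of the integral of e^(-lam*r) * r^(s-1) over [0, R].

    The tail beyond R is negligible for the parameters used here."""
    n = int(R / h)
    f = lambda r: math.exp(-lam * r) * r**(s - 1) if r > 0 else 0.0
    return h * (0.5 * f(R) + sum(f(k * h) for k in range(1, n)))

s, lam = 2.5, 3.0
print(laplace_moment(s, lam), math.gamma(s) * lam**(-s))  # should agree closely
```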
For instance, let $$\begin{aligned} \alpha_N^z:=\Gamma(z+N+1)^{-1}\Gamma(z+N)^{-N}{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{Nz}N^z{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^{Nz}.\end{aligned}$$ Then ${\forall}N\in{\ensuremath{\mathbb{N}}}:\ \alpha_N^0=1$ and we obtain $$\begin{aligned} &\langle H_n\rangle_\zeta\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{\int_{\bigtimes_{N\in{\ensuremath{\mathbb{N}}}}{\ensuremath{\mathbb{R}}}^N}\sum_{N\in{\ensuremath{\mathbb{N}}}}\alpha_N^ze^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}}\frac{2\pi}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}\prod_{m=1}^N{{\left\lVert}{\xi_{N,m}}{\right\lVert}}^zd\xi}{\int_{\bigtimes_{N\in{\ensuremath{\mathbb{N}}}}{\ensuremath{\mathbb{R}}}^N}\sum_{N\in{\ensuremath{\mathbb{N}}}}\alpha_N^ze^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{N,n}}{\right\lVert}}}\prod_{m=1}^N{{\left\lVert}{\xi_{N,m}}{\right\lVert}}^zd\xi}\\ =&\lim_{z\to0}\lim_{T\to\infty}{\underbrace}{\frac{\sum_{N\in{\ensuremath{\mathbb{N}}}}\frac{2\pi N^{1+z}{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^{N(1+z)}}{X}\Gamma(z+N)^{-1}{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-N^2-1}}{\sum_{N\in{\ensuremath{\mathbb{N}}}}N^{1+z}{\ensuremath{\left}}({\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}{\ensuremath{\right}})^{N(1+z)}\Gamma(z+N+1)^{-1}{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-N^2}}}_{\in O{\ensuremath{\left}}(\frac{1}{T}{\ensuremath{\right}})}\\ =&0\end{aligned}$$ which coincides with $\langle H_n\rangle$. Regarding $H_\zeta$, let ${\ensuremath{\mathfrak{G}}}$ be a gauged Fourier integral operator such that ${\ensuremath{\mathfrak{G}}}(0)=1$.
Then, $e^{-\frac{iT\pi}{6X}}e^{iTH_n}{\ensuremath{\mathfrak{G}}}(0)=e^{iTH_\zeta}$ and $$\begin{aligned} \langle H_\zeta\rangle_\zeta=&\lim_{z\to0}\lim_{T\to\infty}\frac{\zeta{\ensuremath{\left}}(e^{-\frac{iT\pi}{6X}}e^{iTH_n}{\ensuremath{\mathfrak{G}}}H_\zeta{\ensuremath{\right}})}{\zeta{\ensuremath{\left}}(e^{-\frac{iT\pi}{6X}}e^{iTH_n}{\ensuremath{\mathfrak{G}}}{\ensuremath{\right}})}(z)\\ =&\lim_{z\to0}\lim_{T\to\infty}\frac{e^{-\frac{iT\pi}{6X}}{\ensuremath{\left}}(\zeta{\ensuremath{\left}}(e^{iTH_n}{\ensuremath{\mathfrak{G}}}H_n{\ensuremath{\right}})-\frac{\pi}{6X}\zeta{\ensuremath{\left}}(e^{iTH_n}{\ensuremath{\mathfrak{G}}}{\ensuremath{\right}}){\ensuremath{\right}})}{e^{-\frac{iT\pi}{6X}}\zeta{\ensuremath{\left}}(e^{iTH_n}{\ensuremath{\mathfrak{G}}}{\ensuremath{\right}})}(z)\\ =&\langle H_n\rangle_\zeta-\frac{\pi}{6X}\end{aligned}$$ implies $\langle H_\zeta\rangle_\zeta=\langle H_\zeta\rangle=-\frac{\pi}{6X}$. The $N\to\infty$ particle limit ------------------------------- Alternatively, we can consider the Hamiltonian $$\begin{aligned} H_n^{\le N}=\sum_{j=1}^N{{\left\lvert}{{\partial}_j}{\right\lvert}}\end{aligned}$$ in the “up to $N$ particle Hilbert space” $L_2(({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})^N)$ where ${\ensuremath{\left|{P_0,\ldots,P_{k-1}}\right\rangle}}$ is embedded as $\bigotimes_{j\in k}{\ensuremath{\left|{P_j}\right\rangle}}\otimes\bigotimes_{j\in N-k}{\ensuremath{\left|{0}\right\rangle}}$. Physically, taking the limit $N\to\infty$ says that we are only considering states that have finitely many particles. 
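The cancellation in this computation is purely algebraic: since $H_\zeta=H_n-\frac{\pi}{6X}$, the constant pulls out of the $\zeta$-quotient. A finite-rank toy model makes this explicit (the spectrum, shift $c$, and parameters $T$, $z$ below are illustrative, not taken from the text):

```python
import cmath

def zeta_quotient(eigs, shift=0.0, T=50.0, z=0.1):
    """Finite-rank model of zeta(U*G*Omega)/zeta(U*G)(z)
    with U = e^{iTH}, G(z) = |H|^z, Omega = H - shift."""
    num = sum(cmath.exp(1j * T * e) * abs(e)**z * (e - shift) for e in eigs)
    den = sum(cmath.exp(1j * T * e) * abs(e)**z for e in eigs)
    return num / den

eigs = [0.5, 1.0, 2.25, 4.0]   # illustrative spectrum
c = 0.7                        # plays the role of pi/(6X)
# shifting the observable by a constant shifts the quotient by exactly that constant
print(zeta_quotient(eigs, c), zeta_quotient(eigs) - c)
```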
The $\zeta$-regularized vacuum energy is then computed as $$\begin{aligned} &\lim_{N\to\infty}\langle H_n^{\le N}\rangle_\zeta\\ =&\lim_{N\to\infty}\lim_{z\to0}\lim_{T\to\infty}\frac{\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{n}}{\right\lVert}}}\frac{2\pi}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{n}}{\right\lVert}}\prod_{n=1}^N{{\left\lVert}{\xi_{n}}{\right\lVert}}^zd\xi}{\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}\sum_{n=1}^N{{\left\lVert}{\xi_{n}}{\right\lVert}}}\prod_{n=1}^N{{\left\lVert}{\xi_{n}}{\right\lVert}}^zd\xi}\\ =&\lim_{N\to\infty}\lim_{z\to0}\lim_{T\to\infty}\frac{\frac{2\pi}{X}\sum_{n=1}^N\prod_{m=1}^N\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z+\delta_{m,n}}d\xi}{\sum_{n=1}^N\prod_{m=1}^N\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi}\\ =&\lim_{N\to\infty}\lim_{z\to0}\lim_{T\to\infty}\frac{\frac{2\pi N}{X}\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z+1}d\xi{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi{\ensuremath{\right}})^{N-1}}{N{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}^N}e^{i\frac{2\pi T}{X}{{\left\lVert}{\xi}{\right\lVert}}}{{\left\lVert}{\xi}{\right\lVert}}^{z}d\xi{\ensuremath{\right}})^N}\\ =&\lim_{N\to\infty}\lim_{z\to0}\lim_{T\to\infty}\frac{\frac{2\pi}{X}\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N}dr{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N-1}dr{\ensuremath{\right}})^{N-1}}{{\ensuremath{\left}}(\int_{{\ensuremath{\mathbb{R}}}_{>0}}e^{i\frac{2\pi T}{X}r}r^{z+N-1}dr{\ensuremath{\right}})^N}\\ 
=&\lim_{N\to\infty}\lim_{z\to0}\lim_{T\to\infty}\frac{\frac{2\pi}{X}\Gamma(z+N+1)\Gamma(z+N)^{N-1}{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-Nz-N^2-1}}{\Gamma(z+N)^N{\ensuremath{\left}}(i\frac{2\pi T}{X}{\ensuremath{\right}})^{-Nz-N^2}}\\ =&0\end{aligned}$$ in this setting. Free complex scalar quantum fields {#sec:free-complex-scalar} ================================== Complex scalar fields are generalizations of real scalar fields which allow for the creation of antiparticles. More precisely, in a real scalar field the particle is its own antiparticle. The distinction between particles and antiparticles for the complex scalar field becomes obvious once they are quantized. Writing a complex scalar field $\psi=\frac{{\varphi}_1+i{\varphi}_2}{\sqrt2}$ as the sum of two real scalar fields ${\varphi}_1$ and ${\varphi}_2$ with creation operators $b^\dagger$ and $c^\dagger$, and expanding the field operator as a sum of planar waves yields $$\begin{aligned} \Psi(x)=&\sum_{p\in M}\frac{1}{\sqrt{2XE_p}}{\ensuremath{\left}}(b_pe^{ipx}+c_p^\dagger e^{-ipx}{\ensuremath{\right}})\\ \Psi^\dagger(x)=&\sum_{p\in M}\frac{1}{\sqrt{2XE_p}}{\ensuremath{\left}}(b_p^\dagger e^{-ipx}+c_p e^{ipx}{\ensuremath{\right}})\end{aligned}$$ on ${\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}}$ where $M=\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}\setminus\{0\}$ is the set of momenta. This furthermore implies the conjugate momentum $$\begin{aligned} \Pi(x)=&\sum_{p\in M}i\sqrt{\frac{E_p}{2X^N}}{\ensuremath{\left}}(b_p^\dagger e^{-ipx}-c_pe^{ipx}{\ensuremath{\right}})\\ \Pi^\dagger(x)=&\sum_{p\in M}(-i)\sqrt{\frac{E_p}{2X^N}}{\ensuremath{\left}}(b_pe^{ipx}-c_p^\dagger e^{-ipx}{\ensuremath{\right}}).\end{aligned}$$ If we consider the charge operator $Q=i\int\Pi(x)\Psi(x)-\Psi^*(x)\Pi^*(x)dx$ we directly obtain $$\begin{aligned} Q=\sum_{p\in M}c_pc_p^\dagger-b_p^\dagger b_p\end{aligned}$$ which is not normally ordered. 
The normally ordered charge $Q_n$ is thus given by $$\begin{aligned} Q_n=\sum_{p\in M}c_p^\dagger c_p-b_p^\dagger b_p.\end{aligned}$$ This again can be explained using a $\zeta$-argument and the commutator relation $[c_p,c_p^\dagger]=1$. More precisely, we need to $\zeta$-regularize the series $\sum_{p\in M}1$ which is nothing other than ${\ensuremath{\operatorname{tr}}}{\ensuremath{\mathrm{id}}}$ on ${\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}}$. Since ${\ensuremath{\mathrm{id}}}$ has no critical degree of homogeneity, ${\ensuremath{\operatorname{tr}}}{\ensuremath{\mathrm{id}}}:=\zeta(z\mapsto{{\left\lvert}{\nabla}{\right\lvert}}^z)(0)$ exists and is a well-defined constant (in fact, it is $2\zeta_R{\ensuremath{\left}}(0{\ensuremath{\right}})=-1$ where $\zeta_R$ is the Riemann $\zeta$-function), i.e., $$\begin{aligned} Q_\zeta=Q_n+{\ensuremath{\operatorname{tr}}}{\ensuremath{\mathrm{id}}}=Q_n-1.\end{aligned}$$ As for the Hamiltonian, we repeat the same calculation we did in the real case but with Lagrangian ${\partial}^\mu{\varphi}^*{\partial}_\mu{\varphi}$ instead of ${\partial}^\mu{\varphi}{\partial}_\mu{\varphi}$ and obtain the normally ordered Hamiltonian $$\begin{aligned} H_n=\sum_{p\in\frac{2\pi}{X}{\ensuremath{\mathbb{Z}}}}E_p(b_p^\dagger b_p-c_p^\dagger c_p)\end{aligned}$$ which differs from the $\zeta$-regularized Hamiltonian by a constant again. This also shows the interesting effect that antiparticles appear with negative energy in the theory which allows us to reproduce the Feynman-Stückelberg interpretation of antiparticles. Considering the wave propagator under time-reversal $\exp(itH_n)\rightsquigarrow\exp(-itH_n)$ we obtain an algebraically equivalent theory with reversed roles for $b_p$ and $c_p$. In other words, antiparticles are particles that move backwards in time and creation and annihilation of particle-antiparticle pairs can be seen as a particle reversing the direction it travels through time. 
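The value ${\ensuremath{\operatorname{tr}}}{\ensuremath{\mathrm{id}}}=2\zeta_R(0)=-1$ can also be approximated numerically. The sketch below uses Abel summation of the Dirichlet $\eta$-function together with $\zeta_R(s)=\eta(s)/(1-2^{1-s})$ (the regulator $x$ and the cutoff are illustrative choices):

```python
def eta_abel(s, x=0.9999, terms=200_000):
    """Abel-summed Dirichlet eta: limit as x -> 1- of sum of (-1)^(n-1) x^n n^(-s)."""
    return sum((-1)**(n - 1) * x**n * n**(-s) for n in range(1, terms + 1))

zeta0 = eta_abel(0) / (1 - 2**(1 - 0))  # zeta_R(0) = eta(0)/(1 - 2^(1-s)) at s = 0
print(2 * zeta0)                         # tr id = 2*zeta_R(0), close to -1
```

Here $\eta(0)$ Abel-sums to $x/(1+x)\to\frac{1}{2}$, so $\zeta_R(0)=-\frac{1}{2}$ and the trace is $-1$, consistent with $Q_\zeta=Q_n-1$.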
In any case, the negative energies yield the up to $N$ particle and $N$ anti-particle Hamiltonian $$\begin{aligned} H_n^{\le N}=\sum_{j=1}^N{\ensuremath{\left}}({{\left\lvert}{{\partial}_{1,j}}{\right\lvert}}-{{\left\lvert}{{\partial}_{2,j}}{\right\lvert}}{\ensuremath{\right}})\end{aligned}$$ on $L_2(({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})^{2N})$ which directly implies that $\lim_{N\to\infty}\langle H_n^{\le N}\rangle_\zeta=\langle H_n\rangle_\zeta=0$ since the degrees of homogeneity are identical to the ones in the real scalar field case. The Dirac field {#sec:free-dirac} =============== The free Dirac field is closely related to the complex scalar field but we are now considering spinor valued fields, assume that the creation and annihilation operators satisfy the canonical anticommutator relations, and possibly introduce a mass term $m$. Hence, our fields on the spatial torus $({\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}})^N$ are $$\begin{aligned} \Psi(x)=&\sum_{p\in M}\sum_{s\in S}\frac{1}{\sqrt{2X^NE_p}}{\ensuremath{\left}}(b_p^su_p^se^{ipx}+c_p^{s\dagger}v_p^s e^{-ipx}{\ensuremath{\right}})\\ \Psi^\dagger(x)=&\sum_{p\in M}\sum_{s\in S}\frac{1}{\sqrt{2X^NE_p}}{\ensuremath{\left}}(b_p^{s\dagger}u_p^{s\dagger} e^{-ipx}+c_pv_p^{s\dagger} e^{ipx}{\ensuremath{\right}})\end{aligned}$$ where $S$ is the set of spins, $M$ the set of momenta, and $u$ and $v$ are spinors, i.e., they satisfy 1. $(\gamma^\mu p_\mu-m)u^s_p=0$ 2. $(\gamma^\mu p_\mu+m)v^s_p=0$ 3. $u_p^{r\dagger}u_p^s=v_p^{r\dagger}v_p^s=2E_p\delta^{rs}$ 4. 
$u_p^{r\dagger}v_{-p}^s=v_p^{r\dagger}u_{-p}^s=0$ where $(p^\mu)_{\mu}=(E_p,p)^T$, $E_p=\sqrt{\langle p,p\rangle+m^2}$, and the $\gamma$-matrices are given in the Dirac basis $\gamma^0=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ and $\gamma^k=\begin{pmatrix}0&\sigma^k\\-\sigma^k&0\end{pmatrix}$ with the Pauli matrices $\sigma^1=\begin{pmatrix}0&1\\1&0\end{pmatrix}$, $\sigma^2=\begin{pmatrix}0&-i\\i&0\end{pmatrix}$, and $\sigma^3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$. Plugging everything into the Dirac Hamiltonian density $\Psi^\dagger\gamma^0(-i\gamma^j{\partial}_j+m)\Psi$ and integrating then yields $$\begin{aligned} H=\sum_{p\in M}\sum_{s\in S}E_p(b_p^{s\dagger} b_p^s+c_p^s c_p^{s\dagger})\end{aligned}$$ and $c_p^s c_p^{s\dagger}=1-c_p^{s\dagger} c_p^{s}$ yields the normally ordered Hamiltonian $$\begin{aligned} H_n=\sum_{p\in M}\sum_{s\in S}E_p(b_p^{s\dagger} b_p^s-c_p^{s\dagger}c_p^s).\end{aligned}$$ For $m=0$ this is precisely the same situation we had for the complex scalar field just with an additional summation over spins. For $m>0$ we still have the question whether we can normally order the Hamiltonian using a $\zeta$-argument again. In other words, we need to $\zeta$-regularize the trace of an operator with kernel $\sqrt{{{\left\lVert}{\xi}{\right\lVert}}^2+m^2}$ but for ${{\left\lVert}{\xi}{\right\lVert}}>m$ we observe the asymptotic expansion $$\begin{aligned} \sqrt{{{\left\lVert}{\xi}{\right\lVert}}^2+m^2}=\sum_{j\in{\ensuremath{\mathbb{N}}}_0}{\frac{1}{2}\choose j}{{\left\lVert}{\xi}{\right\lVert}}^{1-2j}m^{2j}\end{aligned}$$ which has a degree of homogeneity $-N$ if and only if $N$ is odd. In particular, the residue trace is given by ${\frac{1}{2}\choose \frac{N+1}{2}}m^{N+1}{\ensuremath{\mathrm{vol}}}{\partial}B_{{\ensuremath{\mathbb{R}}}^N}$. Hence, $\zeta$-regularization fails to normally order this Hamiltonian. However, this is no problem in the light of vacuum expectation values as we are taking quotients of $\zeta$-functions. 
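The binomial expansion of $\sqrt{{{\left\lVert}{\xi}{\right\lVert}}^2+m^2}$ quoted above is easy to check numerically. The sketch below implements the generalized binomial coefficient ${\frac{1}{2}\choose j}$ by its product formula and compares the truncated series against the closed form (the sample values $\xi=3$, $m=1$ are illustrative and satisfy $\xi>m$):

```python
import math

def binom_half(j):
    """Generalized binomial coefficient C(1/2, j) via the product formula."""
    c = 1.0
    for i in range(j):
        c *= (0.5 - i) / (i + 1)
    return c

def sqrt_series(xi, m, terms=30):
    """Truncated expansion sqrt(xi^2 + m^2) = sum_j C(1/2,j) xi^(1-2j) m^(2j), for xi > m."""
    return sum(binom_half(j) * xi**(1 - 2*j) * m**(2*j) for j in range(terms))

print(sqrt_series(3.0, 1.0), math.sqrt(10.0))  # series vs closed form
```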
Hence, the presence of poles simply means that the value of $\frac{\zeta(U{\ensuremath{\mathfrak{G}}}A)}{\zeta(U{\ensuremath{\mathfrak{G}}})}(0)$ is given by the quotient of residues rather than the quotient of constant Laurent coefficients. Coupling a fermion of mass $m$ to light in $1+1$ dimensions {#sec:fermion-light} =========================================================== Coupling light to matter in $1+1$ dimensions is one of the text-book examples of $\zeta$-regularization in the physical literature because it is a toy model for QED. In particular, the Schwinger model which has $m=0$ has been studied extensively (cf. e.g. [@iso-murayama]). Here, we will show how the well-known applications of $\zeta$-regularization tie into the framework of $\zeta$-regularized vacuum expectation values as discussed in [@hartung; @hartung-iwota; @hartung-jansen]. In order to consider coupling a fermion to a gauge field $(A_\mu)_\mu$, we will restrict our considerations to a fermion in $1+1$ dimensions with a constant background field. This ignores the self-interaction of the gauge field which gives an additional term to the Hamiltonian that has already been discussed in [@hartung-jansen]. In the present case, and using the temporal gauge $A_0=0$, $A:=A_1$, the (fermionic coupling) Hamiltonian on ${\ensuremath{\mathbb{R}}}/X{\ensuremath{\mathbb{Z}}}$ is given by $$\begin{aligned} H_F=\int_0^X\Psi(x)^\dagger{\ensuremath{\left}}((i{\partial}-eA)\sigma_3+m{\ensuremath{\right}})\Psi(x)dx\end{aligned}$$ where $e$ is the coupling constant, $\sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$, and $\Psi$ is the spinor field which we endow with anti-periodic boundary conditions $\Psi(x+X)=-\Psi(x)$ (this is allowed because $\Psi$ is an auxiliary field; all physical quantities are composed of sesquilinear forms in $\Psi$ which are periodic). 
To study this system, we will first expand $\Psi$ into eigenmodes of $(i{\partial}-eA)\sigma_3+m$, i.e., we are looking to solve $$\begin{aligned} (i{\partial}-eA)\sigma_3 \begin{pmatrix} \psi^+\\0 \end{pmatrix} =&{\varepsilon}^+-m \begin{pmatrix} \psi^+\\0 \end{pmatrix}\quad\text{and}\quad (i{\partial}-eA)\sigma_3 \begin{pmatrix} 0\\\psi^- \end{pmatrix} =-{\varepsilon}^--m \begin{pmatrix} 0\\\psi^- \end{pmatrix}.\end{aligned}$$ These imply $$\begin{aligned} \psi^\pm(x)=\frac{1}{\sqrt{X}}\exp{\ensuremath{\left}}(-ie\int_0^xA(y)dy-i({\varepsilon}^\pm\mp m) x{\ensuremath{\right}})\end{aligned}$$ where $$\begin{aligned} e^{-i\pi-2i\pi n}\Psi(x)=-\Psi(x)=\Psi(x+X)\end{aligned}$$ implies that ${\varepsilon}^\pm$ has to satisfy $$\begin{aligned} &-i\pi-2i\pi n-ie\int_0^xA(y)dy-i({\varepsilon}^\pm\mp m) x\\ =&-ie\int_0^xA(y)dy-i({\varepsilon}^\pm\mp m) x-ie\int_0^XA(y)dy-i({\varepsilon}^\pm\mp m) X.\end{aligned}$$ In other words, the eigenvalues are given by $$\begin{aligned} {\forall}n\in{\ensuremath{\mathbb{Z}}}:\ {\varepsilon}^\pm_n:=\frac{\pi}{X}+\frac{2\pi}{X} n-\frac{e\oint A}{X}\pm m=\frac{2\pi}{X}{\ensuremath{\left}}(n+\frac{1}{2}\pm mX-\frac{e\oint A}{2\pi}{\ensuremath{\right}})\end{aligned}$$ where $\oint A:=\int_0^XA(y)dy$. For brevity, we will write $C^\pm:=\frac{e\oint A}{2\pi}\mp mX$. 
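For a constant background field $A$ (so that $\int_0^xA(y)dy=Ax$ and $\oint A=AX$), the quantization condition can be verified directly: plugging ${\varepsilon}_n^+=\frac{\pi}{X}+\frac{2\pi n}{X}-\frac{e\oint A}{X}+m$ into $\psi^+$ reproduces the anti-periodicity $\Psi(x+X)=-\Psi(x)$. A sketch (the sample values of $X$, $e$, $A$, $m$, $n$, $x$ are arbitrary illustrative choices):

```python
import cmath, math

# Illustrative sample parameters; constant gauge field A, so the loop integral is A*X.
X, e, A, m, n = 2.0, 0.3, 0.7, 1.1, 4

eps_plus = math.pi/X + 2*math.pi*n/X - e*A + m   # eigenvalue eps_n^+ from the text

def psi_plus(x):
    """Upper-component eigenmode psi_n^+ for constant A."""
    return cmath.exp(-1j*e*A*x - 1j*(eps_plus - m)*x) / math.sqrt(X)

x = 0.37
print(psi_plus(x + X), -psi_plus(x))   # anti-periodicity: psi(x+X) = -psi(x)
```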
Quantizing $\Psi$ then introduces annihilation operators $a_n$ and $b_n$ for the upper and lower components of $\Psi$ with $\{a_m,a_n^\dagger\}=\{b_m,b_n^\dagger\}=\delta_{m,n}$ and $\Psi$ is given by $$\begin{aligned} \Psi(x)=\sum_{n\in{\ensuremath{\mathbb{Z}}}} \begin{pmatrix} \psi_n^+(x)a_n\\ \psi_n^-(x)b_n \end{pmatrix} =\frac{1}{\sqrt{X}}\sum_{n\in{\ensuremath{\mathbb{Z}}}} \begin{pmatrix} \exp{\ensuremath{\left}}(-ie\int_0^xA(y)dy-i({\varepsilon}_n^+-m) x{\ensuremath{\right}})a_n\\ \exp{\ensuremath{\left}}(-ie\int_0^xA(y)dy-i({\varepsilon}_n^-+m) x{\ensuremath{\right}})b_n \end{pmatrix}.\end{aligned}$$ In particular, this implies $$\begin{aligned} H_F=&\int_0^X\Psi(x)^\dagger{\ensuremath{\left}}((i{\partial}-eA)\sigma_3+m{\ensuremath{\right}})\Psi(x)dx=\sum_{n\in{\ensuremath{\mathbb{Z}}}}({\varepsilon}_n^+a_n^\dagger a_n-{\varepsilon}_n^-b_n^\dagger b_n).\end{aligned}$$ At this point, we will split our considerations into the positive ($a_n$) and negative ($b_n$) chirality sectors. The positive sector has the Hamiltonian $H_+:=\sum_{n\in{\ensuremath{\mathbb{Z}}}}{\varepsilon}_n^+a_n^\dagger a_n$ and chiral charge $Q_+:=\sum_{n\in{\ensuremath{\mathbb{Z}}}}a_n^\dagger a_n$. Since there is no minimum energy, we define the $N^+$-vacuum of the positive chirality sector by filling all states with energies ${\varepsilon}_n^+$ where $n<N^+$. To compute the $N^+$-chiral charge $\langle Q_+\rangle_{N^+}=\sum_{n\in{\ensuremath{\mathbb{Z}}}_{<N^+}}1$ and $N^+$-vacuum energy $\langle H_+\rangle_{N^+}=\sum_{n\in{\ensuremath{\mathbb{Z}}}_{<N^+}}{\varepsilon}_n^+$, we use $\zeta$-regularization. The gauge family ${\ensuremath{\mathfrak{G}}}_+(z):={{\left\lvert}{H_+}{\right\lvert}}^z$ makes the computation easily accessible on the spectral side.
With this choice of gauge, we observe $$\begin{aligned} \langle Q_+\rangle_{N^+,\zeta}=&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{Z}}}_{<N^+}}{{\left\lvert}{{\varepsilon}_n^+}{\right\lvert}}^z\\ =&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{Z}}}_{<N^+}}{{\left\lvert}{n+\frac{1}{2}-C^+}{\right\lvert}}^z\\ =&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{N}}}_0}{{\left\lvert}{N^+-n-\frac{1}{2}-C^+}{\right\lvert}}^z\\ =&\zeta_H{\ensuremath{\left}}(0;\frac{1}{2}+C^+-N^+{\ensuremath{\right}})\end{aligned}$$ where $\zeta_H$ is the Hurwitz $\zeta$-function (analytically continued in the second argument as well). Using the Bernoulli polynomials $B_n$, which are defined as $B_0:=1$, $B_n'=nB_{n-1}$, and $n\ge1\ {\ensuremath{\Rightarrow}}\ \int_0^1B_n(x)dx=0$, the values of $\zeta_H$ at non-positive integers are given by $\zeta_H(-n;x)=-\frac{B_{n+1}(x)}{n+1}$. In particular, we will need $\zeta_H(0;x)=\frac{1}{2}-x$ and $\zeta_H(-1;x)=-\frac{1}{2}{\ensuremath{\left}}({\ensuremath{\left}}(x-\frac{1}{2}{\ensuremath{\right}})^2-\frac{1}{12}{\ensuremath{\right}})$, the former of which directly implies $$\begin{aligned} \langle Q_+\rangle_{N^+,\zeta}=&N^+-C^+.\end{aligned}$$ Similarly, $$\begin{aligned} \langle H_+\rangle_{N^+,\zeta}=&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{Z}}}_{<N^+}}{\varepsilon}_n^+{{\left\lvert}{{\varepsilon}_n^+}{\right\lvert}}^z\\ =&-\frac{2\pi}{X}\zeta_H{\ensuremath{\left}}(-1;\frac{1}{2}+C^+-N^+{\ensuremath{\right}})\\ =&-\frac{\pi}{X}{\ensuremath{\left}}({\ensuremath{\left}}(C^+-N^+{\ensuremath{\right}})^2-\frac{1}{12}{\ensuremath{\right}})\\ =&-\frac{\pi}{X}{\ensuremath{\left}}(\langle Q_+\rangle_{N^+,\zeta}^2-\frac{1}{12}{\ensuremath{\right}}).\end{aligned}$$ The negative chirality sector has chiral charge $Q_-:=\sum_{n\in{\ensuremath{\mathbb{Z}}}}b_n^\dagger b_n$ and Hamiltonian $H_-:=\sum_{n\in{\ensuremath{\mathbb{Z}}}}(-{\varepsilon}_n^-)b_n^\dagger b_n$.
Again, we introduce an $N^-$-vacuum filling all energy states $-{\varepsilon}_n^-$ with $n\ge N^-$ and choose the gauge family ${\ensuremath{\mathfrak{G}}}_-(z):={{\left\lvert}{H_-}{\right\lvert}}^z$. This yields $$\begin{aligned} \langle Q_-\rangle_{N^-,\zeta}=&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{Z}}}_{\ge N^-}}{{\left\lvert}{{\varepsilon}_n^-}{\right\lvert}}^z =\zeta_H{\ensuremath{\left}}(0;\frac{1}{2}-C^-+N^-{\ensuremath{\right}}) =C^--N^-\end{aligned}$$ and $$\begin{aligned} \langle H_-\rangle_{N^-,\zeta}=&\lim_{z\to0}\sum_{n\in{\ensuremath{\mathbb{Z}}}_{\ge N^-}}(-{\varepsilon}_n^-){{\left\lvert}{{\varepsilon}_n^-}{\right\lvert}}^z\\ =&-\frac{2\pi}{X}\zeta_H{\ensuremath{\left}}(-1;\frac{1}{2}-C^-+N^-{\ensuremath{\right}})\\ =&\frac{\pi}{X}{\ensuremath{\left}}({\ensuremath{\left}}(N^--C^-{\ensuremath{\right}})^2-\frac{1}{12}{\ensuremath{\right}})\\ =&\frac{\pi}{X}{\ensuremath{\left}}(\langle Q_-\rangle_{N^-,\zeta}^2-\frac{1}{12}{\ensuremath{\right}}).\end{aligned}$$ Combining both sectors then yields the charge $Q:=Q_++Q_-$, the chiral charge $Q_5:=Q_+-Q_-$, their $N^+$-$N^-$-vacuum expectations $$\begin{aligned} \langle Q\rangle_{N^+,N^-,\zeta}=&N^+-C^+-N^-+C^-=N^+-N^-+2mX,\\ \langle Q_5\rangle_{N^+,N^-,\zeta}=&N^++N^--C^+-C^-=N^++N^--2\frac{e\oint A}{2\pi},\end{aligned}$$ and the ground state energy of the fermion $$\begin{aligned} \langle H_F\rangle_{N^+,N^-,\zeta}=&\langle H_+\rangle_{N^+,\zeta}+\langle H_-\rangle_{N^-,\zeta} =\frac{\pi}{X}{\ensuremath{\left}}(\langle Q_+\rangle_{N^+,\zeta}^2+\langle Q_-\rangle_{N^-,\zeta}^2-\frac{1}{6}{\ensuremath{\right}}).\end{aligned}$$ This combined calculation above can be expressed in terms of Fourier integral operator $\zeta$-functions as $\langle\Omega\rangle_\zeta=\lim_{T\to\infty+i0^+}\frac{\zeta(U{\ensuremath{\mathfrak{G}}}\Omega)}{\zeta(U{\ensuremath{\mathfrak{G}}})}(0)$ where ${\ensuremath{\mathfrak{G}}}(z)={{\left\lvert}{H_+}{\right\lvert}}^z\oplus{{\left\lvert}{H_-}{\right\lvert}}^z$. 
Conclusion ========== In this paper we provided a number of fundamental examples using the Fourier integral operator $\zeta$-function regularization for systems that are relevant in high energy physics. We demonstrated analytically that we obtain the correct vacuum expectation values within this framework and directly addressed the non-trivial problem of treating gauge fields using this point of view. In particular, we discussed scalar fields in sections \[sec:free-real-scalar\] (real) and \[sec:free-complex-scalar\] (complex), and the Dirac field in section \[sec:free-dirac\]. Additionally, we have shown in section \[sec:fermion-light\] how one of the canonical applications of $\zeta$-regularization in the physics literature (light coupling to a fermion) appears as a special case of the Fourier integral operator $\zeta$-function approach. This opens the door to also study problems where no analytic solution exists and where the $\zeta$-regularization has to be evaluated numerically, e.g. on a quantum computer as demonstrated in [@hartung-jansen]. [99.]{} C. G. Beneventano and E. M. Santangelo. Effective action for QED$_4$ through $\zeta$-function regularization. *J. Math. Phys.* **42** (2001), 3260-3269. S. K. Blau, M. Visser, and A. Wipf. Analytic results for the effective action. *Int. J. Mod. Phys.* **A6** (1991), 5409-5433. M. Bordag, E. Elizalde, and K. Kirsten. Heat kernel coefficients of the Laplace operator on the D-dimensional ball. *J. Math. Phys.* **37**, 895 (1996). A. A. Bytsenko, G. Cognola, E. Elizalde, V. Moretti, and S. Zerbini. Analytic Aspects of Quantum Fields. *World Scientific Publishing* (2003). L. Culumovic, M. Leblanc, R. B. Mann, D. G. C. McKeon, and T. N. Sherry. Operator regularization and multiloop Green’s functions. *Phys. Rev. D* **41** (1990), 514 J. S. Dowker and R. Critchley. Effective Lagrangian and energy-momentum tensor in de Sitter space. *Phys. Rev. D* **13** (1976), 3224. E. Elizalde. 
--- abstract: 'We present a formal verification of the functional correctness of the Muen Separation Kernel. Muen is representative of the class of modern separation kernels that leverage hardware virtualization support, and are *generative* in nature in that they generate a specialized kernel for each system configuration. These features pose substantial challenges to existing verification techniques. We propose a verification framework called conditional parametric refinement which allows us to formally reason about generative systems. We use this framework to carry out a conditional refinement-based proof of correctness of the Muen kernel generator. Our analysis of several system configurations shows that our technique is effective in producing mechanized proofs of correctness, and also in identifying issues that may compromise the separation property.' author: - Inzemamul Haque - 'Deepak D’Souza' - Habeeb P - Arnab Kundu - Ganesh Babu bibliography: - 'references.bib' title: Verification of a Generative Separation Kernel ---
--- abstract: 'Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. Moreover, sometimes some of these classes may be ill-sampled, not sampled at all or undefined. In such cases, we need to think of robust classification methods able to deal with the “unknown” and properly reject samples belonging to classes never seen during training. Notwithstanding, almost all existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we extend upon the well-known Support Vector Machines (SVM) classifier and introduce the Specialized Support Vector Machines (SSVM), which is suitable for recognition in open-set setups. SSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, ensuring a finite risk of the unknown. The same cannot be guaranteed by the traditional SVM formulation, even when using the Radial Basis Function (RBF) kernel. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk. An extensive set of experiments compares the proposed method with existing solutions in the literature for open-set recognition and the reported results show its effectiveness.' author: - | Pedro Ribeiro Mendes J[ú]{}nior pedrormjunior@gmail.com\ RECOD Lab., Institute of Computing (IC)\ University of Campinas (UNICAMP)\ Av. 
Albert Einstein, 1251, Campinas, SP, 13083-852, Brazil Terrance E. Boult tboult@vast.uccs.edu\ VAST Lab., Department of Computer Science, Engineering Building\ University of Colorado Colorado Springs (UCCS)\ 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA Jacques Wainer wainer@ic.unicamp.br\ Anderson Rocha anderson.rocha@ic.unicamp.br\ RECOD Lab., Institute of Computing (IC)\ University of Campinas (UNICAMP)\ Av. Albert Einstein, 1251, Campinas, SP, 13083-852, Brazil bibliography: - 'mybib.bib' title: | Specialized Support Vector Machines\ for Open-set Recognition --- open-set recognition, support vector machines, specialized support vector machines, bounded open-space risk, risk of the unknown Introduction {#sec:introduction} ============ Machine learning literature is rich with works proposing classifiers for closed-set pattern recognition, with well-known examples such as Random Forests [@Breiman2001]. These classifiers were inherently designed to work in closed-set scenarios, i.e., scenarios in which all test samples must belong to a class used in training. What happens when the test sample belongs to a class not seen at training time? Consider a digital forensic scenario—e.g., source-camera attribution [@Costa2014], printer identification [@Ferreira2017]—in which law officials want to verify that a particular artifact (e.g., a digital photo or a printed page) originated from one of a few suspect devices. The suspected devices are the classes of interest, and a classifier can be trained on many examples of artifacts from these devices and from other non-suspected devices. But when assigning the source for the particular artifact in question, the classifier must be aware that if the artifact is from an unknown region of the feature space—possibly far away from the training data—it cannot be assigned to one of the suspected devices, even if examples of those devices are the closest to the artifact. 
The classifier must be allowed to declare that the example does not belong to any of the classes it was trained on. One possible way to address recognition in an open-set scenario is to use a closed-set classifier, obtain a similarity score—or simply the distance in the feature space—to the most likely class, and apply a threshold on that similarity score, aiming at classifying as unknown any test sample whose similarity score is below the specified threshold [@Dubuisson1993; @Muzzolini1998; @Moeini2017]. @MendesJunior2017 showed that applying thresholds to the ratio of distances, instead of the distances themselves, results in better performance in open-set scenarios. Instead of using similarity-based algorithms, another alternative is to exploit kernel-based algorithms such as the one-class SVM [@Scholkopf1999]—applied to the entire training set as a rejection function [@Cortes2016]. This approach is sometimes called *classification with abstention*. The idea is to have an initial rejection phase that predicts if the input belongs to one of the training classes or not (known or unknown). In the former case, a second phase is performed with any sort of multiclass classifier aiming at choosing the correct class. Another alternative method relies on having a binary rejection function for each of the known classes such that a test sample is classified as unknown when decisions are negative for every function. This is the case for any classifier based on the one-vs-all approach [@Rocha2014]. Some recent works have explored this idea [@Pritsos2013; @Costa2014; @Scheirer2014; @Jain2014], making efforts at minimizing the PLOS for every binary classifier that composes the multiclass one. In binary classification, the positively labeled open space (PLOS) refers to the open space that receives positive classification. Open space is the region of the feature space outside the support of the training samples [@Scheirer2013]. 
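The rejection-by-threshold idea above can be sketched with a nearest-neighbor classifier that thresholds the ratio of distances. This is a minimal sketch with made-up data; the function name and threshold value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nn_open_set_predict(X_train, y_train, x, ratio_threshold=0.8):
    """Classify x as the class of its nearest neighbor, or as unknown
    (label -1) when the ratio between the distance to the nearest sample
    and the distance to the nearest sample of a *different* class is above
    the threshold (i.e., two classes are almost equally close)."""
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)
    best = y_train[order[0]]
    # index of the nearest sample belonging to a different class
    other = next(i for i in order if y_train[i] != best)
    ratio = d[order[0]] / d[other]  # in [0, 1]; small => confident
    return best if ratio <= ratio_threshold else -1
```

A sample sitting halfway between two class clusters yields a ratio close to 1 and is rejected as unknown, while a sample deep inside one cluster yields a ratio close to 0 and keeps its nearest-neighbor label.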
At the multiclass level, a similar concept applies: the multiclass PLOS, i.e., the region of the feature space outside the support of the training samples in which a test sample would be classified as one of the known classes. In this class of methods, the potential of binary methods can be fully exploited in multiclass open-set scenarios. Furthermore, methods in this class can be adapted for multiple-class recognition in open-set scenarios, as accomplished by @Heflin2012. In this work, we propose the Specialized Support Vector Machine (SSVM), which falls into the last class of methods and receives its name due to its ability to bound the PLOS for every classifier at the binary level, consequently bounding the multiclass PLOS as well—when the one-vs-all approach is applied. The SSVM relies on the optimization of the bias term of an SVM with the RBF kernel, taking advantage of the following property we demonstrate in this work: an SVM with the RBF kernel bounds the PLOS if and only if its bias term is negative. Along with this work, we have evaluated multiple implementations of open-set methods. For all those methods, we have evaluated both the traditional grid search and the open-set grid search formalized by @MendesJunior2017. The remainder of this work is organized as follows. In Section \[sec:related-work\], we discuss some of the most important previous work in open-set recognition. In Section \[sec:ssvm\], we introduce the SSVM while, in Section \[sec:experiments\], we present the experiments that validate the proposed method. Finally, in Section \[sec:conclusion\], we present the conclusions and future work. Related Work {#sec:related-work} ============ In this section, we review recent works that explicitly deal with open-set recognition in the literature, including some of the works they build upon. We note that insights presented in many other existing works can be extended to the open-set scenario. Most of those works, however, did not perform the experiments with an appropriate open-set recognition setup. @Heflin2012 and @Pritsos2013 present a multiclass classifier based on the one-vs-all approach. 
For each of the training classes, they fit a one-class SVM (OCSVM). In the prediction phase, all $n$ one-class classifiers classify the test sample, where $n$ is the number of classes available for training. The test sample is assigned to the class whose classifier labels it as positive. When no classifier labels the test sample as positive, it is classified as unknown. @Heflin2012 extends the idea to multiple-class classification by allowing more than one classifier to label the example as positive; in this case, the example receives as labels all classes whose corresponding classifiers label it as positive. Differently, @Pritsos2013 choose the most confident classifier among the ones that produce a positive label. In those works, the OCSVM is used with the RBF kernel, which allows bounding the PLOS. The one-class classifier was proposed for data domain description, which means it is targeted at classifying the input as belonging to the training distribution or not (known or unknown). In general, any one-class or binary classifier can be applied in such a cascade approach to reject or accept the test sample as belonging to one of the known classes and further define which class it is. It is similar to the framework proposed by @Cortes2016, in which a rejection function is trained along with a classifier; however, in the open-set scenario the classifier should be multiclass. For the case in which the rejection function accepts a sample, any multiclass classifier can be applied to choose the correct class. In case of rejection, samples are classified as unknown. In @Costa2012 [@Costa2014], the authors propose an extension of the SVM classifier aiming at a more restrictive specialization on the positive class of the binary classifier. For this, they move the hyperplane by a value $\epsilon$ towards the positive class (in rare cases backwards). The value $\epsilon$ is obtained by minimizing the training error. For multiclass classification, the one-vs-all approach can be used. The method uses the RBF kernel. @Scheirer2013 formalized the open-set recognition problem and proposed an extension upon the SVM classifier called the one-vs-set machine. 
Similar to the works of @Costa2012 [@Costa2014], they move the main hyperplane in either direction depending on the risk being minimized. In addition, a second hyperplane, parallel to the main one, is created such that the positive class is between the two hyperplanes. This second hyperplane allows the samples “behind” the positive class to be classified as negative. Then a refinement step is performed on both hyperplanes to balance empirical and open space risk. According to the authors, the method works better with the linear kernel, as the second plane does not provide much benefit for an RBF kernel, which has a naturally occurring upper bound. A one-vs-all approach is used to combine the binary classifiers for open-set multiclass classification. @Scheirer2014 propose the W-SVM. The authors proposed the compact abating probability model, which decreases the probability of a test sample being considered as belonging to one of the known classes when it is far away from the training samples. They use two stages for classification: a model based on a one-class classifier followed by a binary classifier with normalization based on extreme value theory (EVT). The binary classifier seeks to improve discrimination and its normalization has two steps. The first aims at obtaining the probability of a test sample belonging to a positive/known class and the second step estimates the probability of it not being from the negative classes. The product of both probabilities is the final probability of the test sample belonging to a positive/known class. The W-SVM uses the RBF kernel and also the one-vs-all approach. In @Jain2014, the authors propose the $P_I$-SVM, also based on EVT. It is an algorithm for estimating the unnormalized posterior probability of class inclusion. For each known class, a Weibull distribution [@Coles2001] is estimated based on the smallest decision values of the positive training samples. The binary classifier for each class is an SVM with RBF kernel trained using the one-vs-all approach, i.e., the samples of all remaining classes are considered as negative samples. 
They introduce an idea similar to the open-set grid search [@MendesJunior2017] we apply in our work, which we shall discuss in detail in Section \[sec:grid-search\]. For a test sample, the method chooses the class for which the decision value produces the maximum probability of inclusion. If that maximum is below a given threshold, the input is marked as unknown. Applying extreme value theory in open-set recognition [@Scheirer2014; @Jain2014; @Zhang2017; @Scheirer2017] has been a recent research focus. @Scheirer2017 presents an overview of how these techniques have been recently applied to visual recognition, mainly in the context of open-set recognition. Notice that these previous EVT-based works are not capable of ensuring a bounded PLOS by solely relying on SVM models. In the case of @Scheirer2014, they ensure a bounded PLOS based on one-class models but not on binary SVM models. For [@Jain2014], the authors did not prove their method is able to bound the PLOS. In fact, it can leave an unbounded PLOS when the value of the bias term of the SVM model is in the range of scores used to fit the Weibull model. Recently, @MendesJunior2017 have shown that thresholding the ratio of distances in the feature space for a nearest-neighbor classifier is more accurate at predicting unknown samples in an open-set problem. The effectiveness of working with ratios of decision scores is confirmed by @Vareto2017, in which one of the best rejection thresholds on a face-recognition problem is established based on the ratio of the two highest scores obtained by voting over a set of binary classifiers. It is worth noticing that well-known machine learning areas have been investigated from the point of view of the open-set scenario, e.g., domain adaptation [@Busto2017], taking into account particularities of open-set recognition. Beyond those methods specifically proposed for open-set setups, many other solutions in the literature can be investigated and extended to open-set scenarios. 
In general, any binary classification method that aims at decreasing the false positive rate [@Moraes2016] would potentially recognize unknown samples when composing such classifiers with the one-vs-all approach. Overall, recent solutions for open-set recognition problems have focused on methods that use samples of all known classes for training models for individual classes. That is different from generative approaches, which check if the test sample is in the distribution of each of the known classes, as these methods use data from all classes for generating the model for each class. It differs from classification with a reject option [@Tax2008; @Bartlett2008] in the sense that we do not want just to postpone decision making. Moreover, open-set recognition differs from domain adaptation and from transfer learning in the sense that transferring knowledge from one domain to another does not ensure the ability to identify samples belonging to unknown classes. Specialized Support Vector Machines (SSVMs) {#sec:ssvm} ============ One cannot ensure that the positive class of the traditional SVM has a bounded PLOS, even when the RBF kernel is used. The main characteristic of the proposed SSVM is that a high enough regularization parameter ensures a bounded PLOS for every known class of interest and, consequently, a bounded PLOS at the multiclass level. The regularization parameter is a weight that trades off the empirical risk against the risk of the unknown. In Section \[sec:ssvm-bounding-open-space\], we present how to ensure a bounded PLOS using the RBF kernel. In Section \[sec:ssvm-choosing-bias-term\], we present the formulation of the optimization problem of the SSVM. In Section \[sec:grid-search\], we present the grid-search process for optimizing the parameters. To begin with, in Section \[sec:svm-basics\], we present some basic aspects of the SVM classifier. 
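The one-vs-all composition used throughout—assign the label of the most confident positive binary classifier, or reject as unknown when every binary decision is negative—can be sketched as follows (a minimal sketch; the decision values are assumed to come from some already-trained binary classifiers, and label -1 denotes unknown):

```python
def one_vs_all_open_set(decision_values):
    """decision_values: dict mapping class label -> binary decision value
    (positive means the corresponding binary classifier accepts the sample).
    Returns the label with the highest decision value, or -1 (unknown)
    when every binary classifier rejects the sample."""
    label, value = max(decision_values.items(), key=lambda kv: kv[1])
    return label if value > 0 else -1
```

This is the multiclass-from-binary step; bounding the PLOS of each binary classifier, as the SSVM does, is what bounds the region where any label other than unknown can be produced.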
Basic aspects of Support Vector Machines {#sec:svm-basics} ---------------------------------------- The SVM is a binary classifier that, given a set $X$ of training samples $\mathbf{x}_i \in \mathbb{R}^d$ and the corresponding labels $y_i \in \{-1, 1\}$, $i = 1, \dots, m$, finds a maximum-margin hyperplane that separates the $\mathbf{x}_i$ for which $y_{i} = -1$ from the $\mathbf{x}_i$ for which $y_{i} = 1$ [@Cortes1995]. We consider the soft margin case with parameter $C$. The primal optimization problem is usually defined as $$ \min_{\mathbf{w},b,\xi} \frac{1}{2}||\mathbf{w}||^{2} + C\sum_{i=1}^{m}\xi_{i},$$ $$\begin{aligned} \label{eq:constraint-1} \mbox{s.t.}\ & y_i(\mathbf{w}^{T}\mathbf{x}_{i} + b) \ge 1 - \xi_{i}, \ \forall i,\\ \label{eq:constraint-2} & \xi_{i} \ge 0, \ \forall i. \end{aligned}$$ To solve this optimization problem, we use the Lagrangian method to create the dual optimization problem. In this case, the final Lagrangian is defined as $$\label{eq:lagrangian} \mathcal{L}(\mathbf{w}, b, \xi, \alpha, r) = \sum_{i=1}^{m}\alpha_{i} - \frac{1}{2}||\mathbf{w}||^{2},$$ in which $\alpha_{i} \in \mathbb{R}$, $r_{i} \in \mathbb{R}$, $i = 1, \dots, m$, are the Lagrange multipliers. Then, the optimization problem is defined as $$\begin{aligned} \label{eq:lagrangian-inv} \min_{\alpha}\ & W(\alpha) = -\mathcal{L}(\mathbf{w}, b, \xi, \alpha, r) = \frac{1}{2}||\mathbf{w}||^{2} - \sum_{i=1}^{m}\alpha_{i},\\ \label{eq:slack-constraint} \mbox{s.t.}\ & 0 \le \alpha_{i} \le C, \ \forall i,\\ \label{eq:sum_0_constraint} & \sum_{i=1}^{m}\alpha_{i}y_{i} = 0. 
\end{aligned}$$ The decision function of a test sample $\mathbf{x}$ comes from the constraint in Equation and is defined as $$ f(\mathbf{x}) = \operatorname{sign}(\mathbf{w}^T \mathbf{x} + b) = \operatorname{sign}\left(\sum_{i=1}^{m} y_i \alpha_i \mathbf{x}_i^T \mathbf{x} + b\right).$$ In @Boser1992, the authors proposed a modification of the SVM for the cases in which the training data are not linearly separable in the feature space. Instead of linearly separating the samples in the original space $\mathcal{X}$ of the training samples in $X$, the samples are projected onto a higher dimensional space $\mathcal{Z}$ in which they are linearly separable. This projection is accomplished using the kernel trick. One advantage of this method is that, in addition to handling non-linearly separable data, the optimization problem of the SVM remains almost the same: instead of calculating the inner product $\mathbf{x}^T\mathbf{x'}$, it uses a kernel $K(\mathbf{x}, \mathbf{x'})$ that is equivalent to the inner product $\phi(\mathbf{x})^T\phi(\mathbf{x'})$ in a higher dimensional space $\mathcal{Z}$, in which $\phi: \mathcal{X} \mapsto \mathcal{Z}$ is a projection function. When using the kernel trick, we do not need to know the $\mathcal{Z}$ space explicitly. Using kernels, the decision function of a test sample $\mathbf{x}$ becomes $$\label{eq:decision-function-kernel} f(\mathbf{x}) = \operatorname{sign}\left(\sum_{i=1}^{m} y_i \alpha_i K(\mathbf{x}_i, \mathbf{x}) + b\right).$$ The most used kernel for SVMs is the RBF kernel [@Scholkopf2001], defined as follows. $$\label{eq:kernel-rbf} K(\mathbf{x}, \mathbf{x'}) = e^{-\gamma ||\mathbf{x} - \mathbf{x'}||^2}.$$ It can be proved that, using this kernel, the projection space $\mathcal{Z}$ is an $\infty$-dimensional space. Ensuring a bounded PLOS {#sec:ssvm-bounding-open-space} ----------------------- By simply using the RBF kernel we cannot ensure the PLOS is bounded. 
\[thm:unbounded-positive\] An SVM with RBF kernel has a bounded PLOS if and only if the bias term $b$ is negative.[^1] We know that $$\label{eq:limit} \lim_{d \rightarrow \infty} K(\mathbf{x}, \mathbf{x'}) = 0,$$ in which $K(\mathbf{x}, \mathbf{x'})$ is the RBF kernel and $d = \|\mathbf{x} - \mathbf{x'}\|$. For the cases in which the test sample $\mathbf{x}$ is far away from every support vector $\mathbf{x}_i$, $$ \sum_{i=1}^{m} y_i \alpha_i K(\mathbf{x}_i, \mathbf{x})$$ also tends to 0. From Equation it follows that $$ f(\mathbf{x}) \rightarrow \operatorname{sign}\left(b\right)$$ when $\mathbf{x}$ is far away from the support vectors. Therefore, for negative values of $b$, $f(\mathbf{x})$ is always negative for far away $\mathbf{x}$ samples. That is, only samples in a bounded region of the feature space will be classified as positive. For the only if direction, let $b$ be positive. Then there will exist a distance $d$ such that $\forall i: \|\mathbf{x}_{i}-\mathbf{x}\|>d \implies f(\mathbf{x}) = \operatorname{sign}(b) > 0$, i.e., positively classified samples will be in an unbounded region of the feature space. Notice that Theorem \[thm:unbounded-positive\] can be applied not only to the kernel of Equation  but to any radial basis function [@Buhmann2003] kernel satisfying Equation  [@Souza2010]; however, for the remaining part of this work, we focus on the kernel of Equation . Figure \[fig:figPlottingkernel\] depicts the rationale behind Theorem \[thm:unbounded-positive\]. The $z$ axis represents the decision values that possible test samples $(x, y)$ would have in different regions of the feature space. Training samples are normalized between 0 and 1. Note in the subfigures that for possible test samples far away from the training ones, e.g., $(2, 2)$, the decision value approaches the bias term $b$. 
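The limiting behaviour used in the proof can be checked numerically with the kernel decision function of Equation \[eq:decision-function-kernel\]. This is a sketch with made-up support vectors and dual coefficients (not a trained model): far from the data, every kernel term vanishes and the decision value collapses to the bias term $b$, so its sign alone decides the far-field classification.

```python
import numpy as np

def rbf(x, xp, gamma=1.0):
    """RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.linalg.norm(x - xp) ** 2)

def decision(x, svs, dual, b, gamma=1.0):
    """sum_i y_i alpha_i K(x_i, x) + b (sign omitted to inspect the value)."""
    return sum(c * rbf(sv, x, gamma) for sv, c in zip(svs, dual)) + b

svs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dual = [1.0, -0.5, -0.5]        # hypothetical y_i * alpha_i values
far = np.array([100.0, 100.0])  # far from every support vector

print(decision(far, svs, dual, b=-0.3))  # ~ -0.3: far points classified negative
print(decision(far, svs, dual, b=+0.3))  # ~ +0.3: unbounded positive region
```

With $b<0$ the far field is negatively classified, matching the theorem; with $b>0$ the positive region extends to infinity.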
Note in Figure \[fig:figPlottingkernel3d\_\_boat\_forboundaries\_\_cls03\_\_01\_\_gammaexp+04\] that an unbounded region of the feature space would have samples classified as positive. Consequently, all those samples would be classified as class 3 by the final multiclass-from-binary classifier. In general usage, both positive and negative biases occur, as $b$ depends on the training data. In the case of SVMs without an explicit bias term [@Vogt2002; @Kecman2005], $b=0$ is implicit.[^2] Consequently, the decision function is defined as $$ f(\mathbf{x}) = \operatorname{sign}\left(\sum_{i=1}^{m} y_i \alpha_i K(\mathbf{x}_i, \mathbf{x})\right).$$ For test samples far away from the support vectors, we have that $\sum_{i=1}^{m} y_i \alpha_i K(\mathbf{x}_i, \mathbf{x})$ converges to 0 from below or from above, depending on the training samples. Consequently, a bounded PLOS cannot be ensured in all cases. (Figure \[fig:figPlottingkernel\]: decision boundaries and decision-value surfaces of the three binary classifiers on the toy example; images omitted.) Theorem \[thm:unbounded-positive\] also provides a solution to the problem of the unbounded PLOS. We can ensure a bounded PLOS by using an RBF kernel and ensuring a negative $b$. In Section \[sec:ssvm-choosing-bias-term\], we present a new optimization objective that optimizes the margin while ensuring the bias term $b$ is negative. Optimization to ensure a negative bias term $b$ {#sec:ssvm-choosing-bias-term} ------------------------------------------- As we discussed in Section \[sec:ssvm-bounding-open-space\], we must ensure a negative $b$ to obtain a bounded PLOS. 
For this, we define the optimization problem as $$\label{eq:ssvm-primal-opt} \min_{\mathbf{w},b,\xi} \frac{1}{2}||\mathbf{w}||^{2} + C\sum_{i=1}^{m}\xi_{i} + \lambda b,$$ subject to the same constraints defined in Equations and , in which $\lambda$ is a regularization parameter that trades off between the empirical risk and the risk of the unknown. From Equation , the dual formulation has the same Lagrangian defined in Equation . Consequently, we have to optimize the same function as defined in Equation with the constraint in Equation . However, the constraint in Equation is replaced by the constraint $$\label{eq:sum_lambda_constraint} \sum_{i=1}^{m}\alpha_{i}y_{i} = \lambda.$$ The same sequential minimal optimization (SMO) algorithm proposed by @Platt1998, with the working set selection proposed by @Fan2005, for optimizing SVMs while ensuring the constraint in Equation , can be applied to this optimization containing the constraint in Equation . As the main idea of the algorithm is to ensure that $\sum\alpha_{i}y_{i}$ remains the same from one iteration to the other, before the optimization starts, we initialize $\alpha_{i}$ such that $\sum\alpha_{i}y_{i} = \lambda$. For this, we let $\alpha_{i} = \lambda/m_{p}$, $\forall i$ such that $y_{i} = 1$, in which $m_{p}$ is the number of positive training samples. \[prop:maximum-lambda\] In the soft margin case, with parameter $C$, the maximum valid value for $\lambda$ is $C m_{p}$. From Equation , $0\le\alpha_{i}\le C$. The maximum value of $\lambda=\sum\alpha_{i}y_{i}$ is thus obtained by setting $\alpha_{i} = C$ for $i$ such that $y_{i} = 1$ and setting $\alpha_{i} = 0$ for $i$ such that $y_{i} = -1$. This yields $\lambda \le C m_{p}$. Thus, during optimization, we must ensure $\lambda \le C m_{p}$; otherwise, for $\lambda > C m_{p}$, the constraint in Equation would be broken for some $\alpha_{i}$. Despite Proposition \[prop:maximum-lambda\] allowing $\lambda = C m_{p}$, when it happens, we have that $\alpha_{i} = C$ for $y_{i} = 1$ and $\alpha_{i} = 0$ for $y_{i} = -1$, and there will be no optimization. 
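The initialization and the bound of Proposition \[prop:maximum-lambda\] can be checked numerically. This is a toy sketch with hypothetical labels and values: initializing $\alpha_i = \lambda/m_p$ on the positive samples satisfies $\sum \alpha_i y_i = \lambda$, and setting every positive $\alpha_i$ to $C$ attains the maximum $\lambda = C\,m_p$.

```python
import numpy as np

y = np.array([1, 1, 1, -1, -1])    # toy labels; three positive samples
C = 2.0
m_p = int(np.sum(y == 1))          # m_p = 3
lam = 1.5                          # must satisfy 0 <= lam < C * m_p

# Initialization used before the SMO-style optimization starts:
alpha = np.where(y == 1, lam / m_p, 0.0)
assert np.isclose(np.sum(alpha * y), lam)

# Proposition: the maximum attainable sum(alpha * y) is C * m_p,
# reached when alpha_i = C on positives and 0 on negatives.
alpha_max = np.where(y == 1, C, 0.0)
assert np.isclose(np.sum(alpha_max * y), C * m_p)
```

At $\lambda = C\,m_p$ every positive $\alpha_i$ is pinned at $C$ and every negative at $0$, which is exactly the degenerate no-optimization case described above.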
In this case, despite satisfying the constraints, there is no flexibility for changing the values of $\alpha_{i}$ because, for each pair $\alpha_{i}, \alpha_{j}$ selected by the algorithm [@Fan2005], we must update $\alpha_{i} = \alpha_{i} + \nabla_{\alpha}$, $\alpha_{j} = \alpha_{j} + \nabla_{\alpha}$ when $y_{i} \ne y_{j}$ and $\alpha_{i} = \alpha_{i} - \nabla_{\alpha}$, $\alpha_{j} = \alpha_{j} + \nabla_{\alpha}$ when $y_{i} = y_{j}$. For any $\nabla_{\alpha} \ne 0$, the constraint $0 \le \alpha_{i} \le C$ would break for either $\alpha_{i}$ or $\alpha_{j}$, for any selected pair. Then, in practice, we constrain $\lambda$ to the interval $0 \le \lambda < C m_{p}$. \[prop:negative-bias-term\] There exists some $\lambda$ such that we can obtain a bias term $b < 0$. From the Karush-Kuhn-Tucker (KKT) conditions, the bias term is defined as $$\begin{aligned} b &= y_{i} - \sum_{j=1}^{m}\alpha_{j}y_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)\\ &= y_{i} - \sum_{\substack{j=1:\\y_{j} = 1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = -1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right), \end{aligned}$$ for any $i$ such that $0 < \alpha_{i} < C$. Now, let’s consider two possible cases: (1) $y_{i} = 1$ and (2) $y_{i} = -1$. For **Case (1)**, we have $$b = 1 - \alpha_{i} - \sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = -1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right),$$ as $K\left(\mathbf{x}_{i}, \mathbf{x}_{i}\right) = 1$. Note that $0 < K(\mathbf{x}, \mathbf{x}') \le 1$ for the RBF kernel. To show that there exists a $\lambda$ such that $b < 0$, we analyze the worst case, i.e., when the kernel in the second summation—for negative training samples—is $1$. 
Then, we have $$ b = 1 - \alpha_{i} - \sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = -1}}^{m}\alpha_{j}.$$ From Equation , we have $$\label{eq:sum_lambda_constraint_neg_equality} \sum_{\substack{j=1:\\y_{j} = -1}}^{m}\alpha_{j} = \sum_{\substack{j=1:\\y_{j} = 1}}^{m}\alpha_{j} - \lambda,$$ then $$b = 1 - \sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}\alpha_{j} - \lambda.$$ Analyzing the worst case again, considering $\alpha_{j} = C$ for positive training samples, with $j \ne i$, we have $$\begin{aligned} b &= 1 - C\sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + C\left(m_{p} - 1\right) - \lambda\\ &= 1 + C m_{p} - C - C\sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) - \lambda. \end{aligned}$$ Given that, to ensure $b < 0$ it is sufficient to let $$ \lambda > 1 + C m_{p} - C\left(1 + \sum_{\substack{j=1:\\y_{j} = 1,\\j\ne i}}^{m}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)\right).$$ Given $C \ge 1$, it is always possible to obtain such a $\lambda$ with $\lambda < C m_{p}$. 
For **Case (2)**, we have $$b = - 1 - \sum_{\substack{j=1:\\y_{j} = 1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = -1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right).$$ Again, considering the worst case for the values of the kernel for negative samples and using the equality in Equation , we have $$b = -1 - \sum_{\substack{j=1:\\y_{j} = 1}}^{m}\alpha_{j}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + \sum_{\substack{j=1:\\y_{j} = 1}}^{m}\alpha_{j} - \lambda.$$ Again, taking the greatest possible value for $b$, by setting $\alpha_{j} = C$ for positive samples, we have $$b = -1 - C\sum_{\substack{j=1:\\y_{j} = 1}}^{m}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + C m_{p} - \lambda.$$ In this case, to ensure $b < 0$ it is sufficient to let $$\lambda > C m_{p} - 1 - C\sum_{\substack{j=1:\\y_{j} = 1}}^{m}K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right),$$ which is possible to obtain for any value of $C$. In the proof of Proposition \[prop:negative-bias-term\], we considered a very extreme case. For example, in Case (1)—for $i$ such that $y_{i} = 1$—we considered $K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) = 1$ for $j$ such that $y_{j} = -1$ and $K\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) \approx 0$ for $j$ such that $y_{j} = 1$. It means that all the negative samples have the same feature vector as the sample $\mathbf{x}_{i}$ under consideration and all positive samples are far away from sample $\mathbf{x}_{i}$. In practice, $\lambda$ does not need to be nearly as constrained as in the proof to ensure a negative bias term. More than this, in our experiments with the SSVM, we observed that most of the time the bias term is negative for a binary classifier trained with the one-vs-all approach, i.e., it is often the case that even with $\lambda=0$ the bias will be negative. More details about this behavior are given in Section \[sec:remarks\]. 
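The KKT expression for $b$ used throughout the proof can be evaluated directly for a given model. This is a sketch with hypothetical dual variables, labels, and support vectors; it only illustrates the formula $b = y_i - \sum_j \alpha_j y_j K(\mathbf{x}_i, \mathbf{x}_j)$, not a trained SSVM:

```python
import numpy as np

def rbf(x, xp, gamma=1.0):
    """RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.linalg.norm(x - xp) ** 2)

def bias(i, X, y, alpha, gamma=1.0):
    """b = y_i - sum_j alpha_j y_j K(x_i, x_j),
    valid for any i with 0 < alpha_i < C."""
    return y[i] - sum(alpha[j] * y[j] * rbf(X[i], X[j], gamma)
                      for j in range(len(X)))

X = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
alpha = np.array([0.5, 0.5, 0.6])  # hypothetical dual variables
print(bias(0, X, y, alpha))
```

Whether the resulting $b$ is negative depends on the data and on $\lambda$, which is exactly what the SSVM objective controls.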
In Appendix \[appendix:sec:ssvm-formulation\] we present the complete formulation of the optimization problem for the SSVM classifier.[^3] #### Choosing the $\lambda$ parameter for the SSVM Proposition \[prop:negative-bias-term\] states that we can find a $\lambda$ parameter that ensures a bounded PLOS for the optimization problem presented above. To ensure this, models with a non-negative bias term receive an accuracy of $-\infty$ on the validation set during the grid search. Nevertheless, we cannot ignore that, in special circumstances, certain $\lambda$ values allow a negative bias term during the grid search but not when training on the whole set of training samples. In this case, once the parameters are obtained by grid search, if the obtained $\lambda$ does not ensure a negative bias term for the training set, one would need to retrain the classifier with an increased value for $\lambda$, until a negative bias term is obtained for the final model. However, for grid search, we assume the distribution of the validation set, a subset of the training set, represents the distribution of the training set; that is one possible explanation as to why, in our experiments, we did not need to retrain the classifier with a value of $\lambda$ larger than the one obtained during grid search: all values of $\lambda$ obtained during grid search were able to ensure a negative bias term for all binary classifiers. Grid search in an open-set setup {#sec:grid-search} -------------------------------- Two different approaches for grid searching the parameters in SVM classifiers have been considered in the literature thus far [@Chang2011]: (1) the external approach and (2) the internal one. These forms were originally defined for a closed-set setting [@Chang2011], and properly extended to open-set scenarios [@MendesJunior2017]. In the external approach, the grid search on the parameters happens at the multiclass level, so that all binary classifiers that compose the multiclass classifier share the same set of parameters. 
On the other hand, in the internal approach, each binary classifier performs its own grid search. To perform a grid search on the training set $X$, a subset $F \subset X$ is used to fit a classifier while performing the grid search. Subset $V=X-F$ then forms the validation set. There are two forms of dividing the training set $X$ into the fitting set $F$ and the validation set $V$ while performing the grid search: the closed-set (traditional) and the open-set forms [@MendesJunior2017]. Both forms can be applied to both the external and internal approaches. In the open-set grid search, the set of classes in $F$ is a proper subset of the classes in $X$. The open-set grid search simulates the real scenario: the set of classes in $X$ is a proper subset of the classes that can appear at system usage.[^4] #### External closed-set grid search As an example of the external closed-set grid search, for the multiclass classifier, $80\%$ of the samples in $X$ are randomly selected for $F$ and $20\%$ for $V$. $F$ can have representative samples of all $n$ classes available for training. Then, the multiclass classifier is trained with $F$ and evaluated with $V$, testing possible parameters. This is the traditional form of the external grid search: we do not mind whether the negative samples in $F$ represent the $n$ classes available for training. #### External open-set grid search In the external open-set grid search, for the multiclass classifier, $\left\lfloor n/2 \right\rfloor$ of the $n$ classes available for training are taken for $V$. Among the samples of the remaining $\left\lceil n/2 \right\rceil$ classes, $80\%$ are randomly selected for $F$ and $20\%$ for $V$. Then, the multiclass classifier is trained with $F$ and evaluated with $V$, testing possible parameters. #### Internal closed-set grid search In the internal closed-set grid search, for a binary classifier, $80\%$ of the samples in $X$ are randomly selected for $F$ and $20\%$ for $V$. We ensure a random partitioning such that both $F$ and $V$ have representative samples of the positive and negative classes. 
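The splitting schemes above can be sketched as follows. This is an illustrative sketch under our own naming (`closed_set_split`, `open_set_split`), not code from the paper; it shows the external form, in which $\left\lfloor n/2 \right\rfloor$ of the training classes go entirely to the validation set.

```python
import random

def closed_set_split(samples_by_class, fit_frac=0.8, rng=None):
    """Closed-set split: 80% of every class goes to the fitting set F
    and 20% to the validation set V."""
    rng = rng or random.Random(0)
    F, V = [], []
    for label, samples in samples_by_class.items():
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(fit_frac * len(shuffled))
        F += [(x, label) for x in shuffled[:cut]]
        V += [(x, label) for x in shuffled[cut:]]
    return F, V

def open_set_split(samples_by_class, fit_frac=0.8, rng=None):
    """Open-set split (external form): floor(n/2) of the n training
    classes go entirely to V, simulating classes unknown at fitting time."""
    rng = rng or random.Random(0)
    labels = sorted(samples_by_class)
    rng.shuffle(labels)
    n_held_out = len(labels) // 2
    held_out, kept = labels[:n_held_out], labels[n_held_out:]
    F, V = closed_set_split({c: samples_by_class[c] for c in kept},
                            fit_frac, rng)
    for c in held_out:
        V += [(x, c) for x in samples_by_class[c]]
    return F, V

data = {c: list(range(10)) for c in "abcd"}
F, V = open_set_split(data)
fit_classes = {label for _, label in F}
val_classes = {label for _, label in V}
# The fitting set sees only a proper subset of the training classes.
```

The internal variants follow the same scheme per binary classifier, additionally guaranteeing that the positive class has representative samples in $F$.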
In this case, the negative samples in $V$ can have representative samples of all $n-1$ negative classes, as the partitioning between $F$ and $V$ is random. That is, when using $V$ to obtain the accuracy during the grid search, all classes in $V$ can be known a priori by the classifier trained with $F$. Then, the binary classifier is trained with $F$ and evaluated with $V$, testing possible parameters. This is the traditional form of the internal grid search: we do not mind whether the negative samples in $F$ represent the $n-1$ negative classes of the training set. #### Internal open-set grid search In the internal open-set grid search, for a binary classifier, $\left\lfloor (n-1)/2 \right\rfloor$ of the $n-1$ negative classes are taken for $V$. Among the samples of the remaining $\left\lceil (n-1)/2 \right\rceil + 1$ classes (considering the positive class), $80\%$ are randomly selected for $F$ and $20\%$ for $V$. A random partitioning such that both $F$ and $V$ have representative samples of the positive and negative classes must also be guaranteed. Then, the binary classifier is trained with $F$ and evaluated with $V$, testing possible parameters. Figure \[fig:figGridSearch\] depicts how the set $X$ of training samples is partitioned for the closed- and open-set grid search. ![ Difference between the closed- and open-set grid search. Each colored box represents a known class during training. The blue part represents the samples used to fit a classifier during the grid-search procedure (fitting set). The red part represents the validation set used to evaluate the classifier. The parameters with the best accuracy on the validation set are selected for training the classifier. For the open-set grid search depicted in (), some of the known classes are considered as unknown during the grid-search procedure. For both the external and internal approaches, the closed- and open-set variations are split the same way, ensuring, in the case of the internal grid search, that the positive class has representative samples in the fitting set. 
[]{data-label="fig:figGridSearch"}]({\striplastbar{figs/}__}\filename.pdf "fig:"){width="95.00000%"} \[fig:\] Experiments {#sec:experiments} =========== In this section, we present the experiments and details for comparing the proposed method with the existing ones in the literature, discussed in Section \[sec:related-work\]. In Section \[sec:experiments-baselines-summary\], we summarize the baselines. In Section \[sec:evaluation-measures\], we describe the evaluation measures used in our experiments. In Section \[sec:datasets\], we describe the data sets and features. In Section \[sec:results-decision-boundaries\], we present initial results regarding the behavior of the methods in feature space. Finally, we present the results with statistical tests in Section \[sec:results\], finishing this section with some remarks in Section \[sec:remarks\]. Baselines {#sec:experiments-baselines-summary} --------- For a fair comparison, we have implemented both the closed- and open-set grid search approaches for the existing methods in the literature. We denote a given classifier using the closed- and open-set grid search as and , respectively. All compared methods are SVM-based. We use the OVA approach at the multiclass level except for the methods specified with subscript OVO. In this work, we employ the one-class–based method of @Pritsos2013 for comparison, hereinafter referred to as . Although @Pritsos2013 uses a as the method to define a rejection function for each class, one could also use a , since it is also a form of one-class classifier. We also implemented this alternative to and refer to it as . For the types of methods that train a rejection method and a multiclass classifier, we use implementations based on and as well: , , , and . indicates methods that implement one rejection function first and a multiclass classification method for the case in which the rejection function accepts the test sample as known. 
Despite dealing with a multiclass problem, @Costa2012 [@Costa2014] evaluated their method in a binary fashion by obtaining the accuracy of individual binary classifiers. They did not present the multiclass version of the classifier directly. Therefore, in this work, we consider their method with the OVA approach in the experiments. The test sample is classified as unknown when no binary classifier classifies it as positive. Complementarily, it is classified as the most confident class when one or more classifiers tag it as positive. Besides those methods, we have employed [@Chang2011], [@Scheirer2013], [@Scheirer2014], [@Jain2014] as baselines for our experiments. Despite using an model, also makes use of for bounding and . We summarize the methods in Table \[tab:baselines-summary\]. For , , and , we perform external grid search because we use the multiclass implementation made available by the authors [@Scheirer2013; @Scheirer2014; @Jain2014]. and internally make use of the multiclass implementation available in , so we perform external grid search for them as well. We have included and in the experiments to better evaluate the effectiveness of the rejection function, as the with the approach is closed-set and cannot classify a sample as unknown once the rejection function accepts it. All other methods perform internal grid search for both the closed- and open-set procedures. Except for —for which @Scheirer2013 observed that a linear kernel performs better—all other methods use the RBF kernel. Among all those methods, only the one-class–based methods and are able to ensure a bounded open space. As open-set recognition is a growing research area, we highlight that one advantage of our proposed method, compared to -based ones, relies on its simplicity. The proposed method is defined purely as a convex optimization problem, as is the case for the . -based methods require post-processing after the model is trained. 
At testing phase, our proposed method requires just the prediction from the obtained model, while -based methods require extra predictions from models for each binary classifier that composes the multiclass one. Evaluation measures {#sec:evaluation-measures} ------------------- Most of the evaluation measures proposed in the literature focus on binary classification, e.g., the traditional classification accuracy, , etc. [@Sokolova2009]. Even the ones proposed for multiclass scenarios—e.g., average accuracy, multiclass , etc. [@Sokolova2009]—usually consider only the closed-set scenario. Recently, @MendesJunior2017 have proposed and for multiclass open-set recognition problems. In this work, we apply such measures and further extend the normalized accuracy (NA) to the harmonic normalized accuracy (HNA), based on the harmonic mean [@Mitchell2004]. $$\glssymbol{hna} = \begin{cases} 0, \text{if } \glssymbol{aks} = 0 \text{ or } \glssymbol{aus} = 0,\\ \frac{2}{\left(\frac{1}{\scriptsize\glssymbol{aks}} + \frac{1}{\scriptsize\glssymbol{aus}}\right)}, \text{otherwise}. \end{cases}$$ In this case, AKS is the accuracy on the known samples, i.e., the accuracy obtained on the testing samples that belong to one of the classes with which the classifier was trained, and AUS is the accuracy on the unknown samples, i.e., the accuracy on the testing samples whose classes have no representative samples in the training set. One advantage of HNA over NA is that when a classifier performs poorly on AKS or AUS, HNA drops toward 0. A biased classifier that blindly classifies every sample as unknown would receive an NA of 0.5 while its HNA would be 0. Notice, however, that 0.5 is not the worst possible value for NA, as some methods—trying to correctly predict test labels—can have their NA smaller than 0.5. The worst case for NA—when it is 0—would be when all known samples are classified as unknown and all unknown samples are classified as belonging to one of the known classes. On the other hand, the worst case for HNA would be when at least one of such cases happens. For the experiments in this work, we have considered , , , and . 
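The two measures above can be computed as follows; the sketch assumes equal weights for AKS and AUS in NA, which is one common choice.

```python
def normalized_accuracy(aks, aus):
    """NA: mean of the accuracy on known samples (AKS) and on unknown
    samples (AUS); equal weights are assumed here."""
    return (aks + aus) / 2.0

def harmonic_normalized_accuracy(aks, aus):
    """HNA as defined in the text: 0 whenever AKS or AUS is 0,
    otherwise the harmonic mean of AKS and AUS."""
    if aks == 0 or aus == 0:
        return 0.0
    return 2.0 / (1.0 / aks + 1.0 / aus)

# A biased classifier that blindly rejects everything as unknown:
# AKS = 0 and AUS = 1, so NA is a misleading 0.5 while HNA drops to 0.
na = normalized_accuracy(0.0, 1.0)
hna = harmonic_normalized_accuracy(0.0, 1.0)
```

As with any harmonic mean, HNA is never larger than NA and penalizes imbalance between the two component accuracies.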
For a fair comparison with previous methods in the literature [@Scheirer2013; @Scheirer2014; @Jain2014]—which only showed performance figures using the traditional —we also present results regarding the traditional multiclass [@Sokolova2009] considering both and . Data sets {#sec:datasets} --------- For validating the proposed method and comparing it with existing methods, we consider . In the 15-Scenes data set (with 15 classes), the images are represented by a bag-of-visual-words vector created with soft assignment [@Gemert2010] and max pooling [@Boureau2010], based on a codebook of 1000 Scale Invariant Feature Transform (SIFT) codewords [@Lowe2004]. The Letter data set (with 26 classes) represents the letters of the English alphabet (black-and-white rectangular pixel displays). In the Auslan data set (with 95 classes), the data was acquired using two Fifth Dimension Technologies (5DT) gloves and two Ascension Flock-of-Birds magnetic position trackers. In the Caltech-256 data set (with 256 classes), the feature vectors follow a bag-of-visual-words characterization approach, with features acquired with dense sampling, the SIFT descriptor for the points of interest, hard assignment [@Gemert2010], and average pooling [@Boureau2010]. Finally, for the ALOI data set (with 1000 classes), the features were extracted with the Border/Interior (BIC) descriptor [@Stehling2002]. These or other data sets could be used with different characterizations; however, in this work, we focus on the learning part of the problem rather than on the feature characterization one. In Table \[tab:datasets\], we summarize the main features of the data sets considered in terms of number of samples, number of classes, dimensionality, and approximate number of samples per class.[^5] [crrrr]{} **Data set** & **\# samples** & **\# classes** & **\# features** & **\# samples/class**\ **15-Scenes** & & & &\ **Letter** & & & &\ **Auslan** & & & &\ **Caltech-256** & & & &\ **ALOI** & & & &\ Decision regions {#sec:results-decision-boundaries} ---------------- We start presenting results on artificial data sets so that the decision regions of each classifier can be visualized. 
For these cases, we show the region of the feature space in which a possible test sample would be classified as one of the known classes or as unknown. We also show how each classifier handles the open space. Figure \[fig:cone-torus\] depicts the decision regions for the cone-torus data set. In Figure \[subfig:cone-torus\_\_mcossvm\_ova\_gsic\], as expected, we see that the SSVM gracefully bounds the open space; any sample that would appear in the white region would be classified as unknown. We present decision regions for other data sets in Appendix \[appendix:sec:more-results\]. ![ Decision regions for the cone-torus data set. Non-white regions represent the region in which a test sample would be classified as belonging to the same class of the samples with the same color. All samples in the white regions would be classified as unknown. []{data-label="fig:cone-torus"}]({\striplastbar{figsR2/}__}figBoundaries__cone-torus_forboundaries\filename__color.png "fig:"){width="\textwidth"} \[subfig:cone-torus\] Results {#sec:results} ------- We performed a series of experiments simulating an open-set scenario in which 3, 6, 9, and 12 classes are available for training the classifiers. The remaining classes of each data set are unknown at training time and only appear at testing time. Since different data sets have different numbers of known classes, the fraction of unknown classes, or the openness [in the sense of @Scheirer2013], varies per data set. For each data set, method, and number $n$ of available classes, we ran 10 experiments by choosing $n$ random classes for training among the classes of the data set. 
In addition, the same samples used for training one classifier $C_i$ are used when training another classifier $C_j$ (a similar setup is adopted for the testing and validation sets), which is referred to as a paired experiment. Figures \[fig:graphs\_ossvm\_R2\_normal\_open/na/na\]–\[fig:graphs\_ossvm\_R2\_normal\_open/mafm/osfmM\] show the results considering NA, HNA, and the macro-averaged open-set f-measure, respectively, for all data sets. In those figures, we compare only methods implemented with open-set grid search and thus we have excluded the remaining methods from them. We see that, in general, the SSVM maintains a good accuracy among the data sets, while the other methods have a good accuracy for one or two data sets but do not maintain a similar accuracy for the others. We present additional results considering other measures—, , and —and methods in Appendix \[appendix:sec:more-results\]. (Per-data-set plots of NA, HNA, and the macro-averaged open-set f-measure for the 15-Scenes, ALOI, Auslan, Caltech-256, and Letter data sets.) 
The superiority of the SSVM is confirmed by the Wilcoxon test using the Holm method to adjust each of the paired p-values to account for multiple comparisons [@Demsar2006]. For the Wilcoxon test, we used the mean of the 10 experiments for each pair of data set and method. In Table \[tab:wilcoxon-open\], we present the comparison among the methods using the open-set grid search. Once again, in virtually all cases, the outperformed (with statistical significance) all the baselines, except when compared with : in this case there is no statistical significance for most of the measures. When comparing with , performs better with statistical significance for all measures. It is worth noticing that @Jain2014 proposed with the scheme, which has the same principle as the open-set grid search later formalized by @MendesJunior2017. The Wilcoxon statistical test verifies whether the difference in a performance measure is significant. Aiming at evaluating whether the frequency of wins is statistically significant, we have performed an additional Binomial test, presented in Appendix \[appendix:sec:more-results\]. According to the Binomial tests, performs better than with statistical significance in most of the cases. We have compared with separately to better organize the results. In Table \[tab:wilcoxon-ocbb\], we present the Wilcoxon statistical test comparing with methods. We can see in Table \[tab:wilcoxon-ocbb\] that performs better in all cases, in most of them with statistical significance. Finally, we also investigate the efficacy of the open-set grid search by comparing the open- and closed-set grid search versions of each classifier. We present this comparison, with the Wilcoxon test, in Table \[tab:wilcoxon-open-vs-closed\]. For some methods—, , , and —implementations with open-set grid search perform consistently better than their closed-set counterparts. 
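The Holm adjustment applied to the paired p-values can be sketched as follows. The raw p-values below are hypothetical placeholders, not values from our experiments; in practice each would come from a Wilcoxon signed-rank test over the paired per-data-set means.

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment of p-values for multiple paired
    comparisons; returns adjusted p-values in the original order,
    capped at 1 and kept monotone over the sorted sequence."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvalues[idx])
        running_max = max(running_max, adj)
        adjusted[idx] = running_max
    return adjusted

# Hypothetical raw p-values, one per pairwise comparison of methods.
raw = [0.001, 0.020, 0.049, 0.300]
adj = holm_adjust(raw)
```

Note that the raw value 0.049 would be significant at the 0.05 level on its own, but not after the Holm correction.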
For the specific case of , it seems that the closed-set form is usually preferable; however, and point to better performance, with statistical difference, for the open-set version. For , , and , in general the measures point towards the open-set grid search; however, this is not a consistent result.[^6] Remarks {#sec:remarks} ------- Note that even the traditional obtained better results than some of the methods in the literature specially tailored for the open-set scenario. We observed that the higher the $\gamma$ parameter, the better the ’s performance in the open-set scenario. When grid searching the $\gamma$ parameter, the implementation used in this work chooses the higher $\gamma$ value in case of a tie on the validation set. In many cases, there is a statistical difference between this implementation and an implementation that chooses the smallest $\gamma$ in case of ties. Therefore, the used herein is a version optimized for the open-set scenario and it is expected to outperform the traditional implementation. Another remark is about the frequency of negative bias terms in the binary classifiers that compose the . Most of the binary classifiers for the OVA approach already have the correct negative bias term, as shown in Table \[tab:correct-bias-term-frequency\]. To better understand the reason, we also obtained the frequency of binary classifiers with a negative bias term using the OVO approach, also shown in Table \[tab:correct-bias-term-frequency\]. In this case, only about half of the binary classifiers have a negative bias term. An informal explanation for this behavior is that in the OVA approach we have more negative than positive samples—and more than one class in the negative set. Then, it is more likely that the negative samples “surround” the positive ones, helping the classifier to create a bounded region for the positive class. This intuition is confirmed by Figure \[fig:figPlottingkernel\]. 
For both class 1 and class 2, the classifier creates a bounded region (negative bias term) because class 3 is negative for those binary classifiers and “surrounds” the positive class in both cases. Considering class 3 as positive, we have no negative samples surrounding the positive class. That is why, in this case, the region is unbounded (non-negative bias term). The high frequency of negative bias terms for the OVA approach explains why some authors in the literature have been reporting good accuracy for detection problems using s with kernels. For a detection problem, we have one class of interest and multiple others that we consider as negative among those to which we have access for training. As the number of negative samples is usually larger, it is more likely to obtain a classifier with a bounded region for detection problems. ---------- -------- -------- -------- -------- -------- OVA approach 99.00  97.67  99.00  98.00  99.67  ---------- -------- -------- -------- -------- -------- : Percentage of binary classifiers with a negative bias term for the OVA and OVO approaches.[]{data-label="tab:correct-bias-term-frequency"} Although in most cases the obtains the correct bias term with the OVA approach, the optimization problem presented in Section \[sec:ssvm\] also optimizes the bias term; that is, it optimizes for recognition in the open-set scenario. Conclusions and future work {#sec:conclusion} =========================== In this work, we gave a necessary and sufficient condition for the with RBF kernel to have a finite open space. We then showed that, by reformulating the optimization problem to simultaneously optimize the margin and ensure a negative bias term, the open space is bounded and we obtain a formal open-set recognition algorithm. The proposed extends the traditional ’s optimization problem. The objective function is changed in the primal problem, but the Lagrangian for the dual problem remains the same. In the dual problem for the , only a single constraint differs from the ’s dual problem. 
The same SMO algorithm [@Platt1998] can be used, ensuring the new constraint is satisfied between iterations. Also, the same working set selection algorithm [@Fan2005] can be applied. Consequently, the method is easy to derive from an existing implementation. A limitation of the proposed method is that it can be applied only to specific kinds of kernels: those that are monotonically decreasing as the samples get farther apart from each other. Among the well-known kernels, only the RBF kernel has this property. Another limitation of this work is the lack of a proof ensuring that the parameters selected during the grid search phase can always generate, in the training phase, a model with a negative bias term bounding the open space. As future work, one can investigate alternative forms of ensuring a bounded open space for specific implementations of s that do not rely on the bias term—such as the ones in @Vogt2002 and @Kecman2005. As shown in Section \[sec:ssvm-bounding-open-space\], the without the bias term cannot ensure a bounded open space, as this depends on the shape of the training data. However, a simple solution can be obtained by training the without the bias term and establishing an artificial negative bias term in the decision function. In this case, research can be done on how to obtain this artificial bias term. Another future work is to investigate the properties of the with the OVO approach. In a binary classifier, at least one of the two classes will always have an infinite region—if one is bounded, the other must include all the remaining space and so it must be infinite. We observed experimentally that, according to the way the probability is calculated for each binary classifier [@Platt2000; @Lin2007] and the way the probability estimates are combined at the multiclass level [@Wu2004], depending on the threshold established, a bounded open space can occur but cannot be ensured. Future work relies on investigating mechanisms to always ensure a bounded open space and, consequently, a limited risk. 
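The artificial-bias idea mentioned above can be sketched as follows: with an RBF kernel, the bias-free score vanishes far from all support vectors, so adding any fixed negative bias yields a bounded positive region. The toy model below (one support vector, hand-picked coefficients, and the name `b_art`) is purely illustrative, not a trained classifier.

```python
import math

def rbf(x, z, gamma=1.0):
    """RBF kernel value: decays monotonically to 0 as x and z move
    apart, which is the kernel property required by the method."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def decision(x, support, alphas, labels, b_art, gamma=1.0):
    """Bias-free SVM score plus an artificial negative bias b_art < 0.
    Far from every support vector all kernel values vanish, so the
    score tends to b_art < 0 and the positive region is bounded."""
    score = sum(a * y * rbf(x, s, gamma)
                for a, y, s in zip(alphas, labels, support))
    return score + b_art

# Toy model (illustrative): one positive support vector at the origin.
support, alphas, labels, b_art = [(0.0, 0.0)], [1.0], [+1], -0.5
near = decision((0.1, 0.1), support, alphas, labels, b_art)
far = decision((10.0, 10.0), support, alphas, labels, b_art)
```

Here `near` is positive (close to the support vector) while `far` is negative, so samples far from all training data are rejected as unknown.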
This is worth investigating because some works in the literature have presented better results with the OVO than with the OVA approach for closed-set problems [@Galar2011]. We already performed some initial experiments using the proposed herein as a binary classifier for the OVO approach—rather than the OVA configuration—and we verified that ensuring a bounded region for every binary classifier does not ensure a bounded open space in this new configuration. We then hypothesize that, for the OVO approach, this investigation should be accomplished in the probability estimation and/or probability combination steps. Finally, with the recent trend of deep neural networks for a range of problems, employing deep features with open-set classification methods is worth investigating in future work. In Appendix \[appendix:sec:imagenet-results\], we present preliminary results in this direction with and some baselines. Complete SSVM formulation {#appendix:sec:ssvm-formulation} ========================= The optimization problem for the SSVM classifier is defined as $$\begin{aligned} \min_{\mathbf{w},b,\xi}\ & \frac{1}{2}||\mathbf{w}||^{2} + C\sum_{i=1}^{m}\xi_{i} + \lambda b,\\ \mbox{s.t.}\ & y_{i}\left(\mathbf{w}^{T}\mathbf{x}_{i} + b\right) - 1 + \xi_{i} \ge 0,\\ & \xi_{i} \ge 0. 
\end{aligned}$$ Using the Lagrangian method, we have the Lagrangian defined as $$\begin{aligned} \label{appendix:eq:lagrangian-initial} \mathcal{L}(\mathbf{w}, b, \xi, \alpha, r) &= \frac{1}{2}||\mathbf{w}||^{2} + C\sum_{i=1}^{m}\xi_{i} + \lambda b - \sum_{i=1}^{m}r_{i}\xi_{i} \nonumber \\ &- \sum_{i=1}^{m}\alpha_{i}\left[y_{i}\left(\mathbf{w}^{T}\mathbf{x}_{i} + b\right) - 1 + \xi_{i}\right], \end{aligned}$$ in which $\alpha_{i} \in \mathbb{R}$ and $r_{i} \in \mathbb{R}$, $i = 1, \dots, m$, are the Lagrange multipliers. As we first want to minimize with respect to $\mathbf{w}$, $b$, and $\xi_{i}$, we must ensure $$\nabla_{\mathbf{w}}\mathcal{L} = \frac{\partial}{\partial b}\mathcal{L} = \frac{\partial}{\partial\xi_{i}}\mathcal{L} = 0.$$ Consequently, we have $$\begin{aligned} \label{appendix:eq:gradient-w} \mathbf{w} - \sum_{i=1}^{m}\alpha_{i}y_{i}\mathbf{x}_{i} = 0 \implies \mathbf{w} = \sum_{i=1}^{m}\alpha_{i}y_{i}\mathbf{x}_{i},\\ \label{appendix:eq:derivative-b} \lambda - \sum_{i=1}^{m}\alpha_{i}y_{i} = 0 \implies \sum_{i=1}^{m}\alpha_{i}y_{i} = \lambda,\\ \label{appendix:eq:derivative-xi} C - \alpha_{i} - r_{i} = 0 \implies r_{i} = C - \alpha_{i}. \end{aligned}$$ As the Lagrange multipliers $\alpha_{i}$ and $r_{i}$ must be nonnegative, from Equation we have the constraint $0 \le \alpha_{i} \le C$ as a consequence, in the dual problem, of the soft-margin formulation. This is the same constraint we have in the traditional formulation of the classifier. Using Equations – to simplify the Lagrangian in Equation , we have $$ \mathcal{L}(\mathbf{w}, b, \xi, \alpha, r) = \sum_{i=1}^{m}\alpha_{i} - \frac{1}{2}||\mathbf{w}||^{2},$$ i.e., the same Lagrangian as in the traditional optimization problem. The optimization of the bias term $b$ relies on the constraint in Equation . 
Therefore, the dual optimization problem is defined as $$\begin{aligned} \min_{\alpha}\ & W(\alpha) = - \mathcal{L}(\mathbf{w}, b, \xi, \alpha, r) = \frac{1}{2}||\mathbf{w}||^{2} - \sum_{i=1}^{m}\alpha_{i},\\ \mbox{s.t.}\ & 0 \le \alpha_{i} \le C, \ \forall i,\\ & \sum_{i=1}^{m}\alpha_{i}y_{i} = \lambda. \end{aligned}$$ Additional examples, results and discussion {#appendix:sec:more-results} =========================================== In this section, we present additional results complementing those presented in Sections \[sec:results-decision-boundaries\] and \[sec:results\]. In Figures \[fig:boat\]–\[fig:regular\], we present more examples of decision regions of different classifiers for artificial data sets, complementary to those presented in Section \[sec:results-decision-boundaries\]. In those figures, we see that, depending on the shape of the data, the is not able to bound the open space, as in Figure \[subfig:boat\_\_mcsvm\_ova\_gsic\_highGamma\_fixedC\] for the blue class, while for other data sets, e.g., in Figures \[subfig:four-gauss\_\_mcsvm\_ova\_gsic\_highGamma\_fixedC\] and \[subfig:regular\_\_mcsvm\_ova\_gsic\_highGamma\_fixedC\], we observe that the is able to bound the open space for each known class. In Figures \[subfig:boat\_\_mcsvmdbc\_ova\_gsic\] and \[subfig:four-gauss\_\_mcsvmdbc\_ova\_gsic\], we notice that the behavior of the is similar to the ’s, as it simply moves the hyperplane towards the positive class. As the translation of the hyperplane is based on the , measured on the training data for each binary classifier, we can see in Figure \[subfig:regular\_\_mcsvmdbc\_ova\_gsic\] that some binary classifiers dominate over others when combining them for the multiclass extension. As expected, in Figures \[subfig:boat\_\_svm1vsll\_ovx\_gsec\]–\[subfig:regular\_\_svm1vsll\_ovx\_gsec\], leaves an unbounded open space. However, the confusion shown in Figure \[subfig:regular\_\_svm1vsll\_ovx\_gsec\] happens because the is only . 
In Figures \[subfig:boat\_\_mcocsvm\_ova\_gsic\]–\[subfig:regular\_\_mcocsvm\_ova\_gsic\], we can observe the highly specialized behavior of the , both when applied with the approach (Figures \[subfig:boat\_\_mcocsvm\_ova\_gsic\]–\[subfig:regular\_\_mcocsvm\_ova\_gsic\]) and for classification (Figures \[subfig:boat\_\_mcocbbsvm\_ova\_gsic\]–\[subfig:regular\_\_mcocbbsvm\_ova\_gsic\]). In Figures \[subfig:boat\_\_mcsvdd\_ova\_gsic\]–\[subfig:regular\_\_mcsvdd\_ova\_gsic\], -based methods have a behavior similar to that of -based ones, however with better generalization ability. In Figures \[subfig:boat\_\_wsvm\_ovx\_gsec\]–\[subfig:regular\_\_wsvm\_ovx\_gsec\], we can notice the prevalence of the one-class model of the , although not as specialized as the , due to the parameter adjustment performed by the grid search. We also note in Figures \[subfig:boat\_\_wsvm\_ovx\_gsec\] and \[subfig:four-gauss\_\_wsvm\_ovx\_gsec\] that the binary model of the is able to better separate among the known classes, compared to (Figures \[subfig:boat\_\_mcocsvm\_ova\_gsic\] and \[subfig:four-gauss\_\_mcocsvm\_ova\_gsic\]) and (Figures \[subfig:boat\_\_mcsvdd\_ova\_gsic\] and \[subfig:four-gauss\_\_mcsvdd\_ova\_gsic\]). In Figures \[subfig:four-gauss\_\_pisvm\_ovx\_gseo\] and \[subfig:regular\_\_pisvm\_ovx\_gseo\], we can see that is able to bound the open space for every class; however, as seen in Figure \[subfig:boat\_\_pisvm\_ovx\_gseo\], that is not always the case. The authors of proposed the method to optimize according to the ; however, they did not prove a bounded open space for the method. Finally, in Figures \[subfig:boat\_\_mcossvm\_ova\_gsic\]–\[subfig:regular\_\_mcossvm\_ova\_gsic\], we observe the behavior of the proposed and how it gracefully bounds the open space. 
![ Decision regions for the boat data set. Non-white regions represent the region in which a test sample would be classified as belonging to the same class of the samples with the same color. All samples in the white regions would be classified as unknown. []{data-label="fig:boat"}]({\striplastbar{figsR2/}__}figBoundaries__boat_forboundaries\filename__color.png "fig:"){width="\textwidth"} \[subfig:boat\] ![ Decision regions for the four-gauss data set. Non-white regions represent the region in which a test sample would be classified as belonging to the same class of the samples with the same color. All samples in the white regions would be classified as unknown. []{data-label="fig:four-gauss"}]({\striplastbar{figsR2/}__}figBoundaries__four-gauss_forboundaries\filename__color.png "fig:"){width="\textwidth"} \[subfig:four-gauss\] ![ Decision regions for the regular data set. Non-white regions represent the region in which a test sample would be classified as belonging to the same class of the samples with the same color. 
All samples in the white regions would be classified as unknown. []{data-label="fig:regular"}]({\striplastbar{figsR2/}__}figBoundaries__regular_forboundaries\filename__color.png "fig:"){width="\textwidth"} \[subfig:regular\] In Section \[sec:results\], we presented the results for , , and . In Figures \[fig:graphs\_ossvm\_R2\_normal\_open/mifm/osfmm\]–\[fig:graphs\_ossvm\_R2\_normal\_open/aus/aus\], we present complementary results for methods with open-set grid search regarding other measures: , , , , and , respectively. In Figures \[fig:graphs\_ossvm\_R2\_normal\_open/mifm/osfmm\]–\[fig:graphs\_ossvm\_R2\_normal\_open/bbmifm/fmm\], for the results regarding , , and , we see that does not obtain the best behavior in all cases but it consistently appears in the top positions for all the data sets, while the other methods have better results for one or two data sets and lower performance for the others. Overall, the has better performance, with statistical difference, for those measures, as shown in Tables \[tab:wilcoxon-open\] (Wilcoxon test) and \[tab:binomial-open\] (Binomial test). In Figures \[fig:graphs\_ossvm\_R2\_normal\_open/aks/aks\] and \[fig:graphs\_ossvm\_R2\_normal\_open/aus/aus\], we show the results regarding the measures that compose both NA and HNA: AKS and AUS, respectively. We observe in those figures that is able to keep a reasonable accuracy for both AKS and AUS. 
(Figures referenced above: per-measure result graphs on the 15-scenes, aloi, auslan, caltech-256, and letter datasets; image panels not reproduced here.)
Regarding  methods, in Section \[sec:results\] we presented only the Wilcoxon statistical tests (Table \[tab:wilcoxon-ocbb\]). Complementing those results, in Figures \[fig:graphs\_ossvm\_R2\_normal\_ocbb/na/na\]–\[fig:graphs\_ossvm\_R2\_normal\_ocbb/mafm/osfmM\], we present accuracies , , and , respectively, for  methods compared to . We complement the analysis with Binomial statistical tests in Table \[tab:binomial-ocbb\], where we can see that  performs better than any  method.

(Accuracy graphs for the  comparison on the 15-scenes, aloi, auslan, caltech-256, and letter datasets; image panels not reproduced here.)

Extra experiments with deep features {#appendix:sec:imagenet-results}
====================================

With
recent advances in deep learning [@LeCun2015; @Russakovsky2015], the machine learning literature has received many works based on those neural networks. Deep features have been explored together with other methods—e.g., linear —for final classification. Although it is not the focus of our work, we have performed an initial experiment on  based on deep features. However, the results we obtained are not conclusive, as using deep features for open-set experiments raises many details that should be studied in future research. In this section, we describe some of these details for future investigation, along with preliminary results. We performed experiments on 360 classes of  that have no overlap with the 1000 classes in . Those images were made available by @Bendale2016. The network used for feature extraction on those 360 classes was trained on . We used a different  for training the network to avoid treating as unknown any classes that could be known from the point of view of the network, i.e., classes that the network has learned to represent. We trained a  network and extracted the features from its last pooling layer. We applied  to reduce the 1024 features to 100. Out of those 360 classes, we performed experiments with the same setup presented in Section \[sec:results\]: we considered 3, 6, 9, and 12  for training the methods, selecting 10 random class subsets per number of available classes—40 experiments in total. An ideal experiment would train one individual network per experiment—out of those 40 experiments—because this way we could ensure the network only knows what is known from the point of view of the experiment we are about to perform. As this is not the focus of the current work, we simply trained the network on a separate . We did avoid training the network on  itself, because the classes we would consider as unknown in our experiments would not be truly unknown from the point of view of the network.
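The sampling protocol above (4 sizes of known-class sets, 10 random draws each, 40 experiments in total) can be sketched as follows; the seed and the use of Python's `random` module are illustrative assumptions.

```python
import random

def make_splits(total_classes=360, known_sizes=(3, 6, 9, 12),
                repeats=10, seed=0):
    """Draw `repeats` random known-class subsets for each size,
    yielding len(known_sizes) * repeats experiment configurations."""
    rng = random.Random(seed)
    splits = []
    for size in known_sizes:
        for _ in range(repeats):
            splits.append(sorted(rng.sample(range(total_classes), size)))
    return splits

splits = make_splits()  # 40 experiments over the 360 held-out classes
```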
Open-set scenarios combined with deep features introduce particularities that should be investigated in future work. Preliminary results are presented in Figure \[fig:graphs\_ossvm\_R2\_onlyImageNet/na/na,harmonicNA/harmonicNA,mafm/osfmM\]. As expected, in general, methods with open-set  perform better than closed-set variants. Among them, , , , , and  perform best. Those methods obtained  and  close to 100% for 3, 6, 9, and 12 , as can be inferred from Figures \[fig:graphs\_ossvm\_R2\_onlyImageNet/na\] and \[fig:graphs\_ossvm\_R2\_onlyImageNet/harmonicNA\]. In general,  performed better, with more winning cases.  was not the main competing method against  throughout our experiments (Section \[sec:results\]); however, for this dataset, the Wilcoxon statistical tests in Table \[tab:wilcoxon-open-imagenet\] indicate that  performs better with statistical significance.

(Result graphs for the  experiment; image panels not reproduced here.)

We highlight that those results are preliminary, as further details should be investigated to set up an open-set experiment based on deep features. Future research must address the relation between what is known from the point of view of the network and what is known to the open-set classification algorithm. In practical cases, these should probably be the same classes for both the network and the classification method; however, transfer learning prior to extracting features should not be neglected [@Yosinski2014].

[^1]: Note that in some implementations, including the  library [@Chang2011], the decision function is defined as $f(\mathbf{x}) = \operatorname{sign}(\mathbf{w}^T \mathbf{x} - \rho)$.
In that case, instead of ensuring a negative bias term $b$, one must ensure a positive bias term $\rho$ to bound the .

[^2]: The main difference between the  without bias term and the traditional  is that the constraint in Equation  does not exist in the dual formulation.

[^3]: Source code, extended from the  implementation [@Chang2011], will be made available upon acceptance of this manuscript.

[^4]: For both the closed-set and the open-set approaches, we consider open-set scenarios in the testing phase of our experiments in Section \[sec:experiments\].

[^5]: Feature vectors for the  used in this work will be made available upon acceptance of this manuscript.

[^6]: Raw results obtained in our experiments, as well as the script to perform the statistical analysis, are available at <https://github.com/pedrormjunior/ssvm-results>.
--- abstract: 'Classically, imitation learning algorithms have been developed for idealized situations, e.g., the demonstrations are often required to be collected in the exact same environment and usually include the demonstrator’s actions. Recently, however, the research community has begun to address some of these shortcomings by offering algorithmic solutions that enable imitation learning from observation (), e.g., learning to perform a task from *visual* demonstrations that may be in a different environment and do not include actions. Motivated by the fact that agents often also have access to their own internal states (i.e., *proprioception*), we propose and study an  algorithm that leverages this information in the policy learning process. The proposed architecture learns policies over proprioceptive state representations and compares the resulting trajectories visually to the demonstration data. We experimentally test the proposed technique on several MuJoCo domains and show that it outperforms other imitation from observation algorithms by a large margin.' author: - Faraz Torabi$^1$ - | Garrett Warnell$^2$\ Peter Stone$^1$\ $^1$The University of Texas at Austin\ $^2$Army Research Laboratory\ {faraztrb, pstone}@cs.utexas.edu, garrett.a.warnell.civ@mail.mil bibliography: - 'ijcai19.bib' title: Imitation Learning from Video by Leveraging Proprioception ---

Introduction
============

Imitation learning [@schaal1997learning; @argall2009survey; @osa2018algorithmic] is a popular method by which artificial agents learn to perform tasks. In the imitation learning framework, an expert agent provides demonstrations of a task to a learning agent, and the learning agent attempts to mimic the expert.
Unfortunately, many existing imitation learning algorithms have been designed for idealized situations, e.g., they require that the demonstrations be collected in the exact same environment as the one that the imitator is in and/or that the demonstrations include the demonstrator’s actions, i.e., the internal control signals that were used to drive the behavior. These limitations result in the exclusion of a large amount of existing resources, including a large number of videos uploaded to the internet. For example, 300 hours of video are uploaded to YouTube every minute[^1], many of which include different types of tasks being performed. Without new imitation learning techniques, none of this video can be used to instruct artificial agents. Fortunately, the research community has recently begun to focus on addressing the above limitations by considering the specific problem of imitation from observation () [@liu2017imitation; @torabi2019recent].  considers situations in which agents attempt to learn tasks by observing demonstrations that contain only state information (e.g., videos). Among  algorithms that learn tasks by watching videos, most attempt to learn imitation policies that rely solely on self-observation through video, i.e., they use a convolutional neural network (CNN) that maps images of themselves to actions. However, in many cases, the imitating agent also has access to its own *proprioceptive* state information, i.e., direct knowledge of itself such as the joint angles and torques associated with its limbs. In this paper, we argue that  algorithms that ignore this information are missing an opportunity to potentially improve the performance and the efficiency of the learning process. Therefore, we are interested here in  algorithms that can make use of both visual and proprioceptive state information.
In this paper, we build upon our previous work [@torabi2018generative] by proposing an algorithm that uses a -like [@goodfellow2014generative] architecture to learn to perform tasks directly from videos. Unlike our prior work, however, our method *also* uses proprioceptive information from the imitating agent during the learning process. We hypothesize that the addition of such information will improve both learning speed and the final performance of the imitator, and we test this hypothesis experimentally in several standard simulation domains. We compare our method with other, state-of-the-art approaches that do not leverage proprioception, and our results validate our hypothesis, i.e., the proposed technique outperforms the others by a large margin. The rest of this paper is organized as follows. In Section \[sec:related-work\], we review related work in imitation from observation. In Section \[sec:background\], we review technical details surrounding Markov decision processes, imitation learning, and . The proposed algorithm is presented in Section \[sec:algorithm\], and we describe the experiments that we have performed in Section \[sec:experiments\].

Related Work {#sec:related-work}
============

In this section, we review research in imitation learning, plan/goal recognition by mirroring, and recent advances in imitation from observation (). Conventionally, imitation learning is used in autonomous agents to learn tasks from demonstrated [*state-action*]{} trajectories.
The algorithms developed for this task can be divided into two general categories: (1) behavioral cloning [@bain1999framework; @ross2011reduction; @daftry2016learning], in which the agents learn a direct mapping from the demonstrated states to the actions, and (2) inverse reinforcement learning (IRL) [@abbeel2004apprenticeship; @bagnell2007boosting; @baker2009action], in which the agents first learn a reward function based on the demonstrations and then learn to perform the task using a reinforcement learning () [@sutton1998reinforcement] algorithm. In contrast, imitation from observation () is a framework for learning a task from [*state-only*]{} demonstrations. This framework has recently received a great deal of attention from the research community. The  algorithms that have been developed can be categorized as either (1) model-based, or (2) model-free. Model-based algorithms require the agent to learn an explicit model of its environment as part of the imitation learning process. One algorithm of this type is behavioral cloning from observation () [@torabi2018behavioral], in which the imitator learns a dynamics model of its environment using experience collected by a known policy, and then uses this model to infer the missing demonstrator actions. Using the inferred actions, the imitator then computes an imitation policy using behavioral cloning [@bain1995a]. Another approach of this type is reinforced inverse dynamics modeling (RIDM) [@torabi2019RIDM], which also learns a model of its environment using an exploration policy and then further optimizes the model using a sparse reward function. In some experiments, this algorithm has been shown to even outperform the expert. Another model-based approach to  is imitating latent policies from observation () [@edwards2018imitating]. Given the current state of the expert, this approach predicts the next state using a latent policy and a forward dynamics model.
It then uses the difference between the predicted state and the actual next state of the demonstrator to update both the model and the imitation policy. Afterwards, the imitator interacts with its environment to correct the action labels. Model-free algorithms, on the other hand, do not require any sort of model to learn imitation policies. One set of approaches of this type learns a time-dependent representation of tasks and then relies on hand-designed, time-aligned reward functions to learn the task via . For example, @sermanet2017time propose an algorithm that learns an embedding function using a triplet loss that seeks to push states that are close together in time closer together in the embedded space, while pushing other states further away. @liu2017imitation also propose a new architecture to learn a state representation—specifically, one that is capable of handling viewpoint differences. @gupta2017learning also propose a neural network architecture to try to learn a state representation that can overcome possible embodiment mismatch between the demonstrator and the imitator. Each of these approaches requires multiple demonstrations of the same task to be time-aligned, which is typically not a realistic assumption. @aytar2018playing propose an  algorithm that first learns an embedding using a self-supervised objective, and then constructs a reward function based on the embedding representation difference between the current state of the imitator and a specific checkpoint generated by the visual demonstration. @goo2018learning propose an algorithm that uses a shuffle-and-learn-style [@misra2016shuffle] loss in order to train a neural network that can predict progress in the task, which can then be used as the reward function. Another set of model-free algorithms follows a more end-to-end approach to learning policies directly from observations.
An algorithm of this type is generative adversarial imitation from observation () [@torabi2018generative], which uses a -like architecture to bring the state transition distribution of the imitator closer to that of the demonstrator. Another approach of this type is the work of @merel2017learning, which is concerned instead with single-state distributions. @stadie2017third also propose an algorithm in this space that combines adversarial domain confusion methods [@ganin2016domain] with adversarial imitation learning algorithms in an attempt to overcome changes in viewpoint. The method we propose in this paper also belongs to the category of end-to-end model-free imitation from observation algorithms. However, it is different from the algorithms discussed above in that we explicitly incorporate the imitator’s proprioceptive information in the learning process in order to study the improvement such information can make with respect to the performance and speed of the learning process. A method that is closely related to imitation from observation is plan/goal recognition through mirroring [@vered2016online; @vered2018towards], in that it attempts to infer higher-level variables such as the goal or the future plan by observing other agents. However, in plan and goal recognition the observer already has fixed controllers, and then uses these controllers to match/explain the observed agent in order to infer their goal/plan. In imitation from observation, on the other hand, the agent seeks to learn a controller that it can use to imitate the observed agent.

Background {#sec:background}
==========

In this section, we establish notation and provide background information about Markov decision processes (MDPs) and adversarial imitation learning.

Notation
--------

We consider artificial learning agents operating in the framework of Markov decision processes (MDPs).
An  can be described as a tuple ${\mathcal{M}}= \{{\mathcal{S}}, {\mathcal{A}}, P, r, \gamma\}$, where ${\mathcal{S}}$ and ${\mathcal{A}}$ are state and action spaces, $P(s_{t+1}|s_t, a_t)$ is a function which represents the probability of an agent transitioning from state $s_t$ at time $t$ to $s_{t+1}$ at time $t+1$ by taking action $a_t$, $r:{\mathcal{S}}\times {\mathcal{A}}\rightarrow {\mathcal{R}}$ is a function that represents the reward feedback that the agent receives after taking a specific action at a given state, and $\gamma$ is a discount factor. In the context of the notation established above, we are interested here in learning a policy $\pi:{\mathcal{S}}\rightarrow {\mathcal{A}}$ that can be used to select an action at each state. In this paper, we refer to $s$ as the *proprioceptive* state, i.e., $s$ is the most basic, internal state information available to the agent (e.g., the joint angles of a robotic arm). Since we are also concerned with visual observations of agent behavior, we denote these observations as $o \in {\mathcal{O}}$, i.e., an image of the agent at time $t$ is denoted as $o_t$. The visual observations of the agent are determined both by the agent’s current proprioceptive state $s$, and also other factors relating to image formation such as camera position. Importantly, due to phenomena such as occlusion, it is not always possible to infer $s$ from $o$ alone. In imitation learning (), agents do not receive reward feedback $r$. Instead, they have access to expert demonstrations of the task. These demonstrations $\tau_e = \{(s_t,a_t)\}$ are composed of the state and action sequences experienced by the demonstrator. Here, however, we specifically consider the problem of imitation from observation (), in which the agent only has access to sequences of visual observations of the demonstrator performing the task, i.e., $\tau_e = \{o_t\}$.
Adversarial Imitation Learning
------------------------------

Generative adversarial imitation learning () is a recent imitation learning algorithm developed by @ho2016generative that formulates the problem of finding an imitating policy as that of solving the following optimization problem: $$\label{gail} \begin{split} \min_{\pi \in \Pi} \displaystyle{\max_{D \in (0,1)^{\mathcal{S} \times \mathcal{A}}}} & -\lambda_H H(\pi) + \mathbb{E}_\pi[\log(D(s,a))] +\\ &\mathbb{E}_{\pi_E}[\log(1-D(s,a))]\;, \end{split}$$ where $H$ is the entropy function, and the discriminator function $D:\mathcal{S} \times \mathcal{A} \rightarrow (0,1)$ can be thought of as a classifier trained to differentiate between the state-action pairs provided by the demonstrator and those experienced by the imitator. The objective in (\[gail\]) is similar to the one used in generative adversarial networks (GANs) [@goodfellow2014generative], and the associated algorithm can be thought of as trying to induce an imitator state-action occupancy measure that is similar to that of the demonstrator. Even more recently, there has been research on methods that seek to improve on  by, e.g., increasing sample efficiency [@kostrikov2018discriminatoractorcritic; @sasaki2018sample] and improving reward representation [@fu2018learning; @qureshi2018adversarial]. The method we propose in this paper is most related to generative adversarial imitation from observation [@torabi2018generative], which models the imitating policy using a randomly-initialized convolutional neural network, executes the policy to generate recorded video of the imitator’s behavior, and then trains a discriminator to differentiate between video of the demonstrator and video of the imitator.
Next, it uses the discriminator as a reward function for the imitating agent (higher rewards corresponding to behavior the discriminator classifies as coming from the demonstrator), and uses a policy gradient technique (e.g.,  [@schulman2015trust]) to update the policy. The process repeats until convergence. This algorithm differs from what we propose in that  uses visual data [*both*]{} in the process of discriminator [*and*]{} policy learning. That is, the learned behavior policy maps images $o$ to actions using a convolutional neural network. The technique we propose, on the other hand, leverages proprioceptive information in the policy learning step, instead learning policies that map proprioceptive states $s$ to actions using a multilayer perceptron architecture.

(Figure \[fig:alg\]: schematic of the proposed architecture. Proprioceptive features $s_t$, e.g. joint angles, are fed to an MLP policy $\pi_\theta$, which outputs an action $a_t$ that is executed in the environment; four-frame stacks of imitator video $\tau_i=\{o_{t-2}:o_{t+1}\}$ and demonstrator video $\tau_e=\{o_{t-2}:o_{t+1}\}$ are fed to a CNN discriminator $D_\phi$, which outputs the value $v$.)
Proposed Method {#sec:algorithm}
===============

As presented in Section \[sec:background\], we are interested in the problem of imitation from observation (), where an imitating agent has access to visual demonstrations, $\tau_e = \{o_t\}$, of an expert performing a task, and seeks to learn a behavior that is approximately the same as the expert’s. In many previous approaches to this problem, the imitator selects actions on the basis of visual self-observation alone (i.e., using images of itself). We hypothesize that also leveraging available proprioceptive state information, $s$, during the learning process will result in better and faster learning. Inspired by , our algorithm comprises two pieces: (1) a generator, which corresponds to the imitation policy, and (2) a discriminator, which serves as the reward function for the imitator. We model the imitation policy as a multilayer perceptron (MLP), $\pi_\theta$. The imitating agent, being aware of its own proprioceptive features $s$, feeds them into the policy network and receives as output a distribution over actions from which the selected action $a$ can be sampled. The imitator then executes this action and we record a video of the resulting behavior. After several actions have been executed, we have accumulated a collection of visual observations of the imitator’s behavior, $\tau_i = \{o\}$. Meanwhile, we use a convolutional neural network as a discriminator $D_\phi$. Given visual observations of the demonstrator, $\tau_e$, and observations of the imitator, $\tau_i$, we train the discriminator to differentiate between the data coming from these different sources.
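As an illustration, a minimal NumPy sketch of such an MLP policy head is given below. The layer widths and the Hopper-like dimensions (an 11-dimensional proprioceptive state and a 3-dimensional action) are our own assumptions; in practice a learned log-standard-deviation parameter would accompany the mean to define the sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Small random Gaussian initialization; sizes = [in, hidden..., out]."""
    return [(0.1 * rng.standard_normal((m, k)), np.zeros(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def policy_mean(params, s):
    """Map a proprioceptive state s to the mean of a Gaussian over actions."""
    x = np.asarray(s, dtype=float)
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)  # hidden-layer nonlinearity
    return x                # action mean; actions are sampled around it

params = init_mlp([11, 64, 64, 3])  # e.g. Hopper: 11-d state, 3-d action
mu = policy_mean(params, np.zeros(11))
```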
Since single video frames lack observability in most cases, we instead stack four frames, $\{o_{t-2}, o_{t-1}, o_t, o_{t+1}\}$, and feed this stack as input to the discriminator.

Algorithm \[alg\]:

- Initialize policy $\pi_\theta$ randomly.
- Initialize discriminator $D_\phi$ randomly.
- Obtain visual demonstrations $\tau_e=\{o\}$.
- Repeat until convergence:
    - Execute $\pi_\theta$ and record video observation $\tau_i=\{o\}$.
    - Update the discriminator $D_\phi$ using loss $$\begin{split} - & \Big(\mathbb{E}_{\tau_i} [\log(D_\phi(o_{t-2}:o_{t+1}))]+ \\& \mathbb{E}_{\tau_e} [\log(1-D_\phi(o_{t-2}:o_{t+1}))]\Big) \end{split}$$
    - Update $\pi_\theta$ by performing  updates with gradient steps of $$\begin{split} \mathbb{E}_{\tau_i} [\nabla_\theta \log \pi_\theta(a|s) Q(s,a)] - \lambda \nabla_\theta H(\pi_\theta), \end{split}$$ where $$\begin{split} & Q(\hat{s}_t,\hat{a}_t) = \\& -\mathbb{E}_{\tau_i} [\log(D_\phi(o_{t-2}:o_{t+1}))| s_0=\hat{s}_t, a_0=\hat{a}_t] \end{split}$$

We train the discriminator to output values closer to zero for the transitions coming from the expert, and values closer to one for those coming from the imitator. Therefore, the discriminator aims to solve the following optimization problem: $$\label{eq:disc} \begin{split} \displaystyle{\max_\phi}& \Big(\mathbb{E}_{\tau_i} [\log(D_\phi(o_{t-2}:o_{t+1}))]+\\& \mathbb{E}_{\tau_e} [\log(1-D_\phi(o_{t-2}:o_{t+1}))]\Big) \; . \end{split}$$ The lower the value output by the discriminator, the higher the chance of the input being from the expert. Recall that the objective for the imitator is to mimic the demonstrator, which can be thought of as fooling the discriminator. Therefore, we use $$\label{eq:rew} \begin{split} -\Big(\mathbb{E}_{\tau_i} [\log(D_\phi(o_{t-2}:o_{t+1}))]\Big)\\[5pt] \end{split}$$ as the reward to update the imitation policy using .
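A minimal numerical sketch of the discriminator objective and the derived imitator reward follows; the discriminator outputs on the four-frame stacks are hypothetical values chosen for illustration.

```python
import numpy as np

def discriminator_loss(d_imitator, d_expert):
    """Loss minimized by D_phi: pushes D toward 1 on imitator
    frame-stacks and toward 0 on expert frame-stacks."""
    return -(np.mean(np.log(d_imitator)) + np.mean(np.log(1.0 - d_expert)))

def imitator_reward(d_imitator):
    """Policy reward -log D: large when D mistakes the imitator's
    frame-stack for the demonstrator's (D close to 0)."""
    return -np.log(d_imitator)

# Hypothetical discriminator outputs on batches of 4-frame stacks.
d_imi = np.array([0.9, 0.8])  # confidently labeled "imitator"
d_exp = np.array([0.1, 0.2])  # confidently labeled "expert"
loss = discriminator_loss(d_imi, d_exp)
reward = imitator_reward(d_imi)  # low here; rises as D is fooled
```

As the policy improves and the discriminator assigns lower values to the imitator's stacks, the reward grows, which is exactly the adversarial pressure driving the imitator toward the demonstrator's behavior.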
In particular, we use proximal policy optimization () [@schulman2017proximal] with gradient steps of $$\label{eq:step} \begin{split} \mathbb{E}_{\tau_i} [\nabla_\theta \log \pi_\theta(a|s) Q(s,a)] - \lambda \nabla_\theta H(\pi_\theta),\\[5pt] \end{split}$$ where $Q(s,a)$ is the state-action value, i.e., the expected return that the agent receives when starting from $s$ and taking action $a$: $$\begin{split} & Q(\hat{s}_t,\hat{a}_t) = \\&-\mathbb{E}_{\tau_i} [\log(D_\phi(o_{t-2}:o_{t+1}))| s_0=\hat{s}_t, a_0=\hat{a}_t].\\[5pt] \end{split}$$ As presented, our algorithm uses the visual information in order to learn the reward function by comparing visual data generated by the imitator and the demonstrator. It also takes advantage of proprioceptive state features in the process of policy learning by learning a mapping from those features to actions using a reinforcement learning algorithm. Pseudocode and a diagrammatic representation of our proposed algorithm are presented in Algorithm \[alg\] and Figure \[fig:alg\], respectively.

Experiments {#sec:experiments}
===========

The algorithm introduced above combines proprioceptive state information with video observations in an adversarial imitation learning paradigm. We hypothesize that using the extra state information in the proposed way will lead to both faster imitation learning and better performance on the imitated task when compared to similar techniques that ignore proprioception. In this section, we describe the experimental procedure by which we evaluated this hypothesis, and discuss the results.

Setup
-----

We evaluated our method on a subset of the continuous control tasks available via OpenAI Gym [@1606.01540] and the MuJoCo simulator [@todorov2012mujoco]: MountainCarContinuous, InvertedPendulum, InvertedDoublePendulum, Hopper, Walker2d, HalfCheetah. To generate the demonstration data, we first trained expert agents using pure reinforcement learning (i.e., not from imitation).
More specifically, we used proximal policy optimization () [@schulman2017proximal] and the ground truth reward function provided by OpenAI Gym. After the expert agents were trained, we recorded $64 \times 64$, $30$-fps video demonstrations of their behavior. We compared the proposed method with three other imitation from observation algorithms that do *not* exploit the imitator’s proprioceptive state information: Time Contrastive Networks (TCN) [@sermanet2017time], Behavioral Cloning from Observation (BCO) [@torabi2018behavioral], and Generative Adversarial Imitation from Observation (GAIfO) [@torabi2018generative; @torabi2019adversarial][^2].

Results
-------

We hypothesized that our method would outperform the baselines with respect to two criteria: (1) the final performance of the trained imitator, i.e., how the imitator performs the task compared to the demonstrator (as measured by the ground truth reward functions), and (2) the speed of the imitation learning process as measured by the number of learning iterations. The results shown here were generated using ten independent trials, where each trial used a different random seed to initialize the environments, model parameters, etc. Figure \[fig:performance\] depicts our experimental results pertaining to the first criterion, i.e., the final task performance of trained imitating agents in each domain. The rectangular bars and error bars represent the mean return and the standard error, respectively, as measured over $1000$ trajectories. We report performance using a normalized task score, i.e., scores are scaled in such a way that the demonstrating agent’s performance corresponds to $1.0$ and the performance of an agent with random behavior corresponds to $0.0$. The x-axis represents the number of demonstration trajectories, i.e., videos, available to the imitator.
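The score normalization described above is a linear rescaling of the raw return; a minimal sketch follows, with hypothetical return values.

```python
def normalize_score(score, random_score, expert_score):
    """Map a raw return so that random behavior gives 0.0 and the
    demonstrator gives 1.0; values above 1.0 mean beating the expert."""
    return (score - random_score) / (expert_score - random_score)

# Hypothetical raw returns for one domain.
norm = normalize_score(score=2500.0, random_score=10.0, expert_score=3300.0)
```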
In general, it can be seen that the proposed method indeed outperforms the baselines in almost all cases, which shows that using the available proprioceptive state information can make a remarkable difference in the final task performance achieved by imitation learning. In the particular case of InvertedPendulum, both GAIfO and the proposed method achieve a final task performance equal to that of the demonstrator, likely due to the simplicity of the task. However, for the rest of the tasks, it can be clearly seen that the proposed approach performs better than GAIfO[^3]. Further, we can see that increasing the number of demonstrated trajectories results in increased task performance. To validate our hypothesis with respect to learning speed, we also studied the transient performance of the various learning algorithms. Because only one other method, GAIfO, performed as well as the expert in only one domain, InvertedPendulum, Figure \[fig:process\] only depicts the results for these algorithms in that domain. The x-axis shows the number of iterations, i.e., the number of update cycles for both the policy and the discriminator. Since updating the policy requires interaction with the environment, a smaller number of iterations also corresponds to less overhead during the learning process. As shown in the figure, our method converges to expert-level performance much faster than GAIfO, which supports our hypothesis that leveraging proprioception speeds the imitation learning process. In Figure \[fig:performance\], we can see that two of the baseline methods—TCN and BCO—do not achieve task performance anywhere near that of the expert. ![The rectangular bars and error bars represent the mean normalized return and the standard error, respectively, as measured over 1000 trials. The normalized values have been scaled in such a way that expert and random performance are $1.0$ and $0.0$, respectively.
The x-axis represents the number of available video demonstration trajectories.[]{data-label="fig:performance"}](performance-bar.pdf){width="\linewidth"} For InvertedPendulum and InvertedDoublePendulum, we suspect that TCN performs poorly due to possible overfitting of the learned state embedding to the specific demonstrations and, therefore, does not generalize well toward supporting the overall goal of keeping the pendulum balanced above the rod. For Hopper, Walker2d, and HalfCheetah, the poor performance of TCN may be due to the fact that the tasks are cyclical in nature and therefore not well-suited to the time-dependent learned state embedding. TCN performs relatively better in MountainCarContinuous, compared to other domains, because this domain does have the properties required by TCN. As for BCO, we posit that the low performance is due to the well-known compounding-error issue present in behavioral cloning. One interesting thing to note is that Walker2d results in larger error bars for our technique than those seen for any of the other domains. We hypothesize that the reason for this is that the video frames provide very poor information regarding the state of the demonstrator—here, the agent has two legs, which sometimes results in occlusion and, therefore, uncertainty regarding which action the agent should take. Finally, we can see that the proposed technique performs the most poorly in the HalfCheetah domain. We hypothesize that this is due to the speed at which the demonstrator acts: frame-to-frame differences are large, e.g., three to four consecutive frames cover a complete cycle of the agent jumping forwards. This rate of change may make it difficult for our discriminator to extract a pattern of behavior, which, consequently, would make it much more difficult for the agent to move its behavior closer to that of the demonstrator. Therefore, one way that performance might be improved is to increase the frame rate at which the demonstrations are sampled.
Another way, as suggested by Figure \[fig:performance\], would be to increase the number of demonstration trajectories beyond what is shown here. ![Performance of imitation agents with respect to the number of iterations (N). Solid colored lines represent the mean return and shaded areas represent standard errors. The returns are scaled so that the performance of the expert and random policies are one and zero, respectively.[]{data-label="fig:process"}](performance-process.pdf) Conclusion and Future Work ========================== In this paper, we hypothesized that including proprioception would be beneficial to the learning process in the imitation from observation paradigm. To test this hypothesis, we presented a new imitation from observation algorithm that leverages both available visual and proprioceptive information. It uses visual information to compare the imitator’s behavior to that of the demonstrator, and uses this comparison as a reward function for training a policy over proprioceptive states. We showed that leveraging this state information can significantly improve both the performance and the efficiency of the learning process. However, to achieve the end-goal of true imitation from observation, several challenges remain. For example, imitation from observation algorithms should be able to overcome embodiment mismatch (the imitator and the demonstrator have different embodiments), and viewpoint mismatch (the visual demonstrations are recorded from different viewpoints). Resolving these limitations is a natural next step for extending this research. Another way to improve upon the proposed method is to attempt to make the training more reliable by incorporating techniques developed to improve the stability of GANs, such as the work of @arjovsky2017wasserstein. Further, to the best of our knowledge, nobody has been able to deploy GAIL-like methods on real robots due to high sample complexity.
Therefore, techniques that seek to improve the learning process with respect to this metric should also be investigated further. Acknowledgments {#acknowledgments .unnumbered} =============== This work has taken place in the Learning Agents Research Group (LARG) at the Artificial Intelligence Laboratory, The University of Texas at Austin. LARG research is supported in part by grants from the National Science Foundation (IIS-1637736, IIS-1651089, IIS-1724157), the Office of Naval Research (N00014-18-2243), Future of Life Institute (RFP2-000), Army Research Lab, DARPA, Intel, Raytheon, and Lockheed Martin. Peter Stone serves on the Board of Directors of Cogitai, Inc. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. [^1]: <https://bit.ly/2quPG6O> [^2]: The considered domains, methods, and implementations are presented in more detail in the longer version of the paper on arXiv [@torabi2019imitation]. [^3]: Note that the performance of GAIfO on Hopper is different from what was presented in the GAIfO paper [@torabi2018generative]. We hypothesize that the reason is twofold: (1) different physics engines—MuJoCo is used in this paper, but in the previous work [@torabi2018generative] Pybullet [@coumans2016pybullet] was used, and (2) differences in video appearance—in this work we do not alter the default simulator parameters, whereas in the previous work [@torabi2018generative] some of the parameters were modified, such as the colors used in the video frames, in order to increase the contrast between the agent and the background.
--- abstract: 'Swimmers and self-propelled particles are physical models for the collective behaviour and motility of a wide variety of living systems, such as bacteria colonies, bird flocks and fish schools. Such artificial active materials are amenable to physical models which reveal the microscopic mechanisms underlying the collective behaviour. Here we study colloids in a DC electric field. Our quasi-two-dimensional system of electrically-driven particles exhibits a rich and exotic phase behaviour. At low field strengths, electrohydrodynamic flows lead to self-organisation into crystallites with hexagonal order. Upon self-propulsion of the particles due to Quincke rotation, we find an ordered phase of active matter in which the motile crystallites constantly change shape and collide with one another. At higher field strengths, this “dissolves” to an active gas. We parameterise a particulate simulation model which reproduces the experimentally observed phases and, at higher field strengths predicts an activity-driven demixing to band-like structures.' author: - 'Abraham Mauleon-Amieva' - Majid Mosayebi - 'James E. Hallett' - Francesco Turci - 'Tanniemola B. Liverpool' - 'Jeroen S. van Duijneveldt' - 'C. Patrick Royall' bibliography: - 'amoeba.bib' title: 'Competing Active and Passive Interactions Drive Amoeba-like Living Crystallites and Ordered Bands' --- Introduction {#introduction .unnumbered} ============ From living organisms to synthetic colloidal particles, active systems display exotic phenomena not attainable by matter at thermal equilibrium [@marchetti2013; @schweitzer2002; @ramaswamy2010; @bechinger2016], such as swarming [@ariel2015; @narayan2007], cluster-formation [@theurkauff2012; @palacci2013] or phase separation in the absence of attractions [@cates2015; @buttinoni2013; @schwarzlinek2012], banding [@chate2008], and unusual crystallisation behaviour [@briand2016]. 
This is due to continuous energy consumption which occurs in a wide range of systems at very different lengthscales, from the cell cytoskeleton [@julicher2007; @sanchez2012], tissues [@park2014] and bacterial colonies [@pedley1992; @zhang2010; @lushi2014; @petroff2015] to larger scales such as insect swarms [@sinhuber2016], fish schools [@katz2011] and bird flocks [@cavagna2010]. Artificial active materials, composed of microswimmers, active colloids or vibrating granular particles [@zhang2010; @volpe2011; @theurkauff2012; @bechinger2016; @briand2016], or even synthetically modified living systems such as bacteria [@schwarzlinek2012], provide a suitable testing ground where the behaviour of active matter may be carefully probed to extract the new physical principles of this emergent class of matter. While simple models of active particles capture some of the complex behaviour observed experimentally, for example collective motion [@vicsek1995; @gregoire2004; @redner2013; @fodor2016; @zottl2014], the link between experiment and theory in active matter is often rather qualitative. As a result, a comprehensive understanding of how and which microscopic mechanisms lead to the emergence of complex structures in experimental active systems remains elusive. Here we implement a theoretical description which is able to predict the behaviour observed in experiments. In particular, we parameterise our experimental system at the microscopic level of the interacting particles. We combine this model with particle-resolved studies of so-called Quincke rollers, active colloids which exhibit swarming and flocking [@bricard2013; @bricard2015]. At low–to–moderate motility, we reveal the importance of competing passive interactions (long-ranged attractions) driving crystallisation and activity which leads to melting–like and evaporation–like behaviour. 
At high motility, the role of passive and active interactions is *reversed*: activity drives demixing resulting in a banding phase, whose ordered local structures result from the repulsive core of the particles. This importance of competition between passive and active interactions is reminiscent of well-known systems such as amphiphiles, block copolymers and mixtures of charged colloids and non-absorbing polymer where competing interactions lead to modulated phases such as lamellae [@andelman1995; @ciach2008], whose structures indeed resemble some we find here. Our approach shows how one may build bottom-up designs of particulate active matter with precisely controllable macroscopic behaviour. In the Quincke rollers we study, the application of a uniform DC electric field above a critical field strength $E_Q$ induces the directed motion of spherical colloids by coupling their rotation and translation near a surface [@jakli2008; @bricard2013]. In the absence of a field, the particles behave as conventional passive Brownian colloids. At low field strengths, while remaining non-motile, particles agglomerate into crystals due to long-ranged attractive interactions which arise from electro-osmotic flows (Fig.\[figPhaseSetUp\]**A**.) [@yeh2000; @ristenpart2004; @zhang2004]. Above the critical field strength $E_Q$, the particles undergo Quincke rotation [@quincke1896; @pannacci2007; @das2013] and become motile (Fig.\[figPhaseSetUp\]**B**) so that the electro-osmotically generated crystallites transition into a highly mobile active state reminiscent of amoebae (see Supplementary Movies 1-3, available online at *https://abrhmma0.wixsite.com/website*). Unlike “living crystals” [@palacci2013], they are motile and characterised by a highly dynamic outer surface. These crystallites then dissolve into an isotropic active gas as we increase the field strength.
Finally at very high field strengths our simulations predict that the system undergoes banding which involves local ordering. We investigate the rich structural and dynamical properties of our system using a range of static and dynamic order parameters and in particular consider the coalescence and division of our amoeba-like motile crystals. ![image](figPhaseSetUp){width="99.00000%"} \[figPhaseSetUp\] Results {#results .unnumbered} ======= Experiments {#experiments .unnumbered} ----------- A more complete description of the experimental setup shown schematically in Fig.\[figPhaseSetUp\]**A**. is included in the Methods section. Briefly, we use a suspension of colloidal particles of diameter $\sigma=2.92$ $\mu$m in a non-aqueous ionic solution. Sedimentation results in a quasi 2D system with area fraction $\phi_\mathrm{exp}\approx0.05$. The sample cell is made from two indium-tin-oxide (ITO) coated glass slides separated with UV-cured resin containing spacer beads to allow the application of the electric field. The samples are imaged with bright-field optical microscopy for particle tracking. A uniform field $E_0$ is applied perpendicular to the two slides as represented in Fig.\[figPhaseSetUp\]**A**, in order to enable self-propulsion due to Quincke rotation. We then translate the resultant field-dependent activity to dimensionless Péclet numbers and characterise the static and dynamic behaviour of the system. Throughout, we use the Brownian time for a colloid to diffuse its own radius in 2D, $\tau=\sigma^2/D_t \approx 9$s, as the unit of time, where $D_t$ is the translational diffusion constant. Simulations {#simulations .unnumbered} ----------- The Quincke rollers are subject to forces and torques due to excluded volume repulsions, as well as self-propulsion, alignment and attractions generated by the electro-hydrodynamic interactions of the particles with their environment and each other [@nadal2002; @bricard2013]. 
They can be modelled as *active Brownian particles* with an additional [*active aligning torque*]{}, whose active/passive forces and torques can be quantitatively specified. We implement Brownian dynamics simulations, with the following equations of motion for positions and orientations ${\mathbf{r}}_i,\theta_i$. $$\begin{aligned} \dot{\mathbf{r}}_i &=&\frac{D_t}{k_BT} [ {\mathbf{F}}_i + f^p \hat{\mathbf{P}}_i ] + {\sqrt{2D_{t}}} \bm{\xi}_i^{t}, \\ \dot{\theta}_i &=& \frac{D_r}{k_BT} {\mathcal{T}}_i + \sqrt{2D_r} \xi_i^r, \label{eqABP}\end{aligned}$$ where $\mathbf{F}_i$ is the interparticle force on the $i$th roller, $f^p$ is the magnitude of the active force, $\hat{\mathbf{P}}_i = ( \cos \theta_i, \sin \theta_i )$ is the direction of motion of the $i$th roller, ${\mathcal{T}}_i$ is the torque on the $i$th roller which incorporates alignment terms, and $\xi_i^{t,r}$ is a Gaussian white noise of zero mean and unit variance. $D_r$ is the rotational diffusion constant. The direct interactions $\mathbf{F}_i$ include a “hard” core and long-range attraction, the latter to model the electrohydrodynamic contribution. Further details of the model and the simulation parameters, and the procedure by which the parameters were mapped to the experiment may be found in the Methods and Supplementary Materials (SM). Determining the Péclet number {#determining-the-péclet-number .unnumbered} ----------------------------- Before moving to the discussion of our results, we first describe our mapping of field strength to Péclet number between experiment and simulation. We obtain the bare translational diffusion coefficient of the passive system $D_{t}$ measured at equilibrium. 
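The equations of motion above can be integrated with a simple Euler–Maruyama scheme. The sketch below is our own illustrative code, not the authors' simulation: the interparticle forces, torques and noise arrays are supplied by the caller, and only one timestep is advanced.

```python
import numpy as np

def abp_step(r, theta, forces, torques, fp, Dt, Dr, kBT, dt, noise_t, noise_r):
    """One Euler-Maruyama step for active Brownian particles with an
    aligning torque, following the equations of motion in the text.
    r: (N, 2) positions; theta: (N,) orientations;
    forces: (N, 2) interparticle forces F_i; torques: (N,) torques T_i;
    fp: magnitude of the active force; noise_t, noise_r: unit-variance
    Gaussian noise arrays (drawn outside, e.g. rng.standard_normal)."""
    heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # P_i
    r_new = r + dt * (Dt / kBT) * (forces + fp * heading) \
              + np.sqrt(2.0 * Dt * dt) * noise_t
    theta_new = theta + dt * (Dr / kBT) * torques \
                  + np.sqrt(2.0 * Dr * dt) * noise_r
    return r_new, theta_new
```

With the noise set to zero and no interparticle forces, a particle simply advances along its heading at speed $f^p D_t / k_BT$, which is the deterministic self-propulsion term of the model.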
The particle velocity $\upsilon$ and the characteristic time scale for the rotational diffusion $\tau_{r} = D_{r}^{-1}$ for a dilute sample with area fraction $\phi \approx 0.001$ are obtained by fitting the mean square displacement (MSD) of active particles in the dilute (gas) regime, $$\label{eqMsd} \langle \Delta r^{2} (t) \rangle = 4D_{t}t + \frac{\upsilon^{2}\tau_{r}^{2}}{3}\Bigg[\frac{2t}{\tau_{r}} + \exp\Bigg(\frac{-2t}{\tau_{r}}\Bigg) -1 \Bigg].$$ To extract the parameters of Eq. \[eqMsd\] from the experiments we consider a series of similarly dilute samples. We estimate the dimensionless Péclet number as ${\operatorname{\mathit{P\kern-.08em e}}}= 3\upsilon \tau_{r}/\sigma$, for each measured velocity in the different states obtained in the experiment. The Péclet number is significantly enhanced by Quincke rotation. However, since this is related to the threshold field strength $E_Q$ where Quincke rotation is initiated, we find that for low field strengths, ${\operatorname{\mathit{P\kern-.08em e}}}$ is small and only weakly dependent on the field, $E_0 \ll E_Q, \; {\operatorname{\mathit{P\kern-.08em e}}}\sim 0$. Once the particles become motile, for our system the two appear to be roughly linearly coupled ($E_0 > E_Q, \; {\operatorname{\mathit{P\kern-.08em e}}}\sim E_0$, see SM Fig.S1). We now present our main findings. First we consider the phase behaviour of the system as a function of the activity, represented by the Péclet number which we obtain from measuring particle mobility. At zero field strength (${\operatorname{\mathit{P\kern-.08em e}}}$ = 0), we obtain Brownian hard discs which form a dilute fluid with area fraction $\phi\approx0.05$. Upon increasing the field strength, the system exhibits a novel phase behaviour owing to a coupling between non-equilibrium electrohydrodynamic interactions due to solvent flow and electrically induced activity (Quincke rotation).
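Eq. \[eqMsd\] and the Péclet estimate can be written down directly (our sketch; in practice $D_t$, $\upsilon$ and $\tau_r$ would come from a least-squares fit of this model to the measured MSD curves):

```python
import numpy as np

def msd_model(t, D_t, v, tau_r):
    """MSD of a dilute active particle (Eq. [eqMsd]): diffusive term
    4*D_t*t plus the persistent-motion contribution."""
    x = 2.0 * t / tau_r
    return 4.0 * D_t * t + (v**2 * tau_r**2 / 3.0) * (x + np.exp(-x) - 1.0)

def peclet(v, tau_r, sigma):
    """Dimensionless Peclet number Pe = 3 * v * tau_r / sigma, used to
    map field-dependent activity onto a single axis."""
    return 3.0 * v * tau_r / sigma
```

A useful consistency check on the model is its long-time limit: for $t \gg \tau_r$ the MSD becomes diffusive with an effective coefficient $D_t + \upsilon^2\tau_r/6$.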
Crystallisation {#crystallisation .unnumbered} --------------- Particle condensation to form crystallites emerges at low field strength, e.g. ${\operatorname{\mathit{P\kern-.08em e}}}\approx 1\times 10^{-4}$ ($E_0=9.9E_{Q}$). This is due to the long-ranged electrohydrodynamic interactions (Fig.\[figPhaseSetUp\]**A**) [@nadal2002]. In our experiments, colloids act as dielectric regions perturbing the electric charge distribution, therefore inducing a flow of ions with a component tangential to the substrate [@nadal2002]. In the vicinity of such an electro-osmotic flow, the particles experience transverse motion leading to the formation of crystallites (Fig.\[figPhaseSetUp\]**A**). Activity-induced phase transitions {#activity-induced-phase-transitions .unnumbered} ---------------------------------- Upon increasing the field strength, we can exploit the Quincke mechanism that triggers spontaneous rotation (Fig.\[figPhaseSetUp\]**B**) to study the behaviour of self-propelled rollers. For this to occur, the viscous torque acting on the particle must be overcome, hence the field needs to be sufficient to initiate rolling ($>E_Q$). When increasing the activity above ${\operatorname{\mathit{P\kern-.08em e}}}$ = 1.5 ($E_0 = 19.4E_{Q}$), we observe crystallite motility, coalescence and splitting, yet the local hexagonal symmetry remains, as can be seen in certain bacteria colonies [@petroff2015] and chiral swimmers [@shen2019]. We term this an “amoeba phase”, since the motility leads the aggregate to constantly reshape in a fashion reminiscent of the motion of amoebae, as shown by the time sequence in Fig. \[figPhaseSetUp\]**c** (also see Supplementary Movies 1-3). On further increasing the field to $E_0 = 29.8E_{Q}$ (${\operatorname{\mathit{P\kern-.08em e}}}$ = 7.8), Quincke rotation triggers breakdown of the active crystallites into an “active gas” of colloidal rollers undergoing displacement in random directions, Fig.\[figPhaseSetUp\]**C**. 
Previously, it was shown experimentally that the increase in area fraction results in homogeneous polar phases and vortices [@bricard2013; @bricard2015]. At higher field strength, our simulations predict that around ${\operatorname{\mathit{P\kern-.08em e}}}\gtrsim 35$, the system exhibits a non-equilibrium phase transition to a banded state. These bands form perpendicular to the direction of particle motion, which self-organizes into a strongly preferred direction (see Supplementary Movies 4 and 5). This is reminiscent of banding observed in earlier experiments [@bricard2013], but here the area fraction is very much higher, leading to local hexagonal order. In our simulations, we see one band in the box. We leave the analysis of whether this is activity-driven micro-phase separation or full demixing for a later finite-size scaling analysis. This local hexagonal order within the bands contrasts with the unstructured bands seen in the Vicsek model [@chate2008]. ![image](figStructure){width="99.00000%"} Analysis of the Local Structure {#analysis-of-the-local-structure .unnumbered} ------------------------------- Having qualitatively introduced the behaviour we encounter in our system in Fig.\[figPhaseSetUp\], we now proceed to consider the phase transitions in more detail. In order to determine the nature of the transitions we require suitable order parameters. We first consider the structural properties of the phases we encounter — passive fluid, passive crystal, active crystallite (“amoebae”), active gas and bands. Given the richness of the phase behaviour, it is unlikely that one single order parameter will prove sufficient, and we find this to be the case. We begin with the 2D bond-orientational order parameter, $ \psi_6 =(1/N)\sum_{i=1}^{N}|\psi_6^i|$. Perfect hexagonal ordering is indicated by $\psi_{6} =1$, whereas a completely disordered configuration gives $\psi_{6} =0$. See Methods for more details of $\psi_6$. In Fig.
\[figStructure\]**A**, we plot the average $\psi_{6}$ as a function of ${\operatorname{\mathit{P\kern-.08em e}}}$ for both experiment and simulation, under our mapping of ${\operatorname{\mathit{P\kern-.08em e}}}$ between the two and in the inset we show the data with respect to the applied electric field. In the SM, we show $\psi_6$ with respect to the electric field strength $E_0$. We emphasise that, given the simplicity of our model, and of our mapping, the agreement between experiment and simulation is remarkable in the experimentally accessible regime (${\operatorname{\mathit{P\kern-.08em e}}}$ $\lesssim 8$). We find almost no ordering for the passive Brownian system (at $E_0=0$ or ${\operatorname{\mathit{P\kern-.08em e}}}$ $=0$). With a slight increase in the field strength to $E_0=9.9E_{Q}$, we observe a rapid rise in $\psi_{6}$ to $\approx 0.9$ that corresponds to the crystallisation transition driven by the electrohydrodynamic interactions. In this regime, the system is composed of many crystallites that barely move. It is possible that there may be a condensed liquid (or hexatic) phase [@bernard2011], although this is not apparent in our data, and the transition appears first-order within the field strengths we have sampled. We believe this to be similar to equilibrium 2D attractive systems undergoing crystallisation and move on to consider the activity-driven transitions. Increasing the activity further into the amoeba phase, $\psi_{6}$ starts to decrease. However, $\psi_6$ remains significantly above zero indicating the amoeba clusters are crystal-like. While this state is far-from-equilibrium, the $\psi_6$ value exhibits temporal fluctuations consistent with a steady state (Supplementary Fig. S2). We infer that to distinguish the (passive) crystallites from the amoebae, some kind of dynamic order parameter may prove suitable, and return to this below. 
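The per-particle $\psi_6^i$ entering the order parameter above can be computed from the bond angles to neighbouring particles. The sketch below is a minimal version that identifies neighbours by a simple distance cutoff; this cutoff criterion is our simplification, and the paper's Methods may use a different neighbour definition:

```python
import numpy as np

def psi6(positions, cutoff):
    """Per-particle hexatic order parameter:
    psi6_i = (1/n_i) * sum_j exp(6i * theta_ij), where the sum runs
    over neighbours j within `cutoff` of particle i and theta_ij is
    the angle of the bond from i to j."""
    n = len(positions)
    out = np.zeros(n, dtype=complex)
    for i in range(n):
        d = positions - positions[i]
        dist = np.hypot(d[:, 0], d[:, 1])
        mask = (dist > 0) & (dist < cutoff)  # exclude the particle itself
        if not mask.any():
            continue
        angles = np.arctan2(d[mask, 1], d[mask, 0])
        out[i] = np.mean(np.exp(6j * angles))
    return out
```

For a particle surrounded by six neighbours at exactly $60^\circ$ intervals, every bond contributes $e^{6i\theta} = 1$ and $|\psi_6^i| = 1$, which is the perfect hexagonal limit quoted in the text.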
At larger ${\operatorname{\mathit{P\kern-.08em e}}}$ ($10\lesssim\text{${\operatorname{\mathit{P\kern-.08em e}}}$}\lesssim40$), the value of $\psi_6$, drops markedly, as the amoebae “dissolve” into the active gas, apparently in a continuous fashion. Finally at very high ${\operatorname{\mathit{P\kern-.08em e}}}$ (${\operatorname{\mathit{P\kern-.08em e}}}\gtrsim40$), upon the emergence of banding, a form of motility-driven phase separation, the value of $\psi_{6}$ again shows signs of increase for $\phi < 0.16$. To gain further insight into these transitions, in Fig.\[figStructure\]**B** we plot the fluctuations in the hexatic bond-orientation order parameter which we take as $\chi_6 = \langle \psi_6^2 \rangle - \langle\psi_6\rangle^2$ where the average is over different snapshots. Further details are provided in the Methods. At low Péclet numbers, we see good agreement between the experiment and simulation, but when the motility is higher, the simulations decay towards the active gas faster than the experiments. However, we find no enhancement in $\chi_6$ around the amoeba-gas phase boundaries, indicating that the transition is a cross-over rather than a first-order-like transition between different phases. To quantify the spatial correlations in $\psi_6$, in Fig.\[figStructure\]**C**, we plot $g_6(r)$ defined as, $$g_6(r=|\mathbf{r}_i - \mathbf{r}_j|) = \langle {\psi_6^i}^* {\psi_6^j} \rangle$$ where $\psi_6^i$ is the (complex) value of the hexatic bond-orientation order parameter for particle $i$ at position $\mathbf{r}_i$. At low ${\operatorname{\mathit{P\kern-.08em e}}}$, we observe long-ranged orientational correlations in the crystal and amoeba regimes. Such correlations are significantly shorter-ranged for the active gas. Interestingly, for the largest ${\operatorname{\mathit{P\kern-.08em e}}}$ in the banding regime, we find that the bond-orientational order parameter is correlated over a larger domain than in the gas regime. 
Therefore, formation of the bands not only increases $\psi_6$, but also enhances its spatial correlations. ![image](figDynamics){width="90.00000%"} Dynamical Analysis {#dynamical-analysis .unnumbered} ------------------ In our analysis of the local structure in Fig. \[figStructure\], we noted that some kind of dynamical order parameter would be appropriate to distinguish the crystallites from the amoebae. In Fig. \[figDynamics\], we use such an order parameter to perform this analysis, the overlap [@briand2016], $$\label{qt} Q(t) = \Bigg\langle \frac{1}{N}\sum^{N}_{i=1} \exp - \Bigg( \frac{\big[\textbf{r}_{i}(t'+t) - \textbf{r}_{i}(t')\big]^2}{a^2} \Bigg)\Bigg\rangle_{t'},$$ which we evaluate at $a=\sigma$. We fit the resulting dynamic correlation functions with a stretched exponential form, $Q(t) = \exp[-(t/\tau_\alpha)^b]$, as shown in Fig. \[figDynamics\]**A** to determine a timescale for relaxation in our system, $\tau_\alpha$. We plot this timescale against the Péclet number in Fig.\[figDynamics\]**B**. Most striking in the crystal-amoeba transition is the massive drop in relaxation time, Fig. \[figDynamics\]**B**: at a total of *six decades*, this is a very substantial dynamical change for particle-resolved studies of colloids, active or passive [@ivlev]. The crystallites are effectively solids, while the amoebae exhibit timescales of colloidal liquids, even though their structure is crystalline. Despite this precipitous drop in relaxation time, we find that the transition from crystallites to amoebae is apparently continuous in nature. We thus conclude that the crystallite-amoebae and amoeba-active gas transitions we have found are both continuous, at least insofar as we can detect.
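The overlap of Eq. \[qt\] and the stretched-exponential fit form can be sketched as follows (our illustrative code; the trajectory layout and function names are assumptions, not the authors' analysis scripts):

```python
import numpy as np

def overlap(traj, lag, a):
    """Self-overlap Q(t) of Eq. [qt], averaged over time origins t'.
    traj: (T, N, 2) array of particle positions; lag: t in frames;
    a: the lengthscale, evaluated at a = sigma in the text."""
    T = traj.shape[0]
    vals = []
    for t0 in range(T - lag):
        dr2 = np.sum((traj[t0 + lag] - traj[t0])**2, axis=-1)
        vals.append(np.mean(np.exp(-dr2 / a**2)))
    return np.mean(vals)

def stretched_exp(t, tau_alpha, b):
    """Fit form Q(t) = exp[-(t/tau_alpha)^b] used to extract the
    relaxation time tau_alpha."""
    return np.exp(-(t / tau_alpha)**b)
```

A frozen (solid-like) configuration gives $Q(t)=1$ at all lags, while particles that have moved far beyond $a$ contribute essentially zero, so $\tau_\alpha$ marks the timescale on which particles typically displace by about one diameter.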
![image](figClusterDipole){width="\textwidth"} Characteristics of the Active and Passive Crystallites {#characteristics-of-the-active-and-passive-crystallites .unnumbered} ------------------------------------------------------ In Fig.\[figStructure\], our $\psi_6$ bond-orientational order parameter gave somewhat limited insight as to the nature of the crystallite-amoeba transition, as both exhibit hexagonal local symmetry. Therefore we now seek other structural measures. In Fig.\[figClusterDipole\]**A**, we consider the fractal nature of the crystallites formed, be they passive or active. The images in Fig. \[figPhaseSetUp\] and Supplementary Movies 1-3 hint that, if the amoebae are rather mobile, then the “interface” between the “amoeba” and its surroundings may be broadened, leading to more particles at the interface and a lower fractal dimension. Specifically, for each cluster we count the number of particles identified on the boundary $N_b$. In the case of compact clusters, this should scale with the number of particles in the cluster as $N^\nu$ with $\nu=1/2$. Figure \[figClusterDipole\]**A** shows the different nature of the passive and active clusters with $\nu \approx$ 3/4 and 5/6 respectively, indicating a rougher boundary for the active clusters. Figure \[figClusterDipole\]**B** shows how the mean cluster size varies in different regimes. The system is composed of a few large clusters at very low ${\operatorname{\mathit{P\kern-.08em e}}}$. Upon increasing the activity, those big clusters break up to smaller ones until in the gas regime where the system is dominated by monomers. Consistent with our previous observations, the cluster size is non-monotonic, with an increase of the mean cluster size with the formation of the bands at large ${\operatorname{\mathit{P\kern-.08em e}}}$. 
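The boundary-scaling exponent $\nu$ discussed above can be estimated by a least-squares fit in log-log space over the $(N, N_b)$ pairs of the identified clusters. The snippet below is an illustrative fit of this kind, not necessarily the authors' exact procedure:

```python
import numpy as np

def boundary_exponent(n_cluster, n_boundary):
    """Fit the scaling N_b ~ N^nu by linear least squares on
    (log N, log N_b); returns the estimated exponent nu."""
    x = np.log(np.asarray(n_cluster, dtype=float))
    y = np.log(np.asarray(n_boundary, dtype=float))
    nu, _ = np.polyfit(x, y, 1)
    return nu
```

For perfectly compact clusters the fit recovers $\nu = 1/2$, while the rougher active clusters reported in the text give larger exponents ($\nu \approx 5/6$).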
Note that in the regime where our simulations indicate banding, finite size effects in the simulations (which have $N=10000$ particles) may influence the cluster size somewhat as the bands span the simulation box. The same holds for the passive crystals at low field strength. Nature of the Transitions at Higher Activity: Amoeba to Active Gas and Active Gas to Ordered Bands {#nature-of-the-transitions-at-higher-activity-amoeba-to-active-gas-and-active-gas-to-ordered-bands .unnumbered} -------------------------------------------------------------------------------------------------- In addition to the transitions we have already discussed, we encounter more at higher field strength. Firstly, the amoebae “dissolve” to form an “active gas”. At the densities we consider, this transition is characterised by a substantial – but continuous – drop in the $\psi_6$ bond-orientational order parameter (Fig. \[figStructure\]**A**) consistent with our discussion of the continuous change in dynamics above. At higher field strengths, we encounter banding, strong density fluctuations perpendicular to the preferred direction of travel. Interestingly, these bands exhibit some degree of local order, as the value of the bond-orientational order parameter $\langle \psi_6 \rangle \approx 0.2$. While far from indicating full hexagonal order ($\langle \psi_6 \rangle = 1$), this is nevertheless significantly larger than zero. Furthermore, as we can see in Fig. \[figPhaseSetUp\]**C**, some particles are in a very high state of crystalline order (appearing blue), although most are not. Rather striking, in the case of the transition to the banded phase is the alignment between the dipoles of the Quincke rollers, which defines the direction of rotation (see SM for details). In Fig. \[figClusterDipole\]**C**, we see a very strong increase in the alignment upon banding, suggesting that this is a suitable order parameter in this case. 
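The degree of dipole alignment discussed above can be quantified by the standard polar order parameter, the magnitude of the mean orientation unit vector (a common measure for active systems; the function name is ours):

```python
import numpy as np

def polar_order(theta):
    """Polar order parameter |<(cos theta_i, sin theta_i)>|:
    1 for perfectly aligned orientations, ~0 for an isotropic state."""
    return np.hypot(np.mean(np.cos(theta)), np.mean(np.sin(theta)))
```

A sharp rise of this quantity from near zero to near one is exactly the signature of the onset of a strongly preferred direction of motion at the banding transition.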
![image](figAmoebaMergingBreaking){width="75.00000%"} Amoeba Coalescence and Splitting {#amoeba-coalescence-and-splitting .unnumbered} -------------------------------- As Supplementary Movies 1 and 3 show, the amoebae are highly dynamic. Here we analyse their coalescence and splitting. It is intriguing to compare it with coalescence and splitting (or break-off) of liquid droplets [@eggers1999; @eggers2008]. However there are several key differences between the two situations. Both coalescence and break-off of liquid drops in passive systems are macroscopic driven processes that are clearly [*not*]{} in a steady state [@eggers1999; @eggers2008], while here the amoeba phase is apparently a non-equilibrium steady state. Furthermore here the clusters are locally ordered. We consider the behaviour at the particle scale and emphasise the 2d nature of the system. Nevertheless one may ask if insights about the steady-state behaviour of our active system can be obtained by considering the behaviour of a driven passive system that is not in a steady state. It is noteworthy that in passive systems there is a strong asymmetry between liquid drop coalescence and break-off – break-off dynamics is very distinct from that of coalescence, reflecting the fact that end-points of the two processes are rather different. In contrast, thermodynamic equilibrium is a steady state with vanishing currents and hence one expects symmetry between coalescence and break-off at equilibrium. However it is not at all obvious if there would be a symmetry between the dynamics of coalescence and break-off in the amoeba phase because it is a non-equilibrium steady-state. That is to say, while of course the microscopic equations of motion in this non-equilibrium system are not expected to exhibit time-reversal symmetry, we enquire whether this is apparent at the level of the coalescence and splitting behaviour in the Amoeba phase. 
In particular, we consider whether we can distinguish the pathways by which coalescence and splitting occur. We analyse the coalescence and break-off events in the amoeba phase as follows. First, distinct “amoebae” are identified (see SM). To investigate the morphological changes, we account for the change in number of particles per “amoeba” and measure the nearest distance between boundary particles in different clusters. This yields the particles which first form the link between two “amoebae” in the case of coalescence, and those where the last point of contact remains in the case of splitting (Fig. \[figAmoebaMerging\]**A**, inset). Having identified the pair of particles involved in coalescence and splitting, we count the number of particles within a distance of 1.1 $\sigma$ and analyse the local $\psi_{6}$. In Fig. \[figAmoebaMerging\], we plot the resulting distributions of the number of particles $n$ in the neighbourhood and the local $\psi_{6}$ with time running forwards and backwards. Since the distributions appear rather similar within our statistics, we infer that our analysis does not reveal any breaking of time-reversal symmetry, consistent with recent work with active Janus colloids which considered aggregation and fragmentation rates [@ginot2018]. Thus this non-equilibrium steady state is fundamentally different to the highly asymmetric case of droplet coalescence and break-off in driven passive liquids [@eggers1999; @eggers2008]. Discussion {#discussion .unnumbered} ========== In conclusion, we have shown that the Quincke roller system exhibits a rich and complex phase behaviour. We have also developed a minimal model of the experimental system which captures all the phenomena observed in experiment, and in fact predicts further phase transitions at activities beyond the experimental regime. We find transitions between passive fluid, crystal, amoeba-like active crystallites, active gas and an ordered banding phase.
We have used a variety of static and dynamic order parameters to probe the nature of these transitions, and find that they are continuous in nature, except the (passive) fluid-crystal transition, which is consistent with a first-order transition. We have analysed coalescence and splitting events in the “amoeba” phase, and find that it does not show significant deviations from time-reversal symmetry. For our simulation model, we have quantitatively parameterised the components of the Quincke roller system by treating the electrohydrodynamic attraction with a long-ranged potential, “hard” core, active force and electrohydrodynamic alignment terms. Remarkably, when we rescale our results to compare the same Péclet numbers in experiments and simulations, we obtain a quantitative agreement between the two. With our model, we have revealed that a key ingredient of the phase behaviour is the interplay between *active* and *passive* interactions. At low mobility, (electrohydrodynamic) passive attractions lead to condensation into crystals, which melt with an increase in activity. Conversely, at high activity, the situation is reversed: activity leads to banding, where the (passive) repulsive core controls the local structure. Our work opens the way to using simple, intuitive minimal models which correctly capture the competition between active and passive interactions to describe, *quantitatively*, the macroscopic physical behaviour of complex active systems which are far from equilibrium. Materials and Methods {#materials-and-methods .unnumbered} ===================== Experimental setup {#experimental-setup .unnumbered} ------------------ Our experimental model consists of poly(methyl methacrylate) (PMMA) spheres of diameter $\sigma=2.92\,\mu\mathrm{m}$, determined with SEM. These are suspended in a 5 mM solution of dioctyl sulfosuccinate sodium (AOT) in hexadecane.
Imaging and DC field application take place in sample cells made of two indium tin oxide (ITO)-coated glass slides (Diamond Coatings, BO-X-20, 100 nm thick). A separating layer of $H=16.2\,\mu\mathrm{m}$ between the slides is made using larger polymer beads and UV-curable adhesive (Norland 81). The uniform field is applied by connecting the slides to a power supply (Elektro Automatik, PS-2384-05B). Image sequences are obtained using brightfield microscopy (Leica DMI 3000B) with a 20x objective and a frame rate of 100 fps. Individual colloids are identified and particle co-ordinates are tracked using standard methods [@crocker1996]. Determination of the critical strength {#determination-of-the-critical-strength .unnumbered} -------------------------------------- We follow the description of Lemaire and co-workers [@pannacci2007; @pannacci2009] to estimate the critical field strength, $E_{Q}$. The spontaneous rotation of particles, known as Quincke rotation, strongly depends on the charge distribution at the particle-liquid interface and the respective charge relaxation times, given by $\tau_{p, l} = \epsilon_{p, l}/s_{p, l}$, where $\epsilon_{p,l}$ and $s_{p,l}$ are the dielectric constant and conductivity of the particle and the liquid respectively. In the case $\tau_{l} > \tau_{p}$, the induced dipole $\mathbf{P}_\mathrm{exp}$ is stable with respect to the field direction. On the other hand, with $\tau_{p} > \tau_{l}$, $\mathbf{P}_\mathrm{exp}$ is unstable with respect to the field direction (see Fig. 1**A** in main text), and any perturbation results in an electrostatic torque $\mathcal{T}^{\text{e}} = \mathbf{P}_\mathrm{exp} \times \mathbf{E}$ that rotates the dipole.
Nevertheless, even if $\tau_{p} > \tau_{l}$ is satisfied, $\mathcal{T}^{\text{e}}$ needs to overcome the viscous torque exerted on the particle by the liquid to initiate rotation, $\mathcal{T}^{\text{H}} = -\alpha \omega$, where the angular velocity is given by $\omega$ and $\alpha = \pi\eta \sigma^3$ is the rotational friction coefficient. We use polymethyl methacrylate colloids of diameter $\sigma = 2.92\,\mu$m, with $\epsilon_{p} = 2.6\epsilon_{0}$, and a 5 mM AOT/hexadecane solution with $\eta = 2.78$ mPa s, $s_{l} \approx 10^{-8}\, \Omega^{-1}\,\text{m}^{-1}$ [@schmidt2012], $s_{p} \approx 10^{-14}\,\Omega^{-1}\,\text{m}^{-1}$ [@pannacci2007] and $\epsilon_{l} \approx 2\epsilon_{0}$ for our system. The critical threshold is given by $$\label{eqEQ} E_{{\rm{Q}}}=\Big[\frac{1}{2}\pi\epsilon_{l}\sigma^{3}(\chi^{\infty}-\chi^{0})\tau_{\mathrm{MW}}\alpha^{-1}\Big]^{-1/2},$$ where the polarisability factors $$\label{eqChi0} \chi^{0}=\frac{s_{p}-s_{l}}{s_{p}+2s_{l}}$$ and $$\label{eqInfty} \chi^{\infty}=\frac{\epsilon_{p}-\epsilon_{l}}{\epsilon_{p}+2\epsilon_{l}}$$ account for the conductivities and permittivities of the particle and liquid respectively. The characteristic dipole relaxation timescale is given by the Maxwell-Wagner time, $$\label{tauMW} \tau_{\mathrm{MW}}=\frac{\epsilon_{p}+2\epsilon_{l}}{s_{p}+2s_{l}}.$$ Microscopic model of effective interactions in Quincke rollers {#microscopic-model-of-effective-interactions-in-quincke-rollers .unnumbered} -------------------------------------------------------------- Following Ref.
[@bricard2013], we consider a pairwise alignment interaction between rollers that leads to a torque on particle $i$ $$\mathcal{T}_i = - \frac{\partial \mathcal{R}_{\rm align}} {\partial \theta_i} \; ; \;$$ $$\mathcal{R}_{\rm align} = - \sum_{\begin{array}{c} j , |\mathbf{r}_{ij}| \le r_{c1} \end{array}} \left( A_1 \hat{\mathbf{P}}_i \cdot \hat{\mathbf{P}}_j + A_2 (\hat{\mathbf{P}}_i - \hat{\mathbf{P}}_j)\cdot{\hat{\mathbf{r}}}_{ij} + A_3 \hat{\mathbf{P}}_j \cdot (2\hat{\mathbf{r}}_{ij} \hat{\mathbf{r}}_{ij} - {\mathbf{I}}) \cdot \hat{\mathbf{P}}_i \right) \; \label{eqH1}$$ where $\hat{\mathbf{P}}_i = ( \cos \theta_i, \sin \theta_i )$ is the direction of motion of the $i$th roller, and ${\bf r}_{ij}$ is the separation between rollers $i$ and $j$. This has the minimum number of terms required to describe the electro-hydrodynamically induced alignment interactions with the correct symmetry; the range of the interaction is set by the distance between the plates in the experimental setup. We truncate $\mathcal{R}_{\rm align}$ at $r_{c1} = 3.0\,\sigma$, where $\sigma$ is the particle diameter. We note that angular momentum is not conserved by these dynamics. The electro-osmotic long-ranged attraction [@nadal2002] is modelled by a truncated and shifted (at $r_{c2} = 5.0\,\sigma$) potential of the form $\mathcal{H}_{\text {attr}} = -A_4 \exp(-\kappa r)/r^2$, where $\kappa = 1/3\,\sigma^{-1}$ is the inverse screening length. The excluded volume interactions between rollers are represented by a repulsive Weeks-Chandler-Andersen (WCA) interaction of the form $\mathcal{H}_{\text{exc}}=4\epsilon((\sigma/r)^{12}-(\sigma/r)^{6})+\epsilon$, where $\epsilon=k_{\rm B}T$ is the energy unit of the model. The WCA potential is truncated at $r_{c3}=2^{1/6}\sigma$.
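These two isotropic pair terms are simple to implement directly. A minimal sketch in the reduced units of the model ($\sigma = 1$, $\epsilon = k_{\rm B}T = 1$), using the attraction strength $A_4 = 10$ adopted in this work; the function names are ours:

```python
import numpy as np

SIGMA = 1.0                       # particle diameter (unit of length)
EPS = 1.0                         # epsilon = k_B T (unit of energy)
KAPPA = 1.0 / 3.0                 # inverse screening length, in 1/sigma
A4 = 10.0                         # attraction strength used in this work
RC2 = 5.0 * SIGMA                 # cutoff of the screened attraction
RC3 = 2.0 ** (1.0 / 6.0) * SIGMA  # WCA cutoff

def h_attr(r):
    """Truncated-and-shifted screened attraction -A4 exp(-kappa r)/r^2."""
    raw = lambda x: -A4 * np.exp(-KAPPA * x) / x**2
    return np.where(r < RC2, raw(r) - raw(RC2), 0.0)

def h_exc(r):
    """Repulsive WCA core; identically zero beyond 2^(1/6) sigma."""
    raw = 4.0 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6) + EPS
    return np.where(r < RC3, raw, 0.0)
```

The shift in `h_attr` makes the attraction vanish continuously at $r_{c2}$, while the constant $+\epsilon$ in the WCA form already places its zero exactly at the cutoff $r_{c3}$.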
The coupling parameters in the alignment interactions are estimated to be $A_1=0.93k_BT$, $A_2=0.33k_BT$ and $A_3=0.48k_BT$ for our experimental conditions (see SM for more details), and we chose the attraction strength to be $A_4=10k_{\rm B}T$. We verified that the qualitative phase behaviour of the model remains the same if we vary the strength of the long-ranged attraction. We note that we have parametrised $A_1,A_3$ from the single particle dynamics in the dilute gas phase, while $A_2,A_4$ are determined from the experimental parameters. Further details are given in the SM. #### Simulation details. — Brownian dynamics simulations were performed on a 2D system composed of $N=10000$ interacting Quincke rollers. We integrate the over-damped Langevin equations (Eqs. \[eqABP\]) using the stochastic Euler scheme with a time step of $dt=10^{-5} \tau$. In our simulations, the interparticle force on the $i$th roller is $\mathbf{F}_i = -\nabla_i (\mathcal{H}_{\text {attr}} +\mathcal{H}_{\text {exc}})$, while the torque on the $i$th roller is $\mathcal{T}_i = -\partial \mathcal{R}_{\rm align} / \partial \theta_i$. The particle diameter $\sigma$, thermal energy $\epsilon=k_{\rm B}T$ and Brownian time $\tau=\sigma^2/D_t$ are chosen as basic units for length, energy and time, respectively. We take $D_r=3D_t/\sigma^2$, as expected for a spherical particle in the low-Reynolds-number regime. We study the phase behaviour of the system as a function of two dimensionless parameters: the Péclet number $\text{${\operatorname{\mathit{P\kern-.08em e}}}$} = f^p \sigma / k_{\text{B}}T$ and the area fraction ${\displaystyle}\phi=\frac{N\pi\sigma^2}{4L^2}$, where $L$ is the linear size of the simulation box. #### Order parameter details.
— Here we take the mean of the bond-orientational order parameter $\psi_6$ across $N$ particles $$\label{eqPsi6Mean} \psi_6 =\frac{1}{N}\sum_{j=1}^{N}|\psi^{j}_{6}|.$$ The value of the order parameter for each particle is $$\label{eqPsi6} \psi_{6}^j \equiv \frac{1}{Z_{j}}\sum_{k=1}^{Z_{j}}\exp\Big(i6\theta_{k}^{j}\Big)$$ where $Z_{j}$ is the co-ordination number of particle $j$ obtained from a Voronoi construction and $\theta_{k}^{j}$ is the angle made between a reference axis and the bond between particle $j$ and its $k$th neighbour. $\psi_{6}=1$ indicates perfect hexagonal ordering, whereas completely disordered structures give $\psi_{6}=0$. Fig. \[figStructure\]**A** shows that for a passive Brownian system there is almost no hexatic order. We quantify the fluctuations in $\psi_6$ by defining the susceptibility $$\label{eqChi6} \chi_{6} \equiv \langle \psi_6^2 \rangle - \langle\psi_6\rangle^2$$ where $\psi_6^2 = 1/N \sum_{j=1}^{N} |\psi_6^j|^2$.\ Supplementary Materials {#supplementary-materials .unnumbered} ----------------------- ![image](sFigBoopPeField.pdf){width="35.00000%"} ![image](sFigAmoebaBoop){width="95.00000%"} \[sFigAmoebaBoop\] ![image](sFigAmoebaClusters.pdf){width="85.00000%"} Microscopic model of Alignment Interactions in Quincke Rollers {#microscopic-model-of-alignment-interactions-in-quincke-rollers .unnumbered} ============================================================== The following description is based on a microscopic model describing the dynamics of a population of colloidal rollers due to Quincke rotation. The direct interactions are detailed in the Methods, and are captured in the force $\mathbf{F}_i$ in Eq. \[eqTranslation\]. Here we consider the alignment terms. The equations of motion for the $i$th self-propelled particle are given by the following Langevin equations, where for the rotational case we have rewritten the version in the main text to explicitly consider the effective alignment interaction.
$$\label{eqTranslation} \dot{\mathbf{r}}_i = \frac{D_t}{k_BT} [ {\mathbf{F}}_i + f^p \hat{\mathbf{P}}_i ] + {\sqrt{2D_t}} \bm{\xi}_i^{t}$$ and $$\label{eqRotation} \dot{\theta_i}=-\frac{D_r}{k_BT}\frac{\partial}{\partial\theta_{i}}\sum_{j\not=i}\mathcal{R}_{\rm{align}}({\mathbf{r}_{ij}},{\mathbf{\hat{P}}}_{i},{\mathbf{\hat{P}}}_{j})+\sqrt{2D_{r}}\xi_{i}^r$$ where particle $i$ is subject to a propulsion force of magnitude $f^p$ whose direction changes due to the alignment interaction and noise $\xi_{i}$. Note that because the simulations are strictly in 2D, the direction of the dipole $\mathbf{P}$ in Eq. \[eqRotation\] is that of the rotation, *i.e.* the direction of self-propulsion, rather than the (3D) induced dipole of the experimental system $\mathbf{P}_\mathrm{exp}$ mentioned above. Introduced by Caussin and Bartolo [@bricard2013], the effective alignment interaction $\mathcal{R}_{\rm{align}}$ reads $$\label{potential} \mathcal{R}_{\rm{align}}({\mathbf{r}},{\mathbf{\hat{P}}}_{i},{\mathbf{\hat{P}}}_{j})=-A_1(r){\mathbf{\hat{P}}}_{i}\cdot{\mathbf{\hat{P}}}_{j}-A_2(r)\mathbf{\hat{r}}\cdot(\mathbf{\hat{P}}_{i}-\mathbf{\hat{P}}_{j})-A_3(r){\mathbf{\hat{P}}}_{j}\cdot(2{\mathbf{\hat{r}\hat{r}-I}})\cdot{\mathbf{\hat{P}}}_{i}$$ with ${\mathbf{\hat{r}}}\equiv{\mathbf{r}}/r$. The coefficients $A_1(r), A_2(r)$ and $A_3(r)$ incorporate the microscopic parameters and are given by: $$\label{eqA1} \renewcommand{\theequation}{\theparentequation. \arabic{equation}} A_1(r)=3\tilde{\mu}_{s}\frac{\sigma^3}{8r^3}\Theta(r)+9\Bigg(\frac{\mu_{\perp}}{\mu_{r}}-1\Bigg)\Bigg(\chi^{\infty}+\frac{1}{2}\Bigg)\Bigg(1-\frac{E_{{\rm{Q}}}^{2}}{E_0^2}\Bigg)\frac{\sigma^{5}}{32r^5}\Theta(r)$$ accounting for the short-ranged hydrodynamic interactions and electrostatic couplings that promote the alignment of directions between particles $i$ and $j$.
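These equations of motion can be advanced with the stochastic Euler scheme mentioned in the simulation details. The sketch below is deliberately stripped down to non-interacting rollers ($\mathbf{F}_i = 0$ and no alignment torque, so the orientations only diffuse), in the reduced units $\sigma = k_{\rm B}T = \tau = 1$ with $D_t = 1$ and $D_r = 3$; the function name is ours:

```python
import numpy as np

def euler_step_abp(pos, theta, dt, pe, d_t=1.0, d_r=3.0, rng=None):
    """One stochastic Euler update of the overdamped equations above,
    for non-interacting rollers (no force, no alignment torque)."""
    rng = rng if rng is not None else np.random.default_rng()
    e = np.column_stack((np.cos(theta), np.sin(theta)))   # P_hat_i
    pos = pos + d_t * pe * e * dt \
        + np.sqrt(2.0 * d_t * dt) * rng.standard_normal(pos.shape)
    theta = theta + np.sqrt(2.0 * d_r * dt) * rng.standard_normal(theta.shape)
    return pos, theta

# 100 rollers at the origin, all pointing along +x, Pe = 10
rng = np.random.default_rng(0)
pos, theta = np.zeros((100, 2)), np.zeros(100)
for _ in range(1000):                   # 1000 steps of dt = 1e-5 tau
    pos, theta = euler_step_abp(pos, theta, dt=1e-5, pe=10.0, rng=rng)
```

Over this short run ($t = 0.01\,\tau \ll 1/D_r$) the mean displacement along $x$ stays close to the ballistic value $D_t\,\mathrm{Pe}\,t = 0.1\,\sigma$; the full simulation adds the interparticle force $\mathbf{F}_i$ and the alignment torque at each step.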
Here, $\mu_{\perp}$ and $\mu_{r}$ are the mobility coefficients, which depend on the liquid viscosity and on the distance $d$ between the surface and the particle, respectively. From the expressions in [@ONeill1967; @Goldman1967a; @Goldman1967; @LIU2010] we obtain $\chi^{\infty}=0.08$, $\tilde{\mu}_{s}=11$ and $\mu_{\perp}/\mu_{r}=1.5$.\ The electrostatic repulsion and the electro-hydrodynamic coupling are encoded in the $A_2(r)$ and $A_3(r)$ coefficients respectively, $$\label{eqA2} \renewcommand{\theequation}{\theparentequation. \arabic{equation}} A_2(r)=6\Bigg(\frac{\mu_{\perp}}{\mu_{r}}-1\Bigg)\sqrt{\frac{E_0^{2}}{E_{\rm{Q}}^{2}}-1}\Bigg[\Bigg(\chi^{\infty}+\frac{1}{2}\Bigg)\frac{E_0^{2}}{E_{\rm{Q}}^{2}}-\chi^{\infty}\Bigg]\frac{\sigma^{4}}{16r^4}\Theta(r)$$ $$\label{eqA3} \renewcommand{\theequation}{\theparentequation.\arabic{equation}} A_3(r)=2\tilde{\mu}_{s}\frac{\sigma^{2}}{4r^{2}}\frac{\sigma}{2H} + \Bigg[\tilde{\mu}_{s}\frac{\sigma^{3}}{8r^{3}} + 5\Bigg(\frac{\mu_{\perp}}{\mu_{r}}-1\Bigg)\Bigg(\chi^{\infty}+\frac{1}{2}\Bigg)\Bigg(1-\frac{E_{{\rm{Q}}}^{2}}{E_0^2}\Bigg)\frac{\sigma^{5}}{32r^{5}}\Bigg]\Theta(r)$$ where the hydrodynamic and electrostatic couplings are screened over distances proportional to the chamber height, $H = 16.2\,\mu$m. A more detailed description can be found in Refs. [@bricard2013] and [@bricard2015]. We estimate these coefficients considering the experimental field intensity under which we observe the active gas phase ($1.85 \times 10^{6}\,$V$\,$m$^{-1}$), and average them over distances $r\in[\sigma,3\sigma]$. For convenience we approximate the screening function as $\Theta(r)=1$ if $r \leq H/\pi $ and $\Theta(r)=0$ otherwise.
Under these assumptions, we obtain $$\label{A3} \renewcommand{\theequation}{\theparentequation.\arabic{equation}} A_1=0.93 k_BT$$ $$\label{B3} \renewcommand{\theequation}{\theparentequation.\arabic{equation}} A_2=0.33 k_BT$$ $$\label{C3} \renewcommand{\theequation}{\theparentequation.\arabic{equation}} A_3=0.48 k_BT$$ Supplementary Movies -------------------- Supplementary Movie 1 ===================== *Amoeba aggregates —* Finite-size amoeba clusters displaying collective rotation. The interaction between amoeba aggregates leads to merging. The movie is coloured with a glow effect for clarity. Colloid diameter $\sigma = 2.92\,\mu m$. Field strength $E_0 = 19.4 E_{Q}$, and Pe = 1.5. Frame acquisition at 100 fps, movie played at 34 fps [@movies]. Supplementary Movie 2 ===================== *Amoeba phase —* The movie shows the experimental trajectory of an amoeba-like aggregate. The colourbar indicates the local hexagonal order parameter $\psi^{i}_{6}$ for each particle. Fluctuations of the order parameter result as the aggregates merge and break. White arrows indicate the instantaneous collective displacement. Colloid diameter $\sigma = 2.92\,\mu m$. Field strength $E_0 = 19.4 E_{Q}$, and Pe = 1.5. Frame acquisition at 100 fps, movie played at 17 fps [@movies]. Supplementary Movie 3 ===================== *Experimental phase transition —* Transition of an isolated cluster as $E_0$ is increased. The transition goes from a highly ordered and dynamically arrested state to an isotropic state of Quincke rollers. The colourbar indicates the local hexagonal order parameter $\psi^{i}_{6}$. Particles in blue possess high hexagonal order, whereas the order is poor in particles in red. Colloid diameter $\sigma = 2.92\,\mu m$. Field strength $E_0\,\in [9.9,29.8]E_{Q}$, and Pe $\in[10^{-4},11.3]$. Frame acquisition at 100 fps, movie played at 17 fps [@movies].
Supplementary Movie 4 ===================== *Onset of banding —* Thick bands (of tens of particles) form as particle trajectories undergo alignment. The bands propagate through an isotropic state. Pe = 50 [@movies]. Supplementary Movie 5 ===================== *Onset of banding —* Same data as Supplementary Movie 4, shown as a zooming-out sequence. Pe = 50 [@movies].\ Acknowledgements {#acknowledgements .unnumbered} ---------------- The authors would like to thank Denis Bartolo, Olivier Dauchot, Jens Eggers, Mike Hagan, Rob Jack, Cristina Marchetti, Sriram Ramaswamy, Thomas Speck and Chantal Valeriani for helpful discussions. CPR, JH and FT would like to acknowledge the European Research Council under the FP7 / ERC Grant agreement n$^\circ$ 617266 “NANOPRS”. AMA is funded by CONACyT. TBL and MM are supported by BrisSynBio, a BBSRC/EPSRC Advanced Synthetic Biology Research Center (grant number BB/L01386X/1). Part of this work was carried out using the computational facilities of the Advanced Computing Research Centre, University of Bristol. References and Notes --------------------
--- abstract: 'We establish a relation between the equation of state of nuclear matter and the fourth-order symmetry energy $a_{\rm{sym,4}}(A)$ of finite nuclei in a semi-empirical nuclear mass formula by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass. Such a relation allows us to extract information on the nuclear matter fourth-order symmetry energy $E_{\rm{sym,4}}(\rho_0)$ at normal nuclear density $\rho_0$ from analyzing nuclear mass data. Based on the recent precise extraction of $a_{\rm{sym,4}}(A)$ via the double difference of the “experimental” symmetry energy extracted from nuclear masses, for the first time, we estimate a value of $E_{\rm{sym,4}}(\rho_0) = 20.0\pm4.6$ MeV. Such a value of $E_{\rm{sym,4}}(\rho_0)$ is significantly larger than the predictions from mean-field models and thus suggests the importance of considering effects beyond the mean-field approximation in nuclear matter calculations.' author: - Rui Wang - 'Lie-Wen Chen[^1]' title: 'Empirical information on nuclear matter fourth-order symmetry energy from an extended nuclear mass formula' --- Introduction ============ The determination of the isospin dependent part of the nuclear matter equation of state (EOS) has become a hot topic in both nuclear physics and astrophysics during the last decades [@LiBA98; @Dan02; @Lat04; @Ste05; @Bar05; @LCK08; @Tra12; @Hor14; @LiBA14; @Bal16; @Oer17; @LiBA17]. The nuclear matter EOS gives the energy per nucleon $E(\rho, \delta)$ as a function of density $\rho$ $=$ $\rho_{\rm{n}}$ $+$ $\rho_{\rm{p}}$ and isospin asymmetry $\delta$ $=$ $(\rho_{\rm{n}} - \rho_{\rm{p}})/\rho$ with $\rho_{\rm{n}}$ ($\rho_{\rm{p}}$) being the neutron (proton) density.
The parabolic approximation to the nuclear matter EOS, i.e., $E(\rho, \delta)$ $\approx $ $E(\rho,\delta=0)$ $+$ $E_{\rm{sym}}(\rho)\delta^2$, is widely adopted, with the symmetry energy defined as $E_{\rm{sym}}(\rho)=$ $\frac{1}{2!}\frac{\partial^2 E(\rho,\delta)}{\partial\delta^2}\big|_{\delta = 0}$. The feasibility of the parabolic approximation is practically justified in various aspects of nuclear physics, especially in finite nuclei where the $\delta^2$ value is usually significantly less than one. Nevertheless, in neutron stars, where $\delta$ could be close to one, sizable higher-order terms of the isospin dependent part of the nuclear matter EOS, e.g., the term $E_{\rm{sym,4}}(\rho)\delta^4$ with the fourth-order symmetry energy defined as $E_{\rm{sym,4}}(\rho)=$ $\frac{1}{4!}\frac{\partial^4 E(\rho,\delta)}{\partial\delta^4}\big|_{\delta = 0}$, may have substantial effects on properties such as the proton fraction at beta-equilibrium, the core-crust transition density and the critical density for the direct URCA process [@Zha01; @Ste06; @Xu09; @Cai12; @Sei14]. To the best of our knowledge, unfortunately, there is so far essentially no experimental information on the magnitude of $E_{\rm{sym,4}}(\rho)$, even at normal nuclear density $\rho_0$. Theoretically, mean-field models generally predict that the magnitude of $E_{\rm{sym,4}}(\rho_0)$ is less than $2$ MeV [@ChenLW09; @Cai12; @Arg17; @PuJ17]. A value of $E_{\rm{sym,4}}(\rho_0)=1.5$ MeV is obtained from chiral pion-nucleon dynamics [@Kai15]. The recent study [@Nan16] within the quantum molecular dynamics (QMD) model indicates that $E_{\rm{sym,4}}(\rho_0)$ could be as large as $3.27 \sim 12.7$ MeV depending on the interactions used.
Based on an interacting Fermi gas model, a significant value of $7.18\pm2.52$ MeV [@Cai15] is predicted for the kinetic part of $E_{\rm{sym,4}}(\rho_0)$ by considering the high-momentum tail [@Hen14] in the single-nucleon momentum distributions that could be due to short-range correlations of nucleon-nucleon interactions. In addition, the divergence of the isospin-asymmetry expansion of the nuclear matter EOS in many-body perturbation theory is discussed in Refs. [@Kai15; @Wel16]. Therefore, the magnitude of $E_{\rm{sym,4}}(\rho_0)$ is currently largely uncertain, and it is of critical importance to obtain some experimental or empirical information on $E_{\rm{sym,4}}(\rho_0)$. Conventionally, the nuclear matter EOS is quantitatively characterized in terms of a few characteristic coefficients through Taylor expansion in density at $\rho_0$, e.g., $E(\rho, \delta = 0)$ $=$ $E_0(\rho_0)$ $+$ $\frac{1}{2!}K_0\chi^2$ $+$ $\frac{1}{3!}J_0\chi^3$ $+$ ${\cal O}(\chi^4)$ and $E_{\rm{sym}}(\rho)$ $=$ $E_{\rm sym}(\rho_0)$ $+$ $L\chi$ $+$ $\frac{1}{2!}K_{\rm{sym}}\chi^2$ $+$ ${\cal O}(\chi^3)$ with $\chi$ $=$ $\frac{\rho - \rho_0}{3\rho_0}$. The density in the interior of heavy nuclei is believed to closely approximate the saturation density of symmetric nuclear matter (nuclear normal density) $\rho_0$, and the empirical value of $\rho_0\approx$ $0.16$ fm$^{-3}$ has been obtained from measurements on electron or nucleon scattering off heavy nuclei [@Jac74]. Our knowledge of the nuclear matter EOS largely stems from nuclear masses based on various nuclear mass formulae. By analyzing the data on nuclear masses with various nuclear mass formulae (see, e.g., Ref. [@Mol12]), consensus has been reached on $E_0(\rho_0)$ and $E_{\rm sym}(\rho_0)$ with $E_0(\rho_0)$ $\approx$ $-16.0~\rm MeV$ and $E_{\rm sym}(\rho_0)$ $\approx$ $32.0~\rm MeV$. These empirical values of $E_0(\rho_0)$ and $E_{\rm sym}(\rho_0)$ are of critical importance for our understanding of the nuclear matter EOS.
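The truncated expansions above can be combined into a simple evaluator for $E(\rho,\delta)$. In the sketch below, $E_0(\rho_0)$ and $E_{\rm sym}(\rho_0)$ take the empirical values just quoted and $E_{\rm sym,4}(\rho_0)$ the value extracted in this work, while $K_0$ and $L$ are set to commonly quoted illustrative values (they are not determined here):

```python
RHO0 = 0.16  # fm^-3, empirical saturation density

def eos_energy(rho, delta, e0=-16.0, k0=230.0, esym0=32.0,
               slope_l=60.0, esym4=20.0):
    """Energy per nucleon E(rho, delta) in MeV from the truncated
    expansions above. e0 and esym0 are the empirical values quoted in
    the text, esym4 the value extracted in this work; k0 and slope_l
    are illustrative values only."""
    chi = (rho - RHO0) / (3.0 * RHO0)
    e_snm = e0 + 0.5 * k0 * chi**2        # symmetric matter, O(chi^3) dropped
    esym = esym0 + slope_l * chi          # symmetry energy, O(chi^2) dropped
    return e_snm + esym * delta**2 + esym4 * delta**4
```

At $\rho = \rho_0$ this reduces to $E_0(\rho_0) + E_{\rm sym}(\rho_0)\delta^2 + E_{\rm sym,4}(\rho_0)\delta^4$, making explicit how small the $\delta^4$ correction is for finite nuclei ($\delta^2 \ll 1$) and how it grows toward pure neutron matter ($\delta \to 1$).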
Generally speaking, it is very hard to determine the higher-order parameter $E_{\rm{sym,4}}(\rho_0)$ and the fourth-order symmetry energy $a_{\rm sym,4}(A)$ of finite nuclei from simply fitting nuclear masses within nuclear mass formulae, since the term $a_{\rm sym,4}(A)I^4$ (with $I =\frac{N-Z}{A}$, where $N$ and $Z$ are the neutron and proton number, respectively, and $A=$ $N$ $+$ $Z$ is the mass number) is very small compared to other lower-order terms in the mass formula for known nuclei, even for the predicted dripline nuclei [@WangR15]. Recently, however, by approximating $a_{\rm sym,4}(A)$ to a constant $c_{\rm sym,4}$ in the mass formula, several studies [@Jia14; @Jia15; @Wan15; @Tia16] have been performed to extract $c_{\rm sym,4}$ from analyzing the double difference of the “experimental” symmetry energy extracted from nuclear mass data, and robust results with high precision have been obtained, i.e., a sizable positive value of $c_{\rm sym,4}=$ $3.28\pm0.50~\rm MeV$ or $8.47\pm0.49$ MeV is obtained in Ref. [@Jia14], depending on the Wigner term form in the mass formula. More recently, a value of $c_{\rm sym,4}=$ $8.33\pm1.21~\rm MeV$ was extracted in Ref. [@Tia16] using a similar analysis of nuclear masses. These results provide the possibility to extract information on $E_{\rm{sym,4}}(\rho_0)$. In this work, by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass, we extend the mass formula of Ref. [@Dan03] to additionally include the corrections due to the central density variation of finite nuclei and the fourth-order symmetry energy term $a_{\rm{sym,4}}(A)I^4$. In this extended mass formula, an explicit relation between $a_{\rm{sym,4}}(A)$ and $E_{\rm{sym,4}}(\rho_0)$ is obtained. We demonstrate for the first time that the precise value of $c_{\rm sym,4}$ obtained recently from nuclear mass analysis allows us to estimate a value of $E_{\rm{sym,4}}(\rho_0) = 20.0\pm4.6$ MeV.
Nuclear mass formula ==================== There have been a number of nuclear mass models which aim to describe the experimental nuclear mass database and predict the mass of unknown nuclei. Nowadays, some sophisticated mass formulae [@Roy08; @Mol12; @Wan10; @Wan14] (with shell and pairing corrections) can reproduce the measured masses of more than $2000$ nuclei with a root-mean-square deviation of merely a few hundred keV. These mass formulae provide us with empirical information about the EOS of nuclear matter, especially its lower-order characteristic parameters $E_0(\rho_0)$, $E_{\rm sym}(\rho_0)$ and so forth. To relate the coefficients in the mass formula to the EOS of nuclear matter, one can express the binding energy $B(N,Z)$ of a nucleus with $N$ neutrons and $Z$ protons in terms of the bulk energy of nuclear matter in the interior of the nucleus plus surface corrections and the Coulomb energy. Based on such an argument, Danielewicz [@Dan03] developed a mass formula with a self-consistent $A$-dependent symmetry energy $a_{\rm sym}(A)$ of finite nuclei. Considering that the central density $\rho_{\rm cen}$ in nuclei generally depends on $N$ and $Z$ and deviates from $\rho_0$, we here extend the mass formula of Ref. [@Dan03] by considering the deviation of $\rho_{\rm cen}$ from $\rho_0$, and additionally including the higher-order $I^4$ terms. In such a framework, a nucleus with $N$ neutrons and $Z$ protons is assumed to localize inside an effective sharp radius $R$, i.e., $$R = r_0\big[1 + 3\chi_{\rm{cen}}(N,Z)\big]^{-1/3}A^{1/3}, \label{radius}$$ where $r_0$ is a constant satisfying $\frac{4}{3}\pi\rho_0r_0^3 = 1$ and $\chi_{\rm cen}$ $=$ $(\rho_{\rm cen} - \rho_0)/3\rho_0$ is a dimensionless variable characterizing the deviation of $\rho_{\rm cen}$ from $\rho_0$.
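For orientation, the constant $r_0$ and the resulting sharp radius are easy to evaluate numerically; with the empirical $\rho_0 = 0.16\,$fm$^{-3}$ quoted above one obtains $r_0 \approx 1.14\,$fm. A minimal sketch (function names are ours):

```python
import math

RHO0 = 0.16  # fm^-3, empirical saturation density

def r0():
    """The constant r0 defined by (4/3) pi rho0 r0^3 = 1, in fm."""
    return (3.0 / (4.0 * math.pi * RHO0)) ** (1.0 / 3.0)

def sharp_radius(a_mass, chi_cen=0.0):
    """Effective sharp radius R of Eq. (radius), in fm; chi_cen = 0
    recovers the familiar R = r0 A^(1/3) scaling."""
    return r0() * (1.0 + 3.0 * chi_cen) ** (-1.0 / 3.0) * a_mass ** (1.0 / 3.0)
```

For $^{208}$Pb with $\chi_{\rm cen}=0$ this gives $R \approx 6.8\,$fm, and a positive $\chi_{\rm cen}$ (central density above $\rho_0$) shrinks the radius, as Eq. (\[radius\]) requires.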
Furthermore, we denote the volume (surface) neutron excess as $\Delta_{\rm v}$ $=$ $N_{\rm v} - Z_{\rm v}$ ($\Delta_{\rm s}$ $=$ $N_{\rm s} - Z_{\rm s}$), where $N_{\rm v}$ ($Z_{\rm v}$) and $N_{\rm s}$ ($Z_{\rm s}$) represent the neutron (proton) number in the volume and surface regions of the nucleus, respectively, with $N_{\rm v}$ $+$ $N_{\rm s}$ $=$ $N$ and $Z_{\rm v}$ $+$ $Z_{\rm s}$ $=$ $Z$. Generally, $\chi_{\rm cen}$ and $\Delta_{\rm v}$ ($\Delta_{\rm s}$) depend on $N$ and $Z$ of the nucleus and can be determined from equilibrium conditions, and this is consistent with the argument of the droplet model (see, e.g., Ref. [@Rei06]). In the present work, the nuclear binding energy consists of a volume term $B_{\rm v}$, a surface term $B_{\rm s}$ and a Coulomb term $B_{\rm c}$. The volume part of the binding energy can be treated in the nuclear matter approximation, i.e., $$\begin{split} B_{\rm v} & \approx A\Big[E_0(\rho_0) + \frac{1}{2}K_0\chi_{\rm cen}^2 + E_{\rm sym}(\rho_0)\big(\frac{\Delta_{\rm v}}{A}\big)^2\\ & + L\chi_{\rm cen}\big(\frac{\Delta_{\rm v}}{A}\big)^2 + E_{\rm sym,4}(\rho_0)\big(\frac{\Delta_{\rm v}}{A}\big)^4\Big]. \label{BE-V} \end{split}$$ The surface term comes from the surface tension and the symmetry potential (a detailed argument can be found in Ref. [@Dan03]), and it can be expressed as $$\begin{aligned} B_{\rm s} & = & \Big[\sigma_0 - \sigma_{\rm I} \big(\frac{\Delta_{\rm s}}{S}\big)^2\Big]4\pi R^2 + \frac{2\sigma_{\rm I}}{4\pi R^2}\Delta_{\rm s}^2\nonumber \\ & \approx & E_{\rm s0}(1-2\chi_{\rm cen})A^{\frac{2}{3}} + \beta(1+2\chi_{\rm cen})A^{\frac{4}{3}}\big(\frac{\Delta_{\rm s}}{A}\big)^2, \label{BE-S}\end{aligned}$$ where $\sigma_0$ ($\sigma_{\rm I}$) represents the isospin independent (dependent) surface tension, $S = 4\pi R^2$ is the surface area of the nucleus, and we define $E_{\rm s0} = 4\pi r_0^2\sigma_0$ and $\beta = \frac{\sigma_{\rm I}}{4\pi r_0^2}$. Eq. (\[radius\]) has been used to obtain the second line in Eq. (\[BE-S\]).
For the Coulomb energy, we adopt for simplicity the following form without the exchange term, i.e., $$B_{\rm c} = \frac{3}{5}\frac{e^2}{4 \pi \epsilon_0}\frac{1}{R}Z^2 \approx a_{\rm c} A^{-1/3}Z^2(1 + \chi_{\rm cen}), \label{BE-C}$$ with $a_c=\frac{3}{5}\frac{e^2}{4\pi\epsilon_0r_0}$. The equilibrium condition of nuclei can be obtained from variations of the binding energy $B(N,Z)$ of the nucleus with respect to $\chi_{\rm cen}$ and $\Delta_{\rm v}$, i.e., $$\frac{\partial B(N,Z)}{\partial\chi_{\rm cen}} = 0, \qquad \frac{\partial B(N,Z)}{\partial\Delta_{\rm v}} = 0, \label{Variation}$$ from which we can obtain $\chi_{\rm cen}$ and $\Delta_{\rm v}$ ($\Delta_{\rm s}$) for different $A$ and $Z$. The first equation expresses mechanical equilibrium and tells us how the surface energy, Coulomb energy and the isospin dependent part of the volume energy affect the value of $\rho_{\rm cen}$, while the second equation represents the balance of the isospin asymmetry chemical potential between the volume and surface regions. To solve Eq. (\[Variation\]), we expand $\chi_{\rm cen}$ in terms of $\frac{\Delta_{\rm v}}{A}$, and then expand $(\frac{\Delta_{\rm v}}{A})^2$ in terms of $I$, i.e., $$\begin{aligned} \chi_{\rm cen} & =& \chi_0 + \chi_2 \big(\frac{\Delta_{\rm v}}{A}\big)^2 + {\cal O}\Big[\big(\frac{\Delta_{\rm v}}{A}\big)^4\Big], \label{chi}\\ \big(\frac{\Delta_{\rm v}}{A}\big)^2 & = & D_2I^2 + {\cal O}(I^4), \label{Deltav}\end{aligned}$$ where the expansion coefficients $\chi_0$, $\chi_2$ and $D_2$ might depend on $A$ or $Z$, consistent with calculations from the droplet model [@Rei06] and the Thomas-Fermi approximation [@WangR17]. Using Eqs. (\[BE-V\]), (\[BE-S\]) and (\[BE-C\]) and substituting $B(N,Z)$ $=$ $B_{\rm v}$ $+$ $B_{\rm s}$ $+$ $B_{\rm c}$ into Eq.
(\[Variation\]) leads to the following two equations $$\begin{aligned} \frac{\partial B}{\partial\chi_{\rm cen}} & = & A \Big[K_0 \chi_{\rm cen} + E_{\rm sym}(\rho_0)\big(\frac{\Delta_{\rm v}}{A}\big)^2\Big] - 2 E_{\rm s0} A^{\frac{2}{3}} + 2 \beta A^{\frac{4}{3}} \big(\frac{\Delta_{\rm s}}{A}\big)^2 + a_{\rm c} Z^2 A^{-\frac{1}{3}} = 0\label{vrchi},\\ \frac{\partial B}{\partial\Delta_{\rm v}} & = & 2\big(E_{\rm sym}(\rho_0) + L \chi_{\rm cen}\big)\frac{\Delta_{\rm v}}{A} + 4 E_{\rm sym,4}(\rho_0)\big(\frac{\Delta_{\rm v}}{A}\big)^3 - 2\beta A^{\frac{1}{3}}\frac{\Delta_{\rm s}}{A} (1 + 2 \chi_{\rm cen}) = 0\label{vrdlt}.\end{aligned}$$ By eliminating $\frac{\Delta_{\rm s}}{A}$ in Eq. (\[vrchi\]) and Eq. (\[vrdlt\]), we obtain $$\begin{split} & K_0 A \chi_{\rm cen} - 2 E_{\rm s0} A^{\frac{2}{3}} + a_{\rm c} Z^2 A^{-\frac{1}{3}} + {\cal O}(A^{\frac{1}{3}})\\ + & \big[LA + {\cal O}(A^{\frac{2}{3}})\big] \big(\frac{\Delta_{\rm v}}{A}\big)^2 + {\cal O}\Big[\big(\frac{\Delta_{\rm v}}{A}\big)^4\Big] = 0. \label{vrpls} \end{split}$$ From Eqs. (\[vrpls\]) and (\[chi\]), one can obtain $\chi_0$ and $\chi_2$, i.e., $$\chi_0(A,Z) = \frac{2E_{\rm s0}A^{2/3} - a_{\rm c}Z^2A^{-1/3}}{AK_0}, \label{chi0}$$ which represents the modification of the central density of finite nuclei due to the surface and Coulomb energies, and $$\chi_2 = -\frac{L}{K_0}, \label{chi2}$$ which determines the modification of the nuclear central density due to the isospin dependent part of the nuclear matter EOS. On the other hand, one can derive the relation between $(\frac{\Delta_{\rm v}}{A})^2$ and $I$ from Eq. (\[vrdlt\]), i.e., $$\begin{aligned} I^2 & = & \big(\frac{\Delta_{\rm v}}{A} + \frac{\Delta_{\rm s}}{A}\big)^2 \nonumber \\ & \approx & \Big[\frac{\big(E_{\rm sym}(\rho_0) + \beta A^{\frac{1}{3}}\big)^2}{\beta^2A^{2/3}} + {\cal O}(A^{\frac{1}{3}})\Big]\big(\frac{\Delta_{\rm v}}{A}\big)^2,\end{aligned}$$ from which, together with Eq. 
(\[Deltav\]), one then obtains $$D_2(A) = \frac{1}{(1 + \frac{E_{\rm sym}(\rho_0)}{\beta}A^{-1/3})^2}. \label{D2}$$ As can be seen from Eq. (\[Deltav\]), $D_2(A)$ reflects the fraction of volume neutron excess in the total neutron excess of a nucleus with mass number $A$. The ratio $E_{\rm sym}(\rho_0)/\beta$ is the so-called symmetry volume-surface ratio [@Dan03], which can be considered as an independent parameter. From Eqs. (\[chi0\]), (\[chi2\]) and (\[D2\]), $\chi_{\rm cen}$ and $\Delta_{\rm v}$ in Eqs. (\[chi\]) and (\[Deltav\]) can be expressed in terms of $A$ and $Z$, and thus the nuclear binding energy $B(N,Z)$ $=$ $B_{\rm v}$ $+$ $B_{\rm s}$ $+$ $B_{\rm c}$ can be recast into $$\begin{split} B(A,Z) & =A\times \big[c_{00}(A,Z) + c_{01}(A,Z)A^{-\frac{1}{3}}\\ & + a_{\rm{sym}}(A,Z)I^2 + a_c(1+\chi_0)Z^2A^{-\frac{4}{3}}\\ & + a_{\rm{sym,4}}(A)I^4\big], \end{split} \label{MF}$$ where $c_{00}$ and $c_{01}$ characterize the isospin independent parts of $ B(A,Z)$ with $$\begin{aligned} c_{00}(A,Z) & = & E_0(\rho_0) + \frac{1}{2}K_0\chi_0^2(A,Z),\\ c_{01}(A,Z) & = & E_{\rm{s}0}\big[1 - 2\chi_0(A,Z)\big],\end{aligned}$$ while the symmetry energy $a_{\rm sym}(A,Z)$ and the fourth-order symmetry energy $a_{\rm{sym,4}}(A)$ of finite nuclei can be expressed, respectively, as $$\begin{aligned} a_{\rm{sym}}(A,Z) & = & D_2(A)\Big[E_{\rm sym}(\rho_0) + a_{\rm c}Z^2\chi_2A^{-\frac{4}{3}}\nonumber\\ & + & \Big(\frac{E_{\rm sym}^2(\rho_0)}{\beta}-2E_{\rm{s}0}\chi_2\Big)A^{-\frac{1}{3}}\Big]\label{asym},\\ a_{\rm{sym,4}}(A) & = & D_2^2(A)\Big(E_{\rm sym,4}(\rho_0)-\frac{L^2}{2K_0}\Big)\label{asym4}.\end{aligned}$$ Note that $a_{\rm{sym}}(A,Z)$ in Eq. (\[asym\]) includes a small $Z$-dependent term $a_{\rm{c}}A^{-4/3}Z^2\chi_2D_2$, which comes from the modification of $\rho_{\rm{cen}}$ due to the Coulomb energy. The mass formula Eq. (\[MF\]) is an extended form of the mass formula of Ref. 
[@Dan03] and the latter can be obtained from the former by omitting the corrections due to the central density variation of finite nuclei (i.e., setting $\chi_0=$ $\chi_2 =0$) and the higher-order $I^4$ term. In particular, by setting $\chi_2$ $=$ $0$, $a_{\rm{sym}}(A,Z)$ reduces to the simpler form of $a_{\rm sym}(A)$ [@Dan03], i.e., $$a_{\rm sym}(A) = E_{\rm sym}(\rho_0)\Big/\Big(1 + \frac{E_{\rm sym}(\rho_0)}{\beta}A^{-\frac{1}{3}}\Big). \label{asym-3}$$ In addition, in the limit of $A$ $\rightarrow$ $\infty$, both $\chi_0$ (see Eq. (\[chi0\])) and $D_2$ (see Eq. (\[D2\])) go to zero, and Eq. (\[MF\]) then reduces to the binding energy per nucleon of asymmetric nuclear matter at the saturation point, where $B(A,Z)/A$ reaches its minimum value, i.e., $$\begin{split} E_{\rm sat}(\delta) & = B(A,Z)/A = E_0(\rho_0) + E_{\rm sym}(\rho_0)\delta^2\\ & + \Big(E_{\rm sym,4}(\rho_0) - \frac{L^2}{2K_0}\Big)\delta^4 + {\cal O}(\delta^6). \end{split} \label{Esat}$$ In the above, the Coulomb interaction is neglected and $I$ is replaced by $\delta$. It is seen that Eq. (\[Esat\]) is exactly the same as the expression obtained in Ref. [@ChenLW09] for the binding energy per nucleon of asymmetric nuclear matter at the saturation point. It should be pointed out that the $A$-dependence of $a_{\rm sym}(A,Z)$ and $a_{\rm sym,4}(A)$ mainly comes from $D_2(A)$ in Eq. (\[D2\]), and the same form of $D_2(A)$ has been obtained in Ref. [@Dan03]. In nuclear mass formulas, the binding energy per nucleon is usually expanded in the two small quantities $A^{-1/3}$ and $I^2$ [@Mye69; @Mye96], with the coefficient of each term determined by fitting nuclear masses. However, since the value of $\frac{E_{\rm sym}(\rho_0)}{\beta}$ is around $2.4$ [@Dan03; @Dan17], $\frac{E_{\rm sym}(\rho_0)}{\beta}A^{-1/3}$ in Eq. (\[D2\]) is thus not small enough to yield a rapidly converging $A^{-1/3}$ expansion. 
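The slow convergence can be made concrete with the reduced form Eq. (\[asym-3\]). The following sketch (not part of the paper's analysis code) evaluates $a_{\rm sym}(A)$ for a few mass numbers, assuming the typical empirical value $E_{\rm sym}(\rho_0)=32$ MeV and the quoted ratio $E_{\rm sym}(\rho_0)/\beta = 2.4$:

```python
# Sketch: A-dependence of the symmetry energy coefficient a_sym(A),
# Eq. (asym-3): a_sym(A) = E_sym(rho0) / (1 + (E_sym/beta) * A**(-1/3)).
# E_sym(rho0) = 32 MeV is an assumed empirical value; E_sym/beta = 2.4 as quoted.

E_sym0 = 32.0   # MeV, symmetry energy at saturation density (assumed)
ratio = 2.4     # E_sym(rho0)/beta, symmetry volume-surface ratio

def a_sym(A):
    """Symmetry energy coefficient of a nucleus with mass number A (MeV)."""
    return E_sym0 / (1.0 + ratio * A ** (-1.0 / 3.0))

for A in (50, 100, 250, 1000, 10**6):
    print(A, round(a_sym(A), 2))
# a_sym(A) rises only slowly toward its A -> infinity limit of 32 MeV:
# about 19.4 MeV at A = 50 and still only about 23.2 MeV at A = 250.
```

Even at the heaviest known nuclei, $a_{\rm sym}(A)$ is far below its asymptotic value, which is why a low-order $A^{-1/3}$ truncation is a poor approximation.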
Several studies indicate that the convergence of the $A^{-1/3}$ expansion is not as good as that of $I^2$ (see, e.g., Ref. [@Rei06]). For example, the value of $a_{\rm sym}(A)$ (neglecting the small $Z$-dependent term) in the range of known nuclei is found to be very different from its asymptotic value at infinite $A$ (often denoted $a_{\rm sym}(\infty)$ in the literature) [@Cen09; @Liu10; @ChenLW11]. Considering the slow convergence of the $A^{-1/3}$ expansion, in our mass formula we do not expand $a_{\rm sym}(A,Z)$ and $a_{\rm sym,4}(A)$ in terms of $A^{-1/3}$ but retain their $A$ dependence exactly. We would like to point out that when the nuclear binding energy $B(N,Z)$ is expanded in terms of $A^{-1/3}$ and $I^2$, the obtained expressions for the coefficients of the five leading-order terms (i.e., $A$, $A^{\frac{2}{3}}$, $A \times I^2$, $A^{\frac{2}{3}} \times I^2$, $A \times I^4$) in the mass formula Eq. (\[MF\]) are complete and self-contained, which means that all other characteristic parameters of the nuclear matter EOS and surface energy that do not show up in the mass formula (e.g., the higher-order $J_0$ and $K_{\rm{sym}}$) do not contribute to the coefficients of the five leading-order terms. Symmetry energy of finite nuclei and $E_{\rm sym}(\rho)$ ======================================================== When the semi-empirical mass formula was first introduced, the symmetry energy term had the simple form of $c_{\rm sym}(N-Z)^2/A$, with a constant symmetry energy coefficient $c_{\rm sym}$. 
By fitting the newly released nuclear mass table AME2012 [@AME2012] (all nuclei with $A$ $>$ 20 are considered) with the following simple Bethe-Weizs$\ddot{\rm a}$cker mass formula, i.e., $$\begin{split} B(N,Z) & = c_{\rm vol}A + c_{\rm sur}A^{2/3} + c_{\rm cou}\frac{Z^2(1 - Z^{-2/3})}{A^{1/3}}\\ & + c_{\rm sym}\frac{(N-Z)^2}{A} + c_{\rm p}\frac{(-1)^{N} + (-1)^Z}{A^{2/3}}, \end{split} \label{BW}$$ we obtain the constant symmetry energy coefficient $c_{\rm sym} = 22.2$ MeV, the surface coefficient $c_{\rm sur} = 17.33$ MeV and the Coulomb coefficient $c_{\rm cou} = 0.709$ MeV. Here the constant $c_{\rm sym}$ is just a parameter in the mass formula and cannot be simply considered as $E_{\rm sym}(\rho_0)$ in the EOS of nuclear matter, since the symmetry energy coefficient $a_{\rm sym}(A,Z)$ in the mass formula depends sensitively on the mass number $A$ in the mass region of known nuclei, as shown in Eq. (\[asym\]). It is instructive to work out the relation between $c_{\rm sym}$ and $a_{\rm sym}(A,Z)$. Since each nucleus was considered equally in our simple fitting, the $c_{\rm sym}$ can be treated as the arithmetic average of $a_{\rm sym}(A,Z)$, i.e., $$c_{\rm sym} \approx \langle a_{\rm sym}\rangle = \sum\frac{1}{N_{\rm MN}}a_{\rm{sym}}(A,Z) \label{asymb}$$ where $N_{\rm MN}$ $=$ $2348$ is the number of measured nuclei we used in our simple fitting (i.e., the measured nuclei in AME2012 with 20 $<$ $A$ $<$ $270$) and the sum runs over all these nuclei. Substituting Eq. (\[asym\]) into Eq. 
(\[asymb\]), one can then obtain $$\begin{aligned} c_{\rm sym} & = & E_{\rm sym}(\rho_0)\sum\frac{1}{N_{\rm MN}}D_2(A)\nonumber\\ & - & \frac{a_{\rm c}L}{K_0}\sum\frac{1}{N_{\rm MN}}Z^2D_2(A)A^{-\frac{4}{3}}\nonumber\\ & + & \Big(\frac{E_{\rm sym}^2(\rho_0)}{\beta} +\frac{2E_{\rm s0}L}{K_0}\Big)\sum\frac{1}{N_{\rm MN}}D_2(A)A^{-\frac{1}{3}}\nonumber\\ & = & E_{\rm sym}(\rho_0)\langle D_2(A)\rangle -\frac{a_{\rm c}L}{K_0}\langle Z^2D_2(A)A^{-\frac{4}{3}}\rangle\nonumber\\ & + & \Big(\frac{E_{\rm sym}^2(\rho_0)}{\beta} +\frac{2E_{\rm s0}L}{K_0}\Big)\langle D_2(A)A^{-\frac{1}{3}}\rangle\label{asymb2},\end{aligned}$$ where the summations are the same as those in Eq. (\[asymb\]) and the average $\langle X(A,Z)\rangle$ is defined as $\sum X(A,Z)/N_{\rm MN}$. Similarly, one can obtain the following relations $$\begin{aligned} c_{\rm sur} & \approx & E_{\rm s0}(1 - 2\langle\chi_0(A,Z)\rangle)\nonumber\\ & = & E_{\rm s0}\Big[1 - 2\Big(\frac{E_{\rm s0}\langle2A^{-\frac{1}{3}}\rangle - a_{\rm c}\langle Z^2A^{-\frac{4}{3}}\rangle}{K_0}\Big)\Big]\label{Es}\end{aligned}$$ and $$\begin{aligned} c_{\rm cou} & \approx & a_{\rm c}(1+\langle\chi_0(A,Z)\rangle)\nonumber\\ & = & a_{\rm c}\Big[1 + \Big(\frac{E_{\rm s0}\langle2A^{-\frac{1}{3}}\rangle - a_{\rm c}\langle Z^2A^{-\frac{4}{3}}\rangle}{K_0}\Big)\Big]\label{ac}.\end{aligned}$$ Using $E_{\rm sym}(\rho_0)/\beta$ $=$ $2.4$ [@Dan03; @Tia16; @Dan17], we find $\langle D_2(A)\rangle = 0.45$, $\langle D_2(A)A^{-\frac{1}{3}}\rangle = 0.09$ and $\langle Z^2D_2(A)A^{-\frac{4}{3}}\rangle = 2.14$. Furthermore, from Eqs. (\[Es\]) and (\[ac\]) together with $c_{\rm sur}=17.33$ MeV and $c_{\rm cou}=0.709$ MeV obtained by the simple fitting as well as $K_0$ $=$ $240$ MeV [@Shl06], we find $E_{\rm s0}$ $=$ $17.96~\rm MeV$ and $a_{\rm c}$ $=$ $0.70~\rm MeV$. 
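Eq. (\[asymb2\]) can be checked numerically with the rounded averages quoted above. The sketch below (not the paper's analysis code) assumes the empirical values $E_{\rm sym}(\rho_0)=32.0$ MeV and $L=45.2$ MeV, which are also used in the text:

```python
# Numerical cross-check of Eq. (asymb2): reconstruct the constant c_sym from
# the averages over the 2348 measured nuclei quoted in the text.
# E_sym(rho0) = 32.0 MeV and L = 45.2 MeV are assumed empirical inputs.

E_sym0, L, K0 = 32.0, 45.2, 240.0   # MeV
E_s0, a_c = 17.96, 0.70             # MeV, from Eqs. (Es) and (ac)
beta = E_sym0 / 2.4                 # MeV, from E_sym(rho0)/beta = 2.4
avg_D2 = 0.45                       # <D2(A)>
avg_D2_A13 = 0.09                   # <D2(A) A^(-1/3)>
avg_Z2D2_A43 = 2.14                 # <Z^2 D2(A) A^(-4/3)>

c_sym = (E_sym0 * avg_D2
         - (a_c * L / K0) * avg_Z2D2_A43
         + (E_sym0**2 / beta + 2.0 * E_s0 * L / K0) * avg_D2_A13)
print(round(c_sym, 1))  # ~21.6 MeV with these rounded averages,
                        # consistent with the fitted c_sym = 22.2 MeV
```

With the rounded three-digit averages the result is about 21.6 MeV; the small difference from the value quoted in the text reflects only the rounding of the averages.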
Combining these values of $E_{\rm s0}$ and $a_{\rm c}$ with the empirical value of $E_{\rm sym}(\rho_0)$ $=$ $32.0~\rm{MeV}$, $E_{\rm sym}(\rho_0)/\beta$ $=$ $2.4$, $K_0$ $=$ $240~\rm MeV$ and $L$ $=$ $45.2~\rm MeV$ [@ZhangZ13], we finally obtain $c_{\rm sym}$ $\approx$ $21.8~\rm MeV$, which is in good agreement with the value $22.2$ MeV obtained by the simple fitting. Our results also indicate that the $E_{\rm s0}$ and $a_{\rm c}$ can be nicely approximated, respectively, by $c_{\rm sur}$ and $c_{\rm cou}$ in the simple Bethe-Weizs$\ddot{\rm a}$cker mass formula. The above demonstration suggests that information on $E_{\rm sym}(\rho_0)$ can be extracted inversely from the constant $c_{\rm sym}$. This feature is rather valuable in the case of $a_{\rm sym,4}$, as it provides an approach to extract $E_{\rm sym,4}(\rho_0)$ through the obtained constant fourth-order symmetry energy coefficient $c_{\rm sym,4}$ in the mass formula. \[S-asym\] Fourth-order symmetry energy of finite nuclei and $E_{\rm sym,4}(\rho)$ ======================================================================= Using the empirical values of $K_0 = 240$ MeV, $E_{\rm sym}(\rho_0)/\beta = 2.4$ and $L = 45.2$ MeV, we show in Fig. \[F-asym4\] the fourth-order symmetry energy $a_{\rm sym,4}(A)$ of finite nuclei as a function of mass number $A$ for $E_{\rm sym,4}(\rho_0) = $ $0.0$ MeV, $10.0$ MeV, $20.0$ MeV and $30.0$ MeV. Also included in Fig. \[F-asym4\] is the constant fourth-order symmetry energy coefficient $c_{\rm sym,4}=3.28\pm0.50$ MeV obtained in Ref. [@Jia14]. The inset of Fig. \[F-asym4\] displays the asymptotic behavior of the obtained $a_{\rm sym,4}(A)$ as a function of the mass number $A$ for $E_{\rm sym,4}(\rho_0) = 20.0$ MeV. As shown in the inset, the value of $a_{\rm sym,4}(A)$ in the region of $A$ $<$ $270$ is only about one fourth of $a_{\rm sym,4}(\infty)$, indicating the very slow convergence of the $A^{-1/3}$ expansion for $a_{\rm sym,4}(A)$. 
![The fourth-order symmetry energy $a_{\rm sym,4}(A)$ of finite nuclei as a function of mass number $A$ with $E_{\rm sym,4}(\rho_0)=$ $0.0$ MeV, $10.0$ MeV, $20.0$ MeV and $30.0$ MeV. The gray band represents the constraint of $c_{\rm sym,4}=3.28\pm0.50$ MeV obtained in Ref. [@Jia14]. The inset shows the asymptotic behavior of $a_{\rm sym,4}(A)$ as a function of $A$ for $E_{\rm sym,4}(\rho_0) = 20.0$ MeV.[]{data-label="F-asym4"}](Asym4-A.eps){width="8.5cm"} As in our simple fitting of $c_{\rm sym}$ in Section \[S-asym\], when extracting the constant fourth-order symmetry energy coefficient $c_{\rm sym,4}$ from analyzing nuclear masses in Refs. [@Jia14; @Jia15; @Wan15; @Tia16], each nucleus in AME2012 with $20<$ $A$ $<$ $270$ is also considered equally. Therefore, $c_{\rm sym,4}$ can also be treated as the arithmetic average of the $A$-dependent $a_{\rm{sym,4}}(A)$ (i.e., Eq. (\[asym4\])). Following the discussion in Section \[S-asym\], one can then extract information on $E_{\rm sym,4}(\rho_0)$ from $c_{\rm sym,4}$. It is interesting to see from Fig. \[F-asym4\] that although the $a_{\rm{sym,4}}(A)$ exhibits $A$-dependence, it varies rather slowly in the mass range of $A$ $<$ $270$. Such a feature makes it more reliable to estimate the value of $E_{\rm sym,4}(\rho_0)$ through the obtained constraints on $c_{\rm sym,4}$. The $c_{\rm sym,4}$ can then be expressed as $$\begin{aligned} c_{\rm sym,4} & \approx & \langle a_{\rm sym,4}\rangle = \sum\frac{1}{N_{\rm MN}}a_{\rm sym,4}(A)\nonumber\\ & = & \Big(E_{\rm sym,4}(\rho_0)-\frac{L^2}{2K_0}\Big)\langle D_2^2(A)\rangle\label{asym4b},\end{aligned}$$ where the second equation is obtained by using Eq. (\[asym4\]). From Eq. (\[asym4b\]), one can then obtain $E_{\rm sym,4}(\rho_0)$ as $$E_{\rm sym,4}(\rho_0) = \frac{\langle a_{\rm sym,4}(A)\rangle}{\langle D_2^2(A)\rangle} + \frac{L^2}{2K_0}. 
\label{Esym4}$$ Since $\langle D_2^2(A)\rangle$ is a function of the symmetry volume-surface ratio $E_{\rm sym}(\rho_0)/\beta$, the $E_{\rm sym,4}(\rho_0)$ can then be determined by $L$, $K_0$, $\langle a_{\rm sym,4}\rangle$ and $E_{\rm sym}(\rho_0)/\beta$. The error of $E_{\rm sym,4}(\rho_0)$ can be estimated through the error transfer formula $$\triangle_{E_{\rm sym,4}(\rho_0)} = \sqrt{\sum_{\rm i}\Big(\frac{\partial E_{\rm sym,4}(\rho_0)}{\partial x_{\rm i}}\Big)^2\triangle_{x_{\rm i}}^2},$$ where $x_{\rm i}$ represents the quantities $L$, $K_0$, $\langle a_{\rm sym,4}\rangle$ and $E_{\rm sym}(\rho_0)/\beta$. To determine the detailed value of $E_{\rm sym,4}(\rho_0)$ through Eq. (\[Esym4\]), an ambiguity appears since there are three extracted values of $c_{\rm sym,4}$, namely $3.28\pm0.50$ MeV and $8.47\pm0.49$ MeV extracted in Ref. [@Jia14] and $8.33\pm1.21$ MeV extracted in Ref. [@Tia16]. Noting the positive correlation between $\langle a_{\rm sym,4}\rangle$ and $E_{\rm sym,4}(\rho_0)$ in Eq. (\[Esym4\]), for a conservative estimate of $E_{\rm sym,4}(\rho_0)$ (conservative here meaning the minimum value of $E_{\rm sym,4}(\rho_0)$), we use the smallest extracted value of $c_{\rm sym,4}$, namely $3.28\pm0.50$ MeV in Ref. [@Jia14] (i.e., the gray band shown in Fig. \[F-asym4\]) to estimate the magnitude of $E_{\rm sym,4}(\rho_0)$. Similar to the analysis of $a_{\rm sym}(A,Z)$ in Section \[S-asym\], in Ref. [@Jia14], all measured nuclei in AME2012 with 20 $<$ $A$ $<$ $270$ are considered equally. Therefore, we have $N_{\rm MN}$ $=$ $2348$ and the sum in Eq. (\[asym4b\]) runs over these nuclei as well. Using $E_{\rm sym}(\rho_0)/\beta$ $=$ $2.4\pm0.4$ [@Dan03; @Dan17], we find $\langle D_2^2(A)\rangle = 0.2$. 
Combining the empirical constraints of $L$ $=$ $45.2\pm10.0~\rm MeV$ [@ZhangZ13] and $K_0$ $=$ $240\pm40~\rm MeV$ [@Shl06], we then obtain $E_{\rm sym,4}(\rho_0) = 20.0\pm4.6$ MeV with the squared errors from $L$, $K_0$, $\langle a_{\rm sym,4}\rangle$ and $E_{\rm sym}(\rho_0)/\beta$ being $3.5$ MeV$^2$, $0.5$ MeV$^2$, $5.8$ MeV$^2$ and $11.3$ MeV$^2$, respectively. We would like to point out that the detailed value of $E_{\rm sym,4}(\rho_0) = 20.0\pm4.6$ MeV estimated above relies on the empirical values of the lower-order parameters $L$ and $K_0$ of the nuclear matter EOS as well as the extracted values of $E_{\rm sym}(\rho_0)/\beta$ and $\langle a_{\rm sym,4}\rangle$ from analyzing nuclear mass data. For example, if we choose $L$ $=$ $58.7\pm28.1~\rm MeV$ [@Oer17] or $L$ $=$ $58.9\pm16.5~\rm MeV$ [@LiBA13], the obtained $E_{\rm sym,4}(\rho_0)$ changes to $23.0\pm8.1~\rm MeV$ or $23.0\pm5.9~\rm MeV$, respectively. Nevertheless, the choice of $L$ does not change the constraint on $E_{\rm sym,4}(\rho_0)$ much, and our results clearly indicate that a sizable positive $E_{\rm sym,4}(\rho_0)$ is necessary to describe the value of $c_{\rm sym,4}$ obtained from the double difference of the “experimental” symmetry energy extracted from the nuclear mass data. Considering the fact that the majority of nuclear energy density functionals based on mean-field models give a fairly small magnitude of $E_{\rm sym,4}(\rho_0)$, with its value less than $2~\rm{MeV}$ [@ChenLW09; @Cai12; @Arg17; @PuJ17] (note: $E_{\rm sym,4}(\rho_0) = 2$ MeV leads to $c_{\rm sym,4} \approx \langle a_{\rm sym,4}\rangle = -0.45$ MeV), effects beyond the mean-field approximation, such as short-range correlation effects, might be needed to explain such a sizable $E_{\rm sym,4}(\rho_0)$. On the other hand, if $E_{\rm sym,4}(\rho_0)$ is indeed very small, then a novel mechanism is called for to explain the large value of $c_{\rm sym,4}$ from analyzing the data on nuclear masses. 
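The estimate above can be reproduced with a short sketch of Eq. (\[Esym4\]) and the error transfer formula. The contribution of the $E_{\rm sym}(\rho_0)/\beta$ uncertainty requires re-averaging $D_2^2(A)$ over the mass table, so its quoted squared error (11.3 MeV$^2$) is taken as given here; the rounded $\langle D_2^2(A)\rangle = 0.2$ shifts the central value slightly with respect to the quoted $20.0$ MeV:

```python
import math

# Sketch of Eq. (Esym4) with the error transfer formula, using the
# quoted inputs: <a_sym,4> = c_sym,4 = 3.28 +- 0.50 MeV, <D2^2> ~ 0.2,
# L = 45.2 +- 10 MeV, K0 = 240 +- 40 MeV. The 11.3 MeV^2 contribution of
# the E_sym(rho0)/beta = 2.4 +- 0.4 term is taken from the text.

a4, da4 = 3.28, 0.50      # MeV, <a_sym,4>
D2sq = 0.2                # <D2^2(A)>, rounded
L, dL = 45.2, 10.0        # MeV
K0, dK0 = 240.0, 40.0     # MeV
var_ratio = 11.3          # MeV^2, quoted error contribution of E_sym/beta

E4 = a4 / D2sq + L**2 / (2.0 * K0)

# Squared error contributions from the analytic partial derivatives
var_a4 = (da4 / D2sq) ** 2                     # ~6 MeV^2 (5.8 with unrounded <D2^2>)
var_L = (L / K0 * dL) ** 2                     # ~3.5 MeV^2
var_K0 = (L**2 / (2.0 * K0**2) * dK0) ** 2     # ~0.5 MeV^2
dE4 = math.sqrt(var_a4 + var_L + var_K0 + var_ratio)

print(round(E4, 1), round(dE4, 1))  # ~20.7 +- 4.6 MeV; the quoted
                                    # 20.0 +- 4.6 MeV follows with the
                                    # unrounded averages
```

The error budget makes clear that the uncertainty in $E_{\rm sym}(\rho_0)/\beta$ dominates the total error, followed by $\langle a_{\rm sym,4}\rangle$ and $L$.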
Conclusion and outlook ====================== By self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass, we have obtained an extended nuclear mass formula. In this mass formula, the symmetry energy $a_{\rm sym}(A,Z)$ and the fourth-order symmetry energy $a_{\rm{sym,4}}(A)$ of finite nuclei are related explicitly to the characteristic parameters of the nuclear matter EOS. In particular, using the recently extracted constant fourth-order symmetry energy coefficient $c_{\rm{sym,4}}$ from analyzing the double difference of the “experimental” symmetry energy extracted from nuclear masses, we have estimated for the first time a value of $E_{\rm sym,4}(\rho_0) = 20.0\pm4.6$ MeV for the nuclear matter fourth-order symmetry energy at nuclear normal density $\rho_0$. The significant value of $E_{\rm sym,4}(\rho_0) = 20.0\pm4.6$ MeV challenges the mean-field models, which generally predict $E_{\rm sym,4}(\rho_0) \lesssim 2$ MeV. Therefore, it will be interesting to explore $E_{\rm sym,4}(\rho_0)$ within frameworks beyond the mean-field approximation (e.g., by considering short-range correlation effects). This would substantially improve our understanding of the properties of nuclear matter systems at extreme isospin, such as neutron stars. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Hui Jiang, Kai-Jia Sun and Zhen Zhang for useful discussions. This work was supported in part by the Major State Basic Research Development Program (973 Program) in China under Contract Nos. 2013CB834405 and 2015CB856904, the National Natural Science Foundation of China under Grant Nos. 11625521, 11275125 and 11135011, the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education, China, and the Science and Technology Commission of Shanghai Municipality (11DZ2260700). [99]{} B.A. 
Li, C.M. Ko, and W. Bauer, Int. J. Mod. Phys. E **7**, 147 (1998). P. Danielewicz, R. Lacey, and W. G. Lynch, Science **298**, 1592 (2002). J.M. Lattimer and M. Prakash, Science **304**, 536 (2004); Phys. Rep. **442**, 109 (2007). A.W. Steiner, M. Prakash, J.M. Lattimer, and P. J. Ellis, Phys. Rep. **411**, 325 (2005). V. Baran, M. Colonna, V. Greco, and M. Di Toro, Phys. Rep. **410**, 335 (2005). B.A. Li, L.W. Chen, and C.M. Ko, Phys. Rep. **464**, 113 (2008). W. Trautmann and H.H. Wolter, Int. J. Mod. Phys. E **21**, 1230003 (2012). C.J. Horowitz, E.F. Brown, Y. Kim, W.G. Lynch, R. Michaels, A. Ono, J. Piekarewicz, M.B. Tsang, and H.H. Wolter, J. Phys. G **41**, 093001 (2014). B.A. Li, A. Ramos, G. Verde, and I. Vidana, Eur. Phys. J. A **50**, 9 (2014). M. Baldo and G.F. Burgio, Prog. Part. Nucl. Phys. **91**, 203 (2016). M. Oertel, M. Hempel, T. Klähn, and S. Typel, Rev. Mod. Phys. **89**, 015007 (2017). B.A. Li, Nucl. Phys. News, in press, (2017) \[arXiv:1701.03564\] F.S. Zhang and L.W. Chen, Chin. Phys. Lett. **18**, 142 (2001). A.W. Steiner, Phys. Rev. C **74**, 045808 (2006). J. Xu, L.W. Chen, B.A. Li, and H.R. Ma, Phys. Rev. C **79**, 035802 (2009); Astrophys. J. **697**, 1549 (2009). B.J. Cai and L.W. Chen, Phys. Rev. C **85**, 024302 (2012). W.M. Seif and D.N. Basu, Phys. Rev. C **89**, 028801 (2014). L.W. Chen, B.J. Cai, C.M. Ko, B.A. Li, C. Shen, and J. Xu, Phys. Rev. C **80**, 014322 (2009). B.K. Agrawal, S.K. Samaddar, J.N. De, C. Mondal, and S. De, Int. J. Mod. Phys. E **26**, 1750022 (2017). J. Pu, Z. Zhang, and L.W. Chen, (2017), in preparation. N. Kaiser, Phys. Rev. C **91**, 065201 (2015). R. Nandi and S. Schramm, Phys. Rev. C **94**, 025806 (2016). B.J. Cai and B.A. Li, Phys. Rev. C **92**, 011601(R) (2015). O. Hen [*et al.*]{}, Science **346**, 614 (2014). C. Wellenhofer, J.W. Holt, and N. Kaiser, Phys. Rev. C **93**, 055802 (2016). D.F. Jackson, Rep. Prog. Phys. **37**, 55 (1974). P. M$\rm\ddot{o}$ller, W.D. Myers, H. Sagawa, and S. 
Yoshida, Phys. Rev. Lett. **108**, 052501 (2012). R. Wang and L.W. Chen, Phys. Rev. C **92**, 031303(R) (2015). H. Jiang, M. Bao, L.W. Chen, Y.M. Zhao, and A. Arima, Phys. Rev. C **90**, 064303 (2014). H. Jiang, N. Wang, L.W. Chen, Y.M. Zhao, and A. Arima, Phys. Rev. C **91**, 054302 (2015). N. Wang, M. Liu, H. Jiang, J.L. Tian, and Y.M. Zhao, Phys. Rev. C **91**, 044308 (2015). J.L. Tian, H.T. Cui, T. Gao, and N. Wang, Chin. Phys. C **40**, 094101 (2016). P. Danielewicz, Nucl. Phys. **A727**, 233 (2003). P.-G. Reinhard, M. Bender, W. Nazarewicz, and T. Vertse, Phys. Rev. C **73**, 014309 (2006). G. Royer, Nucl. Phys. **A807**, 105 (2008). N. Wang, Z. Liang, M. Liu, and X. Wu, Phys. Rev. C **82**, 044304 (2010). N. Wang, M. Liu, X.Z. Wu, and J. Meng, Phys. Lett. **B734**, 215 (2014). R. Wang and L.W. Chen, unpublished (2017). W.D. Myers and W.J. Swiatecki, Ann. Phys. **55**, 395 (1969). W.D. Myers and W.J. Swiatecki, Nucl. Phys. **A601**, 141 (1996). P. Danielewicz, P. Singh, and J. Lee, Nucl. Phys. **A958**, 147 (2017). M. Centelles, X. Roca-Maza, X. Vi$\rm{\tilde{n}}$as, and M. Warda, Phys. Rev. Lett **102**, 122502 (2009). M. Liu, N. Wang, Z.X. Li, and F.S. Zhang, Phys. Rev. C **82**, 064306 (2010). L.W. Chen, Phys. Rev. C **83**, 044308 (2011). M. Wang, G. Audi, A.H. Wapstra, F.G. Kondev, M. MacCormick, X. Xu and B. Pfeiffer, Chin. Phys. C **36**, 1603 (2012). S. Shlomo, V. Kolomietz, and G. Col$\rm{\grave{o}}$, Eur. Phys. J. A **30**, 23 (2006). Z. Zhang and L.W. Chen, Phys. Lett. **B726**, 234 (2013). B.A. Li and X. Han, Phys. Lett. **B727**, 276 (2013). [^1]: Corresponding author (email: lwchen$@$sjtu.edu.cn)
--- abstract: 'We report on the measurement of the specific activity of $^{39}$Ar in natural argon. The measurement was performed with a 2.3-liter two-phase (liquid and gas) argon drift chamber. The detector was developed by the WARP Collaboration as a prototype detector for WIMP Dark Matter searches with argon as a target. The detector was operated for more than two years at Laboratori Nazionali del Gran Sasso, Italy, at a depth of 3,400m w.e. The specific activity measured for $^{39}$Ar is 1.01$\pm$0.02(stat)$\pm$0.08(syst)Bq per kg of $^{\rm nat}$Ar.' address: - 'INFN and Department of Physics, University of Pavia, Pavia, Italy' - 'Department of Physics, Princeton University, Princeton NJ, USA' - 'INFN and Department of Physics, University of Napoli “Federico II”, Napoli, Italy' - 'INFN and Department of Physics, University of L’Aquila, L’Aquila, Italy' - 'INFN, Gran Sasso National Laboratory, Assergi, Italy' - 'Institute of Nuclear Physics PAN, Kraków, Poland' author: - 'P. Benetti' - 'F. Calaprice' - 'E. Calligarich' - 'M. Cambiaghi' - 'F. Carbonara' - 'F. Cavanna' - 'A. G. Cocco' - 'F. Di Pompeo' - 'N. Ferrari' - 'G. Fiorillo' - 'C. Galbiati' - 'L. Grandi' - 'G. Mangano' - 'C. Montanari' - 'L. Pandola' - 'A. Rappoldi' - 'G. L. Raselli' - 'M. Roncadelli' - 'M. Rossella' - 'C. Rubbia' - 'R. Santorelli' - 'A. M. Szelc' - 'C. Vignoli' - 'Y. Zhao' title: 'Measurement of the specific activity of $^{39}$Ar in natural argon' --- , , , , , , , , , , , , , , , , , , , , , , $^{39}$Ar specific activity ,low-background experiments ,cosmogenic activation 23.40.-s ,27.40.+z ,29.40.Mc ,95.35.+d Introduction {#sec1} ============ A 2.3-liter two-phase (liquid and gas) argon drift chamber [@warp-prototype] was developed and built by the WARP collaboration as a prototype detector for WIMP Dark Matter searches with argon as a target [@warp-proposal]. 
The detector was operated for more than two years by the WARP collaboration at Laboratori Nazionali del Gran Sasso, Italy, at a depth of 3,400m w.e. One important by-product of the operation of the prototype WARP detector was the precise determination of the $^{39}$Ar specific activity in natural argon.\ $^{39}$Ar and $^{85}$Kr are two radioactive nuclides whose activity in the atmosphere is of the order of 10mBq/m$^{3}$ and 1Bq/m$^3$, respectively [@loosli; @kr85]. As a result of the liquid argon production process, they are both present in abundant quantities and are the two most significant radioactive contaminations in liquid argon. The two isotopes decay primarily by $\beta$ emission, and their presence can limit the sensitivity of experiments looking for low energy rare events (WIMP Dark Matter interactions, neutrinoless double beta decay) using liquid argon either as a target or as a shielding material.\ $^{85}$Kr is not a pure $\beta$ emitter, owing to the presence of a 0.43% branching ratio for decay with $\beta$ emission onto a metastable state of $^{85}$Rb, which then decays by emitting a $\gamma$-ray of energy 514 keV, with a half-life of 1.01$\mu$s [@betaspec]. The coincidence between the $\beta$ and the $\gamma$ emitted in this fraction of the $^{85}$Kr decays may in some cases ease the task of experimentally determining the activity of $^{85}$Kr in low-background detectors. The determination of the specific activity of $^{39}$Ar is intrinsically more challenging and not as widely discussed in the literature [@loosli; @ms]. A theoretical estimate is presented in Ref. [@cenniniAr].\ In the last twenty years liquid argon technology has acquired great relevance for astroparticle physics applications. Several experimental techniques, employing liquid argon as a sensitive medium, have been proposed, especially for rare-event detection [@warp-proposal; @proposalRubbia; @xenon93; @icarust600; @lanndd; @snolab; @protonDecay; @arDM]. 
For WIMP Dark Matter direct detection, the discrimination of nuclear recoils from the $\beta$-$\gamma$ induced background plays a crucial role. The discrimination provided by the experimental technique must be sufficient to reduce the radioactive background below the very low interaction rates foreseeable for WIMP Dark Matter. A precise determination of the intrinsic specific activity of $^{39}$Ar is therefore of significant interest for the design of WIMP Dark Matter detectors employing argon as a target. The 2.3-liter WARP detector {#sec2} =========================== The detector consists of a two-phase argon drift chamber with argon as a target. The two-phase argon drift chamber was first introduced within the ICARUS program [@xenon93] in the framework of a wide-ranging study of the properties of noble gases.\ The drift chamber (see Figure \[fig:warp25\]) is operated at the argon boiling point (86.7K) at the atmospheric pressure of the Gran Sasso Laboratory (about 950 mbar) [@nist]. The cooling is provided by a thermal bath of liquid argon, contained in an open stainless steel dewar, in which the chamber is fully immersed. The pressure of the gas phase on the top of the chamber is naturally equalized to the surrounding atmospheric pressure.\ Ionizing events inside the liquid argon volume produce VUV scintillation light, mainly at 128nm. The scintillation light is shifted to wavelengths in the blue by an organic wavelength shifter (tetraphenyl butadiene, TPB) covering the walls, and collected by photomultiplier tubes (PMTs) located in the gas phase and facing the liquid volume below. The 2-inch PMTs are manufactured by Electron Tubes Ltd (model D757UFLA) and have a special photocathode that ensures functionality down to liquid argon temperatures. The quantum efficiency at the emission wavelength of TPB is about 18%. 
The material of the PMTs has been selected for high radiopurity: according to the supplier’s specifications, the total $\gamma$ activity above 100 keV is 0.2 Bq/PMT, dominated by the $^{232}$Th and $^{238}$U chains.\ A series of field-shaping rings surrounding the liquid phase superimposes an electric field of 1kV/cm. The electrons are drifted toward the anode (located atop the chamber) and then extracted from the liquid to the gaseous phase by a local extraction field provided by a pair of grids. The electrons are linearly multiplied in the gas phase by a second, stronger local field. The PMTs detect the primary scintillation light (directly produced by the ionizing event) and also the secondary scintillation light (produced by the electron multiplication process in the gas phase). The PMT signals are summed and sent to a multi-channel analyzer recording the pulse height spectrum. The liquid argon contained in the chamber is Argon 6.0 supplied by Rivoira S.p.A. The liquid argon is subsequently purified from electronegative impurities down to an equivalent contamination of less than 0.1ppb of O$_{2}$ by using the chemical filter Hopkalit from Air Liquid. The purity from electronegative elements is actively maintained by means of continuous argon recirculation through the chemical filter.\ The experimental set-up is located in the Laboratori Nazionali del Gran Sasso underground laboratory (3,400 m w.e. of rock coverage). The flux of cosmic ray muons is suppressed by a factor of $10^{6}$ with respect to the surface (residual flux 1.1$\mu$/(m$^2 \cdot$ h), average muon energy $320$GeV [@macro]). The detector is shielded by 10cm of lead, to reduce the external $\gamma$ background.\ The sensitive volume of the detector has the shape of a frustum of a cone and is delimited by a stainless-steel cathode. The sensitive volume for the configuration under analysis is 1.86$\pm$0.07liter. 
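As a back-of-the-envelope consistency check (not a calculation from the paper), the specific activity quoted in the abstract implies a $^{39}$Ar decay rate in the sensitive volume of a couple of Hz. The liquid argon density used below, 1.399 kg/l, is the value at the operating conditions (950 mbar, 86.7 K):

```python
# Rough sanity check (not from the paper): the 39Ar decay rate expected in
# the sensitive volume, from the measured specific activity.

volume_l = 1.86            # sensitive volume, liter
density_kg_per_l = 1.399   # liquid argon density at 950 mbar, 86.7 K
activity_bq_per_kg = 1.01  # measured 39Ar specific activity, Bq/kg

mass_kg = volume_l * density_kg_per_l
rate_hz = activity_bq_per_kg * mass_kg
print(round(mass_kg, 2), round(rate_hz, 2))  # ~2.60 kg, ~2.63 Hz
```

This expected $^{39}$Ar rate of roughly 2.6 Hz is a sizeable fraction of the total counting rate observed in the detector, which is why the isotope dominates the low-energy spectrum.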
The density of liquid argon in the operating conditions (950mbar and 86.7K) is 1.399 g/cm$^{3}$ [@nist]. The sensitive volume is viewed by seven PMTs, whose responses have been equalized in gain. Daily calibrations ensure the long-term stability and the linearity of the response. The sensitive volume and the argon thermal bath are contained in a stainless steel dewar, 50 cm internal diameter and 200 cm internal height. Data analysis {#sec3} ============= For the measurements described in this work, the electric fields were switched off and the chamber was operated as a pure scintillation detector. The gain of the PMTs has been set to optimize the data acquisition in the typical energy range of the environmental $\gamma$-ray background, namely up to 3 MeV. The energy threshold for data acquisition is about 40 keV. The threshold used for analysis is 100 keV, in order to exclude events from electronic noise. The response of the detector to $\gamma$ radiation was studied using different $\gamma$-ray sources ($^{57}$Co, $^{60}$Co, $^{137}$Cs) placed outside the chamber. The spectra obtained with the $^{57}$Co and $^{137}$Cs sources are shown in Figure \[fig:sources\]. Typical values of the resolution observed with the calibration sources are $\sigma(E)/E = 13\%$ at 122 keV ($^{57}$Co) and $\sigma(E)/E = 6\%$ at 662 keV ($^{137}$Cs). The correlation between energy and detected primary scintillation light was linear within the range tested with our sources. The energy resolution of the detector can be described empirically by the following parametrization: $$\sigma (E) \ = \ \sqrt{a_{0}^{2} +a_{1}E+ (a_{2} E)^{2}}, \label{resolution}$$ where $a_0 = 9.5$ keV, $a_1 = 1.2$ keV and $a_2 = 0.04$. 
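The quoted coefficients can be checked directly against the calibration points; a minimal sketch of Eq. (\[resolution\]):

```python
import math

# The empirical resolution parametrization of Eq. (resolution), with the
# quoted coefficients a0 = 9.5 keV (electronic noise), a1 = 1.2 keV
# (light-production statistics), a2 = 0.04 (light-collection non-uniformity).

A0, A1, A2 = 9.5, 1.2, 0.04

def sigma(E_keV):
    """Energy resolution sigma(E) in keV."""
    return math.sqrt(A0**2 + A1 * E_keV + (A2 * E_keV)**2)

for E in (122.0, 662.0):  # 57Co and 137Cs lines
    print(E, round(100.0 * sigma(E) / E, 1))  # -> 13.2% and 6.0%,
                                              # matching the calibration values
```

Evaluating the parametrization at 122 keV and 662 keV reproduces the measured 13% and 6% relative resolutions quoted above.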
The three terms take into account the effects of non-uniform light collection ($a_{2}$ term), statistical fluctuations in the light production ($a_{1}$ term) and electronic noise ($a_{0}$ term).\ The $\beta$-$\gamma$ spectrum in the detector (35 hours of live time) is displayed in Figure \[fig:spectrumtot\]. The total counting rate is about 6 Hz (4.2 Hz above the analysis threshold). A simulation based on the <span style="font-variant:small-caps;">Geant4</span> toolkit [@geant4] has been developed to reproduce and understand the observed features. The simulation takes into account the following two components of background: - External radiation from radioactive contaminants in the materials surrounding the liquid argon volume (stainless steel, thermal bath, PMTs). Only $\gamma$-emitters, which originate an effective $\gamma$-ray flux through the surface of the sensitive volume, are taken into account. These include: $^{238}$U and daughters (especially $^{222}$Rn dissolved in the external liquid argon), $^{232}$Th and daughters, $^{60}$Co and $^{40}$K. The presence of these sources has been confirmed with a portable NaI $\gamma$-spectrometer inserted into the empty dewar. - Bulk contaminations in the liquid argon of the chamber. In this case, both $\beta$ and $\gamma$ emitters are relevant. The most important contributions are from $^{222}$Rn, $^{39}$Ar and $^{85}$Kr.[^1] The signal from internal $^{222}$Rn and its daughters can be monitored by counting the $\alpha$ decays of the isotopes $^{222}$Rn, $^{218}$Po and $^{214}$Po in the energy region 5$-$7 MeV. After a new filling with freshly produced liquid argon the chamber shows a total $^{222}$Rn decay rate of the order of 1$-$2 Hz. Due to the $^{222}$Rn half-life of 3.8 days, the observed rate decreases to a few tens of events/day four weeks after the filling. 
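The suppression implied by the 3.8-day half-life can be sketched with the usual decay law; this is only an order-of-magnitude estimate for pure decay (the actual residual rate also depends on the continuous recirculation of the argon):

```python
import math

def surviving_fraction(t_days, t_half_days=3.8):
    """Fraction of the initial 222Rn activity surviving after t_days,
    assuming pure radioactive decay with half-life t_half_days."""
    return 2.0 ** (-t_days / t_half_days)

# Four weeks after filling, only ~0.6% of the initial Rn activity remains.
```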
Therefore, $\beta$ decays from the Rn daughters $^{214}$Pb and $^{214}$Bi can be neglected, provided the measurement is performed a few weeks after the filling of the chamber. The counting rate due to the decay of $^{14}$C (dissolved in the liquid argon or located in the surrounding plastics) is estimated to be much less than 50 mHz. Most of the $^{14}$C events occur close to the chamber walls.\ The main characteristics of the $^{39}$Ar and $^{85}$Kr $\beta$ decays are summarized in Table \[table:1\]. Since both decays are classified as forbidden unique $\beta$ transitions ($\Delta I^{\Delta \pi} = 2^{-}$), the $\beta$ spectrum is not described by the usual Fermi function. For the present work, we assumed the $\beta$ spectra from Ref. [@betaspec].\

  ----------- ----------- ------------------------- ---------------------------
  Isotope     Half-life   $\beta$ end-point (keV)   $\beta$ mean energy (keV)
  $^{39}$Ar   269 y       565                       220
  $^{85}$Kr   10.8 y      687                       251
  ----------- ----------- ------------------------- ---------------------------

\ The <span style="font-variant:small-caps;">Geant4</span>-based simulation is used to generate normalized spectra $s_i(E)$ from the different radioisotopes, taking into account the energy resolution of the detector. The isotopes from the natural radioactive chains are treated independently. A $\chi^2$ fit of the experimental spectrum $F(E)$ is then performed in the energy range from 100 keV to 3 MeV with a linear combination of the single components, i.e. $$F(E) = \sum_i{w_i \cdot s_i(E)}.$$ The coefficients $w_i$ are treated as free parameters and represent the counting rates induced by the single sources. In Figure \[fig:spectrumtot\] we show the experimental spectrum, superimposed with the output of the fit (i.e. $\sum w_{i} s_{i}(E)$). The fit is satisfactory over the whole energy range considered. 
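Once the simulated components $s_i(E)$ are binned, extracting the rates $w_i$ amounts to a linear least-squares problem; a minimal sketch with two synthetic toy components (illustrative shapes, not the actual Geant4 templates, and plain least squares standing in for the $\chi^2$ fit):

```python
import numpy as np

rng = np.random.default_rng(0)
e = np.linspace(100.0, 3000.0, 300)     # energy bins (keV)

# Two toy normalized components standing in for simulated spectra s_i(E)
s1 = np.exp(-e / 400.0); s1 /= s1.sum()                     # smooth continuum
s2 = np.exp(-0.5 * ((e - 1460.0) / 60.0) ** 2); s2 /= s2.sum()  # a gamma line
S = np.column_stack([s1, s2])

true_w = np.array([3.0, 1.2])           # "true" counting rates
f = S @ true_w + rng.normal(0.0, 1e-6, e.size)  # measured spectrum F(E)

# Fit F(E) with a linear combination of the components: F = sum_i w_i s_i
w, *_ = np.linalg.lstsq(S, f, rcond=None)
```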
The signals from the most important external $\gamma$-ray radioactivity sources and from internal contaminations are shown in Figure \[fig:sim\], as derived from the analysis of the experimental spectrum. Discussion {#sec4} ========== Figure \[fig:sim\] shows that the energy region 2$-$3 MeV is dominated by interactions of $\gamma$-rays from $^{232}$Th daughters, the region 1.5$-$2 MeV by $\gamma$-rays from $^{238}$U daughters, and the region 0.5$-$1.5 MeV by $\gamma$-rays from $^{60}$Co and $^{40}$K. Below 0.5 MeV the main contribution comes from $\beta$ decays from internal contaminations of $^{39}$Ar and $^{85}$Kr; the two isotopes account for 65% of the total counting rate between 100 and 500 keV.\ The cosmogenically produced $^{39}$Ar contamination of $^{\rm nat}$Ar in the troposphere was measured in Ref. [@loosli] to be $(7.9 \pm 0.3) \cdot 10^{-16}$ g/g; the quoted error was statistical only[^2]. Since the liquid argon used for the experiment is produced from the atmospheric gas, a similar $^{39}$Ar/$^{\rm nat}$Ar ratio is expected to be present in our sample.\ $^{85}$Kr is mainly produced as a fission product of uranium and plutonium. Its abundance in the atmosphere is of the order of 1 Bq/m$^{3}$, corresponding to about $4 \cdot 10^{-15}$ g($^{85}$Kr)/g($^{\rm nat}$Ar) in air. However, the distillation procedure for the production of liquid argon substantially reduces the $^{85}$Kr fraction. The residual $^{85}$Kr in liquid argon may vary among different batches of liquid.\ In order to better show the Ar and Kr signals, Figure \[fig:spectrumar\] displays the spectrum obtained from the experimental data after subtracting the fitted contribution from the other sources. The single $^{39}$Ar and $^{85}$Kr contributions can be disentangled thanks to the different end-point energies, 565 keV and 687 keV respectively. 
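The conversion between the $^{39}$Ar specific activity quoted below (in Bq/liter) and the $^{39}$Ar/$^{\rm nat}$Ar mass ratio follows from the $^{39}$Ar half-life and the liquid argon density; a numerical cross-check, using $T_{1/2}$ = 269 y from Table \[table:1\] and $\rho$ = 1.399 g/cm$^{3}$:

```python
import math

N_A = 6.022e23               # Avogadro's number [1/mol]
M_AR39 = 39.0                # molar mass of 39Ar [g/mol]
T_HALF_S = 269.0 * 3.156e7   # 39Ar half-life: 269 y in seconds

# Specific activity of pure 39Ar [Bq/g]: lambda * N_A / M
a_pure = math.log(2) / T_HALF_S * N_A / M_AR39   # ~1.3e12 Bq/g

# Measured 1.41 Bq/liter; rho = 1.399 g/cm^3, i.e. 1399 g/liter
bq_per_kg = 1.41 / 1.399     # -> ~1.01 Bq/kg of natural Ar
a_nat = 1.41 / 1399.0        # Bq per gram of natural Ar
mass_ratio = a_nat / a_pure  # -> ~8.0e-16 g(39Ar)/g(natAr)
```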
Since $^{85}$Kr and $^{39}$Ar decays populate the same energy region of the spectrum, their estimates are anti-correlated, as displayed in Figure \[fig:corr\].\ The specific activity of $^{39}$Ar in liquid argon resulting from the analysis is 1.41$\pm$0.02(stat)$\pm$0.11(syst) Bq/liter ($1 \sigma$) (see below for the discussion of the systematic uncertainties). The corresponding $^{39}$Ar/$^{\rm nat}$Ar mass ratio is $(8.0 \pm 0.6) \cdot 10^{-16}$ g/g, with errors summed in quadrature. The result is in excellent agreement with the atmospheric determination of Ref. [@loosli].\ From the fit it is also found that the $^{85}$Kr activity in the sample under investigation is (0.16$\pm$0.13) Bq/liter ($1 \sigma$). In a previous measurement performed with a sample of Argon 99.999% from Air Liquid, a $^{85}$Kr activity about three times larger was found. This indicates that a non-negligible $^{85}$Kr contamination may be found in commercial liquid argon samples. In such a case, an additional fractional distillation could be required to reduce the radioactive background for the WARP experiment.\

  -------------------- ---------------- ---------------------------
  Item                 Relative error   Absolute error (Bq/liter)
  Energy calibration   $\pm$6.5%        $\pm$0.092
  Energy resolution    $\pm$1.3%        $\pm$0.018
  Sensitive mass       $\pm$3.8%        $\pm$0.054
  Total                                 $\pm$0.11
  -------------------- ---------------- ---------------------------

\ The systematic uncertainties are summarized in Table \[table:2\]. The dominant item is related to the energy calibration of the detector response: since the discrimination between $^{39}$Ar and $^{85}$Kr is based upon the $\beta$ end-points, the fit result is sensitive to the energy calibration and resolution. The uncertainty on the energy calibration in the range of interest was evaluated to be 2% ($1 \sigma$) from the measurements with the $\gamma$-ray sources; the corresponding $^{39}$Ar systematic error is 6.5%. The second important contribution is related to the uncertainty in the active volume of the chamber. 
The filling level can be determined with an accuracy of about 1 mm, and the diameter of the teflon container with the reflector fixed on it is known with a precision of about 2 mm. This corresponds to an uncertainty on the sensitive mass of 3.8%. Conclusions {#sec5} =========== The best estimate of the $^{39}$Ar specific activity in the liquid argon is $(1.41 \pm 0.11)$ Bq/liter, or $(1.01 \pm 0.08)$ Bq/kg of natural Ar, or $(8.0 \pm 0.6) \cdot 10^{-16}$ g($^{39}$Ar)/g($^{\rm nat}$Ar). The value is consistent with the previous determination by H. Loosli [@loosli]. The uncertainty in our measurement is mainly due to systematics.\ The liquid argon sample under investigation shows a contamination of $^{85}$Kr, 0.16$\pm$0.13 Bq/liter ($1 \sigma$). Acknowledgments {#ack} =============== We wish to dedicate this work to the memory of our friend and colleague Nicola Ferrari, co-author of the paper, who prematurely passed away in July 2006.\ We also thank Prof. H. Loosli for helpful communications concerning his $^{39}$Ar paper. L. P. acknowledges the support by the EU FP6 project <span style="font-variant:small-caps;">Ilias</span>. A. M. S. has been in part supported by a grant of the President of the Polish Academy of Sciences, by the <span style="font-variant:small-caps;">Ilias</span> contract Nr. RII3-CT-2004-506222 and by the MNiSW grant 1P03B04130. [00]{} R. Brunetti [*et al.*]{}, New Astron. Rev. **49**, 265 (2005). WARP Collaboration, R. Brunetti *et al.*, WARP: WIMP Argon Programme, Proposal for WARP to INFN, March 2004, available at `http://warp.pv.infn.it`. H. H. Loosli, Earth and Planetary Science Letters **63**, 51 (1983). J. A. Formaggio and C. J. Martoff, Ann. Rev. Nucl. Part. Sci. **54**, 361 (2004). LBNL Isotopes Project Nuclear Data Dissemination Home Page (available at `http://ie.lbl.gov/toi.html`). W. Kutschera *et al.*, Nucl. Instr. Meth. B **92**, 241 (1994). P. Cennini *et al.*, Nucl. Instr. Meth. A **356**, 526 (1995). C. 
Rubbia, CERN-EP/77-08 (1977). P. Benetti *et al.*, Nucl. Instr. Meth. A **327**, 203 (1993). P. Aprili *et al.* \[ICARUS Collaboration\], CERN/SPSC 2002-027 (2002). D. B. Cline *et al.*, Nucl. Instr. Meth. A **503**, 136 (2003). M. G. Boulay and A. Hime, Astropart. Phys. **25**, 179 (2006). A. Rubbia, arXiv:hep-ph/0407297, Proceedings of the XI International Conference on Calorimetry in High Energy Physics (CALOR2004), Perugia, Italy (2004). A. Rubbia, Journal of Physics, Conf. Series **39**, 129 (2005), arXiv:hep-ph/0510320. NIST Chemistry WebBook, NIST Standard Reference Database Number 69, June 2005, Eds. P. J. Linstrom and W. G. Mallard. MACRO Collaboration, M. Ambrosio [*et al.*]{}, Astropart. Phys. **10**, 11 (1999). Geant4 Collaboration, S. Agostinelli [*et al.*]{}, Nucl. Instr. Meth. A **506**, 250 (2003); Geant4 Collaboration, J. Allison *et al.*, IEEE Trans. Nucl. Sci. **53**, 270 (2006). V. D. Ashitkov [*et al.*]{}, Nucl. Instr. Meth. A **416**, 179 (1998). [^1]: Other radioactive isotopes of Ar, such as $^{37}$Ar and $^{41}$Ar, are short-lived ($T_{1/2}$ of 35 days and 109 min, respectively) and their cosmogenic production is negligible in the underground laboratory. The long-lived $^{42}$Ar ($T_{1/2}$ = 32.9 y) is expected to be present in natural argon because of thermonuclear tests in the atmosphere; however, its concentration in $^{\rm nat}$Ar is negligible, $< 6 \cdot 10^{-21}$ g/g (at 90% CL) [@ar42], corresponding to less than 85 $\mu$Bq/liter in liquid argon. [^2]: The $^{39}$Ar/$^{\rm nat}$Ar ratio was measured for dating purposes. The knowledge of the absolute $^{39}$Ar specific activity was hence not necessary.
--- abstract: 'Thanks to the success of object detection technology, we can retrieve objects of the specified classes even from huge image collections. However, the current state-of-the-art object detectors (such as Faster R-CNN) can only handle pre-specified classes. In addition, large amounts of positive and negative visual samples are required for training. In this paper, we address the problem of open-vocabulary object retrieval and localization, where the target object is specified by a textual query (e.g., a word or phrase). We first propose Query-Adaptive R-CNN, a simple extension of Faster R-CNN adapted to open-vocabulary queries, by transforming the text embedding vector into an object classifier and localization regressor. Then, for discriminative training, we propose negative phrase augmentation (NPA) to mine hard negative samples which are visually similar to the query and at the same time semantically mutually exclusive of the query. The proposed method can retrieve and localize objects specified by a textual query from one million images in only 0.5 seconds with high precision.' author: - | Ryota Hinami$^{1, 2}$ and Shin’ichi Satoh$^{2, 1}$\ $^{1}$The University of Tokyo, $^{2}$National Institute of Informatics\ [hinami@nii.ac.jp, satoh@nii.ac.jp]{} bibliography: - 'QRCN.bib' title: | Discriminative Learning of Open-Vocabulary Object Retrieval\ and Localization by Negative Phrase Augmentation --- Introduction {#sec:intro} ============ Our goal is to retrieve objects from a large-scale image database and localize their spatial locations given a textual query. The task of object retrieval and localization has many applications, such as spatial position-aware image searches [@Hinami2017], and it has recently gathered much attention from researchers. 
While much of the previous work mainly focused on object instance retrieval, wherein the query is an image [@Shen2012; @Taoa; @Tolias2015], recent approaches [@Aytar2014; @Hinami2016] enable retrieval of more generic concepts such as an object category. Although such approaches are built on the recent successes of object detection, including that of R-CNN [@Girshick2014], object detection methods can generally handle only closed sets of categories (e.g., the PASCAL 20 classes), which severely limits the variety of queries when they are used as retrieval systems. Open-vocabulary object localization is also a hot topic, and many approaches have been proposed to solve this problem [@Plummer2015; @Chen2017]. However, most of them are not scalable enough to be useful for large-scale retrieval. ![Training examples in open-vocabulary object detection. (a) positive example of skier classifier. (b) examples without positive annotation, which can be positive. (c) examples without positive annotation from an image that contains a positive example. (d) proposed approach to select hard and true negative examples by using linguistic knowledge. []{data-label="fig:top"}](top.pdf){width="1.00\linewidth"} We first describe [*Query-Adaptive*]{} R-CNN as an extension of the Faster R-CNN [@Ren2015] object detection framework to open-vocabulary object detection, obtained simply by adding a component called a [*detector generator*]{}. While Faster R-CNN learns the class-specific linear classifier as learnable parameters of the neural network, we generate the weight of the classifier adaptively from a text description by learning the detector generator (Fig. \[fig:pipeline\]b). All of its components can be trained in an end-to-end manner. In spite of its simple architecture, it outperforms all state-of-the-art methods on the Flickr30k Entities phrase localization task. It can also be used for large-scale retrieval in the manner presented in [@Hinami2016]. 
However, training a discriminative classifier is harder in the open-vocabulary setting. Closed-vocabulary object detection models such as Faster R-CNN are trained using many negative examples, where a sufficient amount of good-quality negative examples is shown to be important for learning a discriminative classifier [@Felzenszwalb2010; @Shrivastava2016]. While closed-vocabulary object detection can use all regions without positive labels as negative data, in open-vocabulary detection it is not guaranteed that a region without a positive label is negative. For example, as shown in Fig. \[fig:top\]b, a region with the annotation `a man` is not always negative for `skier`. Since training data for open-vocabulary object detection is generally composed of images, each having region annotations with free descriptions, it is nearly impossible to do an exhaustive annotation throughout the dataset for all possible descriptions. Another possible approach is to use the regions without positive labels in an image that contains positive examples, as shown in Fig. \[fig:top\]c. Although such regions can be guaranteed to be negative by carefully annotating the datasets, the negative examples are then limited to the objects that co-occur with the learned class. To exploit negative data in open-vocabulary object detection, we use mutually exclusive relationships between categories. For example, an object with a label `dog` is guaranteed to be negative for the `cat` class because `dog` and `cat` are mutually exclusive. In addition, we propose an approach to select [*hard negative*]{} phrases that are difficult to discriminate (e.g., selecting `zebra` for `horse`). This approach, called [*negative phrase augmentation (NPA)*]{}, significantly improves the discriminative ability of the classifier and improves the retrieval performance by a large margin. Our contributions are as follows. 
1) We propose Query-Adaptive R-CNN, an extension of Faster R-CNN to open vocabulary, which is a simple yet strong method for open-vocabulary object detection that outperforms all state-of-the-art methods on the phrase localization task. 2) We propose negative phrase augmentation (NPA) to exploit hard negative examples when training for open-vocabulary object detection, which makes the classifier more discriminative and robust to distractors in retrieval. Our method can accurately find objects amidst one million images in 0.5 seconds. Related work ============ [**Phrase localization.**]{} Object grounding with natural language descriptions has recently drawn much attention, and several tasks and approaches have been proposed for it [@Guadarrama2014; @Hu2015f; @Kazemzadeh2014; @Mao2016; @Plummer2015]. The task most related to ours is phrase localization, introduced by Plummer et al. [@Plummer2015], whose goal is to localize objects that correspond to noun phrases in textual descriptions of an image. Chen et al. [@Chen2017] is the closest to our work in terms of learning region proposals and performing regression conditioned upon a query. However, most phrase localization methods are not scalable and cannot be used for retrieval tasks. Some approaches [@Plummer2017; @Wang] learn a common subspace between the text and image for phrase localization. Instead of learning the subspace between the image and sentence as in standard cross-modal searches, they learn the subspace between a region and a phrase. In particular, Wang et al. [@Wang] use a deep neural network to learn the joint embedding of images and text; their training uses structure-preserving constraints based on structured matching. Although these approaches can be used for large-scale retrieval, their accuracy is not as good as that of recent state-of-the-art methods. 
[**Object retrieval and localization.**]{} Object retrieval and localization have been researched in the context of particular object retrieval [@Shen2012; @Taoa; @Tolias2015], where a query is given as an image. Aytar et al. [@Aytar2014] proposed retrieval and localization of generic category objects by extending the object detection technique to large-scale retrieval. Hinami and Satoh [@Hinami2016] extended the R-CNN to large-scale retrieval by using approximate nearest neighbor search techniques. However, they assumed that the detector of the category is given as a query, and learning the detector requires many sample images with bounding box annotations. Several other approaches have used external search engines (e.g., Google image search) to obtain training images from textual queries [@Arandjelovi2012; @Chatfield2015]. Instead, we generate an object detector directly from the given textual query by using a neural network. [**Parameter prediction by neural networks.**]{} Query-Adaptive R-CNN generates the weights of the detector from the query instead of learning them by backpropagation. The dynamic filter network [@DeBrabandere2016] is one of the first methods that generate neural network parameters dynamically conditioned on an input. Several subsequent approaches use this idea in zero-shot learning [@Ba2016] and visual question answering [@Noh2016]. Zhang et al. [@Zhang2017] integrate this idea into the Fast R-CNN framework by dynamically generating the classifier from the text in a similar manner to [@Ba2016]. We extend this work to the case of large-scale retrieval. The proposed Query-Adaptive R-CNN generates the regressor weights and learns the region proposal network following Faster R-CNN. This enables precise localization with fewer proposals, which makes the retrieval system more memory efficient. 
In addition, we propose a novel hard negative mining approach, called negative phrase augmentation, which makes the generated classifier more discriminative. Query-Adaptive R-CNN ==================== Query-Adaptive R-CNN is a simple extension of Faster R-CNN to open-vocabulary object detection. While Faster R-CNN detects objects of fixed categories, Query-Adaptive R-CNN detects any object specified by a textual phrase. Figure \[fig:pipeline\] illustrates the difference between Faster R-CNN and Query-Adaptive R-CNN. While Faster R-CNN learns a class-specific classifier and regressor as parameters of the neural networks, Query-Adaptive R-CNN generates them from the query text by using a detector generator. Query-Adaptive R-CNN is a simple but effective method that surpasses state-of-the-art phrase localization methods and can be easily extended to the case of large-scale retrieval. Furthermore, its retrieval accuracy is significantly improved by a novel training strategy called negative phrase augmentation (Sec. \[sec:train\]). Architecture {#sec:qrcn_arch} ------------ The network is composed of two subnetworks: a [*region feature extractor*]{} and a [*detector generator*]{}, both of which are trained in an end-to-end manner. The region feature extractor takes an image as input and outputs features extracted from sub-regions that are candidate objects. Following Faster R-CNN [@Ren2015], regions are detected using a region proposal network (RPN) and the features of the last layer (e.g., fc7 in the VGG network) are used as region features. The detector generator takes a text description as input and outputs a linear classifier and regressor for the description (e.g., if `a dog` is given, a `a dog` classifier and regressor are output). Finally, a confidence and a regressed bounding box are predicted for each region by applying the classifier and regressor to the region features. ![Difference in network architecture between (a) Faster R-CNN and (b) Query-Adaptive R-CNN. 
While Faster R-CNN learns the classifier of a closed set of categories as learnable parameters of neural networks, Query-Adaptive R-CNN generates a classifier and regressor adaptively from a query text by learning a detector generator that transforms the text into a classifier and regressor. ](pipeline.pdf){width="1.00\linewidth"} \[fig:pipeline\] [**Detector generator.**]{} The detector generator transforms the given text $t$ into a classifier $\mathbf{w}_c$ and regressor $(\mathbf{w}^r_x, \mathbf{w}^r_y, \mathbf{w}^r_w, \mathbf{w}^r_h)$, where $\mathbf{w}_c$ is the weight of a linear classifier and $(\mathbf{w}^r_x, \mathbf{w}^r_y, \mathbf{w}^r_w, \mathbf{w}^r_h)$ are the weights of a linear regressor in terms of $x$, $y$, width $w$, and height $h$, following [@Girshick2014]. We first transform a text $t$ of variable length into a text embedding vector $\mathbf{v}$. Other phrase localization approaches use the Fisher vector encoding of word2vec [@Klein2015; @Plummer2015] or long short-term memory (LSTM) [@Chen2017] for the phrase embedding. However, we found that simple mean pooling of word2vec [@Mikolov2013] performs better than these methods for our model (comparisons given in the supplemental material). The text embedding is then transformed into a detector, i.e., $\mathbf{w}_c=G_c(\mathbf{v})$ and $(\mathbf{w}^r_x, \mathbf{w}^r_y, \mathbf{w}^r_w, \mathbf{w}^r_h)=G_r(\mathbf{v})$. Here, we use a linear transformation for $G_c$ (i.e., $\mathbf{w}_c=\mathbf{W}\mathbf{v}$, where $\mathbf{W}$ is a projection matrix). For the regressor, we use a multi-layer perceptron with one hidden layer to predict each of $(\mathbf{w}^r_x, \mathbf{w}^r_y, \mathbf{w}^r_w, \mathbf{w}^r_h)=G_r(\mathbf{v})$. We tested various architectures for $G_r$ and found that sharing the hidden layer and reducing the dimension of the hidden layer (down to $16$) does not adversely affect the performance, while at the same time it significantly reduces the number of parameters (see Sec. 
\[sec:exp\_ph\] for details). Training with Negative Phrase Augmentation {#sec:train} ------------------------------------------ All components of Query-Adaptive R-CNN can be jointly trained in an end-to-end manner. The training strategy basically follows that of Faster R-CNN. The differences are shown in Figure \[fig:neg\_aug\]. Faster R-CNN is trained with a fixed closed set of categories (Fig. \[fig:neg\_aug\]a), where all regions without a positive label can be used as negative examples. On the other hand, Query-Adaptive R-CNN is trained using the open-vocabulary phrases annotated on the regions (Fig. \[fig:neg\_aug\]b), where sufficient negative examples cannot be used for each phrase compared with Faster R-CNN, because a region without a positive label is not guaranteed to be negative in open-vocabulary object detection. We solve this problem by proposing negative phrase augmentation (NPA), which enables us to use good-quality negative examples by exploiting the linguistic relationships (e.g., mutual exclusiveness) and the confusion between categories (Fig. \[fig:neg\_aug\]c). It significantly improves the discriminative ability of the generated classifiers. ![Difference in training between (a) closed-vocabulary and (b) open-vocabulary object detection. The approach of NPA is illustrated in (c).[]{data-label="fig:neg_aug"}](neg_aug.pdf){width="1.03\linewidth"} ### Basic Training {#sec:basic} First, we describe the basic training strategy without NPA (Fig. \[fig:neg\_aug\]b). Training Query-Adaptive R-CNN requires phrases and their corresponding bounding boxes to be annotated. For the $i$th image (we use one image as a minibatch), let us assume that $C_i$ phrases are associated with the image. The $C_i$ phrases can be considered as the classes to train in the minibatch. 
The labels $\mathbf{L}_i \in \{0,1\}^{C_i\times n_r}$ are assigned to the region proposals generated by the RPN (each of the dotted rectangles in Fig. \[fig:neg\_aug\]b); a positive label is assigned if the box overlaps the ground truth box by more than 0.5 in IoU, and negative labels are assigned to the other RoIs under the assumption that all positive objects of the $C_i$ classes are annotated (i.e., regions without annotations are negative within the image).[^1] We then compute the classification loss by using the training labels and classification scores.[^2] The losses for the RPN and the bounding box regression are computed in the same way as in Faster R-CNN [@Ren2015]. ### Negative Phrase Augmentation {#sec:neg} Here, we address the difficulty of using negative examples in the training of open-vocabulary object detection. As shown in Fig. \[fig:top\]b, our generated classifier is not discriminative enough. The reason is the scarcity of negative examples under the training strategy described in Sec. \[sec:basic\]; e.g., the `horse` classifier is not learned with `zebra` as a negative example except in the rare case that both a `zebra` and a `horse` are in the same image. Using hard negative examples has proven to be effective for training a discriminative detector in object detection [@Felzenszwalb2010; @Girshick2014; @Shrivastava2016]. However, adding negative examples is usually not easy in the open-vocabulary setting, because it is not guaranteed that a region without a positive label is negative. For example, an object with the label `man` is not a negative of `person` even though `person` is not annotated. There are an infinite number of categories in open-vocabulary settings, which makes it difficult to exhaustively annotate all categories throughout the dataset. How can we exploit hard examples that are guaranteed to be negative? 
We can make use of the mutually exclusive relationships between categories: e.g., an object with a `dog` label is negative for `cat` because `dog` and `cat` are mutually exclusive. There are two ways to add negatives to a minibatch: adding negative images (regions) or adding negative phrases. Adding negative phrases (as in Fig. \[fig:neg\_aug\]c) is generally better because it involves a much smaller additional training cost than adding images, in terms of both computational cost and GPU memory usage. In addition, to improve the discriminative ability of the classifier, we select only hard negative phrases by mining the confusing categories. This approach, called [*negative phrase augmentation (NPA)*]{}, is a generic way of exploiting hard negative examples in open-vocabulary object detection and leads to large improvements in accuracy, as we show in Sec. \[sec:exp\_ret\]. [**Confusion table.**]{} We create a confusion table that associates a category with its hard negative categories, from which negative phrases are picked as illustrated in Fig. \[fig:neg\_aug\]c. To create the entry for category $c$, we first generate the candidate list of hard negative categories by retrieving the top 500 scored objects from all objects in the validation set of Visual Genome [@Krishna2016] (using $c$ as a query). After that, we remove the categories that are not mutually exclusive with $c$ from the list. Finally, we aggregate the list by category and assign a weight to each category. A registered entry then looks like [`dog:{cat:0.5, horse:0.3, cow:0.2}`]{}. The weight corresponds to the probability of selecting the category in NPA, which is computed based on the number of appearances and their ranks in the candidate list.[^3] [**Removal of mutually non-exclusive phrases.**]{} To remove mutually non-exclusive phrases from the confusion table, we use two approaches that estimate whether two categories are mutually exclusive or not. 
1) The first approach uses the [*WordNet hierarchy*]{}: if two categories have a parent-child relationship in WordNet [@Miller1995], they are not mutually exclusive. However, the converse is not necessarily true; e.g., `man` and `skier` are not mutually exclusive but do not have a parent-child relationship in the WordNet hierarchy. 2) As an alternative approach, we propose to use [*Visual Genome annotations*]{}: if two categories co-occur often in the Visual Genome dataset [@Krishna2016], these categories are considered not mutually exclusive.[^4] These two approaches are complementary, and they improve detection performance by removing the mutually non-exclusive words (see Sec. \[sec:exp\_ret\]). [**The training pipeline**]{} with NPA is as follows: (1) [**Update the confusion table:**]{} The confusion table is updated periodically (after every 10k iterations in our study). Entries are created for categories that appear frequently in 10k successive batches (or the whole training set if the dataset is not large). (2) [**Add hard negative phrases:**]{} Negative phrases are added for each of the $C_i$ phrases in a minibatch. We replace the name of the category in each phrase with one of its hard negative categories (e.g., generating `a running woman` for `a running man`), where the category name is obtained by extracting nouns. A negative phrase is randomly selected from the confusion table on the basis of the assigned probability. (3) [**Add losses:**]{} As illustrated in Fig. \[fig:neg\_aug\]c, we only add negative labels to the regions where a positive label is assigned to the original phrase. The classification loss is computed only for these regions and is added to the original loss. 
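The sampling step of NPA (step 2 above) can be sketched as follows; the confusion-table contents and the simple string replacement here are illustrative stand-ins, not the actual implementation:

```python
import random

# Toy confusion table: category -> {hard negative category: selection weight}
CONFUSION = {
    "man": {"woman": 0.6, "boy": 0.4},
    "horse": {"zebra": 0.5, "cow": 0.3, "dog": 0.2},
}

def sample_negative_phrase(phrase, category, rng=random):
    """Replace `category` in `phrase` with a hard negative category drawn
    from the confusion table according to the assigned probabilities."""
    negatives = CONFUSION[category]
    choice = rng.choices(list(negatives), weights=list(negatives.values()))[0]
    return phrase.replace(category, choice)

# e.g. "a running man" -> "a running woman" or "a running boy"
```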
Large-Scale Object Retrieval {#sec:ret} ============================ Query-Adaptive R-CNN can be used for large-scale object retrieval and localization because it can be decomposed into a query-independent part and a query-dependent part, i.e., the region feature extractor and the detector generator. We follow the approach used in large-scale R-CNN [@Hinami2016], but we overcome its two critical drawbacks. First, large-scale R-CNN can only predict boxes included in the region proposals, which are detected offline even though the query is unknown at that time; therefore, to achieve high recall, a large number of object proposals must be used, which is memory inefficient. Instead, we generate a regressor as well as a classifier, which enables more accurate localization with fewer proposals. Second, large-scale R-CNN assumes that the classifier is given as a query, and learning a classifier requires many samples with bounding box annotations. We generate the classifier directly from a text query by using the detector generator of Query-Adaptive R-CNN. The resulting system is able to retrieve and localize objects from a database with [*one million images*]{} in [*less than one second*]{}. **Database indexing.** For each image in the database, the region feature extractor extracts region proposals and corresponding features. We create an index for the region features in order to speed up the search. For this, we use the IVFADC system [@Jegou2011] in the manner described in [@Hinami2016]. **Searching.** Given a text query, the detector generator generates a linear classifier and bounding box regressor. The regions with high classification scores are then retrieved from the database by an IVFADC-based search. Finally, the regressor is applied to the retrieved regions to obtain accurately localized bounding boxes. 
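The query-time path above reduces to dense linear algebra; a minimal sketch with random placeholder weights (a real system replaces the brute-force scoring with the IVFADC index, and $\mathbf{W}$ and $\mathbf{v}$ come from the trained generator and word embeddings, not random draws):

```python
import numpy as np

rng = np.random.default_rng(0)
D_TEXT, D_FEAT, N_REGIONS, TOP_K = 300, 4096, 10_000, 5

# Query-independent part, computed offline: region features for the database
X = rng.normal(size=(N_REGIONS, D_FEAT)).astype(np.float32)

# Detector generator (placeholder weights): text embedding -> classifier
W = rng.normal(scale=0.01, size=(D_FEAT, D_TEXT)).astype(np.float32)
v = rng.normal(size=D_TEXT).astype(np.float32)  # mean-pooled word vectors
w_c = W @ v                                     # generated linear classifier

# Score every region and keep the top-k; the generated regressor would
# then refine the retrieved boxes.
scores = X @ w_c
top = np.argsort(-scores)[:TOP_K]
```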
\[tab:phrase\] \[tab:res\_regressor\] Experiments =========== Experimental Setup ------------------ **Model:** Query-Adaptive R-CNN is based on VGG16 [@Simonyan2015], as in other work on phrase localization. We first initialized the weights of the VGG and RPN by using Faster R-CNN trained on Microsoft COCO [@Lin2014]; the weights were then fine-tuned for each dataset of the evaluation. In the training using Flickr30k Entities, we first pretrained the model on the Visual Genome dataset using the object name annotations. We used Adam [@Kingma2015] with a learning rate starting from 1e-5 and ran it for 200k iterations. **Tasks and datasets:** We evaluated our approaches on two tasks: phrase localization and open-vocabulary object detection and retrieval. The **phrase localization task** was performed on the Flickr30k Entities dataset [@Plummer2015]. Given an image and a sentence that describes the image, the task was to localize the region that corresponds to each phrase in the sentence. The Flickr30k Entities dataset contains 44,518 unique phrases, each 1–8 words long (2.1 words on average). We followed the evaluation protocol of [@Plummer2015]. We did not use Flickr30k Entities for the retrieval task because the dataset is not exhaustively annotated (e.g., not all men appearing in the dataset are annotated with `man`), which makes it difficult to evaluate with a retrieval metric such as AP, as discussed in Plummer et al. [@Plummer2017]. Although we cannot evaluate the retrieval performance directly on the phrase localization task, we can make comparisons with other approaches and show that our method can handle a wide variety of phrases. The **open-vocabulary object detection and retrieval task** was evaluated in the same way as the standard object detection task.
The difference was the assumption that we do not know the target category at training time in open-vocabulary settings; i.e., the method does not tune in to a specific category, unlike the standard object detection task. We used the Visual Genome dataset [@Krishna2016] and selected the 100 most frequent object categories as queries among its 100k or so categories.[^5] [^6] We split the dataset into training, validation, and test sets following [@Johnson2015]. We also evaluated our approaches on the PASCAL VOC 2007 dataset, which is a widely used dataset for object detection.[^7] As metrics, we used top-k precision and average precision (AP), computed from the region-level ranked list as in the standard object detection task.[^8] ![image](neg_qual.pdf){width="1.00\linewidth"} \[fig:aug\_qual\] Phrase localization {#sec:exp_ph} ------------------- **Comparison with state-of-the-art.** We compared our method with state-of-the-art methods on the Flickr30k Entities phrase localization task. We categorized the methods into two types, i.e., non-scalable and scalable methods (Tab. \[tab:phrase\]). 1) [*Non-scalable methods*]{} cannot be used for large-scale retrieval because their query-dependent components are too complex to process a large number of images online, and 2) [*Scalable methods*]{} can be used for large-scale retrieval because their query-dependent components are easy to scale up (e.g., the $L_2$ distance computation); these include common subspace-based approaches such as CCA. Our method also belongs to the scalable category. We used a simple model without a regressor and NPA in the experiments. Table \[tab:phrase\] compares Query-Adaptive R-CNN with the state-of-the-art methods. Our model achieved [*65.21%*]{} in accuracy and outperformed all of the previous state-of-the-art models including the non-scalable or joint localization methods.
Moreover, it significantly outperformed the scalable methods, which suggests that predicting the classifier is a better approach than learning a common subspace for the open-vocabulary detection problem. ![image](graph.pdf){width="1.00\linewidth"} \[fig:res\_aug\] [**Bounding box regressor.**]{} To demonstrate the effectiveness of the bounding box regressor for precise localization, we conducted evaluations with the regressor at different IoU thresholds. As explained in Sec. \[sec:qrcn\_arch\], the regressor was generated using $G_r$, which transformed 300-d text embeddings $x$ into 4096-d regressor weights $\mathbf{w}^r_x$, $\mathbf{w}^r_y$, $\mathbf{w}^r_w$, and $\mathbf{w}^r_h$. We compared three network architectures for $G_r$: 1) `300-n(-4096)`, an MLP having a hidden layer with $n$ units that is shared across the four outputs, 2) `300(-n-4096)`, an MLP having a hidden layer that is not shared, and 3) `300(-4096)`, a linear transformation (without a hidden layer). Table \[tab:res\_regressor\] shows the results with and without the regressor. The regressor significantly improved the accuracy at high IoU thresholds, which demonstrates that it improved the localization accuracy. In addition, the accuracy did not decrease as a result of sharing the hidden layer or reducing the number of units in the hidden layer. This suggests that the regressor lies in a very low-dimensional manifold, because the regressor for one concept can be shared by many concepts (e.g., the `person` regressor can be used for `man`, `woman`, `girl`, `boy`, etc.). The number of parameters was significantly reduced by these tricks, to even fewer than in the linear transformation. The accuracy slightly decreased at a threshold of 0.5, because the regressor was not learned properly for categories that did not frequently appear in the training data.
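Ignoring bias terms, the weight-matrix parameter counts of the three $G_r$ architectures can be compared directly; this back-of-the-envelope check (the exact counting convention is our assumption) confirms the claim that the shared-hidden-layer MLP uses fewer parameters than the direct linear transformation.

```python
D_TEXT, D_FEAT, N_OUT = 300, 4096, 4  # text dim, weight dim, four outputs

def params_shared(n):
    # 300-n(-4096): one 300->n hidden layer shared by four n->4096 heads
    return D_TEXT * n + N_OUT * n * D_FEAT

def params_unshared(n):
    # 300(-n-4096): a separate 300->n->4096 MLP per output
    return N_OUT * (D_TEXT * n + n * D_FEAT)

def params_linear():
    # 300(-4096): a direct 300->4096 linear map per output
    return N_OUT * D_TEXT * D_FEAT
```

With $n = 16$ (the `300-16(-4096)` model used later in the appendix), `params_shared(16)` is 266,944 versus 4,915,200 for `params_linear()`, i.e., an order of magnitude fewer parameters than the linear transformation.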
\[tab:open\] \[tab:confuse\] ![image](large_qual.pdf){width="1.00\linewidth"} \[fig:large\_qual\] Open-Vocabulary Object Retrieval {#sec:exp_ret} -------------------------------- **Main comparison.** Open-vocabulary object detection and retrieval is a much more difficult task than phrase localization, because we do not know how many objects are present in an image. We used NPA to train our model. As explained in Sec. \[sec:neg\], we used two strategies, [*Visual Genome annotation (VG)*]{} and [*WordNet hierarchy (WN)*]{}, to remove mutually non-exclusive phrases from the confusion table. As a baseline, we compared with region-based CCA [@Plummer2017], which is scalable and has been shown to be effective for phrase localization; for a fair comparison, the subspace was learned using the same dataset as ours. An approximate search was not used, in order to evaluate the actual performance at open-vocabulary object detection. Table \[tab:open\] compares different training strategies. NPA significantly improved the performance: [*more than 25% relative improvement*]{} for all metrics. Removing mutually non-exclusive words also contributed to performance: WN and VG both improved performance (5.8% and 6.9% relative AP gain, respectively). Performance improved even further by combining them (11.8% relative AP gain), which shows that they are complementary. AP was much improved by NPA on the PASCAL dataset as well (47% relative gain). However, the performance was still much poorer than that of the state-of-the-art object detection methods [@JosephRedmon2016; @Ren2015], which suggests that there is a large gap between open-vocabulary and closed-vocabulary object detection. **Detailed results of NPA.** To investigate the effect of NPA, we show the AP with and without NPA for individual categories in Figure \[fig:res\_aug\], sorted by relative AP improvement. It shows that AP improved especially for animals (`elephant`, `cow`, `horse`, etc.)
and person (`skier`, `surfer`, `girl`), which are visually similar within the same upper category. Table \[tab:confuse\] shows the most confused category and its total count in the top 100 search results for each query, which shows what concept is confusing for each query and how much the confusion is reduced by NPA.[^9] This shows that visually similar categories resulted in false positives without NPA, while their number was suppressed by training with NPA. The reason is that these confusing categories were added as negative phrases in NPA, and the network learned to reject them. Figure \[fig:aug\_qual\] shows the qualitative search results for each query with and without NPA (and CCA as a baseline), which also show that NPA can discriminate confusing categories (e.g., `horse` and `zebra`). These results clearly demonstrate that NPA significantly improves the discriminative ability of classifiers by adding hard negative categories. \[tab:res\_large\] **Large-scale experiments.** Finally, we evaluated the scalability of our method on a large image database. We used one million images from the ILSVRC 2012 training set for this evaluation. Table \[tab:res\_large\] shows the speed and memory usage. The mean and standard deviation of the speed are computed over 20 queries from the PASCAL VOC dataset. Our system could retrieve objects from one million images in around 0.5 seconds. We did not evaluate accuracy because there is no such large dataset with bounding box annotations.[^10] Figure \[fig:large\_qual\] shows the retrieval results from one million images, which demonstrate that our system can accurately retrieve and localize objects from a very large-scale database. Conclusion ========== Query-Adaptive R-CNN is a simple yet strong framework for open-vocabulary object detection and retrieval. It achieves state-of-the-art performance on the Flickr30k phrase localization benchmark and can be used for large-scale object retrieval by textual query.
In addition, its retrieval accuracy can be further increased by using a novel training strategy called negative phrase augmentation (NPA), which appropriately selects hard negative examples by using linguistic relationships and confusion between categories. This simple and generic approach significantly improves the discriminative ability of the generated classifier. [**Acknowledgements:**]{} This work was supported by JST CREST JPMJCR1686 and JSPS KAKENHI 17J08378. {#section .unnumbered} Detailed Analysis on Phrase Localization ======================================== With or Without Regression -------------------------- Figure \[fig:reg\_ph\_qual\] compares the results with and without bounding box regression. We use the `300-16(-4096)` model to generate the regressor (explained in our paper in Sec. 5.2). Figure \[fig:reg\_ph\_qual\]a shows the successful cases. The regression is especially effective for categories that appear frequently in the training data, such as `person` and `dog`, because an accurate regressor can be learned from many examples. Regression also succeeded for several uncommon categories such as `potter` and `gondola`; the reason is that the regressor can be shared with other common categories, e.g., the `person` and `boat` regressors can be used for `potter` and `gondola`, respectively. Figure \[fig:reg\_ph\_qual\]b shows the failure cases, which include categories with ambiguous boundaries (e.g., `sidewalk` and `mud`). The regressor does not work for such categories. In addition, if the category does not appear frequently in the training data, the regressor moves the bounding box in the wrong direction. Future work includes automatically determining whether or not to perform bounding box regression.
![image](reg_ph_qual.pdf){width="1.00\linewidth"} With or Without Negative Phrase Augmentation -------------------------------------------- Table \[tab:aug\_ph\_quan\] shows the phrase localization performance with and without negative phrase augmentation (NPA). It shows that the phrase localization performance is not improved by training with NPA. As explained in our paper, this is due to the difference between the phrase localization and object detection tasks; phrase localization assumes there is only one relevant object in the image, while object detection makes no assumption about the number of objects. Because of this, in the phrase localization task we can benefit from NPA only when confusing objects appear in a single image. Figure \[fig:aug\_ph\_qual\]a shows such cases: e.g., when two persons appear in the same image, the method with NPA can select the person that is relevant to the query. However, since such cases are rare in the Flickr30k Entities dataset, NPA does not contribute to performance. Figure \[fig:aug\_ph\_qual\]b shows the failure cases of NPA. The method with NPA tends to predict a small bounding box in which other objects do not appear. The reason is that NPA cannot handle highly overlapping objects appropriately. For example, in the third example in Fig. \[fig:aug\_ph\_qual\], the `sand` regions may receive high scores for the `deer` query. If there are many such cases in the validation set, `sand` is added as a hard negative phrase for `deer`. The `sand` classifier thus predicts low scores for regions that overlap the `deer`. Therefore, the method with NPA tends to predict a small box that contains only (part of) the relevant object. This is a limitation of NPA and causes the accuracy decrease in the phrase localization task.
![image](aug_ph_qual.pdf){width="1.00\linewidth"} Ablation Studies ---------------- Here we present a detailed analysis of our approach on the Flickr30k Entities phrase localization task and quantify our architectural design decisions. For simplicity, in the comparisons of region proposals and text embeddings, we used the Faster R-CNN model pretrained on COCO object detection and fine-tuned it for the phrase localization task. Bounding box regression and NPA are not used in these experiments. [**Pretraining.**]{} Table \[tab:pre\] compares three pretrained models trained on 1) ImageNet classification, 2) PASCAL, and 3) COCO object detection. In addition, we pretrain the whole model, including the detector generator, on the Visual Genome dataset after the initial pretraining of 1)–3). The results show that there is more than a 4% difference in accuracy between simply using the ImageNet pretrained model and pretraining on COCO and Visual Genome. Since the Flickr30k Entities dataset does not contain many training examples for each object category, pretraining Faster R-CNN with large object detection datasets is important. Training on the Visual Genome dataset further improves the performance because it contains a much larger number of categories than the COCO dataset and the detector generator is also pretrained on such rich data. [**Region proposal.**]{} Table \[tab:region\] compares three region proposal approaches: 1) selective search [@Uijlings2013], 2) a region proposal network (RPN) trained on the COCO dataset, which is frozen during training for phrase localization, and 3) an RPN fine-tuned on the phrase localization task. The number of regions is 2000 for selective search, following [@Girshick2014], and 300 for the RPN, following [@Ren2015]. In addition, we compared two region sampling strategies: random sampling, as used in [@Girshick2015; @Ren2015], and online hard example mining (OHEM) [@Shrivastava2016].
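The OHEM sampling strategy just mentioned can be sketched in a few lines. This is a minimal illustration, assuming per-region losses from a forward pass are already available; the batch size `n_keep` is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def ohem_select(region_losses, n_keep):
    """Online hard example mining: return the indices of the n_keep
    regions with the highest loss; gradients are backpropagated only
    for these hard examples."""
    region_losses = np.asarray(region_losses)
    return np.argsort(-region_losses)[:n_keep]
```

Compared with random sampling, this concentrates the minibatch on the regions the current model gets most wrong.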
The results show that the RPN fine-tuned for the phrase localization task generates much higher-quality region proposals than the others (a 12.41% increase in accuracy compared to selective search), which demonstrates that learning region proposals plays an important role in phrase localization. OHEM further improved the accuracy by 1.56%. [**Text embedding.**]{} Table \[tab:text\] compares five text embedding vectors: 1) Word2Vec [@Mikolov2013] trained on the Google News dataset[^11], which is used in our paper, 2) Word2Vec trained on Flickr tags[^12] [@Li2015a], 3) the hybrid Gaussian-Laplacian mixture model (HGLMM) [@Klein2015], which is used in [@Plummer2015; @Plummer2017; @Wang], 4) the skip-thought vector (combine-skip model)[^13] [@Kiros2015], and 5) a long short-term memory (LSTM) network that encodes a phrase into a vector in the manner described in [@Chen2017; @Rohrbach2016], which is learned jointly with the other components of Query-Adaptive R-CNN. The second column of Table \[tab:text\] shows the dimension of each text embedding vector. The results show that the performance is not much affected by the choice of text embedding. The mean pooling of Word2Vec performs the best despite its simplicity. Additional Examples of Negative Phrase Augmentation =================================================== Figures \[fig:neg\_qual1\], \[fig:neg\_qual2\], and \[fig:neg\_qual3\] show additional examples of negative phrase augmentation (corresponding to Fig. 4 in our paper). Without NPA, there are many false alarms between confusing categories such as animals (`zebra`, `bear`, and `giraffe`), persons (`skier` and `child`), and vehicles (`boat`, `train`, and `bus`); these are successfully discarded by training with NPA.
![image](neg_qual_1.pdf){width="1.00\linewidth"} ![image](neg_qual_2.pdf){width="1.00\linewidth"} ![image](neg_qual_3.pdf){width="1.00\linewidth"} Additional Examples of Open-Vocabulary Object Retrieval and Localization ======================================================================== Figure \[fig:large\_qual\] shows additional examples of object retrieval and localization (corresponding to Fig. 6 in our paper). Instead of the ILSVRC dataset used in our paper, we here used the Microsoft COCO dataset [@Lin2014] (40,504 images from the validation set), which contains a wider variety of concepts. These results demonstrate that our system can accurately search for a wide variety of objects specified by natural language queries. ![image](large_qual_sup.pdf){width="1.00\linewidth"} [^1]: Although this assumption is not always true for datasets such as Flickr30k Entities, it nonetheless works well for them because exceptions are rare. [^2]: Whereas Faster R-CNN uses the softmax cross entropy over the $C + 1$ (background) classes, where $C$ is the number of closed sets of a category, we use the sigmoid cross entropy because the $C_i$ classes are not always mutually exclusive and a background class cannot be defined in the context of open-vocabulary object detection. [^3]: We compute the weight of each category as the sum of 500 minus the rank for all ranked results in the candidate lists, normalized over all categories so as to sum to one. [^4]: We set the ratio at 1% of objects in either category. For example, if there are 1000 objects with the `skier` label and 20 of those objects are also annotated with `man` (20/1000=2%), we consider that `skier` and `man` are not mutually exclusive. [^5]: Since the WordNet synset ID is assigned to each object, we add objects with labels of hyponyms as positives (e.g., `man` is positive for the `person` category).
[^6]: We exclude the background (e.g., `grass`, `sky`, `field`), multiple objects (e.g., `people`, `leaves`), and ambiguous categories (e.g., `top`, `line`). [^7]: We used the model trained on Visual Genome even for the evaluation on the PASCAL dataset because of the assumption that the target category is unknown. [^8]: We did not separately evaluate the detection and retrieval tasks because both can be evaluated with the same metric. [^9]: For each query, we scored all the objects in the Visual Genome testing set and counted the false alarms among the top 100 scored objects. [^10]: Adding distractors would also be difficult, because we cannot guarantee that relevant objects are not in the images. [^11]: https://code.google.com/archive/p/word2vec/ [^12]: The model is provided by the author of [@Dong2016]. [^13]: We use the implementation and pre-trained model provided at https://github.com/ryankiros/skip-thoughts
--- abstract: 'Suppose that $E \subset {\mathbb{R}}^{n+1}$ is a uniformly rectifiable set of codimension $1$. We show that every harmonic function is ${\varepsilon}$-approximable in $L^p(\Omega)$ for every $p \in (1,\infty)$, where $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$. Together with results of many authors this shows that pointwise, $L^\infty$ and $L^p$ type ${\varepsilon}$-approximability properties of harmonic functions are all equivalent and they characterize uniform rectifiability for codimension $1$ Ahlfors-David regular sets. Our results and techniques are generalizations of recent works of T. Hytönen and A. Rosén and the first author, J. M. Martell and S. Mayboroda.' address: - 'Steve Hofmann, Department of Mathematics, University of Missouri, Columbia, MO 65211, USA' - 'Olli Tapiola, Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyväskylä, Finland' author: - Steve Hofmann - Olli Tapiola bibliography: - 'approximability.bib' date: 'May 17, 2019' title: | Uniform rectifiability\ and ${\varepsilon}$-approximability of harmonic functions in $L^p$ --- Introduction ============ In many branches of analysis, Carleson measure estimates are powerful tools that are deeply connected to e.g. elliptic partial differential equations and geometric measure theory. These estimates are particularly useful for measures of the type $|\nabla u(Y)| \, dY$ (see e.g. [@feffermanstein; @garnett]) but the problem is that even strong analytic properties of the function $u$ are not enough to guarantee that the distributional gradient defines a measure of this type. The idea behind *${\varepsilon}$-approximability* is that although a function may fail this Carleson measure property, it can sometimes be approximated arbitrarily well in the $L^\infty$ sense (typically, if it is the solution to an elliptic partial differential equation) by a function $\varphi$ such that $|\nabla \varphi(Y)| \, dY$ is a Carleson measure. 
Starting from the work of N. Th. Varopoulos [@varopoulos] and J. Garnett [@garnett], this approximation technique has had an important role in the development of the theory of elliptic partial differential equations. It has been used, e.g., to explore the absolute continuity properties of elliptic measures [@kenigkochpiphertoro; @hofmannkenigmayborodapipher] and, very recently, to give a new characterization of uniform rectifiability [@hofmannmartellmayboroda; @garnettmourgogloutolsa]. In this article, we extend the recent results of the first author, J. M. Martell and S. Mayboroda [@hofmannmartellmayboroda] and show that if $E \subset {\mathbb{R}}^{n+1}$ is a uniformly rectifiable (UR) set of codimension $1$, then every harmonic function is ${\varepsilon}$-approximable in $L^p(\Omega)$ for every ${\varepsilon}\in (0,1)$ and every $p \in (1,\infty)$, where $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$. The $L^p$ version of ${\varepsilon}$-approximability was recently introduced by T. Hytönen and A. Rosén [@hytonenrosen] who showed that any weak solution to certain elliptic partial differential equations in ${\mathbb{R}}^{n+1}_+$ is ${\varepsilon}$-approximable in $L^p$ for every ${\varepsilon}\in (0,1)$ and every $p \in (1,\infty)$. Let us be more precise and recall the definition of ${\varepsilon}$-approximability: \[defin:eps\_app\] Suppose that $E \subset {\mathbb{R}}^{n+1}$ is an $n$-dimensional ADR set (see Definition \[defin:adr\]) and let $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$ and ${\varepsilon}\in (0,1)$. We say that a function $u$ is *${\varepsilon}$-approximable* if there exists a constant $C_{\varepsilon}$ and a function $\varphi = \varphi^{\varepsilon}\in BV_{\text{loc}}(\Omega)$ satisfying $$\begin{aligned} \|u-\varphi\|_{L^\infty(\Omega)} < {\varepsilon}\ \ \ \ \ \text{ and } \ \ \ \ \ \sup_{x \in E, r > 0} \frac{1}{r^n} \iint_{B(x,r) \cap \Omega} |\nabla \varphi(Y)| \, dY \le C_{\varepsilon}. 
\end{aligned}$$ Here $\iint_{B(x,r) \cap \Omega} |\nabla \varphi| \, dY$ stands for the total variation of $\varphi$ over $B(x,r) \cap \Omega$ (see Section \[section:bounded\_variation\]). Sometimes $W^{1,1}$ [@hofmannkenigmayborodapipher] or $C^\infty$ [@garnett; @kenigkochpiphertoro] is used in the definition instead of $BV_{\text{loc}}$. The first results about ${\varepsilon}$-approximability showed that every bounded harmonic function $u$, normalized so that $\|u\|_{L^\infty} \leq 1$, enjoys this approximation property for every ${\varepsilon}\in (0,1)$ in the upper half-space ${\mathbb{R}}^{n+1}_+$ [@varopoulos; @garnett] and in Lipschitz domains [@dahlberg]. This is a highly non-trivial property since there exist bounded harmonic functions $u$ such that $|\nabla u(Y)| \, dY$ is not a Carleson measure [@garnett]. The $L^p$ version of the property was defined only recently in [@hytonenrosen]: Suppose that $E \subset {\mathbb{R}}^{n+1}$ is an $n$-dimensional ADR set and let $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$, ${\varepsilon}\in (0,1)$ and $p \in (1,\infty)$. We say that a function $u$ is *${\varepsilon}$-approximable in $L^p$* if there exists a function $\varphi = \varphi^{\varepsilon}\in BV_{\text{loc}}(\Omega)$ and constants $C_p$ and $D_{p,{\varepsilon}}$ such that $$\begin{aligned} \left\{ \begin{array}{l} \|N_*(u-\varphi)\|_{L^p(E)} \lesssim {\varepsilon}C_p \|N_*u\|_{L^p(E)} \\ \|{\mathcal{C}}(\nabla \varphi)\|_{L^p(E)} \lesssim D_{p,{\varepsilon}} \|N_*u\|_{L^p(E)} \end{array} \right. , \end{aligned}$$ where $N_*$ is the non-tangential maximal operator (see Definition \[defin:non-tangential\]) and $$\begin{aligned} {\mathcal{C}}(\nabla \varphi)(x) \coloneqq \sup_{r > 0} \frac{1}{r^n} \iint_{B(x,r) \cap \Omega} |\nabla \varphi| \, dY. 
\end{aligned}$$ Here, as above, we have written $\iint_{B(x,r) \cap \Omega} |\nabla \varphi| dY$ to denote the total variation of $\varphi$ over $B(x,r) \cap \Omega$; we ask the reader to forgive this abuse of notation. See Section \[section:bounded\_variation\] for details. In [@hytonenrosen], the authors showed that if $\Omega = {\mathbb{R}}^{n+1}_+$ and $A \in L^\infty({\mathbb{R}}^n; \mathcal{L}({\mathbb{R}}^{n+1}))$ satisfies $\langle A(x)v,v \rangle \ge \lambda_A |v|^2$ for almost every $x \in {\mathbb{R}}^n$ and all $v \in {\mathbb{R}}^{n+1} \setminus \{0\}$, then any weak solution $u$ to the $t$-independent real scalar (but possibly non-symmetric) divergence form elliptic equation $\text{div}_{x,t} A(x) \nabla_{x,t} u(x,t) = 0$ is ${\varepsilon}$-approximable in $L^p$ for any ${\varepsilon}\in (0,1)$ and any $p \in (1,\infty)$. If we move from ${\mathbb{R}}^{n+1}_+$ to the UR context (see Definition \[defin:ur\]) with no assumptions on connectivity, things will not only get more complicated but we also lose many powerful tools. For example, constructing objects like Whitney regions and Carleson boxes becomes considerably more difficult and the harmonic measure no longer necessarily belongs to the class weak-$A_\infty$ with respect to the surface measure [@bishopjones]. Despite these difficulties, there exists a rich theory of harmonic analysis and many results on elliptic partial differential equations on sets with UR boundaries. Uniform rectifiability can be characterized in numerous different ways and many of these characterizations are valid in all codimensions (see the seminal work of G. David and S. Semmes [@davidsemmes_singular; @davidsemmes_analysis]). For example, UR sets are precisely those ADR sets for which certain types of singular integral operators are bounded from $L^2$ to $L^2$. 
Recently, the first author, Martell and Mayboroda showed that if $E$ is a UR set of codimension $1$, then every bounded harmonic function in ${\mathbb{R}}^{n+1} \setminus E$ is ${\varepsilon}$-approximable for every ${\varepsilon}\in (0,1)$ [@hofmannmartellmayboroda]. After this, it was shown by Garnett, Mourgoglou and Tolsa that ${\varepsilon}$-approximability of bounded harmonic functions implies uniform rectifiability for $n$-ADR sets [@garnettmourgogloutolsa]. This characterization result was then generalized for a class of elliptic operators by Azzam, Garnett, Mourgoglou and Tolsa [@azzamgarnettmourgogloutolsa]. Our main result is the following generalization of the Hytönen-Rosén approximation theorem [@hytonenrosen Theorem 1.3]: \[theorem:main\_result\] Let $E \subset {\mathbb{R}}^{n+1}$ be a UR set of codimension $1$ and denote $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$. Then every harmonic function in $\Omega$ is ${\varepsilon}$-approximable in $L^p$ for every ${\varepsilon}\in (0,1)$ and every $p \in (1,\infty)$ with $C_p = \|M_{\mathbb{D}}\|_{L^p \to L^p}$ and $D_p = C_p \|M\|_{L^p \to L^p}/{\varepsilon}^2$, where $M$ is the Hardy-Littlewood maximal operator and $M_{\mathbb{D}}$ is its dyadic version (see Section \[section:notation\]). In fact, the key ideas of Hytönen and Rosén allow us to construct $p$-independent approximating functions. To be more precise, let us consider the following pointwise approximating property: Suppose that $E \subset {\mathbb{R}}^{n+1}$ is an $n$-dimensional ADR set and let $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$ and ${\varepsilon}\in (0,1)$. 
We say that a function $u$ is *pointwise ${\varepsilon}$-approximable* if there exists a function $\varphi = \varphi^{\varepsilon}\in BV_{\text{loc}}(\Omega)$ and a constant $D_{\varepsilon}$ such that $$\begin{aligned} \left\{ \begin{array}{l} N_*(u-\varphi)(x) \lesssim {\varepsilon}M_{\mathbb{D}}(N_*u)(x) \\ {\mathcal{C}}_{\mathbb{D}}(\nabla \varphi)(x) \lesssim D_{\varepsilon}M(M_{\mathbb{D}}(N_*u))(x) \end{array} \right. \end{aligned}$$ for almost any $x \in E$, where ${\mathcal{C}}_{\mathbb{D}}$ is a dyadic version of ${\mathcal{C}}$ (see Section \[section:cc\]). Since ${\mathcal{C}}(\nabla \varphi)$ and ${\mathcal{C}}_{\mathbb{D}}(\nabla \varphi)$ are $L^p$-equivalent by Lemma \[lemma:Lp-comparability\_of\_C\], Theorem \[theorem:main\_result\] is an immediate corollary of the following result and the $L^p$-boundedness of the Hardy-Littlewood maximal operator and its dyadic versions: \[thm:main\_result\_pointwise\] Suppose that $E \subset {\mathbb{R}}^{n+1}$ is an $n$-dimensional UR set and let $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$ and ${\varepsilon}\in (0,1)$. Then every harmonic function in $\Omega$ is pointwise ${\varepsilon}$-approximable. Although the $L^p$ version of ${\varepsilon}$-approximability seems like the weakest one of all the properties, it is equivalent to the other properties in the codimension $1$ ADR context provided that $p$ is large enough. This follows from the recent results of S. Bortz and the second author [@bortztapiola]. Hence, combining our results with the results in [@hofmannmartellmayboroda], [@garnettmourgogloutolsa] and [@bortztapiola] gives us the following characterization theorem: Suppose that $E \subset {\mathbb{R}}^{n+1}$ is an $n$-dimensional ADR set and let $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$. The following conditions are equivalent: 1. $E$ is UR. 2. Bounded harmonic functions in $\Omega$ are ${\varepsilon}$-approximable for every ${\varepsilon}\in (0,1)$. 3. 
Harmonic functions in $\Omega$ are pointwise ${\varepsilon}$-approximable for every ${\varepsilon}\in (0,1)$. 4. Harmonic functions in $\Omega$ are ${\varepsilon}$-approximable in $L^p$ for some $p > n/(n-1)$ and every ${\varepsilon}\in (0,1)$. 5. Harmonic functions in $\Omega$ are ${\varepsilon}$-approximable in $L^p$ for all $p \in (1,\infty)$ and every ${\varepsilon}\in (0,1)$. To prove the implication 1) $\Rightarrow$ 3), we combine some techniques of the proof of the Hytönen-Rosén theorem with the tools and techniques from [@hofmannmartellmayboroda]. Some of the techniques can be used in a straightforward way, but with the rest of them we have to take care of many technicalities and be careful with the details. We start by recalling the basic definitions and some results needed in our statements and proofs. For the most part, our notation and terminology agrees with [@hofmannmartellmayboroda]. Notation {#section:notation} -------- We use the following notation. 1. The set $E \subset {\mathbb{R}}^{n+1}$ will always be a closed set of Hausdorff dimension $n$. We denote $\Omega \coloneqq {\mathbb{R}}^{n+1} \setminus E$. 2. The letters $c$ and $C$ denote constants that depend only on the dimension, the ADR constant (see Definition \[defin:adr\]), the UR constants (see Definition \[defin:ur\]) and other similar parameters. We call them *structural constants*. The values of $c$ and $C$ may change from one occurrence to another. We do not track how our bounds depend on these constants and usually just write $\lambda_1 \lesssim \lambda_2$ if $\lambda_1 \le c\lambda_2$ for a structural constant $c$ and $\lambda_1 \approx \lambda_2$ if $\lambda_1 \lesssim \lambda_2 \lesssim \lambda_1$. 3. We use capital letters $X,Y,Z$, and so on to denote points in $\Omega$ and lowercase letters $x,y,z$, and so on to denote points in $E$. 4. 
The $(n+1)$-dimensional Euclidean open ball of radius $r$ will be denoted $B(x,r)$ or $B(X,r)$ depending on whether the center point lies in $E$ or in $\Omega$. We denote the surface ball of radius $r$ centered at $x$ by $\Delta(x,r) \coloneqq B(x,r) \cap E$. 5. Given a Euclidean ball $B \coloneqq B(X,r)$ or a surface ball $\Delta \coloneqq \Delta(x,r)$ and constant $\kappa > 0$, we denote $\kappa B \coloneqq B(X,\kappa r)$ and $\kappa \Delta \coloneqq \Delta(x, \kappa r)$. 6. For every $X \in \Omega$ we set $\delta(X) \coloneqq {\text{dist}}(X,E)$. 7. We let ${\mathcal{H}}^n$ be the $n$-dimensional Hausdorff measure and denote $\sigma \coloneqq {\mathcal{H}}^n|_E$. The $(n+1)$-dimensional Lebesgue measure of a measurable set $A \subset \Omega$ will be denoted by $|A|$. 8. For a set $A \subset {\mathbb{R}}^{n+1}$, we let $1_A$ be the indicator function of $A$: $1_A(x) = 0$ if $x \notin A$ and $1_A(x) = 1$ if $x \in A$. 9. The interior of a set $A$ will be denoted by $\text{int}(A)$. The closure of a set $A$ will be denoted by $\overline{A}$. 10. For $\mu$-measurable sets $A$ with positive and finite measure we set $\fint_A f \, d\mu \coloneqq \tfrac{1}{\mu(A)} \int_A f \, d\mu$. 11. The Hardy-Littlewood maximal operator and its dyadic version (see Section \[section:dyadic\_cubes\]) in $E$ will be denoted $M$ and $M_{\mathbb{D}}$, respectively: $$\begin{aligned} Mf(x) &\coloneqq \sup_{\Delta(y,r) \ni x} \fint_{\Delta(y,r)} |f(z)| \, d\sigma(z),\\ M_{\mathbb{D}}f(x) &\coloneqq \sup_{Q \in {\mathbb{D}}, Q \ni x} \fint_Q |f(z)| \, d\sigma(z). \end{aligned}$$ ADR, UR and NTA sets -------------------- \[defin:adr\] We say that a closed set $E \subset {\mathbb{R}}^{n+1}$ is an *$n$-ADR* (Ahlfors-David regular) set if there exists a uniform constant $C$ such that $$\begin{aligned} \frac{1}{C} r^n \le \sigma(\Delta(x,r)) \le C r^n \end{aligned}$$ for every $x \in E$ and every $r \in (0,{\text{diam}}(E))$, where ${\text{diam}}(E)$ may be infinite.
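As a toy illustration of Definition \[defin:adr\] (not part of the argument), consider the segment $E = [0,1] \times \{0\} \subset {\mathbb{R}}^2$: for $x \in E$ and $0 < r < {\text{diam}}(E) = 1$ one computes $r \le \sigma(\Delta(x,r)) \le 2r$, so $E$ is $1$-ADR with $C = 2$. The following sketch (names are ours, not from the paper; exact rational arithmetic avoids floating-point noise at the endpoint cases) checks these bounds on sample points.

```python
from fractions import Fraction as F

def sigma_surface_ball(x, r):
    # H^1-measure of Delta(x,r) = B(x,r) ∩ E for the segment E = [0,1] x {0} in R^2:
    # the ball meets E in the interval (max(x-r,0), min(x+r,1)).
    return max(F(0), min(x + r, F(1)) - max(x - r, F(0)))

# ADR bounds with C = 2: r <= sigma(Delta(x,r)) <= 2r for x in E, 0 < r < diam(E) = 1
for x in [F(0), F(1, 10), F(1, 2), F(9, 10), F(1)]:
    for r in [F(1, 20), F(1, 5), F(7, 10), F(99, 100)]:
        s = sigma_surface_ball(x, r)
        assert r <= s <= 2 * r
```

The lower bound degenerates to equality exactly at the endpoints of the segment, which is why no constant better than $C = 2$ works here.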
\[defin:ur\] Following [@davidsemmes_singular; @davidsemmes_analysis], we say that an $n$-ADR set $E \subset {\mathbb{R}}^{n+1}$ is *UR* (uniformly rectifiable) if it contains “big pieces of Lipschitz images” (BPLI) of ${\mathbb{R}}^n$: there exist constants $\theta, \Lambda > 0$ such that for every $x \in E$ and $r \in (0,{\text{diam}}(E))$ there is a Lipschitz mapping $\rho = \rho_{x,r} \colon {\mathbb{R}}^n \to {\mathbb{R}}^{n+1}$, with Lipschitz norm no larger than $\Lambda$, such that $$\begin{aligned} {\mathcal{H}}^n(E \cap B(x,r) \cap \rho(\{y \in {\mathbb{R}}^n \colon |y| < r\})) \ge \theta r^n. \end{aligned}$$ Following [@jerisonkenig], we say that a domain $\Omega \subset {\mathbb{R}}^{n+1}$ is *NTA* (nontangentially accessible) if 1. $\Omega$ satisfies the *Harnack chain condition*: there exists a uniform constant $C$ such that for every $\rho > 0$, $\Lambda \ge 1$ and $X,X' \in \Omega$ with $\delta(X), \delta(X') \ge \rho$ and $|X - X'| < \Lambda \rho$ there exists a chain of open balls $B_1, \ldots, B_N \subset \Omega$, $N \le C(\Lambda)$, with $X \in B_1$, $X' \in B_N$, $B_k \cap B_{k+1} \neq \emptyset$ and $C^{-1} {\text{diam}}(B_k) \le {\text{dist}}(B_k,\partial \Omega) \le C {\text{diam}}(B_k)$, 2. $\Omega$ satisfies the *corkscrew condition*: there exists a uniform constant $c$ such that for every surface ball $\Delta \coloneqq \Delta(x,r)$ with $x \in \partial\Omega$ and $0 < r < {\text{diam}}(\partial \Omega)$ there exists a point $X_\Delta \in \Omega$ such that $B(X_\Delta, cr) \subset B(x,r) \cap \Omega$, 3. ${\mathbb{R}}^{n+1} \setminus \overline{\Omega}$ satisfies the corkscrew condition. Dyadic cubes; Carleson and sparse collections {#section:dyadic\_cubes} --------------------------------------------- \[thm:dyadic\_cubes\] Suppose that $E$ is an ADR set.
Then there exists a countable collection ${\mathbb{D}}$, $$\begin{aligned} {\mathbb{D}}\coloneqq \bigcup_{k \in {\mathbb{Z}}} {\mathbb{D}}_k, \ \ \ \ \ {\mathbb{D}}_k \coloneqq \{ Q_\alpha^k \colon \alpha \in \mathcal{A}_k \} \end{aligned}$$ of Borel sets (that we call dyadic cubes) such that 1. the collection ${\mathbb{D}}$ is *nested*: $\text{if } Q,P \in {\mathbb{D}}, \text{ then } Q \cap P \in \{\emptyset,Q,P\}$, 2. $E = \bigcup_{Q \in {\mathbb{D}}_k} Q$ for every $k \in {\mathbb{Z}}$ and the union is disjoint, 3. there exist constants $c_1 > 0$ and $C_1 \ge 1$ with the following property: for any cube $Q_\alpha^k$ there exists a point $z_\alpha^k \in Q_\alpha^k$ (that we call the *center point of $Q_\alpha^k$*) such that $$\begin{aligned} \label{dyadic_cubes_balls_inclusion} \Delta(z_\alpha^k,c_1 2^{-k}) \subseteq Q_\alpha^k \subseteq \Delta(z_\alpha^k, C_1 2^{-k}) \eqqcolon \Delta_{Q_\alpha^k}, \end{aligned}$$ 4. if $Q,P \in {\mathbb{D}}$ and $Q \subseteq P$, then $$\begin{aligned} \label{dyadic_cubes_big_balls_inclusion} \Delta_Q \subseteq \Delta_P, \end{aligned}$$ 5. for every cube $Q_\alpha^k$ there exists a uniformly bounded number of disjoint cubes $Q_{\beta_i}^{k+1}$ such that $Q_\alpha^k = \bigcup_i Q_{\beta_i}^{k+1}$, where the uniform bound depends only on the ADR constant of $E$, 6. the cubes form a connected tree under inclusion: if $Q, P \in {\mathbb{D}}$, then there exists a cube $R \in {\mathbb{D}}$ such that $Q \cup P \subseteq R$. The last property in the previous theorem does not appear in the constructions in [@christ; @sawyerwheeden; @hytonenkairema], but it is easy to modify the construction to get this property. The basic idea in the construction in [@hytonenkairema] is to choose first the center points $z_\alpha^k$, then define a partial order among those points and finally build the cubes by using density arguments. 
Thus, if we simply choose the center points $z_\alpha^k$ in such a way that there exists a point $z_0 \in \bigcap_{k \in {\mathbb{Z}}} \{z_\alpha^k\}_\alpha$, then, by \eqref{dyadic_cubes_balls_inclusion}, for any $r > 0$ there exists a cube $Q_r$ that contains the surface ball $\Delta(z_0,r)$. This implies the last property in the previous theorem. \[notation:dyadic\_cubes\] 1. Since the set $E$ may be bounded or disconnected, we may encounter a situation where $Q_\alpha^k = Q_\beta^l$ although $k \neq l$. In particular, in the second to last property of Theorem \[thm:dyadic\_cubes\] there might exist only one cube $Q_{\beta_i}^{k+1}$ which equals $Q_\alpha^k$ as a set. Thus, we use the notation ${\mathbb{D}}(E)$ for the collection of all relevant cubes $Q \in {\mathbb{D}}$, i.e. if $Q_\alpha^k \in {\mathbb{D}}(E)$, then $C_1 2^{-k} \lesssim {\text{diam}}(E)$ and the number $k$ is maximal in the sense that there does not exist a cube $Q_\beta^l \in {\mathbb{D}}$ such that $Q_\beta^l = Q_\alpha^k$ for some $l > k$. Notice that the number $k$ is bounded for each cube since the ADR condition excludes the presence of isolated points in $E$. This way in ${\mathbb{D}}(E)$ it is natural to talk about the children of a cube $Q$ (i.e. the largest cubes $P \subsetneq Q$) and the parent of a cube $Q$ (i.e. the smallest cube $R \supsetneq Q$). 2. For every cube $Q_\alpha^k \coloneqq Q \in {\mathbb{D}}$, we denote $\ell(Q) \coloneqq 2^{-k}$ and $z_Q \coloneqq z_\alpha^k$. We call $\ell(Q)$ the *side length* of $Q$. 3. For every $Q \in {\mathbb{D}}$, we denote the collection of dyadic subcubes of $Q$ by ${\mathbb{D}}_Q$. Suppose that $\Lambda \ge 1$. We say that a collection ${\mathcal{A}}\subset {\mathbb{D}}$ is *$\Lambda$-Carleson* (or that it satisfies a *Carleson packing condition*) if $$\begin{aligned} \sum_{Q \in {\mathcal{A}}, Q \subset Q_0} \sigma(Q) \le \Lambda \sigma(Q_0) \end{aligned}$$ for every cube $Q_0 \in {\mathbb{D}}$. Suppose that $\lambda \in (0,1)$.
We say that a collection ${\mathcal{A}}\subset {\mathbb{D}}$ is *$\lambda$-sparse* if for every cube $Q \in {\mathcal{A}}$ there exists a subset $E_Q \subset Q$ satisfying 1. $E_Q \cap E_{Q'} = \emptyset$ if $Q \neq Q'$ and 2. $\sigma(E_Q) \ge \lambda \sigma(Q)$. The following result will be useful for us with some technical estimates. \[thm:sparse\_carleson\] A collection ${\mathcal{A}}\subset {\mathbb{D}}$ is $\Lambda$-Carleson if and only if it is $\tfrac{1}{\Lambda}$-sparse. Although it is very easy to show that sparseness implies the Carleson property, the other implication is not obvious. For dyadic cubes in ${\mathbb{R}}^n$, it was first proven by I. Verbitsky [@verbitsky Corollary 2] and the result was later rediscovered by A. Lerner and F. Nazarov with a different proof [@lernernazarov Lemma 6.3]. For general Borel sets, the result was proven by T. Hänninen [@hanninen Theorem 1.3]. Since the dyadic cubes in Theorem \[thm:dyadic\_cubes\] are Borel sets, the result of Hänninen is suitable for us. In addition to sparseness arguments, we use a discrete Carleson embedding theorem (Theorem \[theorem:carleson\_embedding\]) to prove that local bounds imply global bounds. In fact, we could use the embedding theorem instead of sparseness arguments throughout the paper but this would give us slightly weaker estimates. Let ${\mathcal{A}}\subset {\mathbb{D}}$ be any collection of dyadic cubes. We say that a cube $P \in {\mathcal{A}}$ is an *${\mathcal{A}}$-maximal subcube of $Q_0$* if there do not exist any cubes $P' \in {\mathcal{A}}$ such that $P \subsetneq P' \subset Q_0$. Corona decomposition, Whitney regions and Carleson boxes {#subsection:whitney_regions} -------------------------------------------------------- We say that a subcollection ${\mathcal{S}}\subset {\mathbb{D}}(E)$ is *coherent* if the following three conditions hold. 1. 
There exists a maximal element $Q({\mathcal{S}}) \in {\mathcal{S}}$ such that $Q \subset Q({\mathcal{S}})$ for every $Q \in {\mathcal{S}}$. 2. If $Q \in {\mathcal{S}}$ and $P \in {\mathbb{D}}(E)$ is a cube such that $Q \subset P \subset Q({\mathcal{S}})$, then also $P \in {\mathcal{S}}$. 3. If $Q \in {\mathcal{S}}$, then either all children of $Q$ belong to ${\mathcal{S}}$ or none of them do. If ${\mathcal{S}}$ satisfies only conditions (a) and (b), then we say that ${\mathcal{S}}$ is *semicoherent*. In this article, we do not work directly with Definition \[defin:ur\] but use the *bilateral corona decomposition* instead: Suppose that $E \subset {\mathbb{R}}^{n+1}$ is a uniformly rectifiable set of codimension $1$. Then for any pair of positive constants $\eta \ll 1$ and $K \gg 1$ there exists a disjoint decomposition ${\mathbb{D}}(E) = {\mathcal{G}}\cup {\mathcal{B}}$ satisfying the following properties: 1. The “good” collection ${\mathcal{G}}$ is a disjoint union of coherent stopping time regimes ${\mathcal{S}}$. 2. The “bad” collection ${\mathcal{B}}$ and the maximal cubes $Q({\mathcal{S}})$ satisfy a Carleson packing condition: for every $Q \in {\mathbb{D}}(E)$ we have $$\begin{aligned} \sum_{Q' \subset Q, Q' \in {\mathcal{B}}} \sigma(Q') + \sum_{{\mathcal{S}}: Q({\mathcal{S}}) \subset Q} \sigma(Q({\mathcal{S}})) \le C_{\eta, K} \sigma(Q). \end{aligned}$$ 3. For every ${\mathcal{S}}$, there exists a Lipschitz graph $\Gamma_{\mathcal{S}}$, with Lipschitz constant at most $\eta$, such that for every $Q \in {\mathcal{S}}$ we have $$\begin{aligned} \sup_{x \in \Delta_Q^*} {\text{dist}}(x,\Gamma_{\mathcal{S}}) + \sup_{y \in B_Q^* \cap \Gamma_{\mathcal{S}}} {\text{dist}}(y,E) < \eta \ell(Q), \end{aligned}$$ where $B_Q^* \coloneqq B(z_Q,K\ell(Q))$ and $\Delta_Q^* \coloneqq B_Q^* \cap E$.
The proof of this decomposition is based on the use of both the unilateral corona decomposition [@davidsemmes_singular] and the bilateral weak geometric lemma [@davidsemmes_analysis] of David and Semmes. The decomposition plays a key role in this paper. In [@hofmannmartellmayboroda Section 3], the bilateral corona decomposition is used to construct Whitney regions $U_Q$ and Carleson boxes $T_Q$ with respect to the dyadic cubes $Q \in {\mathbb{D}}(E)$ using a dyadic Whitney decomposition of ${\mathbb{R}}^{n+1} \setminus E$. The Whitney regions are a substitute for the dyadic Whitney tiles $Q \times (\ell(Q)/2,\ell(Q))$ and the Carleson boxes are a substitute for the dyadic boxes $Q \times (0,\ell(Q))$ in ${\mathbb{R}}^{n+1}_+$. We list some of their important properties in the next lemma which we use constantly without specifically referring to it each time. The Whitney regions $U_Q$, $Q \in {\mathbb{D}}(E)$, satisfy the following properties. 1. The region $U_Q$ is a union of a bounded number of slightly fattened Whitney cubes $I^* \coloneqq (1+\tau)I$ such that $\ell(Q) \approx \ell(I)$ and ${\text{dist}}(Q,I) \approx \ell(Q)$. We denote the collection of these Whitney cubes by ${\mathcal{W}}_Q$. 2. The regions $U_Q$ have a bounded overlap property. In particular, we have $\sum_i |U_{Q_i}| \lesssim |\bigcup_i U_{Q_i}|$ for cubes $Q_i$ such that $Q_i \neq Q_j$ if $i \neq j$. 3. If $U_Q \cap U_P \neq \emptyset$, then $\ell(Q) \approx \ell(P)$ and ${\text{dist}}(Q,P) \lesssim \ell(Q)$. 4. For every $Y \in U_Q$ we have $\delta(Y) \approx \ell(Q)$. 5. For every $Q \in {\mathbb{D}}(E)$, we have $|U_Q| \approx \ell(Q)^{n+1} \approx \ell(Q) \cdot \sigma(Q)$. 6. If $Q \in {\mathcal{G}}$, then $U_Q$ breaks into exactly two connected components $U_Q^+$ and $U_Q^-$ such that $|U_Q^+| \approx |U_Q^-|$. 7. If $Q \in {\mathcal{B}}$, then $U_Q$ breaks into a bounded number of connected components $U_Q^i$ such that $|U_Q^i| \approx |U_Q^j|$ for all $i$ and $j$. 8. 
If $\text{diam}(E) = \infty$, then $\bigcup_{Q \in {\mathbb{D}}(E)} U_Q = \Omega$. 9. If $\text{diam}(E) < \infty$, then there exists a point $z_0 \in E$ and a constant $C \ge 1$ such that $B(z_0,C\cdot\text{diam}(E)) \setminus E \subset \bigcup_{Q \in {\mathbb{D}}(E)} U_Q$. The constant $C$ can be made large but this makes the implicit constant in the bounded overlap property large as well. For every $Q \in {\mathcal{G}}$, the components $U_Q^+$ and $U_Q^-$ have “center points” that we denote by $X_Q^+$ and $X_Q^-$, respectively. We also set $Y_Q^\pm \coloneqq X_{\widetilde{Q}}^\pm$, where $\widetilde{Q}$ is the dyadic parent of $Q$ unless $Q = Q({\mathcal{S}})$, in which case we set $\widetilde{Q} = Q$. We use these points in the construction in Section \[section:construction\_local\]. For any cube $Q \in {\mathcal{G}}$, the collection ${\mathcal{W}}_Q$ breaks naturally into two disjoint subcollections ${\mathcal{W}}_Q^+$ and ${\mathcal{W}}_Q^-$. For every $Q \in {\mathbb{D}}(E)$, we define the Carleson box as the set $$\begin{aligned} T_Q \coloneqq \text{int}\left( \bigcup_{Q' \in {\mathbb{D}}_Q} U_{Q'} \right).\end{aligned}$$ For each ${\mathcal{A}}\subset {\mathbb{D}}(E)$, we set $$\begin{aligned} \label{defin:sawtooth} \Omega_{\mathcal{A}}\coloneqq \text{int} \left( \bigcup_{Q' \in {\mathcal{A}}} U_{Q'} \right).\end{aligned}$$ Local $BV$ {#section:bounded_variation} ---------- We say that a function $f \in L^1_\text{loc}(\Omega)$ has *locally bounded variation* (denote $f \in BV_{\text{loc}}(\Omega)$) if for any bounded open set $U \subset \Omega$ such that $\overline{U} \subset \Omega$ we have $$\begin{aligned} \sup_{\substack{\overrightarrow{\Psi} \in C_0^1(U),\\ \|\overrightarrow{\Psi}\|_{L^\infty} \le 1}} \iint_{U} f(Y) \text{div}\overrightarrow{\Psi}(Y) \, dY < \infty. \end{aligned}$$ The latter expression can be shown to define a measure, by the Riesz representation theorem. We have the following: Suppose that $f \in BV_\text{loc}(\Omega)$.
Then there exists a Radon measure $\mu$ on $\Omega$ such that $$\begin{aligned} \mu(U) = \sup_{\substack{\overrightarrow{\Psi} \in C_0^1(U),\\ \|\overrightarrow{\Psi}\|_{L^\infty} \le 1}} \iint_{U} f(Y) \text{div}\overrightarrow{\Psi}(Y) \, dY \end{aligned}$$ for any open set $U \subset \Omega$; we call $\mu(U)$ the [*total variation*]{} of $f$ on $U$. Abusing notation, for an open set $U\subset \Omega$, we shall write $$\mu(U):= \iint_U |\nabla f(Y)| \, dY,$$ which should not be mistaken for a usual Lebesgue integral. Indeed, we may have situations where $A \subset B$ and $|A| = |B|$ but $\iint_A |\nabla f(Y)| \, dY \ll \iint_B |\nabla f(Y)| \, dY$. In particular, if $f \in BV_\text{loc}(\Omega)$, the sets $U, U_1, \ldots, U_k \subset \Omega$ are open and $U \subset \bigcup_i U_i$, then $$\begin{aligned} \label{estimate:additive_variation} \iint_U |\nabla f(Y)| \, dY \le \sum_i \iint_{U_i} |\nabla f(Y)| \, dY.\end{aligned}$$ We emphasize that we write $ |\nabla f| dY$ to indicate the variation measure of $f$, which is denoted by $\|Df\|$ in [@evansgariepy]; thus, for $f\in BV_{loc}(\Omega)$, and for any open set $U\subset \Omega$, we let $\iint_{U} |\nabla f| dY$ denote the total variation of $f$ over $U$. We shall continue to use this (mildly abusive) notational convention in the sequel, when working with elements of $BV_{loc}(\Omega)$. ${\mathcal{C}}$ and ${\mathcal{C}}_{\mathbb{D}}$ {#section:cc} ------------------------------------------------ For every $k \in {\mathbb{N}}$, we let $F_k$ be the ordered pair $(E,k)$. In this section, we let $Q_0 = E$ be the maximal dyadic cube if $E$ is a bounded set.
We define the operators ${\mathcal{C}}$ and ${\mathcal{C}}_{\mathbb{D}}$ by setting $$\begin{aligned} {\mathcal{C}}(f)(x) &\coloneqq \sup_{r > 0} \frac{1}{r^n} \iint_{B(x,r) \setminus E} |f(Y)| \, dY, \\ {\mathcal{C}}_{\mathbb{D}}(f)(x) &\coloneqq \sup_{Q \in {\mathbb{D}}^*, x \in Q} \frac{1}{\ell(Q)^n} \iint_{T_Q} |f(Y)| \, dY,\end{aligned}$$ where $$\begin{aligned} {\mathbb{D}}^* \coloneqq \left\{ \begin{array}{cl} {\mathbb{D}}(E), &\text{if } \ {\text{diam}}(E) = \infty \\ {\mathbb{D}}(E) \cup \{F_k \colon k=\Lambda_0,\Lambda_0+1,\ldots\}, &\text{if } \ {\text{diam}}(E) < \infty \end{array} \right.\end{aligned}$$ and $$\begin{aligned} T_{F_k} \coloneqq B(z_0, 2^k {\text{diam}}(E)), \ \ \ \ \ \ \ell(F_k) \coloneqq 2^k {\text{diam}}(E)\end{aligned}$$ for some fixed point $z_0 \in E$ and a number $\Lambda_0$ such that $T_{Q_0} \subset T_{F_{\Lambda_0}}$. We will also call the pairs $F_k$ cubes, although their actual structure is irrelevant, and we will interpret $x \in F_k$ simply as $x \in E$. In general, these functions are not pointwise equivalent: we only have ${\mathcal{C}}_{\mathbb{D}}(f)(x) \lesssim {\mathcal{C}}(f)(x)$ for every $x \in E$ (this follows from the ADR property of $E$ and the fact that $T_Q \subset B(z_Q, C\ell(Q))$ for a uniform constant $C$). However, in the $L^p$ sense, these functions are always comparable. This can be seen easily from the level set comparison formula that we prove next. This comparability is convenient for us since we construct the approximating function $\varphi$ in Theorem \[theorem:main\_result\] with the help of the dyadic Whitney regions. Thus, it is more natural for us to prove the desired $L^p$ bound for ${\mathcal{C}}_{\mathbb{D}}(\nabla \varphi)$ instead of ${\mathcal{C}}(\nabla \varphi)$. We prove the comparison formula by using well-known techniques from the proof of the corresponding formula for the Hardy-Littlewood maximal function and its dyadic version [@duoandikoetxea Lemma 2.12].
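The phenomenon behind this comparison is the same as for the Hardy-Littlewood maximal function: the dyadic variant is pointwise dominated by the full one and can be much smaller at individual points, yet the level sets have comparable measure. A discrete toy model (entirely our own illustration, not from the paper; `M` and `M_D` below are maximal averages over all, respectively dyadic, intervals of indices) makes this concrete.

```python
N = 16
f = [0.0] * N
f[7] = 8.0   # unit mass just left of the midpoint: the classical bad case for M_D

def avg(i, j):                    # average of f over the indices i..j (inclusive)
    return sum(f[i:j + 1]) / (j - i + 1)

def M(x):                         # maximal average over ALL intervals containing x
    return max(avg(i, j) for i in range(x + 1) for j in range(x, N))

def dyadic_intervals():
    size = 1
    while size <= N:
        for i in range(0, N, size):
            yield (i, i + size - 1)
        size *= 2

def M_D(x):                       # maximal average over DYADIC intervals containing x
    return max(avg(i, j) for (i, j) in dyadic_intervals() if i <= x <= j)

# dyadic intervals are intervals, so M_D <= M pointwise ...
assert all(M_D(x) <= M(x) for x in range(N))
# ... but not conversely: across the dyadic "wall" at x = 8, M_D collapses
assert M(8) == 4.0 and M_D(8) == 0.5
# still, the level sets are comparable in measure, as in the lemma below
# (the constants 4 and 3 are ad hoc for this data set)
lam = 1.0
assert sum(1 for x in range(N) if M(x) > 4 * lam) \
       <= 3 * sum(1 for x in range(N) if M_D(x) > lam)
```

The level-set step mirrors the proof strategy: a point where the full maximal average is large lies close to a dyadic interval where the dyadic average is already large, so the superlevel set of `M` is covered by dilates of maximal dyadic intervals for `M_D`.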
\[lemma:Lp-comparability\_of\_C\] Suppose that $f \in BV_\text{loc}(\Omega)$. Then there exist uniform constants $A_1$ and $A_2$ (depending on the dimension and the ADR constant) such that for every $\lambda > 0$ we have $$\begin{aligned} \sigma\left(\left\{ x \in E \colon {\mathcal{C}}(\nabla f)(x) > A_1 \lambda \right\}\right) \ \le \ A_2 \cdot \sigma\left(\left\{ x \in E \colon {\mathcal{C}}_{\mathbb{D}}(\nabla f)(x) > \lambda \right\} \right). \end{aligned}$$ In particular, $\| {\mathcal{C}}(\nabla f) \|_{L^p(E)} \le A_1 A_2^{1/p} \| {\mathcal{C}}_{\mathbb{D}}(\nabla f) \|_{L^p(E)}$ for every $p \in (1,\infty)$. We first note that if $r \gg {\text{diam}}(E)$, then by the definition of ${\mathcal{C}}_{\mathbb{D}}$ we have the bound $\tfrac{1}{r^n} \iint_{B(x,r) \setminus E} |\nabla f(Y)| \, dY \lesssim {\mathcal{C}}_{\mathbb{D}}(\nabla f)(x)$. Thus, we may assume that the balls in this proof have uniformly bounded radii $\lesssim {\text{diam}}(E)$ and the cubes belong to ${\mathbb{D}}(E)$. Naturally, we may also assume that the right hand side of the inequality is finite. We notice that if ${\mathcal{C}}_{\mathbb{D}}(\nabla f)(x) > \lambda$, then there exists a cube $Q \in {\mathbb{D}}(E)$ such that $x \in Q$ and $\tfrac{1}{\ell(Q)^n} \iint_{T_Q} |\nabla f(Y)| \, dY > \lambda$. By the definition of ${\mathcal{C}}_{\mathbb{D}}(\nabla f)$, we also have ${\mathcal{C}}_{\mathbb{D}}(\nabla f)(y) > \lambda$ for every $y \in Q$. In particular, we have $$\begin{aligned} \left\{ x \in E \colon {\mathcal{C}}_{\mathbb{D}}(\nabla f)(x) > \lambda \right\} = \bigcup_i Q_i \end{aligned}$$ for disjoint dyadic cubes $Q_i$. We now claim that if $A_1$ is large enough, then $$\begin{aligned} \label{inclusion_level_sets} \left\{ x \in E \colon {\mathcal{C}}(\nabla f)(x) > A_1 \lambda \right\} \subseteq \bigcup_i 2\Delta_{Q_i} \end{aligned}$$ where $\Delta_{Q_i}$ is the surface ball from \eqref{dyadic_cubes_balls_inclusion}. Suppose that $y \notin \bigcup_i 2\Delta_{Q_i}$ and let $r > 0$. Let us choose $k \in {\mathbb{Z}}$ so that $2^{k-1} \le r < 2^k$.
Now there exists a uniformly bounded number of dyadic cubes $R_1, R_2, \ldots, R_m$ such that $\ell(R_j) = 2^k$ and $R_j \cap \Delta(y,r) \neq \emptyset$ for every $j=1,2,\ldots,m$. We notice that none of the cubes $R_j$ can be contained in any of the cubes $Q_i$ since otherwise we would have $y \in 2\Delta_{R_j} \subset 2\Delta_{Q_i}$ by \eqref{dyadic_cubes_big_balls_inclusion}. Thus, we have $\tfrac{1}{\ell(R_j)^n} \iint_{T_{R_j}} |\nabla f(Y)| \, dY \le \lambda$ for every $j$. We can use a straightforward geometric argument to show that $B(y,r) \setminus E \subset \bigcup_{j=1}^m T_{R_j}$ (see [@hofmannmartellmayboroda pages 2353-2354]). Hence, since $r \approx \ell(R_j)$ for every $j$, we have $$\begin{aligned} \frac{1}{r^n} \iint_{B(y,r) \setminus E} |\nabla f(Y)| \, dY &\overset{\eqref{estimate:additive_variation}}{\lesssim} \sum_{j=1}^m \frac{1}{\ell(R_j)^n} \iint_{T_{R_j}} |\nabla f(Y)| \, dY \lesssim \lambda \end{aligned}$$ and $y \notin \left\{ x \in E \colon {\mathcal{C}}(\nabla f)(x) > A_1 \lambda \right\}$ for a large enough $A_1$. In particular, \eqref{inclusion_level_sets} holds and we have $$\begin{aligned} \sigma( \left\{ x \in E \colon {\mathcal{C}}(\nabla f)(x) > A_1 \lambda \right\}) &\le \sum_i \sigma(2\Delta_{Q_i}) \\ &\lesssim \sum_i \sigma(Q_i) \\ &= \sigma\left( \bigcup_i Q_i \right) = \sigma( \left\{ x \in E \colon {\mathcal{C}}_{\mathbb{D}}(\nabla f)(x) > \lambda \right\}). \end{aligned}$$ The $L^p$ comparability of ${\mathcal{C}}(\nabla f)$ and ${\mathcal{C}}_{\mathbb{D}}(\nabla f)$ follows immediately: $$\begin{aligned} \|{\mathcal{C}}(\nabla f)\|_{L^p(E)}^p &= p \int_0^\infty \lambda^{p-1} \sigma(\{x \in E \colon {\mathcal{C}}(\nabla f)(x) > \lambda\}) \, d\lambda \\ &\le A_2 p \int_0^\infty \lambda^{p-1} \sigma(\{x \in E \colon A_1 {\mathcal{C}}_{\mathbb{D}}(\nabla f)(x) > \lambda\}) \, d\lambda \\ &= A_1^p A_2 \|{\mathcal{C}}_{\mathbb{D}}(\nabla f)\|_{L^p(E)}^p.
\end{aligned}$$ Cones, non-tangential maximal functions and square functions {#subsection:non-tangential} ------------------------------------------------------------ We recall from [@hofmannmartellmayboroda Section 3] that the Whitney regions $U_Q$ and the fattened Whitney regions $\widehat{U}_Q$, $Q \in {\mathbb{D}}$, are defined using fattened Whitney boxes $I^* \coloneqq (1 + \tau)I$ and $I^{**} \coloneqq (1 + 2\tau)I$ respectively, where $\tau$ is a suitable positive parameter. Let us define the regions $\widehat{\textbf{U}}_Q$ using even fatter Whitney boxes $I^{***} \coloneqq (1 + 3\tau)I$. \[defin:non-tangential\] For any $x \in E$, we define the *cone at $x$* by setting $$\begin{aligned} \label{defin:cone} \Gamma(x) \coloneqq \bigcup_{Q \in {\mathbb{D}}(E), Q \ni x} \widehat{\textbf{U}}_Q. \end{aligned}$$ We define the *non-tangential maximal function* $N_*u$ and, for $u \in W^{1,2}_{\text{loc}}(\Omega)$, the square function $Su$ as follows: $$\begin{aligned} N_*u(x) &\coloneqq \sup_{Y \in \Gamma(x)} |u(Y)|, \ \ \ \ \ x \in E,\\ Su(x) &\coloneqq \left( \int_{\Gamma(x)} |\nabla u(Y)|^2 \delta(Y)^{1-n} \, dY \right)^{1/2}, \ \ \ \ \ x \in E. \end{aligned}$$ The Hytönen-Rosén techniques in [@hytonenrosen Section 6] rely on the use of local $S \lesssim N$ and $N \lesssim S$ estimates from [@hofmannkenigmayborodapipher]. Although a local $S \lesssim N$ estimate holds also in our context [@hofmannmartellmayboroda_square], a local $N \lesssim S$ estimate does not hold without suitable assumptions on connectivity. Thus, we cannot apply the Hytönen-Rosén techniques directly but we have to combine them with the techniques created in [@hofmannmartellmayboroda].
In Section \[section:construction\] we consider the following modified versions of $\Gamma(x)$ and $N_* u$ to bypass some additional technicalities: For every $x \in E$ and $\alpha > 0$ we define the *cone of $\alpha$-aperture at $x$* $\Gamma_\alpha(x)$ by setting $$\begin{aligned} \label{defin:modified_cone} \Gamma_\alpha(x) \coloneqq \bigcup_{Q \in {\mathbb{D}}(E), Q \ni x} \bigcup_{\substack{P \in {\mathbb{D}}(E), \\ \ell(P) = \ell(Q), \\ \alpha \Delta_Q \cap P \neq \emptyset}} \widehat{\textbf{U}}_P. \end{aligned}$$ Using the cones $\Gamma_\alpha(x)$, we define the *non-tangential maximal function of $\alpha$-aperture* $N^\alpha_* u$ by setting $N_*^\alpha u(x) \coloneqq \sup_{Y \in \Gamma_\alpha(x)} |u(Y)|$. If the set $E$ is bounded, then the cones \eqref{defin:cone} and \eqref{defin:modified_cone} are also bounded since we only constructed Whitney regions $U$ such that ${\text{diam}}(U) \lesssim {\text{diam}}(E)$. Thus, if $E$ is bounded, we use the cones $$\begin{aligned} &\widehat{\Gamma}(x) \coloneqq \Gamma(x) \cup B(z_0, C\cdot{\text{diam}}(E))^c \text{ and} \\ &\widehat{\Gamma}_\alpha(x) \coloneqq \Gamma_\alpha(x) \cup B(z_0, C_\alpha\cdot{\text{diam}}(E))^c \end{aligned}$$ for a suitable point $z_0 \in E$ and suitable constants $C$ and $C_\alpha$ instead. The usefulness of these modified cones and non-tangential maximal functions lies in the fact that for a suitable choice of $\alpha$ the cone $\Gamma_\alpha(x)$ contains some crucial points that may not be contained in $\Gamma(x)$, and that, in the $L^p$ sense, the function $N_*^\alpha u$ is not too much larger than $N_* u$. We prove the latter claim in the next lemma but postpone the proof of the first claim to Section \[section:construction\]. \[lemma:Lp-comparability\_of\_N\] Suppose that $u$ is a continuous function and let $\alpha \ge 1$. Then $\|N_*u\|_{L^p(E)} \approx_\alpha \|N_*^\alpha u\|_{L^p(E)}$ for every $p \in (0,\infty)$.
We only prove the claim for the case ${\text{diam}}(E) = \infty$ as the proof for the case ${\text{diam}}(E) < \infty$ is almost the same. Since the set $E$ is ADR, measures of balls with comparable radii are comparable. Using this property, it is simple and straightforward to generalize the classical proof of C. Fefferman and E. Stein [@feffermanstein Lemma 1] from ${\mathbb{R}}^{n+1}_+$ to $\Omega$ to show that $\| {\mathcal{N}}_\alpha u\|_{L^p(E)} \approx_{\alpha,\beta} \| {\mathcal{N}}_\beta u\|_{L^p(E)}$, where $$\begin{aligned} {\mathcal{N}}_\gamma u(x) \coloneqq \sup_{Y \in \widetilde{\Gamma}_\gamma(x)} |u(Y)|, \ \ \ \ \ \widetilde{\Gamma}_\gamma(x) \coloneqq \left\{ Y \in \Omega \colon {\text{dist}}(x,Y) < \gamma \cdot \delta(Y) \right\}. \end{aligned}$$ By the definition of the cones $\Gamma(x)$, there exists $\gamma_0 > 0$ such that $\widetilde{\Gamma}_{\gamma_0}(x) \subset \Gamma(x)$ for every $x \in E$. Thus, we only need to show that $\Gamma_\alpha(x) \subset \widetilde{\Gamma}_\gamma(x)$ for some uniform $\gamma = \gamma(\alpha)$ for all $x \in E$ since this gives us the estimate (\*) in the chain $$\begin{aligned} \|N_* u\|_{L^p(E)} \le \|N_*^\alpha u\|_{L^p(E)} &\overset{\text{(*)}}{\le} \|{\mathcal{N}}_{\gamma} u\|_{L^p(E)} \\ &\approx_{\gamma,\gamma_0} \|{\mathcal{N}}_{\gamma_0} u\|_{L^p(E)} \le \|N_* u\|_{L^p(E)}. \end{aligned}$$ Suppose that $Q, P \in {\mathbb{D}}(E)$, $x \in Q$, $\ell(Q) = \ell(P)$ and $\alpha \Delta_Q \cap P \neq \emptyset$. By the construction of the Whitney regions, for every $Y \in \widehat{\textbf{U}}_P$ we have $$\begin{aligned} \delta(Y) \approx \ell(P) \approx {\text{dist}}(Y,P). \end{aligned}$$ On the other hand, since $\alpha \Delta_Q \cap P \neq \emptyset$ and $\ell(P) = \ell(Q)$, we know that for any $y \in P$ we have $$\begin{aligned} {\text{dist}}(x,y) \lesssim \alpha \ell(Q) = \alpha \ell(P). \end{aligned}$$ Let us take any $z \in P$.
Now for every $Y \in \widehat{\textbf{U}}_P$ we have $$\begin{aligned} {\text{dist}}(x,Y) \le {\text{dist}}(x,z) + {\text{dist}}(z,Y) \lesssim \alpha \ell(P) + \ell(P) \lesssim \alpha \ell(P) \approx \alpha \cdot \delta(Y). \end{aligned}$$ In particular, there exists a uniform constant $\gamma = \gamma(\alpha)$ such that $\Gamma_\alpha(x) \subset \widetilde{\Gamma}_{\gamma}(x)$. Principal cubes =============== As in [@hytonenrosen], we define the numbers $M_{\mathbb{D}}(N_*u)(Q)$ by setting $$\begin{aligned} M_{{\mathbb{D}}}(N_* u)(Q) \coloneqq \sup_{Q \subseteq R \in {\mathbb{D}}} \fint_R N_* u(y) \, d\sigma(y)\end{aligned}$$ for every $Q \in {\mathbb{D}}(E) \eqqcolon {\mathbb{D}}$. We shall use a collection ${\mathcal{I}}\subset {\mathbb{D}}(E) = {\mathbb{D}}$ such that $$\begin{aligned} \label{initial_collection} {\mathcal{I}}\coloneqq \left\{Q_i \colon i \in \widetilde{{\mathbb{N}}} \right\}, \ \ \ \ \ Q_i \subsetneq Q_{i+1} \ \forall i, \ \ \ \ \ \bigcup_i Q_i = E,\end{aligned}$$ where $\widetilde{{\mathbb{N}}} = \{1,2,\ldots,n_0\}$ for some $n_0 \in {\mathbb{N}}$ if $E$ is bounded, and $\widetilde{{\mathbb{N}}} = {\mathbb{N}}$ otherwise. This type of collection exists by the last property in Theorem \[thm:dyadic\_cubes\] and, by the properties of dyadic cubes, the collection is Carleson. Let us construct a collection ${\mathcal{P}}\subset {\mathbb{D}}$ of “stopping cubes” using the construction described in [@hytonenrosen Section 6.1]. We set ${\mathcal{P}}_0 \coloneqq {\mathcal{I}}$ and consider all the cubes $Q' \in {\mathbb{D}}(E) \setminus {\mathcal{P}}_0$ such that 1. for some $Q \in {\mathcal{P}}_0$ we have $Q' \subsetneq Q$ and $$\begin{aligned} \label{stopping_condition_1} M_{{\mathbb{D}}}(N_* u)(Q') = \sup_{Q' \subseteq R \in {\mathbb{D}}} \fint_R N_* u(y) \, d\sigma(y) > 2M_{{\mathbb{D}}}(N_* u)(Q), \end{aligned}$$ 2. $Q'$ is not contained in any such $Q'' \subsetneq Q$ such that either $Q'' \in {\mathcal{P}}_0$ or \eqref{stopping_condition_1} holds for the pair $(Q'',Q)$.
We denote by ${\mathcal{P}}_1$ the collection we get by adding all the cubes $Q'$ satisfying both (a) and (b) to ${\mathcal{P}}_0$. We then continue this process for ${\mathcal{P}}_1$ in place of ${\mathcal{P}}_0$ and so on. We set ${\mathcal{P}}\coloneqq \bigcup_{k=0}^\infty {\mathcal{P}}_k$. We also set $$\begin{aligned} \pi_{\mathcal{P}}Q = \text{ the smallest cube } Q_0 \in {\mathcal{P}}\text{ such that } Q \subseteq Q_0.\end{aligned}$$ Here we mean smallest with respect to the side length. Naturally, we have $\pi_{\mathcal{P}}Q = Q$ for every $Q \in {\mathcal{P}}$, and since ${\mathcal{I}}\subset {\mathcal{P}}$, for every cube $Q \in {\mathbb{D}}$ there exists some cube $P_Q \in {\mathcal{P}}$ such that $Q \subset P_Q$. \[remark:simplification\] The collection ${\mathcal{P}}$ is an auxiliary collection that helps us to simplify the proofs of several claims. We use it in the following way. Suppose that we have a subcollection $\mathcal{W} \subset {\mathbb{D}}$ and we want to show that $\mathcal{W}$ satisfies a Carleson packing condition. Let $Q_0 \in {\mathbb{D}}$. Now for every $Q \in \mathcal{W}$ such that $Q \subset Q_0$, we have either $\pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q_0$ or $\pi_{\mathcal{P}}Q = P = \pi_{\mathcal{P}}P$ for some $P \in {\mathcal{P}}$ such that $P \subsetneq \pi_{\mathcal{P}}Q_0$. In particular, we have $$\begin{aligned} \sum_{Q \in \mathcal{W}, Q \subseteq Q_0} \sigma(Q) &= \sum_{\substack{Q \in \mathcal{W}, \\ \pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q_0}} \sigma(Q) + \sum_{P \in {\mathcal{P}}, P \subsetneq \pi_{\mathcal{P}}Q_0} \sum_{\substack{Q \in \mathcal{W},\\ \pi_{\mathcal{P}}Q = P}} \sigma(Q) \eqqcolon I_{Q_0} + \sum_{P \in {\mathcal{P}}, P \subsetneq \pi_{\mathcal{P}}Q_0} I_P. \end{aligned}$$ We prove in Lemma \[lemma:p-carleson\] below that the collection ${\mathcal{P}}$ satisfies a Carleson packing condition. 
Thus, if we can show that $I_{Q_0} \lesssim \sigma(Q_0)$ for an arbitrary cube $Q_0 \in {\mathcal{P}}$, we get $$\begin{aligned} \sum_{P \in {\mathcal{P}}, P \subsetneq \pi_{\mathcal{P}}Q_0} I_P \lesssim \sum_{P \in {\mathcal{P}}, P \subsetneq \pi_{\mathcal{P}}Q_0} \sigma(P) \lesssim \sigma(Q_0). \end{aligned}$$ Hence, to show that the collection $\mathcal{W}$ satisfies a Carleson packing condition, it is enough to show that $I_{Q_0} \lesssim \sigma(Q_0)$ for every cube $Q_0 \in {\mathbb{D}}$. The usefulness of this simplification is that if $Q \in {\mathbb{D}}\setminus {\mathcal{P}}$ and $\pi_{\mathcal{P}}Q = P$, then by the construction of the collection ${\mathcal{P}}$ we have $$\begin{aligned} M_{{\mathbb{D}}}(N_* u)(Q) \le 2M_{{\mathbb{D}}}(N_* u)(P). \end{aligned}$$ We use this property several times in the proofs. For any cube $Q_0 \in {\mathbb{D}}$, we say that $R \in {\mathcal{P}}$ is a *${\mathcal{P}}$-proper subcube of $Q_0$* if we have $M_{{\mathbb{D}}}(N_* u)(R) > 2M_{{\mathbb{D}}}(N_* u)(Q_0)$ and $M_{{\mathbb{D}}}(N_* u)(R') \le 2M_{{\mathbb{D}}}(N_* u)(Q_0)$ for every intermediate cube $R \subsetneq R' \subsetneq Q_0$. \[lemma:p-carleson\] For every $Q_0 \in {\mathbb{D}}(E)$ we have $$\begin{aligned} \label{estimate:Pc_carleson} \sum_{P \in {\mathcal{P}}, P \subseteq Q_0} \sigma(P) \lesssim \sigma(Q_0). \end{aligned}$$ Let us start by noting that we may assume that $Q_0 \in {\mathcal{P}}$ since otherwise we can simply consider the ${\mathcal{P}}$-maximal subcubes of $Q_0$. To be more precise, the ${\mathcal{P}}$-maximal subcubes of $Q_0$ are disjoint by definition, so their measures sum to at most $\sigma(Q_0)$. Now, if $Q \in {\mathcal{P}}$ and $Q \subset Q_0$, we know that $Q$ is one of the ${\mathcal{P}}$-maximal subcubes of $Q_0$ or it is properly contained in one of them.
Hence, if we prove the estimate for the case $Q_0 \in {\mathcal{P}}$, it implies the same estimate, with the same implicit constant, for the case $Q_0 \notin {\mathcal{P}}$. Suppose first that we have a collection of disjoint cubes $Q' \subset Q$ that satisfy $M_{{\mathbb{D}}}(N_* u)(Q') > 2M_{{\mathbb{D}}}(N_* u)(Q)$. Then, for every such cube $Q'$ we have $M_{{\mathbb{D}}}(N_* u)(Q') > \fint_Q N_*u \, d\sigma$ and thus, for every point $x \in Q'$ we get $$\begin{aligned} M_{{\mathbb{D}}}(1_Q N_*u)(x) &= \sup_{R \in {\mathbb{D}}, x \in R \subseteq Q} \fint_R N_*u \, d\sigma \\ &\ge \sup_{R \in {\mathbb{D}}, Q' \subseteq R \subsetneq Q} \fint_R N_*u \, d\sigma = M_{{\mathbb{D}}}(N_*u)(Q') > 2M_{{\mathbb{D}}}(N_*u)(Q). \end{aligned}$$ In particular, by the $L^1 \to L^{1,\infty}$ boundedness of $M_{\mathbb{D}}$ we have $$\begin{aligned} \nonumber \sum_{Q'} \sigma(Q') &\le \sigma\left(\left\{x \in E \colon M_{{\mathbb{D}}}(1_Q N_*u)(x) > 2M_{{\mathbb{D}}}(N_*u)(Q) \right\}\right) \\ \label{apu1} &\le \frac{1}{ 2M_{{\mathbb{D}}}(N_*u)(Q)} \|1_Q N_*u\|_{L^1(\sigma)} = \frac{\fint_Q N_*u \, d\sigma}{M_{{\mathbb{D}}}(N_* u)(Q)} \frac{\sigma(Q)}{2} \le \frac{\sigma(Q)}{2}. \end{aligned}$$ We notice that if $R \in {\mathcal{P}}\setminus {\mathcal{I}}$, then $R$ is a ${\mathcal{P}}$-proper subcube of some cube $Q \in {\mathcal{P}}$. To be more precise, if $R \in {\mathcal{P}}\setminus {\mathcal{I}}$, then there exists a chain of cubes $R = R_1 \subsetneq R_2 \subsetneq \ldots \subsetneq R_k$, $R_i \in {\mathcal{P}}$, such that for every $i = 1,2,\ldots,k-1$, $R_i$ is a ${\mathcal{P}}$-proper subcube of $R_{i+1}$ and $R_k \in {\mathcal{I}}$. If such a chain of length $k$ from $R$ to $Q$ exists, we denote $R \in {\mathcal{P}}_Q^k$.
By applying estimate \[apu1\] $k$ times, we see that for each $Q \in {\mathcal{P}}$ we have $$\begin{aligned} \label{apu2} \sum_{R \in {\mathcal{P}}_Q^k} \sigma(R) \le \sum_{R \in {\mathcal{P}}_Q^{k-1}} \sum_{S \in {\mathcal{P}}_Q^k, S \subsetneq R} \sigma(S) \le \frac{1}{2} \sum_{R \in {\mathcal{P}}_Q^{k-1}} \sigma(R) \le \ldots \le \frac{1}{2^{k-1}} \sum_{R \in {\mathcal{P}}_Q^1} \sigma(R) \le \frac{\sigma(Q)}{2^k}. \end{aligned}$$ Now it is straightforward to prove the packing condition. We have $$\begin{aligned} \sum_{P \in {\mathcal{P}}, P \subseteq Q_0} \sigma(P) \ &= \ \sum_{P \in {\mathcal{I}}, P \subseteq Q_0} \sigma(P) + \sum_{P \in {\mathcal{P}}\setminus {\mathcal{I}}, P \subseteq Q_0} \sigma(P) \\ \ &\le \ C_{\mathcal{I}}\sigma(Q_0) + \sum_{Q \in {\mathcal{I}}, Q \subseteq Q_0} \sum_{k=1}^\infty \sum_{P \in {\mathcal{P}}_Q^k} \sigma(P) \\ \ &\overset{\eqref{apu2}}{\le} \ C_{\mathcal{I}}\sigma(Q_0) + \sum_{Q \in {\mathcal{I}}, Q \subseteq Q_0} \sum_{k=1}^\infty \frac{\sigma(Q)}{2^k} \\ \ &= C_{\mathcal{I}}\sigma(Q_0) + \sum_{Q \in {\mathcal{I}}, Q \subseteq Q_0} \sigma(Q) \\ &\le C_{\mathcal{I}}\sigma(Q_0) + C_{\mathcal{I}}\sigma(Q_0) \end{aligned}$$ which proves the claim. “Large Oscillation” cubes ========================= Before constructing the approximating function, we consider two collections of cubes that will act as the basis of our construction. In this section, we show that the union of the collection of “large oscillation” cubes $$\begin{aligned} \mathcal{R} \coloneqq \left\{Q \in {\mathbb{D}}\colon {\underset{U_Q^i}{\text{osc}} \,} u > \varepsilon M_{\mathbb{D}}(N_*u)(Q) \text{ for some } i\right\}\end{aligned}$$ and the collection of “bad” cubes from the corona decomposition satisfies a Carleson packing condition. We apply this property in the technical estimates in Section \[section:construction\].
\[lemma:oscillation\_carleson\] For every $Q_0 \in {\mathbb{D}}(E)$ we have $$\begin{aligned} \label{estimate:oscillation_carleson} \sum_{R \in {\mathcal{R}}, R \subseteq Q_0} \sigma(R) \lesssim \frac{1}{\varepsilon^2} \sigma(Q_0). \end{aligned}$$ We break the proof into three parts. **Part 1: Simplification.** First, by Remark \[remark:simplification\], it is enough to show that $$\begin{aligned} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}, R \subset Q_0}} \sigma(R) \lesssim \frac{1}{{\varepsilon}^2} \sigma(Q_0).\end{aligned}$$ Also, since the “bad” collection in the bilateral corona decomposition is Carleson, it suffices to consider the “good” cubes in ${\mathcal{R}}$, i.e. the collection ${\mathcal{R}}\cap \mathcal{G}$. Thus, we may assume that $Q_0 \in {\mathcal{R}}\cap \mathcal{G}$ since otherwise we may simply consider the $({\mathcal{R}}\cap \mathcal{G})$-maximal subcubes of $Q_0$, just as with the collection ${\mathcal{P}}$ in the proof of Lemma \[lemma:p-carleson\]. Furthermore, since the Whitney regions $U_R$ of the “good” cubes $R$ break into two components $U_R^+$ and $U_R^-$, it is enough to show that $$\begin{aligned} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}^+, R \subset Q_0}} \sigma(R) \lesssim \frac{1}{{\varepsilon}^2} \sigma(Q_0),\end{aligned}$$ where ${\mathcal{R}}^+ \coloneqq \{Q \in {\mathcal{R}}\cap \mathcal{G} \colon \text{osc}_{U_Q^+} u > \varepsilon M_{\mathbb{D}}(N_*u)(Q) \}$, as the arguments for the corresponding collection ${\mathcal{R}}^-$ are the same. Since $Q_0 \in {\mathcal{G}}$, there exists a stopping time regime ${\mathcal{S}}_0 = {\mathcal{S}}_0(Q_0)$ such that $Q_0 \in {\mathcal{S}}_0$. We note that if we have $Q \subset Q_0$ for a cube $Q \in {\mathcal{R}}^+$, then either $Q \in {\mathcal{S}}_0$ or, by the coherency and disjointness of the stopping time regimes, $Q \in {\mathcal{S}}$ for some regime ${\mathcal{S}}$ such that $Q({\mathcal{S}}) \subsetneq Q_0$.
Let $\mathfrak{S} = \mathfrak{S}(Q_0)$ be the collection of the stopping time regimes ${\mathcal{S}}$ such that $Q({\mathcal{S}}) \subsetneq Q_0$. Then we have $$\begin{aligned} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}^+, R \subset Q_0}} \sigma(R) &= \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}^+ \cap {\mathcal{S}}_0, R \subset Q_0}} \sigma(R) + \sum_{{\mathcal{S}}\in \mathfrak{S}} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}^+ \cap {\mathcal{S}}, R \subset Q_0}} \sigma(R) \\ &\eqqcolon I_{Q_0} + II_{Q_0}.\end{aligned}$$ Let us show that if $I_{Q_0} \lesssim \sigma(Q_0)$ for every $Q_0 \in {\mathbb{D}}$, then $II_{Q_0} \lesssim \sigma(Q_0)$ for every $Q_0 \in {\mathbb{D}}$. Suppose that $Q \in {\mathcal{S}}\in \mathfrak{S}$. Since $Q({\mathcal{S}}) \subsetneq Q_0$, we have $\pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q_0$ only if $\pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q({\mathcal{S}}) = \pi_{\mathcal{P}}Q_0$. Thus, it holds that $$\begin{aligned} II_{Q_0} = \sum_{{\mathcal{S}}\in \mathfrak{S}} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0}{\sum_{R \in {\mathcal{R}}^+ \cap {\mathcal{S}}, R \subset Q_0}} \sigma(R) \le \sum_{{\mathcal{S}}\in \mathfrak{S}} \underset{\pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q({\mathcal{S}})}{\sum_{R \in {\mathcal{R}}^+ \cap {\mathcal{S}}, R \subset Q_0}} \sigma(R) = \sum_{{\mathcal{S}}\in \mathfrak{S}} I_{Q({\mathcal{S}})} \lesssim \sum_{{\mathcal{S}}\in \mathfrak{S}} \sigma(Q({\mathcal{S}})) \lesssim \sigma(Q_0)\end{aligned}$$ by the Carleson packing property of the collection $\{Q({\mathcal{S}})\}_{\mathcal{S}}$. Hence, to prove \[estimate:oscillation\_carleson\], it suffices to show $I_{Q_0} \lesssim \sigma(Q_0)$.
**Part 2: $\delta(Y) \lesssim D_\mathcal{A}(Y)$ in $\widehat{U}_P^+$.** Let $\mathcal{A} \subset {\mathcal{G}}$ be a collection of cubes and set $$\begin{aligned} \Omega_\mathcal{A}^* \coloneqq \text{int} \left( \bigcup_{Q \in \mathcal{A}} \widehat{\textbf{U}}_Q^+ \right) = \text{int} \left( \bigcup_{Q \in \mathcal{A}} \bigcup_{I \in \mathcal{W}_Q^+} I^{***} \right)\end{aligned}$$ and $D_\mathcal{A}(Y) \coloneqq {\text{dist}}(Y,\partial \Omega_\mathcal{A}^*)$. Recall the definitions of $I^{**}$ and $I^{***}$ from Section \[subsection:non-tangential\]. Let us fix a cube $P \in \mathcal{A}$ and a point $Y \in \widehat{U}_P^+ = \bigcup_{I \in {\mathcal{W}}_P^+} I^{**}$. We now claim that $\delta(Y) \lesssim D_\mathcal{A}(Y)$. We notice first that although the regions $\widehat{\textbf{U}}_Q^+$ may overlap, we have $\ell(Q) \approx \ell(Q') \approx \ell(P)$ for all overlapping regions $\widehat{\textbf{U}}_Q^+$ and $\widehat{\textbf{U}}_{Q'}^+$ such that $Y \in \widehat{\textbf{U}}_Q^+ \cap \widehat{\textbf{U}}_{Q'}^+$ (see (3.2), (3.8) and related estimates in [@hofmannmartellmayboroda]). Also, the fattened Whitney boxes $I^{***}$ may overlap, but we have $\ell(I^{***}) \approx \ell(I) \approx \ell(J) \approx \ell(J^{***}) \approx \ell(P)$ if $Y \in I^{***} \cap J^{***}$. By a simple geometrical consideration we know that $${\text{dist}}(Y,\partial I^{***}) \approx_{\tau} \ell(I).$$ It now holds that $D_\mathcal{A}(Y) = {\text{dist}}(Y,\partial I^{***})$ for some $I^{***} \ni Y$ or $D_\mathcal{A}(Y) \ge {\text{dist}}(Y,\partial I^{***})$ for every such $I^{***}$.
In particular, we have $$\begin{aligned} D_\mathcal{A}(Y) &\ge \inf_{Q \in \mathcal{A}, Y \in \widehat{\textbf{U}}_Q^+} \inf_{I \in \mathcal{W}_Q^+} {\text{dist}}(Y,\partial I^{***}) \\ &\approx \inf_{Q \in \mathcal{A}, Y \in \widehat{\textbf{U}}_Q^+} \inf_{I \in \mathcal{W}_Q^+} \ell(I) \approx \inf_{Q \in \mathcal{A}, Y \in \widehat{\textbf{U}}_Q^+} \ell(Q) \approx \ell(P).\end{aligned}$$ Now we can take any $I \in {\mathcal{W}}_P^+$ such that $Y \in I^{**}$ and notice that $\ell(P) \approx \ell(I) \approx \ell(I^{**}) \approx \text{dist}(I^{**},\partial \Omega) \approx {\text{dist}}(Y, \partial \Omega)$. Hence $D_\mathcal{A}(Y) \gtrsim \delta(Y)$ for every $Y \in \widehat{U}_P^+$. **Part 3: The sum $I_{Q_0}$.** To simplify the notation, let us write $$\begin{aligned} {\mathcal{R}}_0^+ \coloneqq \{R \in {\mathcal{R}}^+ \cap {\mathcal{S}}_0 \colon R \subset Q_0, \pi_{\mathcal{P}}R = \pi_{\mathcal{P}}Q_0\}.\end{aligned}$$ We consider the region $\Omega^{***}$, $$\Omega^{***} := \text{int} \left( \bigcup_{R \in {\mathcal{R}}_0^+} \widehat{\textbf{U}}_R^+\right)$$ and set $D(Y) \coloneqq \text{dist}(Y,\partial \Omega^{***})$ for every $Y \in \Omega$. Suppose that $R \in {\mathcal{R}}_0^+$. 
By Part 2, we know that $$\begin{aligned} \label{distance_estimate} \delta(Y) \lesssim D(Y) \ \ \ \ \text{for every } Y \in \widehat{U}_R^+.\end{aligned}$$ We also notice that $$\begin{aligned} \Omega^{***} = \text{int} \left(\bigcup_{R \in {\mathcal{R}}_0^+} \widehat{\textbf{U}}_R^+ \right) \subset \text{int} \left( \bigcup_{R \in {\mathcal{R}}_0^+} \bigcup_{x \in R} \Gamma(x) \right),\end{aligned}$$ so we have $$\begin{aligned} \nonumber \sup_{X \in \Omega^{***}} |u(X)| = \sup_{R \in {\mathcal{R}}_0^+} \sup_{X \in \widehat{\textbf{U}}_R^+} |u(X)| &\le \sup_{R \in {\mathcal{R}}_0^+} \inf_{x \in R} N_*u(x) \\ \label{control_in_omega***} &\le \sup_{R \in {\mathcal{R}}_0^+} M_{\mathbb{D}}(N_*u)(R) \lesssim M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0).\end{aligned}$$ In the last inequality we used the definition of ${\mathcal{R}}_0^+$ (see Remark \[remark:simplification\]). By [@hofmannmartellmayboroda (5.8)] (or [@hofmannmartell Section 4]), we have $$\begin{aligned} \label{estimate:oscillation_square} \left( {\underset{U_R^+}{\text{osc}} \,} u \right)^2 \lesssim \ell(R)^{-n} \iint_{\widehat{U}_R^+} |\nabla u(Y)|^2 \delta(Y) \, dY\end{aligned}$$ for every $R \in {\mathcal{R}}^+$. Notice also that if $R \in {\mathcal{R}}_0^+$, then by the definition of the numbers $M_{\mathbb{D}}(N_*u)(Q)$ we have $M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0) \le M_{\mathbb{D}}(N_*u)(R)$ simply because $R \subset \pi_{\mathcal{P}}Q_0$.
Thus, using (A) the definition of the numbers $M_{\mathbb{D}}(N_*u)(Q)$, (B) the ADR property of $E$, (C) the definition of the collection ${\mathcal{R}}^+$ and (D) the bounded overlap of the regions $\widehat{U}_R^+$ we get $$\begin{aligned} \label{inequality_chain1} M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0)^2 I_{Q_0} &\overset{\text{(A)}}{\le} \sum_{R \in {\mathcal{R}}_0^+} M_{\mathbb{D}}(N_*u)(R)^2 \sigma(R) \\ \nonumber &\overset{\text{(B)}}{\lesssim} \sum_{R \in {\mathcal{R}}_0^+} M_{\mathbb{D}}(N_*u)(R)^2 \ell(R)^n \\ \nonumber &\overset{\text{(C)}, \eqref{estimate:oscillation_square}}{\lesssim} \frac{1}{\varepsilon^2} \sum_{R \in {\mathcal{R}}_0^+} \iint_{\widehat{U}_R^+} |\nabla u(Y)|^2 \delta(Y) \, dY \\ \nonumber &\overset{\eqref{distance_estimate}}{\lesssim} \frac{1}{\varepsilon^2} \sum_{R \in {\mathcal{R}}_0^+} \iint_{\widehat{U}_R^+} |\nabla u(Y)|^2 D(Y) \, dY \\ \nonumber &\overset{\text{(D)}}{\lesssim} \frac{1}{\varepsilon^2} \iint_{\Omega^{***}} |\nabla u(Y)|^2 D(Y) \, dY\end{aligned}$$ Since $Q_0 \in {\mathcal{R}}$, we notice that the collection ${\mathcal{R}}_0^+$ forms a semi-coherent subregime of ${\mathcal{S}}_0$. Thus, by [@hofmannmartellmayboroda Lemma 3.24], the set $\Omega^{***}$ is a chord-arc domain (i.e. NTA domain with ADR boundary). Furthermore, by [@azzametal Theorem 1.2], $\partial \Omega^{***}$ is UR. 
Since $\Omega^{***} \subset B(x_{Q_0}, C\ell(Q_0))$ for a suitable structural constant $C$ (see [@hofmannmartellmayboroda (3.14)]), the ADR property of $\partial \Omega$ and [@hofmannmartellmayboroda Theorem 1.1] give us $$\begin{aligned} \label{inequality_chain2} \frac{1}{\varepsilon^2} \iint_{\Omega^{***}} |\nabla u(Y)|^2 D(Y) \, dY \lesssim \frac{1}{\varepsilon^2} \|u\|_{L^\infty(\Omega^{***})}^2 \cdot \sigma(Q_0) \overset{\eqref{control_in_omega***}}{\lesssim} \frac{1}{\varepsilon^2} M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0)^2 \cdot \sigma(Q_0).\end{aligned}$$ Since the numbers $M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0)^2$ cancel from \[inequality\_chain1\] and \[inequality\_chain2\], this concludes the proof of the lemma. Since the bad collection ${\mathcal{B}}$ in the bilateral corona decomposition satisfies a Carleson packing condition, we immediately get the following corollary: \[corollary:bad\_cubes\_carleson\] For every $Q_0 \in {\mathbb{D}}(E)$ we have $$\begin{aligned} \label{corollary:carleson_bad_oscillation} \sum_{R \in ({\mathcal{R}}\cup {\mathcal{B}}), R \subseteq Q_0} \sigma(R) \lesssim \frac{1}{\varepsilon^2} \sigma(Q_0). \end{aligned}$$ Generation cubes ================ For every stopping time regime ${\mathcal{S}}$, we construct a collection of *generation cubes* $G({\mathcal{S}})$ as in [@hofmannmartellmayboroda Section 5] but with modified stopping conditions. For clarity, let us repeat the key details and definitions from [@hofmannmartellmayboroda Section 5] here. We set $Q^0 \coloneqq Q({\mathcal{S}})$ and $G_0 \coloneqq \{Q^0\}$, start subdividing $Q^0$ dyadically and stop when we reach a cube $Q \in {\mathbb{D}}_{Q^0}$ for which at least one of the following conditions holds: 1. $Q$ is not in ${\mathcal{S}}$, 2. $|u(Y_Q^+) - u(Y_{Q^0}^+)| > \varepsilon M_{\mathbb{D}}(N_*u)(Q)$, 3. $|u(Y_Q^-) - u(Y_{Q^0}^-)| > \varepsilon M_{\mathbb{D}}(N_*u)(Q)$. The points $Y_Q^{\pm}$ were defined in Section \[subsection:whitney\_regions\].
We denote the collection of maximal subcubes of $Q^0$ extracted by these stopping time conditions by ${\mathcal{F}}_1 = {\mathcal{F}}_1(Q^0)$ and we let $G_1 = G_1(Q^0) \coloneqq {\mathcal{F}}_1 \cap {\mathcal{S}}$ be the collection of *first generation cubes*. We notice that the collection of subcubes of $Q^0$ that are not contained in any stopping cube $Q \in {\mathcal{F}}_1$ forms a semicoherent subregime of ${\mathcal{S}}$. We denote this subregime by ${\mathcal{S}}' = {\mathcal{S}}'(Q^0)$. If $G_1$ is non-empty, we repeat the construction above for the cubes $Q^1 \in G_1$ but replace $Y_{Q^0}^\pm$ by $Y_{Q^1}^\pm$ in conditions (2) and (3). Continuing like this gives us collections $G_k$ for $k \ge 0$ (notice that starting from some $k$ the collections might be empty), where $$\begin{aligned} G_{k+1}(Q^0) \coloneqq \bigcup_{Q^k \in G_k(Q^0)} G_1(Q^k).\end{aligned}$$ To emphasize the dependency on ${\mathcal{S}}$, we denote $$\begin{aligned} G_k({\mathcal{S}}) \coloneqq G_k(Q({\mathcal{S}})), \ \ \ \ \ G({\mathcal{S}}) \coloneqq \bigcup_{k \ge 0} G_k({\mathcal{S}}),\end{aligned}$$ and we set the collection of all generation cubes to be $$\begin{aligned} G^* \coloneqq \bigcup_{{\mathcal{S}}} G({\mathcal{S}}).\end{aligned}$$ By this construction, we have $$\begin{aligned} \label{decomposition:stopping_time_regimes} {\mathcal{S}}= \bigcup_{Q \in G({\mathcal{S}})} {\mathcal{S}}'(Q)\end{aligned}$$ for each stopping time regime ${\mathcal{S}}$, where ${\mathcal{S}}'(Q)$ is a semicoherent subregime of ${\mathcal{S}}$ with maximal element $Q$ and the subregimes ${\mathcal{S}}'(Q)$ are disjoint. Our next goal is to prove that the collection $G^*$ satisfies a Carleson packing condition: \[lemma:stopping\_carleson\] For every $Q_0 \in {\mathbb{D}}$ we have $$\begin{aligned} \sum_{S \in G^*, S \subseteq Q_0} \sigma(S) \lesssim \frac{1}{{\varepsilon}^2} \sigma(Q_0). \end{aligned}$$ Before the proof, let us make two observations that simplify it. 1.
By arguing as in the proof of Lemma \[lemma:oscillation\_carleson\], we may assume that $Q_0 \in G^*$ and it suffices to show that $$\begin{aligned} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in G^* \cap {\mathcal{S}}_0, S \subset Q_0}} \sigma(S) \lesssim \frac{1}{{\varepsilon}^2} \sigma(Q_0), \end{aligned}$$ where ${\mathcal{S}}_0$ is the unique stopping time regime such that $Q_0 \in {\mathcal{S}}_0$. 2. For every $k \ge 0$ and $S \in G_k({\mathcal{S}}_0)$, let $G_1(S) \subset G({\mathcal{S}}_0)$ be the $G^*$-children of $S$, i.e. the cubes $P \in G_{k+1}({\mathcal{S}}_0)$ such that $P \subsetneq S$. For each such $S$ we have $$\begin{aligned} \label{stopping_carleson_estimate} M_{\mathbb{D}}(N_* u)(S)^2 \underset{\pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q_0}{\sum_{Q \in G_1(S)}} \sigma(Q) \lesssim \frac{1}{{\varepsilon}^2} \iint_{\Omega_{\mathscr{S}(S)}} |\nabla u(Y)|^2 \delta(Y) \, dY, \end{aligned}$$ where $\mathscr{S}(S) \coloneqq {\mathcal{S}}'(S) \cap \{Q \in {\mathbb{D}}\colon \pi_{\mathcal{P}}Q = \pi_{\mathcal{P}}Q_0\}$ is a semicoherent subregime of ${\mathcal{S}}_0$ and $\Omega_{\mathscr{S}(S)}$ is the associated sawtooth region. The estimate \[stopping\_carleson\_estimate\] is a counterpart of [@hofmannmartellmayboroda Lemma 5.11] and it follows easily from the original proof. To be a little more precise, instead of having ${\varepsilon}^2 \le 100|u(Y_Q^+) - u(Y_S^+)|^2$ for every $Q \in G_1(S)$ as in [@hofmannmartellmayboroda (5.13)], we have ${\varepsilon}^2 M_{\mathbb{D}}(N_*u)(S)^2 \le {\varepsilon}^2 M_{\mathbb{D}}(N_*u)(Q)^2 \le |u(Y_Q^+) - u(Y_S^+)|^2$ for every $Q \in G_1(S)$. The rest of the proof works as it is.
Let us follow the arguments in the proof of [@hofmannmartellmayboroda Lemma 5.16] and write $$\begin{aligned} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in G^* \cap {\mathcal{S}}_0, S \subset Q_0}} \sigma(S) &= \sum_{k \ge 0} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in G_k(Q_0)}} \sigma(S) \\ &= \sigma(Q_0) + \sum_{k \ge 1} \sum_{S' \in G_{k-1}(Q_0)} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in G_1(S')}} \sigma(S) \eqqcolon \sigma(Q_0) + I. \end{aligned}$$ Using \[stopping\_carleson\_estimate\] and the definition of the sawtooth regions gives us $$\begin{aligned} \nonumber M_{\mathbb{D}}(N_* u)(\pi_{\mathcal{P}}Q_0)^2 I &\overset{\eqref{stopping_carleson_estimate}}{\lesssim} \frac{1}{{\varepsilon}^2} \sum_{k \ge 1} \sum_{S' \in G_{k-1}(Q_0)} \iint_{\Omega_{\mathscr{S}(S')}} |\nabla u(Y)|^2 \delta(Y) \, dY \\ \label{estimate:triple_sum} &\le \frac{1}{{\varepsilon}^2} \sum_{k \ge 1} \sum_{S' \in G_{k-1}(Q_0)} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in {\mathcal{S}}'(S')}} \iint_{U_S} |\nabla u(Y)|^2 \delta(Y) \, dY \end{aligned}$$ We denote $\Omega_0 \coloneqq \bigcup_{S \in G^*_{Q_0}} U_S$ where $G^*_{Q_0} \coloneqq \{S \in {\mathbb{D}}\colon \pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0\} \cap \bigcup_{k \ge 1} \bigcup_{S' \in G_{k-1}(Q_0)} {\mathcal{S}}'(S')$. By the construction, $\bigcup_{k \ge 1} \bigcup_{S' \in G_{k-1}(Q_0)} {\mathcal{S}}'(S')$ is a coherent subregime of ${\mathcal{S}}_0$ with maximal element $Q_0$ and thus, $G^*_{Q_0}$ is a semicoherent subregime of ${\mathcal{S}}_0$. In particular, the sawtooth region $\Omega_0$ splits into two chord-arc domains $\Omega_0^\pm$ by [@hofmannmartellmayboroda Lemma 3.24]. Furthermore, by [@azzametal Theorem 1.2], both $\partial \Omega_0^+$ and $\partial \Omega_0^-$ are UR. We also note that $\Omega_0 \subset B(x_{Q_0}, C \ell(Q_0))$ (see [@hofmannmartellmayboroda (3.14)]).
Thus, since the triple sum in \[estimate:triple\_sum\] runs over a collection of disjoint cubes, we can use the bounded overlap of the Whitney regions, [@hofmannmartellmayboroda Theorem 1.1] and the ADR property of $E$ to show that $$\begin{aligned} \frac{1}{{\varepsilon}^2} \sum_{k \ge 1} \sum_{S' \in G_{k-1}(Q_0)} \underset{\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0}{\sum_{S \in {\mathcal{S}}'(S')}} \iint_{U_S} |\nabla u(Y)|^2 \delta(Y) \, dY &\lesssim \frac{1}{{\varepsilon}^2} \iint_{\Omega_0} |\nabla u(Y)|^2 \delta(Y) \, dY \\ &\lesssim \frac{1}{{\varepsilon}^2} \|u\|_{L^\infty(\Omega_0)}^2 \sigma(Q_0). \end{aligned}$$ Since $\pi_{\mathcal{P}}S = \pi_{\mathcal{P}}Q_0$ for every $S \in G^*_{Q_0}$, by the construction of the collection ${\mathcal{P}}$ (see Remark \[remark:simplification\]) we have $M_{\mathbb{D}}(N_*u)(S) \le 2M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0)$ for every $S \in G^*_{Q_0}$. In particular: $$\begin{aligned} \|u\|_{L^\infty(\Omega_0)}^2 &\le \sup_{S \in G^*_{Q_0}} \sup_{Y \in U_S} |u(Y)|^2 \\ &\le \sup_{S \in G^*_{Q_0}} \inf_{x \in S} N_*u(x)^2 \\ &\le \sup_{S \in G^*_{Q_0}} M_{\mathbb{D}}(N_*u)(S)^2 \lesssim M_{\mathbb{D}}(N_*u)(\pi_{\mathcal{P}}Q_0)^2. \end{aligned}$$ Since the numbers $M_{\mathbb{D}}(N_* u)(\pi_{\mathcal{P}}Q_0)^2$ cancel out, we have proven the Carleson packing condition of $G^*$. Construction of the approximating function {#section:construction} ========================================== Before we construct the function, we prove the following technical lemma related to the modified cones $\Gamma_\alpha(x)$ that we defined in Section \[subsection:non-tangential\].
Recall that $$\begin{aligned} \label{defin:large_cones} \Gamma_\alpha(x) = \bigcup_{Q \in {\mathbb{D}}(E), Q \ni x} \bigcup_{\substack{P \in {\mathbb{D}}(E), \\ \ell(P) = \ell(Q), \\ \alpha \Delta_Q \cap P \neq \emptyset}} \widehat{\textbf{U}}_P.\end{aligned}$$ \[lemma:large\_cones\] There exists a uniform constant $\alpha_0 > 0$ such that the following holds: if $Q \in {\mathbb{D}}(E)$ is any cube and $P \in G^*$ is a generation cube such that $\ell(Q) \le \ell(P)$ and $\Omega_{{\mathcal{S}}'(P)} \cap T_Q \neq \emptyset$, then $X_P^\pm, Y_P^\pm \in \Gamma_{\alpha_0}(x)$ for every $x \in Q$. We start by noticing that there exists $\alpha > 0$ (depending only on the structural constants) such that $$\begin{aligned} \label{implication:union} \text{if } P \text{ appears in the union } \eqref{defin:large_cones}\text{, then also } \widetilde{P} \text{ appears in the same union}, \end{aligned}$$ where $\widetilde{P}$ is the dyadic parent of $P$. Indeed, if we have $Q, P \in {\mathbb{D}}(E)$, $x \in Q$, $\ell(Q) = \ell(P)$ and $\alpha \Delta_Q \cap P \neq \emptyset$, then also $x \in \widetilde{Q}$, $\ell(\widetilde{Q}) = \ell(\widetilde{P})$ and $\alpha \Delta_{\widetilde{Q}} \cap \widetilde{P} \neq \emptyset$. The last claim follows from the fact that $\emptyset \neq \alpha \Delta_Q \cap P \subset \alpha \Delta_{\widetilde{Q}} \cap \widetilde{P}$. Let us then prove the claim of the lemma by following the argument in the proof of [@hofmannmartellmayboroda Lemma 5.20]. Since $\Omega_{{\mathcal{S}}'(P)} \cap T_Q \neq \emptyset$, there exist cubes $P' \in {\mathcal{S}}'(P)$ and $Q' \subset Q$ such that $U_{P'} \cap U_{Q'} \neq \emptyset$. By the properties of the Whitney regions, we have ${\text{dist}}(Q',P') \lesssim \ell(Q') \approx \ell(P')$. Let us consider two cases: 1. Suppose that $\ell(P') \ge \ell(Q)$. Then there exists a cube $Q''$ such that $Q \subset Q''$ and $\ell(Q'') = \ell(P')$. 
Since $Q' \subset Q''$, we have ${\text{dist}}(Q'',P') \le {\text{dist}}(Q',P') \lesssim \ell(Q') \le \ell(Q'')$. Thus, for a large enough $\alpha_0$, we have $\widehat{\bf{U}}_{P'} \subset \Gamma_{\alpha_0}(x)$ for every $x \in Q$ and the claim follows from \[implication:union\]. 2. Suppose that $\ell(P') < \ell(Q)$. Then by the semicoherency of ${\mathcal{S}}'(P)$, there exists a cube $P'' \in {\mathcal{S}}'(P)$ such that $P' \subset P'' \subset P$ and $\ell(P'') = \ell(Q)$. Since $P' \subset P''$ and $Q' \subset Q$, we know that ${\text{dist}}(P'',Q) \le {\text{dist}}(P',Q') \lesssim \ell(Q') \le \ell(Q)$. Thus, for a large enough $\alpha_0$, we have $\widehat{\bf{U}}_{P''} \subset \Gamma_{\alpha_0}(x)$ for every $x \in Q$. Again, the claim follows now from \[implication:union\]. Constructing the function in $T_{Q_0}$ {#section:construction_local} -------------------------------------- In this section we adopt the terminology from other papers (including [@hofmannmartellmayboroda]) and say that a component $U_{Q}^i$ is *blue* if $\text{osc}_{U_Q^i} u \le \varepsilon M_{\mathbb{D}}(N_*u)(Q)$ and *red* if $\text{osc}_{U_Q^i} u > \varepsilon M_{\mathbb{D}}(N_*u)(Q)$. We recall the construction of the local functions $\varphi_0$, $\varphi_1$ and $\varphi$ from [@hofmannmartellmayboroda Section 5]. We start by defining an ordered family of good cubes $\{Q_k\}_{k \ge 1}$ relative to a fixed cube $Q_0 \in {\mathbb{D}}$. If $Q_0 \in {\mathcal{G}}$, then $Q_0 \in {\mathcal{S}}$ for some stopping time regime ${\mathcal{S}}$ and thus, $Q_0 \in {\mathcal{S}}_1'$ for some subregime in \[decomposition:stopping\_time\_regimes\]. In this case, we set $Q_1 = Q({\mathcal{S}}_1')$. If $Q_0 \notin {\mathcal{G}}$, then we let $Q_1$ be any good subcube of $Q_0$ such that $Q_1$ is maximal with respect to the side length; such a cube must exist since ${\mathcal{B}}$ is Carleson.
Since $Q_1 \in {\mathcal{G}}$, we have $Q_1 \in {\mathcal{S}}$ for some stopping time regime ${\mathcal{S}}$, and by the coherency of ${\mathcal{S}}$, we have $Q_1 = Q({\mathcal{S}}_1')$ for some subregime in \[decomposition:stopping\_time\_regimes\]. Once the cube $Q_1$ has been chosen in these two cases, we let $Q_2$ be a subcube of maximum side length in $({\mathbb{D}}_{Q_0} \cap {\mathcal{G}}) \setminus {\mathcal{S}}_1'$ and so on. This gives us a sequence of cubes $Q_k \in {\mathcal{G}}$ such that $\ell(Q_1) \ge \ell(Q_2) \ge \ell(Q_3) \ge \cdots$, $Q_k = Q({\mathcal{S}}_k')$ and ${\mathcal{G}}\cap {\mathbb{D}}_{Q_0} \subset \bigcup_{k \ge 1} {\mathcal{S}}_k'$. We define recursively $$\begin{aligned} A_1 \coloneqq \Omega_{{\mathcal{S}}_1'}, \ \ \ \ \ A_k \coloneqq \Omega_{{\mathcal{S}}_k'} \setminus \left( \bigcup_{j=1}^{k-1} A_j \right), \ k \ge 2\end{aligned}$$ and $$\begin{aligned} A_1^\pm \coloneqq \Omega_{{\mathcal{S}}_1'}^\pm, \ \ \ \ \ A_k^\pm \coloneqq \Omega_{{\mathcal{S}}_k'}^\pm \setminus \left( \bigcup_{j=1}^{k-1} A_j \right), \ k \ge 2,\end{aligned}$$ where $$\begin{aligned} \Omega_{{\mathcal{S}}_k'} \coloneqq \text{int}\left( \bigcup_{Q \in {\mathcal{S}}_k'} U_Q \right) \ \ \ \ \ \text{ and } \ \ \ \ \ \Omega_{{\mathcal{S}}_k'}^\pm \coloneqq \text{int}\left( \bigcup_{Q \in {\mathcal{S}}_k'} U_Q^\pm \right).\end{aligned}$$ We also set $$\begin{aligned} \Omega_0 \coloneqq \bigcup_k \Omega_{{\mathcal{S}}_k'} = \bigcup_k A_k \ \ \ \ \ \text{ and } \ \ \ \ \ \Omega_0^\pm \coloneqq \bigcup_k A_k^\pm.\end{aligned}$$ We now define $\varphi_0$ on $\Omega_0$ by setting $$\begin{aligned} \varphi_0 \coloneqq \sum_k \left(u(Y_{Q_k}^+)1_{A_k^+} + u(Y_{Q_k}^-)1_{A_k^-}\right).\end{aligned}$$ As for the rest of the subcubes of ${\mathbb{D}}_{Q_0}$, we let $\{Q(k)\}_k$ be some fixed enumeration of the cubes $({\mathcal{R}}\cup {\mathcal{B}}) \cap {\mathbb{D}}_{Q_0}$ and define recursively $$\begin{aligned} V_1 \coloneqq U_{Q(1)}, \ \ \ \ \ V_k \coloneqq U_{Q(k)} \setminus \left( \bigcup_{j=1}^{k-1} V_j \right), \ k \ge 2.\end{aligned}$$ Each Whitney region $U_{Q(k)}$ splits into a uniformly bounded number of connected components
$U_{Q(k)}^i$. Thus, we may further split $$\begin{aligned} V_1^i \coloneqq U_{Q(1)}^i, \ \ \ \ \ V_k^i \coloneqq U_{Q(k)}^i \setminus \left( \bigcup_{j=1}^{k-1} V_j \right), \ k \ge 2\end{aligned}$$ and then define $$\begin{aligned} \varphi_1(Y) \coloneqq \left\{ \begin{array}{cl} u(Y), & \text{ if } U_{Q(k)}^i \text{ is red} \\ u(X_I), & \text{ if } U_{Q(k)}^i \text{ is blue} \end{array} \right., Y \in V_k^i,\end{aligned}$$ on each $V_k^i$, where $X_I$ is the center of a fixed Whitney cube $I \subset U_{Q(k)}^i$. We then denote $\Omega_1 \coloneqq \text{int} \left( \bigcup_{Q \in \left( \mathcal{B} \cup {\mathcal{R}}\right) \cap {\mathbb{D}}_{Q_0}} U_Q \right) = \text{int} \left( \bigcup_k V_k \right)$, set the values of $\varphi_0$ and $\varphi_1$ to be $0$ outside their original domains of definition and define the function $\varphi$ on the Carleson box $T_{Q_0}$ as $$\begin{aligned} \varphi(Y) \coloneqq \left\{ \begin{array}{cl} \varphi_0(Y), & Y \in T_{Q_0} \setminus \overline{\Omega_1} \\ \varphi_1(Y), & Y \in \Omega_1 \end{array} \right. ,\end{aligned}$$ From the point of view of ${\mathcal{C}}_{\mathbb{D}}$, the values of $\varphi$ on the boundary of $\Omega_1$ are not important since the $(n+1)$-dimensional measure of $\partial \Omega_1$ is $0$. Thus, we may simply set $\varphi|_{\partial \Omega_1} = u$ since this is convenient from the point of view of $N_*(u - \varphi)$. Verifying the estimates on $Q_0$ -------------------------------- Let us fix a cube $Q_0 \in {\mathbb{D}}(E)$. We start by verifying the following three estimates on $Q_0$. \[lemma:local\_pointwise\_bounds\] Suppose that $x \in Q_0$, $Q' \in {\mathbb{D}}_{Q_0}$ and $\overrightarrow{\Psi} \in C_0^1(W_{Q'})$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$, where $W_{Q'} \subset \Omega$ is any bounded and open set satisfying $T_{Q'} \subset W_{Q'}$. Then the following estimates hold: 1. $N_*(1_{T_{Q_0}} (u - \varphi))(x) \le {\varepsilon}M_{\mathbb{D}}(N_* u)(x)$, 2. 
$$\begin{aligned} \iint_{T_{Q'} \setminus \overline{\Omega_1}} \varphi_0 \text{div} \overrightarrow{\Psi} \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q'}} N_*^{\alpha_0}u \, d\sigma, \end{aligned}$$ 3. $$\begin{aligned} \iint_{T_{Q'}} \varphi_1 \text{div} \overrightarrow{\Psi} \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q'}} N_* u \, d\sigma, \end{aligned}$$ where $\beta > 0$ is a uniform constant and $\alpha_0 > 0$ is the constant in Lemma \[lemma:large\_cones\]. 1. Let us estimate the quantity $|u(Y) - \varphi(Y)|$ for different $Y \in T_{Q_0}$. 1. Suppose that $Y \in V_k^i$ such that $U_{Q(k)}^i$ is a red component. Then we have $\varphi(Y) = u(Y)$ and $|u(Y) - \varphi(Y)| = 0$. 2. Suppose that $Y \in V_k^i$ such that $U_{Q(k)}^i$ is a blue component. Then $\varphi(Y) = u(X_I)$ for a Whitney cube $I \subseteq U_{Q(k)}^i$ and $|u(Y) - \varphi(Y)| \le \text{osc}_{U_{Q(k)}^i} u \le {\varepsilon}M_{\mathbb{D}}(N_*u)(Q(k))$. 3. Suppose that $Y \in T_{Q_0} \setminus \overline{\Omega_1}$. Then $Y \in A_k^\pm$ for some $k$ such that $Q_k \notin {\mathcal{R}}$. Without loss of generality, we may assume that $Y \in A_k^+$. Now $\varphi(Y) = u(Y_{Q_k}^+)$ and, since $Q_k \notin {\mathcal{R}}$, we have $|u(Y) - \varphi(Y)| \le \text{osc}_{U_{Q_k}^+} u \le {\varepsilon}M_{\mathbb{D}}(N_*u)(Q_k)$. Combining the previous estimates gives us $$\begin{aligned} N_*(1_{T_{Q_0}}(u-\varphi))(x) &= \sup_{Y \in \Gamma(x) \cap T_{Q_0}} |u(Y) - \varphi(Y)| \\ &= \sup_{\substack{Q \in {\mathbb{D}}_{Q_0} \\ Q \ni x}} \sup_{Y \in U_Q} |u(Y) - \varphi(Y)| \\ &\le \sup_{\substack{Q \in {\mathbb{D}}_{Q_0} \\ Q \ni x}} {\varepsilon}M_{\mathbb{D}}(N_*u)(Q) \\ &\le {\varepsilon}M_{\mathbb{D}}(N_*u)(x). \end{aligned}$$ 2. We first notice that since $\overrightarrow{\Psi}$ is compactly supported in $\Omega$, we have ${\text{dist}}(\text{supp} \, \overrightarrow{\Psi}, E) > 0$.
Thus, for each $A_k$, the set $(T_{Q'} \cap A_k \cap \text{supp} \, \overrightarrow{\Psi}) \setminus \overline{\Omega_1}$ consists of a union of boundedly overlapping sets that are “nice” enough for integration by parts. The divergence theorem gives us $$\begin{aligned} \iint_{T_{Q'} \setminus \overline{\Omega_1}} \varphi_0 \, \text{div} \overrightarrow{\Psi} &\le \sum_k \iint_{(T_{Q'} \cap A_k) \setminus \overline{\Omega_1}} \varphi_0 \, \text{div} \overrightarrow{\Psi} \\ &= \sum_k \iint_{(T_{Q'} \cap A_k) \setminus \overline{\Omega_1}} \text{div}(\varphi_0 \overrightarrow{\Psi}) \\ &\le \sum_k \left( \iint_{\partial( (T_{Q'} \cap A_k^+) \setminus \overline{\Omega_1})} \varphi_0 \overrightarrow{\Psi} \cdot \overrightarrow{N} + \iint_{\partial( (T_{Q'} \cap A_k^-) \setminus \overline{\Omega_1})} \varphi_0 \overrightarrow{\Psi} \cdot \overrightarrow{N} \right) \\ &\le \sum_k |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap \partial (A_k^+ \setminus \overline{\Omega_1}) ) \\ &\quad + \sum_k |u(Y_{Q_k}^-)| \cdot {\mathcal{H}}^n( T_{Q'} \cap \partial (A_k^- \setminus \overline{\Omega_1}) ) \\ &\eqqcolon I^+ + I^-. \end{aligned}$$ We only consider the sum $I^+$ since the sum $I^-$ can be handled the same way as $I^+$. We get $$\begin{aligned} {\mathcal{H}}^n( T_{Q'} \cap \partial (A_k^+ \setminus \overline{\Omega_1})) \le {\mathcal{H}}^n( T_{Q'} \cap \partial A_k^+ ) + {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial \Omega_1 ) \end{aligned}$$ and thus, we have $$\begin{aligned} I^+ \le \sum_k |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap \partial A_k^+ ) + \sum_k |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial \Omega_1 ) \eqqcolon I_1^+ + I_2^+. \end{aligned}$$ Let us consider the sum $I_1^+$ first. We split $$\begin{aligned} I_1^+ = \sum_{k \colon Q_k \subset Q'} |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n(T_{Q'} \cap \partial A_k^+) + \sum_{k\colon Q_k \not\subset Q'} |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n(T_{Q'} \cap \partial A_k^+) \eqqcolon J_1^+ + J_2^+.
\end{aligned}$$ By [@hofmannmartellmayboroda Proposition A.2, (5.21)] we know that $\partial A_k^+$ satisfies an upper ADR bound. Thus, since $\partial(T_{Q'} \cap A_k^+) \subset \overline{\Omega_{{\mathcal{S}}_k'}}$ and ${\text{diam}}(\Omega_{{\mathcal{S}}_k'}) \lesssim \ell(Q_k)$, we get $$\begin{aligned} J_1^+ \lesssim \sum_{k: Q_k \subset Q'} |u(Y_{Q_k}^+)| \cdot \ell(Q_k)^n \approx \sum_{k: Q_k \subset Q'} |u(Y_{Q_k}^+)| \cdot \sigma(Q_k) \le \sum_{k: Q_k \subset Q'} \inf_{Q_k} N_*u \cdot \sigma(Q_k). \end{aligned}$$ Since the collection of generation cubes is $C{\varepsilon}^{-2}$-Carleson by Lemma \[lemma:stopping\_carleson\], it is $C{\varepsilon}^2$-sparse by Theorem \[thm:sparse\_carleson\]. Thus, we get $$\begin{aligned} \sum_{k: Q_k \subset Q'} \inf_{Q_k} N_*u \cdot \sigma(Q_k) &\lesssim \frac{1}{{\varepsilon}^2} \sum_{k: Q_k \subset Q'} \inf_{Q_k} N_*u \cdot \sigma(E_{Q_k}) \\ &\le \frac{1}{{\varepsilon}^2} \sum_{k: Q_k \subset Q'} \int_{E_{Q_k}} N_*u \, d\sigma \\ &\le \frac{1}{{\varepsilon}^2} \int_{Q'} N_* u \, d\sigma \end{aligned}$$ Let us then consider the sum $J_2^+$. By the same argument as in [@hofmannmartellmayboroda p. 2370], we know that the number of the cubes $Q_k$ such that $T_{Q'} \cap \partial A_k^+ \neq \emptyset$ and $\ell(Q_k) \ge \ell(Q')$ is uniformly bounded. 
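The sparse-collection bookkeeping used for $J_1^+$ above — replacing $\sigma(Q_k)$ by $\sigma(E_{Q_k})$ at the cost of a factor ${\varepsilon}^{-2}$ and then summing the disjoint pieces into a single integral — can be illustrated numerically. The following toy sketch (illustration only, not part of the proof; the family, the test function and the grid are arbitrary choices) uses the $\tfrac12$-sparse family $Q_k = [0,2^{-k})$, $E_{Q_k} = [2^{-(k+1)}, 2^{-k})$ on $[0,1)$ with Lebesgue measure:

```python
import numpy as np

def sparse_sum_bound(f, n_levels=10, n_grid=2 ** 14):
    """Toy check of the sparse-family trick behind the bound for J_1^+:
    for an eta-sparse family {Q_k} with pairwise disjoint E_{Q_k} in Q_k,
        sum_k inf_{Q_k} f * |Q_k|  <=  (1/eta) sum_k int_{E_{Q_k}} f
                                   <=  (1/eta) int f.
    Family on [0,1): Q_k = [0, 2^-k), E_{Q_k} = [2^-(k+1), 2^-k),
    so the E_{Q_k} are disjoint and |E_{Q_k}| = |Q_k|/2 (eta = 1/2)."""
    x = (np.arange(n_grid) + 0.5) / n_grid   # midpoint grid on [0,1)
    fx = f(x)
    dx = 1.0 / n_grid
    eta = 0.5
    lhs, mid = 0.0, 0.0
    for k in range(n_levels):
        in_q = x < 2.0 ** -k                   # indicator of Q_k
        in_e = in_q & (x >= 2.0 ** -(k + 1))   # indicator of E_{Q_k}
        lhs += fx[in_q].min() * 2.0 ** -k      # inf_{Q_k} f * |Q_k|
        mid += fx[in_e].sum() * dx             # int_{E_{Q_k}} f
    total = fx.sum() * dx                      # int_{[0,1)} f
    return lhs, mid / eta, total / eta

lhs, mid, tot = sparse_sum_bound(lambda x: 1.0 + np.sin(6.0 * x) ** 2)
```

The returned triple is increasing, mirroring the two inequalities of the chain; disjointness of the sets $E_{Q_k}$ is what collapses the sum into one integral.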
Thus, by Lemma \[lemma:large\_cones\] and the fact that $\partial A_k^+$ satisfies an upper ADR bound (as we noted above), we get $$\begin{aligned} \sum_{\substack{k\colon Q_k \not\subset Q', \\ T_{Q'} \cap \partial A_k^+ \neq \emptyset, \\ \ell(Q') \le \ell(Q_k)}} |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n(T_{Q'} \cap \partial A_k^+) &\le \sum_{\substack{k\colon Q_k \not\subset Q', \\ T_{Q'} \cap \partial A_k^+ \neq \emptyset, \\ \ell(Q') \le \ell(Q_k)}} \inf_{Q'} N_*^{\alpha_0} u \cdot {\mathcal{H}}^n(T_{Q'} \cap \partial A_k^+) \\ &\quad \lesssim \inf_{Q'} N_*^{\alpha_0} u \cdot \left( {\text{diam}}(T_{Q'}) \right)^n \\ &\quad \approx \inf_{Q'} N_*^{\alpha_0} u \cdot \sigma(Q') \\ &\quad \le \int_{Q'} N_*^{\alpha_0}u \, d\sigma. \end{aligned}$$ For the cubes $Q_k$ in $J_2^+$ such that $\ell(Q_k) \le \ell(Q')$ we may use the same argument as in [@hofmannmartellmayboroda p. 2370] to see that every such cube is contained in some nearby cube $Q''$ of $Q'$ of the same side length as $Q'$ with ${\text{dist}}(Q',Q'') \lesssim \ell(Q')$. The number of such $Q''$ is uniformly bounded. By using the same techniques as with the sum $J_1^+$, we get $$\begin{aligned} \sum_{\substack{k\colon Q_k \not\subset Q', \\ T_{Q'} \cap \partial A_k^+ \neq \emptyset, \\ \ell(Q') \ge \ell(Q_k)}} |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n(T_{Q'} \cap \partial A_k^+) &\lesssim \sum_{Q''} \frac{1}{{\varepsilon}^2} \int_{Q''} N_*u \, d\sigma \\ &\le \frac{1}{{\varepsilon}^2} \int_{\beta_0 \Delta_{Q'}} N_*u \, d\sigma \end{aligned}$$ for some uniform constant $\beta_0$. Thus, we get $$\begin{aligned} J_2^+ \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta_0 \Delta_{Q'}} N_*^{\alpha_0}u \, d\sigma. \end{aligned}$$ Let us then consider the sum $I_2^+$. We first notice that $$\begin{aligned} {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial \Omega_1 ) \le \sum_m {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ). 
\end{aligned}$$ Thus, we get $$\begin{aligned} I_2^+ &\le \sum_k \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) \\ &= \sum_{k: Q_k \subset Q'} \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) \\ &\quad + \sum_{k: Q_k \not\subset Q'} \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) \\ &\eqqcolon J_3^+ + J_4^+. \end{aligned}$$ Suppose that $A_k^+ \cap \partial V_m \neq \emptyset$. Then, by the construction, we have $\ell(Q(m)) \lesssim \ell(Q_k)$ and ${\text{dist}}(Q(m),Q_k) \lesssim \ell(Q_k)$. Thus, there exists a uniform constant $\beta_1 > 0$ such that $Q(m) \subset \beta_1 \Delta_{Q_k}$ and the set $\beta_1 \Delta_{Q_k}$ can be covered by a uniformly bounded number of disjoint cubes with approximately the same side length as $Q_k$. In particular, since $T_{Q'} \cap A_k^+ \cap \partial V_m$ satisfies an upper ADR bound for every $m$ by the construction and [@hofmannmartellmayboroda (5.25), Proposition A.2], we get $$\begin{aligned} J_3^+ &= \sum_{k: Q_k \subset Q'} \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) \\ &\lesssim \sum_{k: Q_k \subset Q'} |u(Y_{Q_k}^+)| \sum_{m: Q(m) \subset \beta_1 \Delta_{Q_k}} \ell(Q(m))^n \\ &\lesssim \sum_{k: Q_k \subset Q'} |u(Y_{Q_k}^+)| \sum_{m: Q(m) \subset \beta_1 \Delta_{Q_k}} \sigma(Q(m)) \\ &\overset{\eqref{corollary:carleson_bad_oscillation}}{\lesssim} \frac{1}{{\varepsilon}^2} \sum_{k: Q_k \subset Q'} |u(Y_{Q_k}^+)| \cdot \sigma(Q_k). \end{aligned}$$ Now we can use exactly the same arguments as with the sum $J_1^+$ to see that $$\begin{aligned} J_3^+ \lesssim \frac{1}{{\varepsilon}^2} \int_{Q'} N_* u \, d\sigma. \end{aligned}$$ Finally, let us handle the sum $J_4^+$. 
Just as above with the sum $J_3^+$, for some uniform constant $\beta_2 > 0$ we get $$\begin{aligned} \sum_{\substack{k: Q_k \not\subset Q' \\ \ell(Q') \le \ell(Q_k)}} \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) &\le \sum_{\substack{k: Q_k \not\subset Q' \\ T_{Q'} \cap A_k^+ \neq \emptyset \\ \ell(Q') \le \ell(Q_k)}} |u(Y_{Q_k}^+)| \sum_{m: V_m \subset \beta_2 \Delta_{Q'}} \sigma(Q(m)) \\ &\overset{\eqref{corollary:carleson_bad_oscillation}}{\lesssim} \frac{1}{{\varepsilon}^2} \sum_{\substack{k: Q_k \not\subset Q' \\ T_{Q'} \cap A_k^+ \neq \emptyset \\ \ell(Q') \le \ell(Q_k)}} |u(Y_{Q_k}^+)| \cdot \sigma(Q') \\ &\overset{\ref{lemma:large_cones}}{\le} \frac{1}{{\varepsilon}^2} \sum_{\substack{k: Q_k \not\subset Q' \\ T_{Q'} \cap A_k^+ \neq \emptyset \\ \ell(Q') \le \ell(Q_k)}} \inf_{Q'} N_*^{\alpha_0} u \cdot \sigma(Q') \\ &\lesssim \frac{1}{{\varepsilon}^2} \int_{Q'} N_*^{\alpha_0} u \, d\sigma, \end{aligned}$$ where we used the fact that there exists only a uniformly bounded number of cubes $Q_k$ that satisfy the condition of the sum by [@hofmannmartellmayboroda Lemma 5.20]. By using the same argument as with the latter half of the sum $J_2^+$, we get the bound $$\begin{aligned} \sum_{\substack{k: Q_k \not\subset Q' \\ \ell(Q') \ge \ell(Q_k)}} \sum_m |u(Y_{Q_k}^+)| \cdot {\mathcal{H}}^n( T_{Q'} \cap A_k^+ \cap \partial V_m ) \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta_3 \Delta_{Q'}} N_*u \, d\sigma \end{aligned}$$ for some uniform constant $\beta_3 > 0$. Thus, we have $$\begin{aligned} J_4^+ \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta_3 \Delta_{Q'}} N_*^{\alpha_0}u \, d\sigma. \end{aligned}$$ Combining the estimates for $J_1^+$, $J_2^+$, $J_3^+$ and $J_4^+$ gives us the claim. 3.
By [@hofmannmartellmayboroda (5.25)], we have $$\begin{aligned} \label{estimate:component_boundary} {\mathcal{H}}^n(\partial V_k^i) \le {\mathcal{H}}^n(\partial V_k) \lesssim \ell(Q(k))^n \approx \sigma(Q(k)) \end{aligned}$$ for every $Q(k)$ and $i$. We also note that $\partial T_{Q'}$ satisfies an upper ADR bound [@hofmannmartellmayboroda Proposition A.2]. Recall that the function $\varphi_1$ is supported on $\Omega_1$. Thus, since the sets $V_l$ are disjoint, we get $$\begin{aligned} \iint_{T_{Q'}} \varphi_1 \text{div} \overrightarrow{\Psi} &= \sum_l \iint_{T_{Q'} \cap V_l} \varphi_1 \text{div} \overrightarrow{\Psi} \\ &= \sum_l \sum_i \iint_{T_{Q'} \cap V_l^i} \varphi_1 \text{div} \overrightarrow{\Psi} \\ &= \sum_l \sum_i \left( \iint_{T_{Q'} \cap V_l^i} \text{div}( \varphi_1 \overrightarrow{\Psi} ) - \iint_{T_{Q'} \cap V_l^i} \nabla \varphi_1 \cdot \overrightarrow{\Psi} \right) \\ &\le \sum_l \sum_i \left( \left| \iint_{T_{Q'} \cap V_l^i} \text{div}( \varphi_1 \overrightarrow{\Psi} )\right| + \iint_{T_{Q'} \cap V_l^i} |\nabla \varphi_1| \right). \end{aligned}$$ Let us first assume that $U_{Q(l)}^i$ is a blue component. Recall that since the collection ${\mathcal{R}}\cup {\mathcal{B}}$ is $C{\varepsilon}^{-2}$-Carleson by Corollary \[corollary:bad\_cubes\_carleson\], it is $C{\varepsilon}^2$-sparse by Theorem \[thm:sparse\_carleson\]. Thus, by the definition of $\varphi_1$ and the divergence theorem, we have $$\begin{aligned} \left| \iint_{T_{Q'} \cap V_l^i} \text{div}( \varphi_1 \overrightarrow{\Psi} )\right| + \iint_{T_{Q'} \cap V_l^i} |\nabla \varphi_1| &= \left| \iint_{T_{Q'} \cap V_l^i} \text{div}( \varphi_1 \overrightarrow{\Psi}) \right| \\ &\le \iint_{T_{Q'} \cap \partial V_l^i} |u(X_{I(l,i)})| \\ &\overset{\eqref{estimate:component_boundary}}{\le} \inf_{Q(l)} N_*u \cdot \sigma(Q(l)) \\ &\lesssim \frac{1}{{\varepsilon}^2} \inf_{Q(l)} N_*u \cdot \sigma(E_{Q(l)}). \end{aligned}$$ Suppose then that $U_{Q(l)}^i$ is a red component. 
Since $\partial V_l^i \subset \Gamma(y)$ for every $y \in Q(l)$, we get $| \iint_{T_{Q'} \cap V_l^i} \text{div}( u \overrightarrow{\Psi} )| \le \tfrac{1}{{\varepsilon}^2} \inf_{Q(l)} N_*u \cdot \sigma(E_{Q(l)})$ by the same argument as above. Also, by the definition of the function $\varphi_1$, Caccioppoli’s inequality and the sparseness arguments, we have $$\begin{aligned} \iint_{T_{Q'} \cap V_l^i} |\nabla \varphi_1| &= \iint_{V_l^i} |\nabla u| \\ &\lesssim \left( \iint_{V_l^i} |\nabla u|^2 \right)^{1/2} \ell(Q(l))^{(n+1)/2} \\ &\lesssim \frac{1}{\ell(Q(l))} \left( \iint_{\widehat{U}_{Q(l)}} |u|^2 \right)^{1/2} \ell(Q(l))^{(n+1)/2} \\ &\lesssim \frac{1}{\ell(Q(l))} \left( \iint_{\widehat{U}_{Q(l)}} \inf_{Q(l)} (N_*u)^2 \right)^{1/2} \ell(Q(l))^{(n+1)/2} \\ &\lesssim \frac{1}{\ell(Q(l))} \inf_{Q(l)} (N_*u) \cdot \ell(Q(l))^{n+1} \\ &\approx \sigma(Q(l)) \cdot \inf_{Q(l)} (N_*u) \lesssim \frac{1}{{\varepsilon}^2} \sigma(E_{Q(l)}) \cdot \inf_{Q(l)} N_*u. \end{aligned}$$ Thus, since every Whitney region $U_Q$ has only a uniformly bounded number of components $U_Q^i$, we get $$\begin{aligned} \iint_{T_{Q'}} |\nabla \varphi_1| \lesssim \sum_l \frac{1}{{\varepsilon}^2} \sigma(E_{Q(l)}) \cdot \inf_{Q(l)} N_*u. \end{aligned}$$ Since $V_l$ meets $T_{Q'}$, we know that ${\text{dist}}(Q(l),Q') \lesssim \ell(Q')$. In particular, all the relevant cubes $Q(l)$ are contained in some nearby cubes $Q''$ such that $\ell(Q'') \approx \ell(Q')$ and ${\text{dist}}(Q'',Q') \lesssim \ell(Q')$. The number of such $Q''$ is uniformly bounded. Thus, since the sets $E_{Q(l)}$ are disjoint, we get $$\begin{aligned} \sum_l \frac{1}{{\varepsilon}^2} \sigma(E_{Q(l)}) \cdot \inf_{Q(l)} N_*u \le \frac{1}{{\varepsilon}^2} \sum_{Q''} \int_{Q''} N_* u \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta_0 \Delta_{Q'}} N_*u \end{aligned}$$ for some uniform constant $\beta_0$. 
Let us then consider the dyadic total variation of the whole approximating function $\varphi$: \[proposition:local\_gradient\_bound\] Suppose that $Q' \in {\mathbb{D}}_{Q_0}$ and $\overrightarrow{\Psi} \in C_0^1(W_{Q'})$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$, where $W_{Q'} \subset \Omega$ is any bounded and open set satisfying $T_{Q'} \subset W_{Q'}$. Then $$\begin{aligned} \iint_{T_{Q'}} \varphi \, \text{div} \overrightarrow{\Psi} \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q'}} N_*^{\alpha_0} u \, d\sigma, \end{aligned}$$ where $\beta > 0$ is a uniform constant and $\alpha_0 > 0$ is the constant in Lemma \[lemma:large\_cones\]. We start by splitting the integral with respect to $\varphi_0$ and $\varphi_1$. $$\begin{aligned} \iint_{T_{Q'}} \varphi \, \text{div} \overrightarrow{\Psi} = \iint_{T_{Q'} \setminus \overline{\Omega_1}} \varphi_0 \, \text{div} \overrightarrow{\Psi} + \iint_{T_{Q'} \cap \overline{\Omega_1}} \varphi_1 \, \text{div} \overrightarrow{\Psi}. \end{aligned}$$ For the first integral, we can simply use part ii) of Lemma \[lemma:local\_pointwise\_bounds\]. For the second integral we get $$\begin{aligned} \iint_{T_{Q'} \cap \overline{\Omega_1}} \varphi_1 \text{div} \overrightarrow{\Psi} &= \sum_k \iint_{V_k \cap T_{Q'}} \varphi_1 \text{div} \overrightarrow{\Psi} \\ &= \sum_k \left( \iint_{V_k \cap T_{Q'}} \text{div}( \varphi_1 \overrightarrow{\Psi} ) - \iint_{V_k \cap T_{Q'}} \nabla \varphi_1 \cdot \overrightarrow{\Psi} \right) \\ &\le \sum_k \left| \iint_{V_k \cap T_{Q'}} \text{div}( \varphi_1 \overrightarrow{\Psi} ) \right| + \sum_k \iint_{V_k \cap T_{Q'}} |\nabla \varphi_1|. \end{aligned}$$ The second sum is just as in the proof of part iii) of Lemma \[lemma:local\_pointwise\_bounds\] and thus, we can bound it by $C{\varepsilon}^{-2} \int_{\beta_0 \Delta_{Q'}} N_*u$.
For the first sum, we use the divergence theorem and Theorem \[thm:sparse\_carleson\] and get $$\begin{aligned} \sum_k \left| \iint_{V_k \cap T_{Q'}} \text{div}( \varphi_1 \overrightarrow{\Psi} ) \right| &\le \sum_k \iint_{\partial(V_k \cap T_{Q'})} \left| \varphi_1 \overrightarrow{\Psi} \cdot \overrightarrow{N} \right| \\ &\le \sum_k \sup_{U_{Q(k)}} |u| \cdot {\mathcal{H}}^n(V_k \cap \partial T_{Q'}) \\ &\le \sum_{k: \, {\text{dist}}(Q(k),Q') \lesssim \ell(Q')} \inf_{Q(k)} N_* u \cdot \sigma(Q(k)) \\ &\lesssim \frac{1}{{\varepsilon}^2}\sum_{k: \, {\text{dist}}(Q(k),Q') \lesssim \ell(Q')} \inf_{Q(k)} N_* u \cdot \sigma(E_{Q(k)}). \end{aligned}$$ By the structure of the Whitney regions, we know $V_k \cap T_{Q'} = \emptyset$ if $\ell(Q(k)) \gg \ell(Q')$ or ${\text{dist}}(Q(k),Q') \gg \ell(Q')$. Thus, there exists a uniform constant $\beta_1 > 0$ such that $Q(k) \subset \beta_1 \Delta_{Q'}$ for every $k$ in the sum above. We may cover $\beta_1 \Delta_{Q'}$ by a uniformly bounded number of disjoint cubes $P_j$ such that $\ell(P_j) \approx \ell(Q')$. This gives us $$\begin{aligned} \sum_{k: \, {\text{dist}}(Q(k),Q') \lesssim \ell(Q')} \inf_{Q(k)} N_* u \cdot \sigma(E_{Q(k)}) &\le \sum_{k: \, {\text{dist}}(Q(k),Q') \lesssim \ell(Q')} \int_{E_{Q(k)}} N_* u \\ &\le \sum_j \int_{P_j} N_* u \, d\sigma \\ &\le \int_{\beta_2 \Delta_{Q'}} N_* u \, d\sigma \end{aligned}$$ for some uniform constant $\beta_2 \ge \beta_1$. Combining the previous bounds finishes the proof. 
\[remark:modified\_local\_bounds\] We notice that the previous proposition holds also in the following form: If we have cubes $Q',Q_1,Q_2 \in {\mathbb{D}}_{Q_0}$ and $\overrightarrow{\Psi} \in C_0^1(W_{Q'})$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$ for an open and bounded set $W_{Q'}$ containing $T_{Q'}$, then $$\begin{aligned} \iint_{(T_{Q'} \cap T_{Q_1}) \setminus T_{Q_2}} \varphi \, \text{div} \overrightarrow{\Psi} \lesssim \frac{1}{{\varepsilon}^2} \min\left\{ \int_{\beta_2 \Delta_{Q'}} N_* u \, d\sigma, \int_{\beta_2 \Delta_{Q_1}} N_* u \, d\sigma \right\} \end{aligned}$$ for some uniform constant $\beta_2$. Indeed, in the previous two proofs, we needed only the upper ADR estimates for the boundaries of $A_m$ and $V_k$ and these estimates remain valid if we remove a finite number of pieces whose boundaries satisfy an upper ADR estimate. By [@hofmannmartellmayboroda Proposition A.2], $\partial T_Q$ is ADR for every $Q \in {\mathbb{D}}(E)$. Also, by the structure of the regions, these modified sets are “nice” enough to justify the integration by parts used in the proofs. From local to global -------------------- Let us now construct the global approximating function. Although our construction is a little different from the construction in [@hofmannmartellmayboroda p. 2373], the basic ideas are the same. ### $E$ is a bounded set {#subsection:function_construction_bounded} Let us first assume that ${\text{diam}}(E) < \infty$. In this case, we have a cube $Q_0 \in {\mathbb{D}}(E)$ such that $E = Q_0$ and $\ell(Q_0) \approx {\text{diam}}(E)$. We now set $$\begin{aligned} \varphi(X) \coloneqq \left\{ \begin{array}{cl} \varphi_{Q_0}(X), &\text{ if } X \in T_{Q_0} \\ u(X), &\text{ if } X \in \Omega \setminus T_{Q_0} \end{array} \right. ,\end{aligned}$$ where $\varphi_{Q_0}$ is the function constructed in Section \[section:construction\_local\].
By part i) of Lemma \[lemma:local\_pointwise\_bounds\], we have $N_*(u-\varphi)(x) \le {\varepsilon}M_{\mathbb{D}}(N_*u)(x)$ on $E$. As for the ${\mathcal{C}}_{\mathbb{D}}$ bound, we first notice that for any $Q \in {\mathbb{D}}_{Q_0}$ Proposition \[proposition:local\_gradient\_bound\] gives us $$\begin{aligned} \frac{1}{\sigma(Q)} \iint_{T_Q} |\nabla \varphi| \lesssim \frac{1}{{\varepsilon}^2} M(N_*^{\alpha_0}u)(x)\end{aligned}$$ for every $x \in Q$ since $\sigma(Q) \approx \sigma(\beta \Delta_Q)$. Let us now fix a cube $F_k \in {\mathbb{D}}^*$ (recall the definition of ${\mathbb{D}}^*$ in Section \[section:cc\]), take any $\overrightarrow{\Psi} \in C_0^1(T_{F_k})$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$ and modify the argument in [@hofmannmartellmayboroda p. 2353]. We denote $R \coloneqq 2^k {\text{diam}}(E)$ and thus have $T_{F_k} = B(z_0,R)$. By a suitable choice of parameters in the construction of the Whitney regions in [@hofmannmartellmayboroda], the Carleson box $T_{Q_0}$ is so large that we may fix a ball $B(z_0,r) \subset T_{Q_0}$ such that $r \ge 2{\text{diam}}(E)$. Because of this, we may fix a uniform constant $\alpha_1$ such that a small enlargement of $B(z_0,R) \setminus B(z_0,r)$ is contained in $\widehat{\Gamma}_{\alpha_1}(x)$ (recall the definition of $\widehat{\Gamma}_{\alpha_1}(x)$ in Section \[subsection:non-tangential\]) for every $x \in E$. We split $$\begin{aligned} \frac{1}{\ell(F_k)^n} \iint_{T_{F_k}} \varphi \, \text{div} \overrightarrow{\Psi} = \frac{1}{\ell(F_k)^n} \iint_{T_{Q_0}} \varphi \, \text{div} \overrightarrow{\Psi} + \frac{1}{\ell(F_k)^n} \iint_{T_{F_k} \setminus T_{Q_0}} \varphi \, \text{div} \overrightarrow{\Psi}.\end{aligned}$$ By Proposition \[proposition:local\_gradient\_bound\], we can bound the first integral by $M(N_*^{\alpha_0}u)(x)$ for any $x \in Q_0$. 
As for the second integral, we use the smoothness of $u$, Hölder’s inequality and Caccioppoli’s inequality to get $$\begin{aligned} \iint_{T_{F_k} \setminus T_{Q_0}} \varphi \, \text{div} \overrightarrow{\Psi} &= \iint_{T_{F_k} \setminus T_{Q_0}} u \, \text{div} \overrightarrow{\Psi} \\ &\le \iint_{B(z_0,R) \setminus T_{Q_0}} |\nabla u| \\ &\le \iint_{B(z_0,R) \setminus B(z_0,r)} |\nabla u| \\ &\lesssim \left( \iint_{B(z_0,R) \setminus B(z_0,r)} |\nabla u|^2 \right)^{1/2} R^{\tfrac{n+1}{2}} \\ &\le \left( \sum_{0 \le j \le \log_2(R/r)} \iint_{2^j r \le |z_0-X| < 2^{j+1}r} |\nabla u(X)|^2 \right)^{1/2} R^{\tfrac{n+1}{2}} \\ &\lesssim \inf_E N_*^{\alpha_1}u \cdot \left( \sum_{0 \le j \le \log_2(R/r)} (2^j r)^{n-1} \right)^{1/2} R^{\tfrac{n+1}{2}} \\ &\lesssim \inf_E N_*^{\alpha_1}u \cdot R^{\tfrac{n-1}{2}} R^{\tfrac{n+1}{2}} \\ &\le R^n M(N_*^{\alpha_1}u)(x)\end{aligned}$$ for every $x \in Q_0$. Combining the calculations and the cases gives us the desired ${\mathcal{C}}_{\mathbb{D}}$ bound. ### $E$ is an unbounded set {#section:E_unbounded} Suppose then that ${\text{diam}}(E) = \infty$. We fix a sequence of cubes $Q_i \in {\mathbb{D}}(E)$, $i \in {\mathbb{N}}$, such that $\bigcup_i Q_i = E$ and $Q_i \subsetneq Q_{i+1}$ and $\ell(Q_i) < \gamma_0 \ell(Q_{i+1})$ for every $i$, where we fix the value of the constant $\gamma_0$ later. We set $$\begin{aligned} W_1 \coloneqq T_{Q_1}, \ \ \ \ \ \ W_k \coloneqq T_{Q_k} \setminus T_{Q_{k-1}}\end{aligned}$$ and $$\begin{aligned} \varphi_k \coloneqq 1_{W_k} \varphi_{Q_k}, \ \ \ \ \ \varphi \coloneqq \sum_k \varphi_k.\end{aligned}$$ Here $\varphi_{Q_k}$ is the function constructed in Section \[section:construction\_local\] for the cube $Q_k$. The sets $W_k$ cover the whole space $\Omega$ and since $T_{Q_i} \subset T_{Q_{i+1}}$ for every $i$, they are also pairwise disjoint. Let us consider the pointwise bound for $N_*(u-\varphi)$. Fix $x \in E$ and let $Q_m$ be the smallest of the previously chosen cubes such that $x \in Q_m$. 
Now, if $\Gamma(x) \cap T_{Q_j} = \emptyset$ for every $j = 1,2,\ldots,m-1$, then the pointwise bound follows directly from part i) of Lemma \[lemma:local\_pointwise\_bounds\]. Suppose then that there exists a point $Y \in \Gamma(x) \cap T_{Q_j}$ for some $j < m$. We may assume that $Y \notin T_{Q_i}$ for all $i < j$. By the structure of the sets, there exist now cubes $P_1 \subset Q_m$ and $P_2 \subset Q_j$ such that $\ell(P_1) \approx \ell(P_2)$, ${\text{dist}}(P_1,P_2) \lesssim \ell(P_1)$, $Y \in U_{P_1} \cap U_{P_2}$ and $\varphi(Y) = \varphi|_{U_{P_2}}(Y)$. By the considerations in the proof of part i) of Lemma \[lemma:local\_pointwise\_bounds\], we know that $|u(Y) - \varphi(Y)| \le {\varepsilon}M_{\mathbb{D}}(N_*u)(P_2)$. By the properties of $P_1$ and $P_2$, there exists a uniform constant $\beta_0$ such that $P_1 \subset \beta_0 \Delta_{Q}$ for any $Q \in {\mathbb{D}}(E)$ such that $Q \supseteq P_2$. In particular, $$\begin{aligned} {\varepsilon}M_{\mathbb{D}}(N_*u)(P_2) &= {\varepsilon}\sup_{Q \in {\mathbb{D}}(E), P_2 \subseteq Q} \fint_{Q} N_*u \, d\sigma \\ &\lesssim {\varepsilon}\sup_{Q \in {\mathbb{D}}(E), P_2 \subseteq Q} \fint_{\beta_0 \Delta_Q} N_*u \, d\sigma \le {\varepsilon}M(N_* u)(x).\end{aligned}$$ Thus, $$\begin{aligned} N_*(u-\varphi)(x) &= \sup_{Y \in \Gamma(x)} |u(Y) - \varphi(Y)| \\ &= \sup_{k \in {\mathbb{N}}} \sup_{Y \in \Gamma(x) \cap W_k} |u(Y) - \varphi(Y)| \lesssim {\varepsilon}M_{\mathbb{D}}(N_* u)(x).\end{aligned}$$ Let us then prove the ${\mathcal{C}}_{\mathbb{D}}$ estimate. We fix a point $x \in E$ and a cube $Q \in {\mathbb{D}}(E)$ such that $x \in Q$ and split the proof into three cases. Below, $\beta$ and $\alpha$ are uniform constants and $m$ is the smallest number such that $T_Q \subset T_{Q_m}$. 1. $T_Q \subset T_{Q_m}$ such that $T_Q \cap T_{Q_k} = \emptyset$ for every $k < m$.
Now we simply have $$\begin{aligned} \iint_{T_Q} |\nabla \varphi| = \iint_{T_Q} |\nabla \varphi_m| \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} N_*^{\alpha} u \, d\sigma \end{aligned}$$ by Proposition \[proposition:local\_gradient\_bound\]. 2. $T_Q \subset T_{Q_m}$ and $Q_k \subset Q$ for every $k < m$. Take any $\overrightarrow{\Psi} \in C_0^1(T_Q)$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$. We get $$\begin{aligned} \iint_{T_Q} \varphi \, \text{div} \overrightarrow{\Psi} &=\iint_{T_Q \setminus T_{Q_{m-1}}} \varphi_m \, \text{div} \overrightarrow{\Psi} + \sum_{i=1}^{m-2} \iint_{T_{Q_{m-i}} \setminus T_{Q_{m-(i+1)}}} \varphi_{m-i} \, \text{div} \overrightarrow{\Psi} + \iint_{T_{Q_1}} \varphi_1 \, \text{div} \overrightarrow{\Psi} \\ &\lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} N_*^\alpha u \, d\sigma + \sum_{i=1}^{m-1} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q_i}} N_*^\alpha u \, d\sigma \end{aligned}$$ by Remark \[remark:modified\_local\_bounds\]. We note that the balls $\beta \Delta_{Q_i}$ form an increasing sequence with respect to inclusion. If we choose the constant $\gamma_0$ to be large enough, the balls $\beta \Delta_{Q_i}$ satisfy a Carleson packing condition independent of $m$. Thus, for a large enough $\gamma_0$, we get $$\begin{aligned} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} N_*^\alpha u \, d\sigma + \sum_{i=1}^{m-1} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q_i}} N_*^\alpha u \, d\sigma \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} M_{\mathbb{D}}(N_*^\alpha u) \, d\sigma \end{aligned}$$ by a simple dyadic covering argument and the discrete Carleson embedding theorem (Theorem \[theorem:carleson\_embedding\]). 3. $T_Q \subset T_{Q_m}$, $Q_k \not\subset Q$ for every $k < m$ and $T_Q \cap T_{Q_{m-1}} \neq \emptyset$. Without loss of generality, we may assume that $\ell(Q) \approx \ell(Q_{m-1})$. Take any $\overrightarrow{\Psi} \in C_0^1(T_Q)$ with $\|\overrightarrow{\Psi}\|_{L^\infty} \le 1$.
We get $$\begin{aligned} \iint_{T_Q} \varphi \, \text{div} \overrightarrow{\Psi} &=\iint_{T_Q \setminus T_{Q_{m-1}}} \varphi_m \, \text{div} \overrightarrow{\Psi} + \sum_{i=1}^{m-2} \iint_{(T_Q \cap T_{Q_{m-i}}) \setminus T_{Q_{m-(i+1)}}} \varphi_{m-i} \, \text{div} \overrightarrow{\Psi} \\ &\qquad + \iint_{T_Q \cap T_{Q_1}} \varphi_1 \, \text{div} \overrightarrow{\Psi} \\ &\lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} N_*^\alpha u \, d\sigma + \sum_{i=1}^{m-1} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q_i}} N_*^\alpha u \, d\sigma \end{aligned}$$ by Remark \[remark:modified\_local\_bounds\]. Again, if we choose the constant $\gamma_0$ to be large enough, we get $$\begin{split} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} N_*^\alpha u \, d\sigma + \sum_{i=1}^{m-1} \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_{Q_i}} N_*^\alpha u \, d\sigma \lesssim \frac{1}{{\varepsilon}^2} \int_{\beta \Delta_Q} M_{\mathbb{D}}(N_*^\alpha u) \, d\sigma \end{split}$$ by a simple dyadic covering argument and the discrete Carleson embedding theorem (Theorem \[theorem:carleson\_embedding\]). Since $\sigma(Q) \approx \sigma(\beta \Delta_Q)$, combining the three cases gives us $$\begin{aligned} \frac{1}{\sigma(Q)} \iint_{T_Q} |\nabla \varphi| &\lesssim \frac{1}{{\varepsilon}^2} \frac{1}{\sigma(Q)} \int_{\beta \Delta_Q} M_{\mathbb{D}}(N_*^\alpha u) \, d\sigma \lesssim \frac{1}{{\varepsilon}^2} M(M_{\mathbb{D}}(N_*^\alpha u))(x)\end{aligned}$$ for almost every $x \in Q$. This completes the proof of Theorem \[thm:main\_result\_pointwise\]. Discrete Carleson embedding theorem {#appendix:embedding_theorem} =================================== For the convenience of the reader, we prove here the version of the Carleson embedding theorem that we used in Section \[section:E\_unbounded\].
\[theorem:carleson\_embedding\] Suppose that $\mu$ is a locally finite doubling Borel measure in a (quasi)metric space $X$ satisfying $\mu(B(x,r)) > 0$ for any $x \in X$ and $r > 0$, and that ${\mathbb{D}}$ is a dyadic system in $X$. Let $f \ge 0$ be a locally integrable function. If ${\mathcal{A}}\subset {\mathbb{D}}$ is a collection that satisfies a Carleson packing condition with a constant $\Lambda \ge 1$, then $$\begin{aligned} \sum_{Q \in {\mathcal{A}}, Q \subset Q_0} \int_Q f \, d\mu \le \Lambda \int_{Q_0} M_{\mathbb{D}}f \, d\mu \end{aligned}$$ for any $Q_0 \in {\mathbb{D}}$. For every $m \in {\mathbb{Z}}$, we define the averaging operator ${\mathcal{T}}_m$ by setting $$\begin{aligned} {\mathcal{T}}_mf(x) = \sum_{\substack{Q \in {\mathbb{D}}\\ \ell(Q) = 2^{-m}}} 1_{Q}(x) \fint_{Q} f \, d\mu, \end{aligned}$$ and we define the measure $\nu$ by setting $$\begin{aligned} d\nu(x,m) = \left(\sum_{Q \in {\mathcal{A}}, \ell(Q) = 2^{-m}} 1_{Q}(x) \right) d\mu(x). \end{aligned}$$ Now we have $$\begin{aligned} \sum_{Q \in {\mathcal{A}}, Q \subset Q_0} \int_Q f \, d\mu &= \sum_{Q \in {\mathcal{A}}, Q \subset Q_0} \mu(Q) \fint_Q f \, d\mu \\ &= \sum_{m: \, 2^{-m} \le \ell(Q_0)} \sum_{\substack{Q \in {\mathcal{A}}\\ \ell(Q) = 2^{-m}}} \int_{Q_0} 1_{Q} \left( \fint_{Q} f \right) \, d\mu \\ &= \sum_{m: \, 2^{-m} \le \ell(Q_0)} \int_{Q_0} {\mathcal{T}}_mf(x) \, d\nu(x,m) \\ &= \int_0^\infty \nu(E_\lambda^*) \, d\lambda, \end{aligned}$$ where $E_\lambda^* \coloneqq \{(x,m) \colon x \in Q_0, 2^{-m} \le \ell(Q_0), {\mathcal{T}}_mf(x) > \lambda\}$. Thus, to prove the claim, we only need to show that $\nu(E_\lambda^*) \le \Lambda \mu(E_\lambda)$, where $E_\lambda \coloneqq \{x \in Q_0 \colon \sup_m {\mathcal{T}}_mf(x) > \lambda\}$. If $\mu(E_\lambda) = \infty$, the claim is trivial. Thus, we may assume that $\mu(E_\lambda) < \infty$. We notice that if $x \in E_\lambda$, then there exists a subcube $Q' \subset Q_0$ such that $x \in Q'$ and $\fint_{Q'} f \, d\mu > \lambda$.
By the definition of ${\mathcal{T}}_m$, we also have $y \in E_\lambda$ for every $y \in Q'$. In particular, we have maximal disjoint subcubes $R_j \subset Q_0$ such that $E_\lambda = \bigcup_j R_j$. We further observe the following two things: 1. If $x \in Q_0 \setminus \bigcup_j R_j$, then by the maximality of the cubes $R_j$ we have $\sup_m {\mathcal{T}}_mf(x) \le \lambda$. 2. If $x \in Q \subset Q_0$ and ${\mathcal{T}}_mf(x) > \lambda$ for some $m$ such that $2^{-m} > \ell(Q)$, then there exists a cube $\widetilde{Q} \supsetneq Q$ such that $\fint_{\widetilde{Q}} f \, d\mu > \lambda$. In particular, $Q \subset E_\lambda$ but $Q$ is not a maximal cube. Based on these observations, we have $$\begin{aligned} E_\lambda^* \subset \bigcup_j R_j \times \{m \colon 2^{-m} \le \ell(R_j)\}. \end{aligned}$$ By the Carleson packing condition, we get $$\begin{aligned} \nu(R_j \times \{m \colon 2^{-m} \le \ell(R_j)\}) = \sum_{m: \, 2^{-m} \le \ell(R_j)} \sum_{\substack{Q' \subset R_j, Q' \in {\mathcal{A}}\\ \ell(Q') = 2^{-m}}} \mu(Q') \le \Lambda \mu(R_j) \end{aligned}$$ for every $j$. In particular, since the cubes $R_j$ are disjoint, we get $$\begin{aligned} \nu(E_\lambda^*) \le \sum_j \nu(R_j \times \{m \colon 2^{-m} \le \ell(R_j)\}) \le \sum_j \Lambda \mu(R_j) = \Lambda \mu(E_\lambda), \end{aligned}$$ which completes the proof.
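The statement above can be sanity-checked numerically in the simplest setting — dyadic intervals on $Q_0 = [0,1)$ with Lebesgue measure. The sketch below is an illustration only (the family ${\mathcal{A}}$, the test function and the grid resolution are arbitrary choices): it computes the packing constant $\Lambda$ of a family directly from its definition and verifies the embedding inequality.

```python
import numpy as np

def dyadic_intervals(levels):
    """All dyadic subintervals [k 2^-m, (k+1) 2^-m) of [0,1), m < levels."""
    return [(k * 2.0 ** -m, (k + 1) * 2.0 ** -m)
            for m in range(levels) for k in range(2 ** m)]

def carleson_embedding_check(f, family, n_grid=2 ** 12, max_depth=12):
    """Numerical test of
        sum_{Q in A, Q subset Q_0} int_Q f  <=  Lambda * int_{Q_0} M_D f
    on Q_0 = [0,1) with Lebesgue measure; Lambda is the Carleson packing
    constant of the family A, computed directly from its definition."""
    x = (np.arange(n_grid) + 0.5) / n_grid   # midpoint grid on [0,1)
    fx = f(x)
    dx = 1.0 / n_grid

    def integral(a, b):
        mask = (x >= a) & (x < b)
        return fx[mask].sum() * dx

    # packing constant: sup_{Q in A} |Q|^-1 * sum_{Q' in A, Q' subset Q} |Q'|
    lam = max(sum(b2 - a2 for (a2, b2) in family if a <= a2 and b2 <= b)
              / (b - a) for (a, b) in family)

    lhs = sum(integral(a, b) for (a, b) in family)

    # dyadic maximal function M_D f: maximize averages over all dyadic cubes
    mdf = np.zeros_like(fx)
    for (a, b) in dyadic_intervals(max_depth):
        mask = (x >= a) & (x < b)
        mdf[mask] = np.maximum(mdf[mask], fx[mask].mean())
    rhs = lam * mdf.sum() * dx
    return lhs, rhs

# A = all dyadic intervals of the first 5 generations; here Lambda = 5
lhs, rhs = carleson_embedding_check(lambda x: np.exp(x), dyadic_intervals(5))
```

Taking all intervals of the first five generations makes each generation partition $[0,1)$, so the left-hand side equals $5\int_0^1 f$ while $\Lambda = 5$; the inequality is then saturated up to the gap between $f$ and $M_{\mathbb{D}}f$.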
--- abstract: | The precision of radial velocity (RV) measurements depends on the precision attained on the wavelength calibration. One of the available options is using atmospheric lines as a natural, freely available wavelength reference. Figueira et al. (2010) measured the RV of O$_2$ lines using HARPS and showed that the scatter was only of $\sim$10m/s over a timescale of 6yr. Using a simple but physically motivated empirical model, they demonstrated a precision of 2m/s, roughly twice the average photon noise contribution. In this paper we take advantage of a unique opportunity to confirm the sensitivity of the telluric absorption lines RV to different atmospheric and observing conditions: by means of contemporaneous in-situ wind measurements. This opportunity is a result of the work done during site testing and characterization for the European Extremely Large Telescope (E-ELT). The HARPS spectrograph was used to monitor telluric standards while contemporaneous atmospheric data was collected using radiosondes. We quantitatively compare the information recovered by the two independent approaches. The RV model fitting yielded similar results to that of Figueira et al. (2010), with lower wind magnitude values and varied wind direction. The probes confirmed the average low wind magnitude and suggested that the average wind direction is a function of time as well. However, these results are affected by large uncertainty bars that probably result from a complex wind structure as a function of height. The two approaches deliver the same results in what concerns wind magnitude and agree on wind direction when fitting is done in segments of a couple of hours. Statistical tests show that the model provides a good description of the data on all timescales, being always preferable to not fitting any atmospheric variation. The smaller the timescale on which the fitting can be performed (down to a couple of hours), the better the description of the real physical parameters. 
We conclude then that the two methods deliver compatible results, down to better than 5m/s and less than twice the estimated photon noise contribution on the O$_2$ lines RV measurement. However, we cannot rule out that parameters $\alpha$ and $\gamma$ (dependence on airmass and zero-point, respectively) have a dependence on time or exhibit some cross-talk with other parameters, an issue suggested by some of the results. author: - | P. Figueira$^{1}$[^1], F. Kerber$^{2}$, A. Chacon$^{3}$, C. Lovis$^{4}$, N.C. Santos$^{1}$, G. Lo Curto$^{2}$, M. Sarazin$^{2}$ and F. Pepe$^{4}$\ $^{1}$Centro de Astrofísica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal\ $^{2}$European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München, Germany\ $^{3}$Universidad de Valparaíso, Av. Gran Bretaña 1111, Valparaíso, Chile\ $^{4}$Observatoire Astronomique de l’Université de Genève, 51 Ch. des Maillettes, - Sauverny - CH1290, Versoix, Suisse\ bibliography: - 'Mybibliog\_MNRAS.bib' date: 'Accepted 2011 October 14. Received 2011 October 10; in original form 2011 August 29' title: Comparing radial velocities of atmospheric lines with radiosonde measurements --- \[firstpage\] Atmospheric effects, Instrumentation: spectrographs, Methods: observational, Techniques: radial velocities Introduction ============ The research on extrasolar planets is currently one of the fastest-growing fields in Astrophysics. Triggered by the pioneering work of [@1995Natur.378..355M] on 51Peg, it evolved into a domain of its own, with more than 500 planets confirmed to date. Most of these planets ($\sim$90%) were detected using the radial velocity (RV) induced on the star by the orbital motion of the planet around it. The measurement of precise RVs can only be done against a precise wavelength reference, and two different approaches were pursued extensively. 
The first was the usage of a Th-Ar emission lamp with the cross-correlation function (CCF) method , and the second the I$_2$ cell along with the deconvolution procedure [@1996PASP..108..500B]. In order to measure precise RV in the IR with CRIRES [@2004SPIE.5492.1218K], recovered a method known for a long time: the usage of atmospheric features as a wavelength anchor. Using CO$_2$ lines present in the H band, the authors reached a precision of $\sim$5m/s over a timescale of one week. While a similar precision had been attained in the past in the optical domain using O$_2$ lines, the studies on the stability of atmospheric lines were limited to a timescale of up to a couple of weeks. In order to assess the RV stability of atmospheric lines over longer timescales, used HARPS (High Accuracy Radial velocity Planet Searcher) archival data, spanning more than six years. Three stars – Tau Ceti, $\mu$ Arae, and $e$ Eri – were selected because they provided a strong luminous background against which the atmospheric lines could be measured, and were observed not only over a long timespan but with high temporal frequency (in asteroseismology campaigns). The spectra were cross-correlated against an O$_2$ mask using the HARPS pipeline, which delivered the RV, bisector span (BIS) and associated uncertainties. The high intrinsic stability of HARPS allowed one to measure these effects down to 1m/s of precision, roughly the photon noise attained on the atmospheric lines. The r.m.s. of the velocities turned out to be only $\sim$10m/s, yet well in excess of the attained photon noise. An inspection of the RV pattern on one star over one night revealed not white noise but a well-defined shape in RV, BIS, contrast and FWHM. A component of the RV signal was associated with BIS variation, which in turn was linearly correlated with the airmass at which the observation was performed. 
A second component of the signal was interpreted as being the translation of the atmospheric lines’ center created by the projection of an average horizontal wind vector along the line of sight. These two effects were described by the formula $$\Omega = \alpha \left(\frac{1}{\sin\theta} - 1\right) + \beta \cos\theta \cdot \cos(\phi - \delta) + \gamma \label{eq_fit}$$ where $\alpha$ is the proportionality constant associated with the variation in airmass, $\beta$ and $\delta$ the average wind speed magnitude and direction, and $\theta$ and $\phi$ the telescope elevation and azimuth, respectively. The $\gamma$ represents the zero-point of the RV, which can differ from zero. The fitting of the variables $\alpha$, $\beta$, $\gamma$, and $\delta$ allowed a good description of the telluric RV signal, with the scatter around the fit being of around 2m/s, or twice the photon noise. The fitting was performed in two ways: first, allowing all parameters to vary freely and, second, imposing the same $\alpha$ and $\gamma$ for the different datasets. For details the interested reader is referred to the original paper. However, the model represented by Eq.\[eq\_fit\], while being physically motivated, was not fully validated due to the absence of wind measurements against which the fitted values could be compared. Among the atmospheric parameters studied for E-ELT site testing is precipitable water vapor (PWV), the major contributor to the opacity of Earth’s atmosphere in the infrared. Hence the mean PWV established over long timescales determines how well a site is suited for IR astronomy. For the E-ELT site characterisation a combination of remote sensing (satellite data) and on-site data was used to derive the mean PWV for several potential sites, taking La Silla and Paranal as reference [@2010SPIE.7733E..48K]. 
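As a quick illustration, the geometric model of Eq.\[eq\_fit\] can be written as a short function. This is a minimal sketch: the function name and the degree convention for the angles are our own, not from the original analysis.

```python
import numpy as np

def telluric_rv_model(theta, phi, alpha, beta, gamma, delta):
    """RV of the telluric lines (m/s) as a function of telescope pointing.

    theta, phi : telescope elevation and azimuth, in degrees.
    alpha      : airmass-dependence coefficient (m/s).
    beta, delta: average horizontal wind magnitude (m/s) and direction (deg).
    gamma      : zero-point offset (m/s).
    """
    t = np.radians(theta)
    p = np.radians(phi)
    d = np.radians(delta)
    # airmass term vanishes at zenith; wind term is the projection of the
    # horizontal wind vector on the line of sight
    return alpha * (1.0 / np.sin(t) - 1.0) + beta * np.cos(t) * np.cos(p - d) + gamma
```

At zenith ($\theta = 90^{\circ}$) both the airmass and the wind projection terms vanish and the model reduces to the zero-point $\gamma$.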
In order to better understand the systematics in the archival data and to obtain data at higher time resolution, a total of three campaigns were conducted at the La Silla Paranal observatory in 2009. During each campaign all available facility instruments as well as dedicated IR radiometers were used to measure PWV from the ground [Kerber et al. [*in prep.*]{} @2011PASP..123..222Q]. In addition, radiosondes were launched to measure the vertical profile of atmospheric parameters in situ, with the goal of calculating the real PWV in the atmosphere. Radiosondes are an established standard in atmospheric research and all other methods were validated with respect to the radiosonde results with very high fidelity [@2010SPIE.7733E..48K; @2010SPIE.7733E.135Q; @2010SPIE.7733E.143C]. In the current paper we present the results of exploiting data from the above campaigns: since HARPS observations and radiosonde measurements were done in parallel we are in a position to make a direct and quantitative comparison of the wind speed parameters ($\beta$ and $\delta$). The paper is structured as follows. In Sect. 2 we describe the data acquisition and reduction of both observing campaigns. Section 3 is dedicated to the description of the analysis of data and subsequent results. In Sect. 4 we discuss the implications of our results and we conclude in Sect. 5 with the lessons learned from this campaign. Observations & Data Reduction ============================= HARPS measurements ------------------ HARPS [@2003Msngr.114...20M] is a high-resolution fiber-fed cross-dispersed echelle spectrograph installed at the 3.6m telescope at La Silla Observatory. It is characterized by a spectral resolution of 110 000 and its 72 orders cover the whole optical range, from 380 to 690nm. Its extremely high stability allows one to measure RV to a precision of better than 60cm/s when a simultaneous Th-Ar lamp is used, and of around 1m/s without the lamp. 
A dedicated pipeline (nicknamed DRS for [*Data Reduction Software*]{}) was created to allow for on-the-fly data reduction and RV calculation. This pipeline delivers the RV by cross-correlating the obtained spectra with a weighted binary mask. To calculate the atmospheric lines’ RV variation one then needs only to construct a template mask representing the lines to monitor. This weighted binary mask was built using the HITRAN database [@2005JQSRT..96..139R] to select the O$_2$ lines present in the HARPS wavelength domain. For the details on HARPS, the data reduction procedure and the mask construction, the reader is referred to . The procedure is identical, with the exception that the observations used in the current paper were performed without simultaneous Th-Ar. For this program, 9 stars were observed: HR3090, HR3476, HR4748, HR5174, HR5987, HR6141, HR6930, HR7830, and HR8431, which are fast-rotating A-B stars, mostly featureless in the optical domain and suitable to be used as telluric standards. For details on the stars the reader is referred to the website “Stars for Measuring PWV with MIKE"[^2] and to [@2007PASP..119..697T]. A total of 1120 measurements were collected on 8 and 9 of May, 2009, during the course of two nights of technical time. The stars were observed in a complex pattern in such a way that both low and high airmass and different patches of the sky were probed throughout the night in order to sample any variations of PWV. The main consequence is that even a fraction of the night of a couple of hours can contain observations of several stars at a wide range of airmass and elevation/azimuth coordinates, covering well the independent variables of Eq.\[eq\_fit\] and allowing a precise estimation of the parameters to be fit. Radiosonde measurements ----------------------- The radiosonde (Vaisala RS-92) is a self-contained instrument package with sensors to measure e.g. 
temperature and humidity combined with a GPS receiver and a radio transmitter that relays all data in real-time to a receiver on the ground. The radiosonde is tied to a helium-filled balloon and after launch ascends at a rate of a few m/s following the prevailing winds. On its ascent trajectory the sonde will sample the local atmospheric conditions up to an altitude of $\sim$20km, when the balloon will burst. By that time it has traveled horizontally $\sim$100 km from the launch site. Since it relays its 3D location based on GPS location every two seconds, the wind vector exerting force on the balloon can be deduced from the change in GPS position. A total of 17 radiosondes were launched between the 5th and the 15th of May 2009 from the La Silla site. One or two launches were conducted every day/night. On the 13th no data were collected due to a technical problem when radio contact with the radiosonde was lost shortly after launch. From the collected physical parameters, the six of interest for our study, as well as the nominal precision of the measurements, are presented in Tab. \[Table\_sondes\_prec\]. As the sondes rise in height, they measure the two horizontal wind components on each layer with a nominal precision of 1$\times 10^{-3}$m/s, much higher than that of contemporaneous RV measurements. Radiosondes form the backbone of the global network coordinated by the World Meteorological Organisation (WMO) for measuring conditions at the surface and in Earth’s atmosphere by combining the in-situ atmospheric sounding with measurements taken onboard ships, aircraft and satellites. Coordinated radiosonde launches (one launch at 12:00 UTC is the minimum requirement; other launch times are 00, 06 and 18 hours UTC) provide a global snapshot of atmospheric conditions which are then used as basic input for describing its current state and for modeling future conditions. 
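The derivation of the wind vector from successive GPS fixes can be sketched as follows. This is our own simplified illustration, assuming a local flat-Earth approximation between consecutive fixes; the function name and interface are not from the sonde software.

```python
import numpy as np

def horizontal_wind(lat, lon, t):
    """Estimate horizontal wind components (u eastward, v northward, m/s)
    from successive GPS fixes of an ascending radiosonde.

    lat, lon : arrays of latitude/longitude in degrees.
    t        : array of fix times in seconds (e.g. one fix every 2 s).
    """
    R = 6.371e6  # mean Earth radius in metres
    lat_r = np.radians(lat)
    # displacement between consecutive fixes (flat-Earth approximation,
    # valid for the small step between two fixes)
    dn = R * np.diff(lat_r)                                 # northward (m)
    de = R * np.cos(lat_r[:-1]) * np.diff(np.radians(lon))  # eastward (m)
    dt = np.diff(t)
    return de / dt, dn / dt  # u (East positive), v (North positive)
```

Each returned pair (u, v) is the wind sampled in the layer traversed between two fixes, matching the sign convention of Tab. \[Table\_sondes\_prec\].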
The recommended maximum distance between stations is 250 km but the global distribution is very uneven and biased towards heavily populated areas in the Northern hemisphere. South America is sparsely covered, with Chile operating 4 stations, only one of which (Santo Domingo, WMO station number 85586) launches two radiosondes per day at 00 and 12 UTC. Data from all active launch sites can be found at http://weather.uwyo.edu/upperair/sounding.html. The WMO also defines the requirements in terms of equipment and procedures such as number of barometric pressure levels, etc. A number of different radiosondes from different manufacturers are used in the various countries. To ensure comparability the WMO regularly conducts cross-calibration campaigns with parallel measurements (Jauhiainen & Lehmuskero, 2005)[^3]. The Vaisala radiosonde RS-92 used in our campaign is considered to be the most reliable and accurate commercial device available. Its minor biases, in particular for day-time launches, are well documented [Jauhiainen & Lehmuskero, 2005, @Milosh2009]. The global snapshot of the state of the atmosphere, taken at 00:00 and 12:00 UT, is used as initial conditions of global meteorological numerical models (GFS(1), ECMWF(2), GME(3) among others[^4]). These initial conditions are employed in numerical approximations using dynamical equations, which predict the future state of the atmospheric circulation [@Holton2005]. The models are a simplification of the atmosphere because the horizontal resolution of the grid can be from 60 km (GME) to 100 km (GFS) - and sometimes more - and the vertical resolution provides only a very small number of layers, but on a global scale the results are very good and have improved considerably over the past decades. 
There are other models, called mesoscale models (MM5 (4), WRF(5), MesoNH(6), among others), which provide higher spatial resolution (horizontal and vertical), use more specific dynamical equations (physics parameterizations) and have a better resolution of the surface terrain. The initial conditions for these models are usually the global model augmented by some local weather stations. Details on the models are available from the sites mentioned above. Concerning the applicability of RS data to our purpose it is important to note that the radiosonde is the accepted standard in atmospheric and meteorological research. For global weather forecasting a distance of order 250 km between radiosonde launch stations is the desired but by no means always achieved standard, while the cadence is between 6 and 24 h. Hence, the radiosonde data set that we use for comparison with HARPS observations is well within the accepted limits of applicability in terms of spatial and time resolution. It is evident that local topography and diurnal variations may limit the value of a set of radiosonde data to smaller distances and shorter periods of time. To this end there is a very instructive analysis by [@Norbert-Kalthoff:2002cr] that is directly applicable to our case. They use the Karlsruhe Atmospheric mesoscale model (KAMM) and compare with wind measurements taken at stations around 30 degrees South in Chile, including the Cerro Tololo Interamerican Observatory (CTIO). La Silla (70$^{\circ}$44’4“5 W 29$^{\circ}$15’15”4 S) is located within that region, only about 100 km N of CTIO. [@Norbert-Kalthoff:2002cr] find that the wind patterns over this region are stable, their diurnal variations are highly reproducible and that wind conditions are mostly stable during night time. Their main finding is that for altitudes between 2 and 4 km northerly winds prevail whereas above 4 km large-scale westerly winds dominate. 
The reason for the Northerly wind is a deflection of westerly winds by the Cordillera de los Andes which forms a barrier. They provide a physical explanation (their section 4) in terms of the Froude number (ratio between inertial forces and buoyancy) demonstrating that this deflected northerly flow is a naturally stable phenomenon. As mentioned, La Silla is located in the same region and the wind roses of Cerro Tololo (2200 m) (their Fig. 7) and La Silla (2400m)[^5] are very similar, clearly showing a predominance of northerly wind. In particular, winter months (June-August) are characterized by very constant daily ground wind properties (Fig. 5 of the same paper). Our observations were made in May. In addition it is a well-established fact that wind conditions in the free atmosphere are much more stable than in the turbulent and highly variable ground layer [see e.g. @Holton2005; @Wallace2006]. As a consequence of the very homogeneous overall wind structure between 2 km and 4 km and above 4 km we have reason to believe that the information on the wind vectors obtained by a radiosonde will be representative of conditions over the time span of at least a good fraction of a night for our campaigns.   Parameter Unit Precision \[Unit\] Comment ------------- ------ -------------------- -------------------------------------- time s 0.1 measurements cadence of one every 2s T K 0.1 — P hPa 0.1 — Height m 1 limited to 30km u m/s 0.01 E-W wind component, East positive v m/s 0.01 N-S wind component, North positive \[Table\_sondes\_prec\] Analysis & Results ================== HARPS measurements ------------------ We analyzed the data from the 9 stars as if coming from a single data set, as there is no reason to treat them separately. As done in , we discarded the 27 datapoints with photon noise precision worse than 5m/s, which correspond to only 2.5% of the observations. The total RV scatter and average photon noise were 5.01m/s and 2.82m/s, respectively. 
If one separates the set in the two nights that constitute it, the values for the first night are of 5.36 and 2.92m/s, and 4.60 and 2.72m/s for the second night. We note that the photon noise contribution to the precision from the stellar spectrum is larger than 1m/s, validating the choice of not using the lamp simultaneously with the observations. We fitted the RV variation on the two nights using Eq.\[eq\_fit\], as described in . When fitting, we considered splitting the dataset in three different ways and making two different hypotheses for the parameters’ variation. On the splitting of the dataset we employed: 1) the same parameters for all the observations, 2) an independent set of parameters per night, and 3) a set of parameters for each one-third of the night. After allowing all parameters to vary freely at first, we repeated this imposing $\alpha$ and $\gamma$ to be the same for the whole dataset in 2) and 3). The resulting parameters, $\chi^2_{red}$, and scatter around the fit are presented in Tab. \[fitstats\] for each case. The error bars were estimated by bootstrapping the residuals and repeating the fitting 10000 times. The 95% confidence intervals were drawn from the distribution of the parameters, and the 1$\sigma$ uncertainty estimations are presented. [lccccccccc]{}    data set & $\#$obs & $\sigma$\[m/s\] & $\sigma_{(O-C)}$\[m/s\] & $\sigma_{ph}$\[m/s\] & $\chi^{2}_{red}$ & $\alpha$\[m/s\] & $\beta$\[m/s\] & $\gamma$\[m/s\] & $\delta$\[$^{o}$\]\ 08+09-05-2009 & 1093 & 5.01 & 4.14 & 2.82 & 2.20 & 7.79$_{-0.73}^{+0.76}$ & 8.47$_{-0.68}^{+0.76}$ & 220.56$_{-0.31}^{+0.05}$ & 126.10$_{-5.56}^{+4.41}$\ 08-05-2009 (1$^{st}$ n.) & 554 & 5.36 & 4.38 & 2.92 & 2.12 & 5.74$_{-1.81}^{+1.87}$ & 13.93$_{-2.61}^{+3.12}$ & 220.32$_{-0.79}^{+0.31}$ & 145.14$_{-13.00}^{+5.58}$\ 09-05-2009 ($2^{nd}$ n.) 
& 539 & 4.60 & 3.68 & 2.72 & 1.83 & 8.20$_{-0.81}^{+0.83}$ & 6.75$_{-0.50}^{+0.60}$ & 219.95$_{-0.04}^{+0.36}$ & 97.75$_{-7.91}^{+6.67}$\ (1$^{st}$ n., section1/3) & 185 & 3.46 & 2.89 & 1.88 & 2.25 & 8.97$_{-1.41}^{+1.41}$ & 4.44$_{-0.89}^{+3.31}$ & 223.58$_{-0.75}^{+0.97}$ & 42.17$_{-17.60}^{+47.29}$\ (1$^{st}$ n., section2/3) & 185 & 4.58 & 4.58 & 3.27 & 1.71 & 10.79$_{-17.98}^{+21.27}$ & 15.99$_{-8.00}^{+43.88}$ & 223.96$_{-4.91}^{+5.24}$ & 17.22$_{-96.66}^{+77.04}$\ (1$^{st}$ n., section3/3) & 184 & 6.26 & 4.51 & 3.60 & 1.53 & 29.77$_{-12.13}^{+11.86}$ & 74.86$_{-31.07}^{+41.90}$ & 230.09$_{-5.30}^{+5.25}$ & 4.28$_{-2.35}^{+63.12}$\ (2$^{nd}$ n., section1/3) & 180 & 4.80 & 3.30 & 2.52 & 1.73 & 15.50$_{-1.71}^{+1.70}$ & 15.98$_{-3.19}^{+4.50}$ & 223.51$_{-1.18}^{+1.28}$ & 24.96$_{-7.08}^{+18.08}$\ (2$^{nd}$ n., section2/3) & 180 & 4.50 & 3.74 & 2.84 & 1.75 & 8.76$_{-2.38}^{+2.47}$ & 3.61$_{-0.78}^{+5.17}$ & 220.10$_{-0.50}^{+0.91}$ & 90.87$_{-47.05}^{+31.05}$\ (2$^{nd}$ n., section3/3) & 180 & 3.86 & 3.48 & 2.82 & 1.58 & -15.06$_{-18.04}^{+18.00}$ & 7.78$_{-1.77}^{+2.77}$ & 220.61$_{-0.70}^{+0.74}$ & 66.26$_{-27.05}^{+17.98}$\ 08-05-2009 (1$^{st}$ n.)\* & 554 & 5.36 & 4.38 & 2.92 & 2.12$^\dagger$ & 6.85 $_{- 0.77}^{+ 0.76}$ & 13.57 $_{- 1.25}^{+ 1.31}$ & 220.19 $_{- 0.23}^{+ 0.13}$ & 143.17 $_{- 3.72}^{+ 2.94}$\ 09-05-2009 ($2^{nd}$ n.)\* & 539 & 4.60 & 3.68 & 2.72 & 1.83$^\dagger$ & 6.85 $_{- 0.77}^{+ 0.76}$ & 6.95 $_{- 0.60}^{+ 0.69}$ & 220.19 $_{- 0.23}^{+ 0.13}$ & 102.14 $_{- 8.25}^{+ 6.81}$\ global fit parameters & 1093 & 5.01 & 4.06 & 2.82 & 1.98 & — & — & — & —\ (1$^{st}$ n., section1/3)\* & 185 & 3.46 & 2.97 & 1.88 & 2.34$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 6.05 $_{- 1.41}^{+ 2.00}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 127.53 $_{- 23.16}^{+ 10.59}$\ (1$^{st}$ n., section2/3)\* & 185 & 4.58 & 4.62 & 3.27 & 1.73$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 14.38 $_{- 4.70}^{+ 8.65}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 131.27 $_{- 53.92}^{+ 7.37}$\ (1$^{st}$ n., 
section3/3)\* & 184 & 6.26 & 4.55 & 3.60 & 1.55$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 11.17 $_{- 0.55}^{+ 1.44}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 77.45 $_{- 15.42}^{+ 16.67}$\ (2$^{nd}$ n., section1/3)\* & 180 & 4.80 & 3.49 & 2.52 & 1.90$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 8.00 $_{- 0.73}^{+ 1.01}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 71.89 $_{- 13.51}^{+ 14.36}$\ (2$^{nd}$ n., section2/3)\* & 180 & 4.50 & 3.78 & 2.84 & 1.78$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 9.23 $_{- 2.92}^{+ 4.15}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 5.39 $_{- 7.01}^{+ 36.17}$\ (2$^{nd}$ n., section3/3)\* & 180 & 3.86 & 3.53 & 2.82 & 1.62$^\dagger$ & 8.81 $_{- 1.09}^{+ 1.09}$ & 13.47 $_{- 1.59}^{+ 1.82}$ & 221.39 $_{- 0.39}^{+ 0.42}$ & 95.93 $_{- 9.63}^{+ 8.41}$\ global fit parameters & 1093 & 5.01 & 3.87 & 2.82 & 1.81 & — & — & — & —\ Note that $\delta$=0$^{o}$ when wind direction points towards North, and positive eastwards. The error bars on each of the fitted parameters were drawn by bootstrapping the residuals (see text for details). Note that the $\chi^{2}_{red}$ marked with $^\dagger$ are not defined in the strict sense: they are calculated assuming 4 fitting parameters for the considered subset, with the objective of allowing comparison with the corresponding unconstrained fitting. While one might be tempted to compare the $\chi^2_{red}$ of the data as a way of quantifying the quality of the fit, there are several reasons not to do so. The first is that as one divides the data into subsets that are fitted independently, there is some ambiguity in how the $\chi^{2}_{red}$ of a set is compared with the combined $\chi^{2}_{red}$ of the subsets. However, more important is that we are considering a problem with [*priors*]{}, as the reader will realize when noting that $\beta \in$ \[0, $\infty$\[ . The consequence is that this corresponds to the fitting of a non-linear model, for which the number of degrees of freedom is ill-defined, as recently underlined by [@2010arXiv1012.3754A]. 
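The residual-bootstrap error estimation described above can be sketched on synthetic data. Note that Eq.\[eq\_fit\] is linear in the reparametrization $(\alpha,\ \beta\cos\delta,\ \beta\sin\delta,\ \gamma)$, since $\beta\cos\theta\cos(\phi-\delta) = \cos\theta\,(\beta\cos\delta\cos\phi + \beta\sin\delta\sin\phi)$, so each refit reduces to ordinary least squares and $\beta \geq 0$ holds by construction. This is our own minimal illustration, not the code used for Tab. \[fitstats\], and all numbers below are synthetic.

```python
import numpy as np

def fit_telluric(theta, phi, rv):
    """Least-squares fit of the telluric RV model via the linear
    reparametrization (alpha, a, b, gamma) with a = beta*cos(delta),
    b = beta*sin(delta). Angles in radians.
    Returns (alpha, beta, gamma, delta)."""
    X = np.column_stack([1.0 / np.sin(theta) - 1.0,
                         np.cos(theta) * np.cos(phi),
                         np.cos(theta) * np.sin(phi),
                         np.ones_like(theta)])
    alpha, a, b, gamma = np.linalg.lstsq(X, rv, rcond=None)[0]
    return alpha, np.hypot(a, b), gamma, np.arctan2(b, a)

# synthetic data set mimicking one night of observations
rng = np.random.default_rng(1)
n = 300
theta = rng.uniform(np.radians(30), np.radians(85), n)
phi = rng.uniform(np.radians(100), np.radians(250), n)
alpha0, beta0, gamma0, delta0 = 8.0, 10.0, 220.0, np.radians(120)
rv = (alpha0 * (1 / np.sin(theta) - 1)
      + beta0 * np.cos(theta) * np.cos(phi - delta0) + gamma0
      + rng.normal(0.0, 3.0, n))

fit = fit_telluric(theta, phi, rv)
model = (fit[0] * (1 / np.sin(theta) - 1)
         + fit[1] * np.cos(theta) * np.cos(phi - fit[3]) + fit[2])
resid = rv - model

# bootstrap: redraw the residuals with replacement, add them back to the
# best-fit model, refit, and take percentiles of the parameter distribution
boot = np.array([fit_telluric(theta, phi, model + rng.choice(resid, n))
                 for _ in range(1000)])               # the paper uses 10000
lo, hi = np.percentile(boot, [15.87, 84.13], axis=0)  # 1-sigma interval
```

The asymmetric error bars quoted in Tab. \[fitstats\] correspond to such percentile intervals of the bootstrapped parameter distribution.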
In order to compare the quality of the data description by the different models, we follow the recommendations of the same authors. We calculate the probability that the normalized residuals of the fitting are drawn from a gaussian distribution with $\mu$=0 and $\sigma$=1, as expected if no signal is present and the scatter is dominated by the measurement uncertainty. To do so we use the Kolmogorov-Smirnov test [as implemented in @1992nrfa.book.....P] and compute the probability $P_{KS}$ which, loosely speaking, corresponds to the probability that the residuals after fitting the model are drawn from a gaussian distribution. The larger the value of $P_{KS}$, the more appropriate the model is to describe the data-set at hand. We also calculated the probability for the normalized RVs of each dataset from which only the average value was subtracted, which corresponds to fitting only a constant ($P_{KS}$(const. fit)), and for the raw RVs without any fit ($P_{KS}$(no fit)). The probability for each case on each data set is presented in Tab.\[KS\_prob\].   data set P$_{KS}$ P$_{KS}$(const. fit) P$_{KS}$(no fit) --------------------------- ---------- ---------------------- ------------------ 08+09-05-2009 1.47e-12 — 4.32e-25 08-05-2009 (1$^{st}$ n.) 2.10e-06 8.62e-06 6.28e-19 09-05-2009 ($2^{nd}$ n.) 9.29e-05 4.44e-04 2.62e-10 (1$^{st}$ n., section1/3) 1.36e-01 4.01e-03 1.22e-05 (1$^{st}$ n., section2/3) 4.68e-02 5.06e-02 3.02e-02 (1$^{st}$ n., section3/3) 1.46e-01 4.22e-02 1.07e-05 (2$^{nd}$ n., section1/3) 1.81e-01 3.31e-02 5.79e-08 (2$^{nd}$ n., section2/3) 1.22e-01 3.41e-01 4.66e-03 (2$^{nd}$ n., section3/3) 1.84e-01 7.93e-02 5.33e-02 Radiosonde measurements ------------------------ The measurement of the radiosonde wind vector ($u,v$) as a function of time, or height, while interesting, is hardly insightful for our objective. We need to calculate the effect of this wind as integrated along the line of sight, as it is measured by any telescope and spectrograph on the ground. 
This will deliver an average wind vector which can then be compared with the one obtained with HARPS (see the previous section). A way of calculating this average wind is to consider a plane-parallel atmosphere composed of horizontal layers. Every radiosonde measurement probes the properties of a layer in its ascent. To obtain the average wind speed we weight the wind speed of each of these layers with its absorptivity. In doing so we are considering that the absorption line we measure with our spectrograph is the result of the product of the transmission of all layers, and that each one of these creates a small line shifted by its respective horizontal wind. It is important to note that we chose to do so because absorptivity is proportional to the depth of the line at the central wavelength, and thus proportional to the spectral information contribution for the CCF as described in . The absorptivity of each layer is $A_i = 1 - e^{-\tau}$, where $\tau$ is the optical depth, calculated as $\tau = I(T) \times Amplitude_{Lorentz} \times \sigma_{\mathrm{O}_2}(T,P) \times \Delta h$ where $I$ is the spectral line intensity, $Amplitude_{Lorentz}$ the relative amplitude of a Lorentzian function, $\sigma_{\mathrm{O}_2}$ the surface density of O$_2$, and $\Delta h$ the height of the layer in question. The first component of $\tau$ is $I$, the spectral line intensity (basically, the line area), which is given in \[cm$^{-1}$ / (molecule.cm$^{-2}$)\] in HITRAN. Since $I$ is a function of $T$, we calculated a grid of HITRAN $I$ from the minimum to the maximum temperature measured by the radiosondes, with a step of 0.1K, for all the O$_2$ lines within the HARPS wavelength domain. For each temperature an average $I$ was assigned to the overall spectrum. This gives us $I(T)$, and to obtain values for $T$ in between two grid points we fitted second-degree polynomials, which provided a very smooth description of the data. 
Interpolating the values provided the same wind values down to 0.01m/s. In order to derive the line depth, one has to apply a correction to get the amplitude of the Lorentzian function that has the equivalent area, given by 1.0/($\pi \times HWHM$). The HWHM was set to 1.0, but its absolute value does not affect the results significantly, for it affects all layers in the same way. Subsequent tests showed that changing it from 0.1 to 10 led to variations of the order of 0.01m/s and 0.01$^{\circ}$ in wind magnitude and direction, much smaller than the error bars of the measurements. The surface density $\sigma_{\mathrm{O}_2}(T,P)$ was calculated using the ideal gas law and assuming a constant volume mixing ratio (VMR) of O$_2$ of 20.946% as a function of height, which is a reasonable assumption up to 80km, hence well justified in the range of interest of up to 30km. With this we calculated the weighted average of the velocities, and a weighted standard deviation as well. The error on the average is estimated as being the weighted standard deviation. To allow comparison with the results from the previous section, one can calculate the vector magnitude and direction. The results are presented in Tab. \[Table\_radiosondes\] and plotted in Fig. \[windplot\].    
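A minimal sketch of the absorptivity-weighted averaging described above, using the direction convention of Tab. \[Table\_radiosondes\] ($\delta = 0^{\circ}$ when the wind points towards North, positive eastwards); the function name and interface are our own.

```python
import numpy as np

def weighted_wind(u, v, absorptivity):
    """Absorptivity-weighted average of per-layer wind components and the
    weighted standard deviations used here as uncertainties.

    u, v         : E-W (East positive) and N-S (North positive) components, m/s.
    absorptivity : per-layer weights A_i = 1 - exp(-tau_i).
    Returns (u_mean, v_mean, u_std, v_std, magnitude, direction_deg),
    with direction 0 deg towards North and positive eastwards.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    w = np.asarray(absorptivity, float)
    w = w / w.sum()                       # normalize the weights
    u_m, v_m = np.sum(w * u), np.sum(w * v)
    u_s = np.sqrt(np.sum(w * (u - u_m) ** 2))
    v_s = np.sqrt(np.sum(w * (v - v_m) ** 2))
    return (u_m, v_m, u_s, v_s,
            np.hypot(u_m, v_m), np.degrees(np.arctan2(u_m, v_m)))
```

A purely northward wind ($u=0$, $v>0$) yields a direction of $0^{\circ}$, consistent with the table note.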
\# probe Observation date and hour $u$\[m/s\] $v$\[m/s\] $\|u + v\|$\[m/s\] $\delta\,[^{\circ}]$ ------------- ------------------------------ --------------------- ------------------- -------------------- ---------------------- 1 EDT / 05 / 05 / 09 / 1200UTC $-$9.43 $\pm$ 5.71 7.91 $\pm$ 10.95 12.31 $\pm$ 8.28 $-$50.01 $\pm$ 42.62 2 EDT / 06 / 05 / 09 / 1200UTC $-$11.10 $\pm$ 5.92 5.48 $\pm$ 8.05 12.38 $\pm$ 6.39 $-$63.71 $\pm$ 35.54 3 EDT / 07 / 05 / 09 / 0600UTC $-$9.00 $\pm$ 7.53 2.40 $\pm$ 7.30 9.31 $\pm$ 7.52 $-$75.07 $\pm$ 45.00 4 EDT / 08 / 05 / 09 / 0000UTC 3.44 $\pm$ 8.55 4.09 $\pm$ 10.27 5.35 $\pm$ 9.60 40.03 $\pm$ 99.69 5 EDT / 08 / 05 / 09 / 0600UTC $-$0.58 $\pm$ 7.76 7.22 $\pm$ 9.54 7.25 $\pm$ 9.53 $-$4.58 $\pm$ 61.42 6 EDT / 09 / 05 / 09 / 0000UTC $-$3.43 $\pm$ 6.96 8.61 $\pm$ 8.40 9.27 $\pm$ 8.22 $-$21.70 $\pm$ 44.49 7 EDT / 09 / 05 / 09 / 0600UTC $-$2.50 $\pm$ 7.46 10.85 $\pm$ 8.55 11.13 $\pm$ 8.50 $-$12.97 $\pm$ 38.69 8 EDT / 09 / 05 / 09 / 1200UTC $-$3.34 $\pm$ 4.82 16.05 $\pm$ 11.78 16.40 $\pm$ 11.57 $-$11.75 $\pm$ 18.49 9 EDT / 10 / 05 / 09 / 0000UTC 5.16 $\pm$ 8.98 9.81 $\pm$ 9.48 11.09 $\pm$ 9.37 27.74 $\pm$ 46.99 10 EDT / 10 / 05 / 09 / 0600UTC 3.12 $\pm$ 6.95 7.88 $\pm$ 7.33 8.48 $\pm$ 7.28 21.58 $\pm$ 47.31 11 EDT / 11 / 05 / 09 / 0000UTC 3.24 $\pm$ 3.54 10.67 $\pm$ 7.00 11.15 $\pm$ 6.78 16.89 $\pm$ 20.30 12 EDT / 11 / 05 / 09 / 0600UTC 2.76 $\pm$ 3.34 10.20 $\pm$ 6.31 10.57 $\pm$ 6.16 15.14 $\pm$ 19.63 13 EDT / 12 / 05 / 09 / 0000UTC 3.63 $\pm$ 3.46 11.27 $\pm$ 6.62 11.84 $\pm$ 6.39 17.83 $\pm$ 18.70 14 EDT / 14 / 05 / 09 / 0000UTC 14.69 $\pm$ 10.42 12.54 $\pm$ 7.69 19.31 $\pm$ 9.37 49.53 $\pm$ 26.52 15 EDT / 14 / 05 / 09 / 1200UTC 11.00 $\pm$ 7.69 8.09 $\pm$ 5.96 13.66 $\pm$ 7.13 53.69 $\pm$ 27.77 16 EDT / 15 / 05 / 09 / 0000UTC 10.66 $\pm$ 8.44 6.66 $\pm$ 4.69 12.57 $\pm$ 7.57 57.98 $\pm$ 27.27 17 EDT / 15 / 05 / 09 / 0600UTC 10.32 $\pm$ 8.41 8.84 $\pm$ 6.49 13.59 $\pm$ 7.66 49.43 $\pm$ 31.05 Note that $\delta$=0$^{o}$ when wind 
direction points towards North, and positive eastwards. ![Evolution of average wind magnitude ([*upper panel*]{}) and direction ([*lower panel*]{}), as measured by the radiosondes. The two nights during which observations with HARPS were performed are represented by shadowed, colored zones. The values are presented in Tab. \[Table\_radiosondes\].[]{data-label="windplot"}](radiosondes_wind_new){width="9cm"} Discussion ========== The RV data ----------- The first point to note when it comes to the results of Tab.\[fitstats\] is the low value of the r.m.s. of RVs over the two nights: 5m/s, less than twice the average photon noise value. As one selects smaller sets of data, first individual nights and then subsets of these nights, one obtains different fitted parameters. This suggests that the parameters are variable on a timescale smaller than one night. The higher probability $P_{KS}$ obtained for the short-timescale datasets attests that the model describes the data better when fitted on smaller timescales. The fitting performed imposing the same $\alpha$ and $\gamma$ provides results of similar quality: better for the complete-night fitting, poorer if the nights are divided into subsections. Moreover, when comparing the fitted data with the raw data, one concludes that fitting the data with the model leads to residuals that are closer to gaussian than those obtained by subtracting a constant. This means that the model description, even when less precise, still captures a fraction of the signal contained in the data and is always preferable to using the raw data. It is insightful to compare the data with that of . The fitted $\alpha$ and $\beta$ are comparable, and tend to be even lower, while $\gamma$ is similar in value. Importantly, $\delta$ varies significantly from one data-set to the other, just like it varied between the data sets considered in the previous paper. 
It is important to note that the error bars on some measurements are rather large, and the discrepancy between some consecutive measurements can be explained by this alone. In particular, the last two-thirds of the first night were affected by a high photon noise contribution, and these two sets yield the fits with the largest and most asymmetric error bars and largest residuals for the unconstrained fit. However, the scatter is already smaller than twice the photon noise, with or without subtracting the fit. While the RV data were obtained with a different scientific objective than the one presented here (that of determining the PWV content in the atmosphere), the observations were still done in order to sample as many different patches of the sky as possible. However, it is extremely difficult to find suitable stars in all directions and thus sample evenly ($\theta$, $\phi$), our independent variables. We succeeded in obtaining observations for elevation $\theta \in$ \[30, 85\] $^o$ for both nights, but most observations were taken between azimuth angles $\phi \in$ \[100, 250\] $^o$, which might limit the accuracy with which the wind direction can be determined. When one looks at the distribution of ($\theta$, $\phi$) as a function of time (Fig.\[altaz\]) one concludes that the uneven coverage of the independent variables can limit the performance of the fit. ![The distribution of elevation and azimuth ($\theta$, $\phi$) as a function of time, for the two observation nights. The three slices used for independent fitting are identified as shadowed regions with different colors.[]{data-label="altaz"}](altaz){width="9cm"} However, the main question that remains is whether the model can be used imposing the same $\alpha$ and $\gamma$ for a given set of observations. While the results for the constrained fit are slightly worse, they do not allow us to reach a firm conclusion with regard to this aspect. 
The radiosonde data
-------------------

The data from the radiosondes provide some interesting clues about the behavior of the atmosphere. The calculation of the average horizontal wind was affected by large uncertainties, a consequence of the complex and variable structure of winds as one travels across the atmosphere. However, the average wind magnitude is remarkably low, being between 5 and 16m/s. The values close in time are in agreement within error bars, in both magnitude and direction. Note, in particular, the values obtained using probes \#6, 7, and 8, released 6 hours apart and perfectly compatible within their assigned uncertainty. It is interesting to point out that the measurement yielding the largest uncertainty on wind direction is the one with the smallest wind magnitude value, as expected. The wind direction measurements also suggest a slow variation of the wind direction with time.

Comparing RV and radiosonde data
--------------------------------

In order to compare the two datasets we computed the time center of the observations for each block and selected the probe measurement that was closest in time to it. Table \[comparison\] displays the data in a way that allows an easy comparison of the average wind vector magnitude and direction, and the difference between the quantities obtained with the two different methods is presented in Fig.\[comparison\_plot\]. We considered for this purpose the unconstrained fit values, for they show higher $P_{KS}$. In what concerns wind magnitude, the values from the probes agree with the fitted values from RV data, for all datasets. The only outlier is the third section of the first night, which presents a very large value of $\beta$ and strongly asymmetric error bars. As discussed before, the corresponding HARPS dataset has the largest associated photon noise contribution and very poor azimuth coverage (as can be seen in Fig.\[altaz\]), which can explain the lower quality of the fit.
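The probe-to-block matching described above (pairing the time center of each observation block with the probe launch closest in time) is straightforward to implement; a sketch with invented timestamps, expressed in hours since an arbitrary origin:

```python
import numpy as np

# Hypothetical launch times of the probes and time centers of the
# RV observation blocks; all values are invented for illustration.
probe_times = np.array([0.0, 6.0, 12.0, 18.0, 24.0])
block_centers = np.array([4.0, 8.5, 13.0, 22.0])

# For each block, pick the probe whose launch time is closest.
idx = np.abs(block_centers[:, None] - probe_times[None, :]).argmin(axis=1)
for t, i in zip(block_centers, idx):
    print(f"block at t={t:.1f} h -> probe index {i} "
          f"(dt={abs(t - probe_times[i]):.1f} h)")
```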
In terms of wind direction, the values concerning the fit of a subsection of the night agree with those derived from radiosonde measurements; those on a longer timescale do not. The most straightforward interpretation is that the constant horizontal wind hypothesis does not hold for large timescales. This is not surprising since wind vectors are variable over time. In other words, the fit provides a better description of the data than no fit, and the residuals are closer to Gaussianity than the raw data, as seen, but the direction has no physical correspondence. However, and as stated before, one has to note that the $\sigma$ and $\sigma_{(O-C)}$ values are already quite close to the $\sigma_{ph}$ level, and that the ratio between any of the former and the latter is smaller than 2. Such ratio values are smaller than those obtained in , and we cannot discard the possibility that we might be approaching the limit of extractable information from the current dataset. [cccccc]{}    data set & $\beta$\[m/s\] & $\delta$\[$^{o}$\] & \# probe (time distance) & $\|u + v\|$\[m/s\] & $\delta\,[^{\circ}]$\ 08+09-05-2009 & 8.47$_{-0.68}^{+0.76}$ & 126.10$_{-5.56}^{+4.41}$ & 8 (4h) & 16.40 $\pm$ 11.57 & -11.75 $\pm$ 18.49\ 08-05-2009 (1$^{st}$ n.) & 13.93$_{-2.61}^{+3.12}$ & 145.14$_{-13.00}^{+5.58}$ & 7 (1h) & 11.13 $\pm$ 8.50 & -12.97 $\pm$ 38.69\ 09-05-2009 ($2^{nd}$ n.)
& 6.75$_{-0.50}^{+0.60}$ & 97.75$_{-7.91}^{+6.67}$ & 10 (1.5h) & 8.48 $\pm$ 7.28 & 21.58 $\pm$ 47.31\ (1$^{st}$ n., section1/3) & 4.44$_{-0.89}^{+3.31}$ & 42.17$_{-17.60}^{+47.29}$ & 6 (1.5h) & 9.27 $\pm$ 8.22 & -21.70 $\pm$ 44.49\ (1$^{st}$ n., section2/3) & 15.99$_{-8.00}^{+43.88}$ & 17.22$_{-96.66}^{+77.04}$ & 7 (0.5h) & 11.13 $\pm$ 8.50 & -12.97 $\pm$ 38.69\ (1$^{st}$ n., section3/3) & 74.86$_{-31.07}^{+41.90}$ & 4.28$_{-2.35}^{+63.12}$ & 7 (2.5h) & 11.13 $\pm$ 8.50 & -12.97 $\pm$ 38.69\ (2$^{nd}$ n., section1/3) & 15.98$_{-3.19}^{+4.50}$ & 24.96$_{-7.08}^{+18.08}$ & 9 (1h) & 11.09 $\pm$ 9.37 & 27.74 $\pm$ 46.99\ (2$^{nd}$ n., section2/3) & 3.61$_{-0.78}^{+5.17}$ & 90.87$_{-47.05}^{+31.05}$ & 10 (1h) & 8.48 $\pm$ 7.28 & 21.58 $\pm$ 47.31\ (2$^{nd}$ n., section3/3) & 7.78$_{-1.77}^{+2.77}$ & 66.26$_{-27.05}^{+17.98}$ & 10 (2h) & 8.48 $\pm$ 7.28 & 21.58 $\pm$ 47.31\ Note that $\delta$=0$^{o}$ when wind direction points towards North, and positive eastwards. ![Difference between the average wind magnitude ([*upper panel*]{}) and direction ([*lower panel*]{}) as measured by the two different methods. The values are presented in Tab. \[comparison\]. The full dataset fit values are coded in red, those corresponding to single-night sets are coded in green, and the night subdivisions are coded in blue ([*electronic version only*]{}).[]{data-label="comparison_plot"}](comparison){width="9cm"} It is arguable that the model might be over-simplistic, and the small number of parameters and observables might fundamentally limit the RV signal it can reproduce. Some leads point in this direction, and we followed these to propose and test alternative models. However, no improvement was found relative to the basic model presented before. We present the results of this rather more exploratory digression in Appendix A. Perfect agreement between the HARPS observations and the radiosonde data cannot be expected for a number of reasons, listed below.
The HARPS spectrum samples a pencil beam through the atmosphere when the star is being tracked, while the radiosonde performs in-situ measurements along its trajectory governed by the prevailing winds. Another drawback is that the atmosphere is only sampled up to an altitude of about 20km; however, at this altitude the density of O$_2$ is ten times lower than at the top of the mountain, so the weight $A_i$ is ten times smaller. Finally, the radiosonde is expected to oscillate like a pendulum in its ascent, introducing a signal in the measured RV which is not rooted in the wind vector it is intended to probe. In spite of all these limitations, the two data sets agree and provide a coherent picture of the atmospheric impact on RV variation, down to better than 5m/s and less than twice the estimated photon noise contribution on the O$_2$-line RV measurement. A quantitative assessment of the stability of atmospheric absorption lines as presented here is of very practical value for astronomy. Telluric absorption features are imprinted on observations with astronomical spectrographs over a wide wavelength range, particularly in the infrared. On the one hand, this constitutes a complication, since the features overlay the spectrum of the astronomical target, leading to blends and line shifts. Hence, all observations aiming at a high spectral fidelity in certain regions of the infra-red need to correct for atmospheric transmission, and not necessarily only in the context of RV measurements [e.g. @2010SPIE.7735E.237U]. With a comprehensive characterization of the atmospheric stability, one can assess for the first time the impact of considering atmospheric lines to be at rest or characterized by a constant speed over a given period of time. Their stability (or lack thereof) can explain a fraction of the residuals obtained today when fitting the atmospheric transmission with a forward model, which yields residuals at the level of a few %.
On the other hand, the telluric features are also used for wavelength calibration, again in particular in the infra-red, where technical calibration sources are less common. For physical reasons, atomic spectra emitted by lamps show fewer lines, and a more uneven line distribution, in the IR than in the optical. The Th-Ar hollow cathode lamp is the only source whose spectrum has been fully characterized in the IR [@2008ApJS..178..374K], and which is being used on CRIRES, but there are limitations in line density and wavelength coverage. Gas cells usually also cover only a limited wavelength range. As a result, telluric absorption lines are an attractive alternative in parts of the IR. Based on the results presented here, the stability of the atmosphere will easily support low- and medium-resolution spectroscopy, while for high-resolution and high-precision work in particular caution has to be applied. The actual wind velocity vector, and its variation, during the astronomical observations of course remains unknown without independent measurements. Hence, it is not possible to derive proper error bars for a quantitative analysis down to the m/s level unless a full analysis following the method described in is performed.

Conclusions
===========

We used HARPS to monitor the RV variation of O$_2$ lines in the optical wavelength domain. We compared the fitting of a model as described in and the obtained parameters with those delivered by contemporaneous radiosondes. The two approaches deliver the same results in what concerns wind magnitude, and agree on wind direction when the fitting is done in chunks of a couple of hours. The large uncertainty bars on the values obtained from radiosondes are likely to be a consequence of a complex wind structure as a function of height, a fact that weakens the applicability of the assumption of a constant horizontal wind.
We cannot conclude whether the $\alpha$ and $\gamma$ parameters should be constant as a function of time or not, or whether a cross-term between them should be included. However, when these are fixed, the wind direction does not agree with that extracted from the radiosonde, which suggests that the model might be incomplete at this level. We tested two different alternative models that tried to address this possible incompleteness of the physical description, but the results were poorer than with the base model. Statistical tests showed that the base model provides a good description of the data on all timescales, being always preferable to not fitting any atmospheric variation, and that the smaller the timescale on which it can be performed (down to a couple of hours), the better the description of the real physical parameters. It is important to note that it is for the datasets with higher $P_{KS}$ that the wind parameters derived from RV are compatible with those extracted from the radiosonde measurements. Thus, even though the model presented in can probably still be refined, the agreement is proven down to better than 5m/s and less than twice the estimated photon noise contribution on the O$_2$-line RV measurement.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported by the European Research Council/European Community under the FP7 through Starting Grant agreement number 239953, as well as by Fundação para a Ciência e a Tecnologia (FCT) in the form of grant reference PTDC/CTE-AST/098528/2008. NCS would further like to thank FCT through program Ciência2007 funded by FCT/MCTES (Portugal) and POPH/FSE (EC). This work has been funded by the E-ELT project in the context of site characterization. The measurements have been made possible by the coordinated effort of the PWV project team and the observatory site hosting it.
We would like to thank all the technical staff, astronomers and telescope operators at La Silla who have helped us in setting up equipment, operating instruments and supporting parallel observations. We thank the Directors of La Silla Paranal observatory (A. Kaufer, M. Sterzik, U. Weilenmann) for accommodating such a demanding project in the operational environment of the observatory and for granting technical time. We particularly thank the ESO chief representative in Chile, M. Tarenghi, for his support. It is a pleasure to thank the Chilean Direction General de Aeronautica Civil (in particular J. Sanchez) for the helpful collaboration and for reserving airspace around the observatories to ensure a safe environment for launches of radiosonde balloons. Special thanks to the members of the Astrometeorological Group at the Universidad de Valparaiso who supported the radiosonde launches.

Improving the model
===================

The global picture obtained by analyzing the datasets first separately and then together allows one to raise some interesting questions; within these questions lies the potential for improving the model. The first point to note is the impact on the error bars of the fitted parameters when splitting the RV data into subsets for the fitting. When separating the data set into two nights, the average error bars for each parameter increase by a factor which can be slightly in excess of $\sqrt{2}$ (depending on the case, for some the increase is much more modest), but when the nights are divided into subsets, the increase in the error bars exceeds that expected from the reduction of the number of data points used for the fit. In particular, the relative increase for the $\delta$ error bars is much larger than that for the other parameters. Another interesting point then comes into view: the second and third sections of the first night show comparatively large error bars for the four parameters.
However, this point is to be treated with care because, as discussed, the photon noise was higher and the azimuth coverage not as complete as for the other datasets. These two elements point towards a cross-talk between the model parameters. Given the simplicity of the model and large number of data points available, it is more likely that this behavior comes from fitting an overly simple model rather than from a lack of conditioning. In addition, a poorer match between the wind directions delivered by the two methods for the constrained fit suggests that $\alpha$ and $\gamma$ might not be constant for the datasets at hand. This is particularly clear for $\alpha$, while variations in $\gamma$ amount to only a couple of m/s. Such a behavior can be explained by a correlation between $\alpha$ and a parameter which represents a quantity expected to change with time. It can also be explained if this coefficient has an intrinsic dependence on time. And, naturally, it can also be explained by a dependence of the model parameters – or even the RV itself – on a single (unrepresented) quantity. The most important hint is probably the high variability of $\delta$ and the large error bars on its determination: this suggests that either the variation associated with this coefficient is defined in an incomplete fashion, or that some other quantity has a similar functional dependence on the parameter which the $\delta$ variation tries to accommodate. When this information is put together, one concludes that the most likely improvement to the model is to make the impact of airmass on RV depend additionally on the direction of the wind. This is not completely unexpected as a consequence of the chosen model parametrization. In we had already suggested that if the atmosphere has a complex vertical wind structure which cannot be represented by a single average wind value, $\alpha$ might not be considered constant.
This is because an increased broadening of the CCF (due to the span of velocities that displace the absorber) will change the correlation coefficient between the broadening and the impact on the RV. As a consequence, it will change the coefficient between airmass and RV, our $\alpha$. To fully characterize the impact of this wind-broadening contribution to the $\alpha$ coefficient is extremely difficult and requires a line-formation model of the atmosphere, which is beyond the scope of this work. However, we can propose a refinement of our model in order to include this effect, and we test it tentatively in two alternative parametrizations to Eq.\[eq\_fit\]: $$\Omega' = \left[ \alpha \left(\frac{1}{\sin\theta} - 1\right) + \beta \right] \cos\theta \cdot \cos(\phi - \delta) + \gamma \label{eq_fit_alt1}$$ $$\Omega'' = \left[ \alpha \left(\frac{1}{\sin\theta} - 1\right) + \beta \cos\theta \right] \cos(\phi - \delta) + \gamma \label{eq_fit_alt2}$$ In Eq.\[eq\_fit\_alt1\] we consider $\alpha$ to be dependent on the collinearity with the wind direction. This is expected to be the case if there is a scatter of velocity around the central velocity $\beta$. In this parametrization $\alpha$ contains two components: the dependence on airmass and the broadening created by the scatter in velocity associated with it. In Eq. \[eq\_fit\_alt2\] we consider a variation on this assumption in which only the wind direction (and not the projection of this direction along the line of sight) has an impact on the measured RV. The fitted parameters and quantities associated with each dataset are presented in Tab. \[fitstats\_alt\]. The results of the application of the Kolmogorov-Smirnov test and the P$_{KS}$ derived for the two cases are presented in Tab. \[KS\_prob\_alt\].
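A least-squares fit of a parametrization of this kind can be sketched as follows; the ($\theta$, $\phi$) coverage, parameter values, noise level and starting guesses are all invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def omega_prime(X, alpha, beta, gamma, delta):
    """First alternative parametrization: the airmass term shares the
    geometric wind-projection factor cos(theta)*cos(phi - delta)."""
    theta, phi = X
    return (alpha * (1.0 / np.sin(theta) - 1.0) + beta) \
        * np.cos(theta) * np.cos(phi - delta) + gamma

# Synthetic sky coverage mimicking the ranges quoted in the text,
# and noisy "RV" data generated from assumed parameters.
rng = np.random.default_rng(1)
theta = rng.uniform(np.radians(30), np.radians(85), 200)
phi = rng.uniform(np.radians(100), np.radians(250), 200)
true = (15.0, 8.0, 220.0, np.radians(120.0))
rv = omega_prime((theta, phi), *true) + rng.normal(0.0, 3.0, theta.size)

popt, pcov = curve_fit(omega_prime, (theta, phi), rv, p0=(10, 5, 215, 2.0))
print("fitted (alpha, beta, gamma, delta_deg):",
      popt[0], popt[1], popt[2], np.degrees(popt[3]))
```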
[lccccccccc]{}    data set & $\#$obs & $\sigma$\[m/s\] & $\sigma_{(O-C)}$\[m/s\] & $\sigma_{ph}$\[m/s\] & $\chi^{2}_{red}$ & $\alpha$\[m/s\] & $\beta$\[m/s\] & $\gamma$\[m/s\] & $\delta$\[$^{o}$\]\ 08+09-05-2009 & 1093 & 5.01 & 4.25 & 2.82 & 2.35 & 22.67$_{-5.58}^{+5.34}$ & 7.16$_{-1.93}^{+1.98}$ & 220.51$_{-0.49}^{+0.19}$ & 151.66$_{-2.26}^{+2.25}$\ 08-05-2009 & 554 & 5.36 & 4.39 & 2.92 & 2.16 & 12.39$_{-7.81}^{+7.62}$ & 15.41$_{-4.31}^{+4.62}$ & 219.89$_{-1.07}^{+0.30}$& 156.35$_{-3.04}^{+3.30}$\ 09-05-2009 & 539 & 4.60 & 3.86 & 2.72 & 2.09 & 26.96$_{-9.82}^{+7.52}$ & 3.91$_{-2.38}^{+2.83}$ & 219.94$_{-0.01}^{+0.84}$ & 145.13$_{-4.94}^{+3.29}$\ (1$^{st}$ n., section1/3) & 185 & 3.46 & 2.95 & 1.88 & 2.41 & 18.02$_{-8.11}^{+4.62}$ & 2.47$_{-2.47}^{+5.51}$ & 222.22$_{-0.75}^{+1.51}$ & 159.31$_{-3.64}^{+20.49}$\ (1$^{st}$ n., section2/3) & 185 & 4.58 & 4.58 & 3.27 & 1.71 & -9.49$_{-211.15}^{+194.11}$ & 8.30$_{-8.30}^{+97.43}$ & 222.94$_{-13.76}^{+3.62}$ & 24.38$_{-15.99}^{+306.33}$\ (1$^{st}$ n., section3/3) & 184 & 6.26 & 4.52 & 3.60 & 1.54 & -25.62$_{-129.03}^{+33.19}$ & 24.41$_{-24.41}^{+34.99}$ & 225.04$_{-9.71}^{+4.60}$& 27.60$_{-18.90}^{+151.36}$\ (2$^{nd}$ n., section1/3) & 180 & 4.80 & 3.87 & 2.52 & 2.36 & 18.43$_{-8.38}^{+7.45}$ & 16.77$_{-16.77}^{+5.29}$ & 216.41$_{-0.85}^{+4.34}$ & 152.53$_{-2.60}^{+18.40}$\ (2$^{nd}$ n., section2/3) & 180 & 4.50 & 3.74 & 2.84 & 1.78 & 11.72$_{-75.87}^{+92.59}$ & 2.96$_{-2.96}^{+5.99}$ & 220.56$_{-0.51}^{+1.49}$ & 71.21$_{-55.11}^{+116.77}$\ (2$^{nd}$ n., section3/3) & 180 & 3.86 & 3.85 & 2.82 & 1.91 & -11.42$_{-154.65}^{+140.58}$ & 0.00$_{-0.00}^{+7.31}$ & 219.11$_{-0.85}^{+1.74}$ & 179.39$_{-107.80}^{+58.91}$\ 08+09-05-2009 & 1093 & 5.01 & 4.26 & 2.82 & 2.37 & 20.23$_{-5.12}^{+4.91}$ & 4.95$_{-2.36}^{+2.37}$ & 220.53$_{-0.48}^{+0.20}$ & 151.58$_{-2.38}^{+2.26}$\ 08-05-2009 & 554 & 5.36 & 4.39 & 2.92 & 2.18 & 10.54$_{-7.39}^{+6.98}$ & 14.53$_{-4.85}^{+5.59}$ & 219.87$_{-1.11}^{+0.31}$ &156.45$_{-3.14}^{+3.30}$\ 09-05-2009 
& 539 & 4.60 & 3.86 & 2.72 & 2.09 & 23.80$_{-8.87}^{+5.73}$ & 1.40$_{-1.40}^{+3.67}$ & 219.97$_{-0.01}^{+0.84}$ & 144.63$_{-5.13}^{+3.15}$\ (1$^{st}$ n., section1/3) & 185 & 3.46 & 2.98 & 1.88 & 2.48 & 16.13$_{-6.94}^{+3.51}$ & 0.68$_{-0.68}^{+6.20}$ & 222.22$_{-0.84}^{+1.27}$ & 159.69$_{-3.70}^{+15.13}$\ (1$^{st}$ n., section2/3) & 185 & 4.58 & 4.59 & 3.27 & 1.71 & -48.80$_{-164.41}^{+148.22}$ & 16.10$_{-16.10}^{+104.09}$ & 220.82$_{-12.73}^{+7.64}$ & 153.95$_{-146.59}^{+173.86}$\ (1$^{st}$ n., section3/3) & 184 & 6.26 & 4.52 & 3.60 & 1.54 & -25.90$_{-84.33}^{+32.05}$ & 29.80$_{-29.80}^{+39.14}$ & 225.28$_{-10.44}^{+3.96}$ & 25.39$_{-15.85}^{+152.00}$\ (2$^{nd}$ n., section1/3) & 180 & 4.80 & 3.91 & 2.52 & 2.41 & 15.60$_{-8.13}^{+6.85}$ & 15.36$_{-15.36}^{+5.78}$ & 216.40$_{-0.86}^{+4.60}$ & 152.52$_{-2.54}^{+20.44}$\ (2$^{nd}$ n., section2/3) & 180 & 4.50 & 3.75 & 2.84 & 1.78 & 10.24$_{-39.05}^{+52.66}$ & 2.00$_{-2.00}^{+7.20}$ & 220.43$_{-0.56}^{+1.74}$ & 91.03$_{-67.21}^{+84.63}$\ (2$^{nd}$ n., section3/3) & 180 & 3.86 & 3.83 & 2.82 & 1.89 & -9.03$_{-54.32}^{+45.37}$ & 0.00$_{-0.00}^{+6.91}$ & 219.01$_{-1.10}^{+1.87}$ & 179.46$_{-101.71}^{+68.26}$\ Note that $\delta$=0$^{o}$ when wind direction points towards North, and positive eastwards. The error bars on each of the fitted parameters were drawn by bootstrapping the residuals (see text for details).   
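The residual-bootstrap procedure mentioned in the note above can be sketched as follows, using a straight-line model as a simplified stand-in for the wind model (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified stand-in for the wind model: a straight line y = a*x + b.
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

def fit(x, y):
    return np.polyfit(x, y, 1)  # returns (a, b)

a0, b0 = fit(x, y)
resid = y - (a0 * x + b0)

# Bootstrap: resample the residuals with replacement, add them back to
# the best-fit model, refit, and collect the parameter distribution.
boot = np.array([fit(x, a0 * x + b0 + rng.choice(resid, resid.size))
                 for _ in range(1000)])

# Asymmetric error bars from the 16th and 84th percentiles of the slope.
lo, hi = np.percentile(boot[:, 0], [16, 84])
print(f"a = {a0:.3f} (+{hi - a0:.3f} / -{a0 - lo:.3f})")
```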
data set P$_{KS}(\Omega')$ P$_{KS}(\Omega'')$ P$_{KS}$(no fit) --------------------------- ------------------- -------------------- ------------------ 08+09-05-2009 2.51e-11 6.74e-11 4.32e-25 08-05-2009 7.64e-07 3.19e-07 6.28e-19 09-05-2009 3.08e-06 3.51e-06 2.62e-10 (1$^{st}$ n., section1/3) 1.15e-02 8.14e-03 1.22e-05 (1$^{st}$ n., section2/3) 4.17e-02 3.22e-02 3.02e-02 (1$^{st}$ n., section3/3) 1.66e-01 1.84e-01 1.07e-05 (2$^{nd}$ n., section1/3) 1.09e-03 1.36e-03 5.79e-08 (2$^{nd}$ n., section2/3) 1.03e-01 9.67e-02 4.66e-03 (2$^{nd}$ n., section3/3) 2.95e-02 4.80e-02 5.33e-02 Unfortunately, these modifications do not lead to an improvement. The P$_{KS}$ are smaller than in the previously considered cases (and the $\chi^{2}_{red}$ are larger). The cross-talk between the different parameters is increased, with the $\beta$ parameter reaching zero within the 1-$\sigma$ uncertainties (given the way they were calculated this only means that a large number of datasets of the MC was best fitted by $\beta$=0.) As a consequence, one is forced to conclude that these alternative models increase the correlation or cross-talk between parameters, instead of reducing it. One can conceive a model in which the dependence on altitude and azimuth is concentrated in the parameters in a different way, but this dependence should stem from a physical motivation. We are then led to conclude that we probably reached the limit of extractable information from this dataset and an improvement on the quality of the measurements is required to take this kind of analysis any further. \[lastpage\] [^1]: E-mail: pedro.figueira@astro.up.pt [^2]: http://www.lco.cl/operations/gmt-site-testing/stars-for-measuring-pwv-with-mike/stars-for-measuring-pwv [^3]: Performance of the Vaisala Radiosonde RS92-SGP and Vaisala DigiCORA Sounding System MW31 in the WMO Mauritius Radiosonde Intercomparison, February 2005. 
webpage http://www.vaisala.com/Vaisala%20Documents/White%20Papers/Vaisala%20Radiosonde%20RS92%20in%20Mauritius%20Intercomparison.pdf [^4]: \(1) Global Forecast System (http://www.emc.ncep.noaa.gov/gmb/moorthi/gam.html) \(2) European Center of Medium range Weather Forecasting (http://www.ecmwf.int/products/data/operational\_system/description/brief\_history.html) \(3) Global Numerical Weather Prediction Model (http://journals.ametsoc.org/doi/abs/10.1175/1520-0493(2002)130%3C0319%3ATOGIHG%3E2.0.CO%3B2) \(4) The Fifth-Generation NCAR/Penn State Mesoscale Model. NCAR=National Center of Atmospheric Research (http://www.mmm.ucar.edu/mm5/) \(5) Weather Research and Forecasting model (http://www.wrf-model.org/index.php) \(6) Non-Hydrostatic Mesoscale Atmospheric Model (http://mesonh.aero.obs-mip.fr/mesonh/) [^5]: http://www.eso.org/gen-fac/pubs/astclim/lasilla/humidity/LSO\_meteo\_stat-2002-2006.pdf
---
abstract: 'We present a simple heuristic model to demonstrate how feedback related to the galaxy formation process can result in a scale-dependent bias of mass versus light, even on very large scales. The model invokes the idea that galaxies form initially in locations determined by the local density field, but the subsequent formation of galaxies is also influenced by the presence of nearby galaxies that have already formed. The form of bias that results possesses some features that are usually described in terms of stochastic effects, but our model is entirely deterministic once the density field is specified. Features in the large-scale galaxy power spectrum (such as wiggles that might in an extreme case mimic the effect of baryons on the primordial transfer function) could, at least in principle, arise from spatial modulations of the galaxy formation process that arise naturally in our model. We also show how this fully deterministic model gives rise to apparent stochasticity in the galaxy distribution.'
address:
- '$^1$ School of Physics & Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff CF24 3AA, United Kingdom'
- '$^2$ School of Physics & Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom'
author:
- Peter Coles$^1$ and Pirin Erdoğdu$^2$
title: 'Scale–dependent Galaxy Bias'
---

Introduction
============

Thanks to large-scale spectroscopic surveys such as the Anglo-Australian 2dF Galaxy Redshift Survey (2dFGRS: Norberg et al. 2001; Wild et al. 2004; Conway et al. 2005) and the Sloan Digital Sky Survey (Zehavi et al. 2002; Tegmark et al. 2004; Swanson et al. 2007) it is now well established that the clustering of galaxies depends subtly on their internal properties. Since galaxies of different types display different spatial distributions, it follows that not all galaxies can trace the distribution of underlying dark matter.
In other words, galaxies are biased tracers of the cosmological mass distribution. Theories of cosmological structure formation must explain the relationship between galaxies and the distribution of gravitating matter, which probably yields important clues to the process by which they were assembled. Galaxy formation involves complex hydrodynamical and radiative processes alongside the merging and disruption of dark matter haloes. This entails a huge range of physical scales that poses extreme challenges even for the largest supercomputers. The usual approach is therefore to encode the non-gravitational physics into a series of simplified rules to be incorporated in a code which evolves the dark matter distribution according to Newtonian gravity (e.g. Benson et al. 2000). This “semi-analytic” approach has many strengths, including the ability to make detailed models for direct testing against observations, but it is difficult to use it to build models with which one can make inferences from data. For this reason, simplified analytical models of bias are still extremely useful if one hopes to proceed from observations to theory rather than vice-versa. In the new era of “precision cosmology” the presence of bias is more an obstacle than a key to understanding (Zheng & Weinberg 2007). Attempts to infer parameter values from cosmological observations are hampered by the unknown relationship between visible objects and the underlying mass fluctuations they trace. For example, the relatively weak residual baryon acoustic oscillations (BAO) one expects to be present in the matter power spectrum (Pen 1998; Meiksin, White & Peacock 1999; Blake & Glazebrook 2003; Eisenstein et al. 2005; Seo & Eisenstein 2005; Wang 2006) are potentially extremely important diagnostics of the presence of dark energy if they can be observed at high redshift.
However, when matter fluctuations are inferred from galaxy statistics, the form and evolution of bias must be understood and controlled if the required level of accuracy is to be reached. Here again, simplified analytical models have an important role to play. In this paper we introduce a simple yet general theoretical model which can describe various aspects of galaxy bias in a unified way. We describe biasing models in general in the next section. In Section 3 we present our model and in Sections 4 and 5 we describe a couple of applications. We discuss the results in Section 6.

From Local Bias to the Halo Model
=================================

The idea that galaxy formation might be biased goes back to the realization by Kaiser (1984) that the reason Abell clusters display stronger correlations than galaxies at a given separation is that these objects are selected to be particularly dense concentrations of matter. As such, they are very rare events, occurring in the tail of the distribution function of density fluctuations. Under such conditions a “high-peak” bias prevails: rare high peaks are much more strongly clustered than more typical fluctuations (Bardeen et al. 1986). More generally, in [*local bias*]{} models, the propensity of a galaxy to form at a point where the total (local) density of matter is $\rho$ is taken to be some function $f(\rho)$ (Coles 1993; Fry & Gaztanaga 1993). It is possible to place stringent constraints on the effect this kind of bias can have on galaxy clustering statistics without making any particular assumption about the form of $f$. In particular, it can be shown that the large-scale two–point correlation function of galaxies typically tends to a constant multiple of the mass autocorrelation function in these models.
Coles (1993) proved that, under weak conditions on the form of $f(\rho)$ as discussed in the introduction, the large-scale biased correlation function of galaxies would generally have a leading-order term proportional to $\xi_{\rm m}(r)$. In other words, one cannot change the large-scale slope of the correlation function of locally-biased galaxies with respect to that of the mass. This was a serious problem for the standard cold dark matter model of times past (which had $\Omega_0=1$ and $\Lambda=0$) because there is insufficient power in the matter spectrum in this model to match observations unless one incorporates a strongly scale-dependent bias (Bower et al. 1993). The local bias “theorem” was initially proved for biasing applied to Gaussian fluctuations only and did not necessarily apply to galaxy clustering where, even on large scales, deviations from Gaussian behaviour are significant. Steps towards plugging this gap began with Fry & Gaztanaga (1993), who used an expansion of $f$ in powers of the dimensionless density contrast $\delta$ and weakly non-linear (perturbative) calculations of $\xi_{\rm m}(r)$ to explore the statistical consequences of biasing in more realistic (i.e. non-Gaussian) fields. Based largely on these arguments, Scherrer & Weinberg (1998) showed explicitly that non-linear evolution always guarantees the existence of a linear leading-order term regardless of the form of $f$, thus strengthening the original argument of Coles (1993) at the same time as confirming the validity of the theorem in the non-linear regime. A similar result holds under the hierarchical ansatz, as discussed by Coles et al. (1999). It is worth noting that the original form of the local bias theorem has a minor loophole: for certain peculiar forms of $f$ the leading-order term is proportional to $[\xi_{\rm m} (r)]^2$ (Coles 1993).
However, $\xi_{\rm m} (r)$ must be a convex function of $r$ because its Fourier transform, the power spectrum, is non-negative definite (i.e. it can be positive or exactly zero). Higher-order terms in $\xi_{\rm m}^n$ therefore fall off more sharply than $\xi_{\rm m}(r)$ on large scales, so this loophole does not have any serious practical consequences for large-scale structure. Such results greatly simplify attempts to determine cosmological parameters using galaxy clustering surveys, as well as facilitating the interpretation of any specific features in large-scale clustering statistics, because they require the galaxy spectrum to have the same shape as the underlying mass spectrum. This reduces the possible effect of bias to a single parameter which can be estimated and removed by marginalisation. On the other hand, it results in a drastic truncation of the level of complexity in the assumed relationship between galaxies and dark matter. In hierarchical models, galaxy formation involves the formation of a dark matter halo, the settling of gas into the halo potential, and the cooling and fragmentation of this gas into stars. This all happens within a population of haloes which is undergoing continuous merging and disruption. Rather than attempting to model these stages in one go by a simple function $f$ of the underlying density field, it is better to study the dependence of the resulting statistical properties on the various ingredients of this process. Bardeen et al. (1986), following Kaiser (1984), pioneered this approach by calculating detailed statistical properties of high-density regions in Gaussian fluctuation fields. Mo & White (1996) and Mo et al. (1997) went further along this road by using an extension of the Press-Schechter (1974) theory to calculate the correlation bias of halos, thus attempting to correct for the dynamical evolution absent in the Bardeen et al. approach.
The extended Press-Schechter theory forms the basis of many models for halo bias in the subsequent literature (e.g. Matarrese et al. 1997; Moscardini et al. 1998; Tegmark & Peebles 1998). It is worth stressing that by “local bias” we mean some form of coarse–graining to select objects on a galaxy scale. In the earlier models described above, galaxy correlations arise because the underlying matter field is correlated but the process of galaxy formation does not itself influence the formation of structure on scales larger than this resolution scale. More recent developments involve the Halo Model (Seljak 2000; Peacock & Smith 2000; Cooray & Sheth 2002; Neyrinck & Hamilton 2005; Blanton et al. 2006; Schulz & White 2006; Smith, Scoccimarro & Sheth 2006, 2007). This model generally assumes that galaxy properties are derived from the underlying mass or halo field. Some degree of scale–dependence then arises because galaxies interact on the scale of an individual halo to provide some degree of self-organisation within the resolution scale. This model has scored some notable successes at explaining features in observed galaxy correlations. It has also been suggested that bias might not be a deterministic function of $\rho$, and that consequently there is a stochastic element in the relationship between mass and light (Dekel & Lahav 1999). In the following sections we present a model that extends a number of these different lines of thought. In particular we consider the possibility that large-scale interactions between galaxies or proto-galaxies might induce a significant scale dependent bias that is qualitatively different from that which arises even in the halo model. 
Self-interacting Galaxy Formation
=================================

As described in the previous section, the idea of local bias models is that the density of matter at a given spatial position ${\bf x}$ is responsible for generating the propensity that a galaxy will form there (after suitable coarse-graining of the density field). In its simplest terms we can represent this idea in terms of a galaxy fluctuation field $$\delta_{\rm g} ({\bf x})\equiv \frac{n({\bf x})}{\bar{n}}-1,$$ where $n({\bf x})$ is the number density of galaxies at ${\bf x}$ and $\bar{n}$ is the mean number density of galaxies. The simplest way to account for discreteness is to use the Poisson cluster model of Layzer (1956), in which galaxies form with a probability proportional to $1+\delta_{\rm g}$. If there are interactions within the resolution scale then the Poisson model does not necessarily hold (Coles 1993). In order to keep the presentation of our model as simple as possible we ignore discreteness effects and restrict ourselves to large scale clustering properties. In local bias theories the galaxy field is a deterministic function of the local matter density field at the same point ${\bf x}$. Our model for scale–dependent bias has the form: $$\delta_{\rm g}({\bf x}) = \delta_{\rm s}({\bf x}) + \alpha \int h ({\bf x-x'}) \delta_{\rm g} ({\bf x'}) d{\bf x'}.$$ In this equation the field $\delta_{\rm s}({\bf x})$ represents a “seed” field and the second term models the interactions. In a realistic situation the parameter $\alpha$ might well be stochastic, varying in a complicated way from galaxy to galaxy, but for simplicity we will assume it to be a constant in this paper. In principle a galaxy may either enhance or suppress the formation of others around it so $\alpha$ may be either positive or negative. In the absence of interactions (i.e. 
taking $\alpha=0$), the model reduces to a standard biasing picture where the clustering of galaxies is, at some level, reducible directly to the clustering of the mass. In the “no-bias” case the seed field will simply be the underlying density fluctuation field, i.e. $\delta_{\rm s}=\delta_{\rm m}$. Galaxies could then form as a Poisson sampling of the mass field as suggested by Layzer (1956). For linear bias models, we would take $\delta_{\rm s}=b\delta_{\rm m}$. In such cases the resulting galaxy spectrum is $P_{\rm g}(k)=b^{2} P_{\rm m}(k)$ for all $k$. In general local bias models we might take the seed field to be some local function $f(\delta_{\rm m})$, as described in the previous section. In these cases $P_{\rm g}(k)\simeq b^{2} P_{\rm m}(k)$ for small $k$ via the local bias theorems. More realistically perhaps, $\delta_{\rm s}$ could be the “halo field”. Explicitly in this case, and indeed implicitly in the other cases discussed above, $\delta_{\rm s}$ does possess a filtering scale of its own, with the width of the smoothing kernel representing the characteristic size of a galaxy halo. If the seed field is simply the halo field, the galaxies do not form a Poisson sample; the distribution of galaxies within a given halo is a degree of freedom within the halo model which must be fixed by reference to observations (Seljak 2000; Peacock & Smith 2000; Cooray & Sheth 2002). The seed field might also include stochastic terms (Dekel & Lahav 1999; Blanton et al. 1999; Matsubara 1999), i.e. terms which cannot be expressed as any function of $\rho_{\rm m}$ but which might instead be modelled as random variables. The first term on the right hand side of equation (2) therefore includes the traditional bias models discussed in the previous section. 
If $\alpha=0$ and the seed field is uncorrelated then all these models would produce uncorrelated galaxies. If $\alpha\neq 0$, however, then we have a qualitatively different form of bias. The galaxy field then not only depends on the seed field, but also on the galaxy field itself. This “bootstrap” effect allows a greater degree of flexibility in modelling galaxy correlations. In particular, even if the seed field were completely uncorrelated, interactions could produce a non-zero galaxy-galaxy correlation function in the bootstrap model. This cannot happen in local bias models. In this respect our model is similar to the autoregressive (AR) models used to simulate time series: these are correlated processes that are seeded by random (uncorrelated) noise. More relevantly for cosmology, as we shall see shortly, the bootstrap model allows us to generate scale-dependent bias that violates the theorems referred to in Section 2. The initial seed field $\delta_{s}({\bf x})$ plays the same role as the “innovation” in autoregressive time series models. The presence of the kernel in equation (2) gives the model the ability to generate non-local interactions if it extends over a relatively large scale. The kernel $h({\bf y})$ determines the size of the zone of influence of one galaxy on the formation of others in its neighbourhood; we denote this scale by $R_{h}$. Just as with the parameter $\alpha$, we take this scale to be constant for simplicity. Note, however, that since both the scale and level of feedback may be difficult to predict given only the ambient density field, it may be more realistic to model the kernel scale as a stochastic variable. The filter should be defined in such a way that it preserves the statistical homogeneity of the density field and does not lead to diverging moments. 
For sensible filters $h$ will have the following properties: $h=~{\rm constant}\simeq R_{h}^{-3}$ if $\vert {\bf x}-{\bf x'} \vert \ll R_{h}$, $h \simeq 0$ if $\vert {\bf x}-{\bf x'} \vert \gg R_{h}$, $\int h({\bf y};R_{h}) d{\bf y} = 1$. We discuss a couple of specific examples in the subsequent sections of this paper. The integral on the right hand side of equation (2) represents the galaxy fluctuation field convolved with a low pass filter. One can write (2) in the form $$\delta_{\rm g}({\bf x}) = \delta_{\rm s}({\bf x}) + \alpha \delta_{\rm g} ({\bf x}; R_{\rm h}).$$ The filtered field, $\delta_g({\bf x}; R_{h})$, may be obtained by convolution of the “raw” galaxy density field with some function $h$ having a characteristic scale $R_{h}$: $$\delta_{\rm g} ({\bf x}; R_{h}) = \int \delta_{\rm g}( {\bf x'}) h( \vert {\bf x}- {\bf x'}\vert; R_{h}) d {\bf x'}.$$ To recover the local bias model with $\alpha \neq 0$ we simply take $h({\bf x}-{\bf x'})=\delta_D({\bf x}-{\bf x'})$, in which case $\delta_{\rm g}=\delta_{\rm s}/(1-\alpha)=b\delta_{\rm s}$. Scale independence and linearity of the bias are therefore both recovered in this limit. Equation (2) is a Fredholm integral equation of the second kind. Assuming that the interaction kernel $h$ is well-behaved we can solve it quite straightforwardly. Defining the Fourier transform of $\delta_{\rm s}({\bf x})$ to be $\tilde{\delta}_{\rm s}({\bf k})$ etc. and using the convolution theorem, the $k$-space version of equation (2) is seen to be $$\tilde{\delta}_{\rm g}({\bf k})=\tilde{\delta}_{\rm s}({\bf k})+ \alpha \tilde{h} ({\bf k})\tilde{\delta}_{\rm g}({\bf k}),$$ which gives a solution for $\tilde{\delta}_{\rm g}({\bf k})$: $$\tilde{\delta}_{\rm g}({\bf k})=\frac{\tilde{\delta}_{\rm s}({\bf k})}{1-\alpha \tilde{h} ({\bf k})}.$$ The power spectrum of the filtered field is given by $$P(k; R_{h}) = \tilde{h}^2 (k; R_{h}) P_{\rm g}(k),$$ where $P_{\rm g}(k)$ is the power spectrum of the galaxy field. 
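This closed-form Fourier solution is easy to verify numerically. The following sketch (an illustration only, not part of the analysis above; all grid sizes, field realisations and kernel parameters are arbitrary choices) solves the convolution equation by fixed-point iteration on a periodic one-dimensional grid and compares the result with the Fourier-space solution quoted above:

```python
import numpy as np

# Sketch: solve delta_g = delta_s + alpha * (h * delta_g) on a periodic
# 1-d grid by fixed-point iteration, and compare with the closed-form
# Fourier solution delta_g~ = delta_s~ / (1 - alpha * h~).
rng = np.random.default_rng(0)
N, L, alpha, Rh = 512, 100.0, 0.4, 5.0
x = np.linspace(0, L, N, endpoint=False)
delta_s = rng.normal(size=N)              # uncorrelated seed field

# normalised Gaussian kernel of width Rh, centred at the origin (periodic)
y = np.minimum(x, L - x)
h = np.exp(-y**2 / (2 * Rh**2))
h /= h.sum()

def conv(f):
    # circular convolution with the kernel via the FFT
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))

# fixed-point iteration; converges geometrically since |alpha * h~| < 1
delta_g = delta_s.copy()
for _ in range(200):
    delta_g = delta_s + alpha * conv(delta_g)

# closed-form solution in Fourier space
delta_g_exact = np.real(np.fft.ifft(np.fft.fft(delta_s) /
                                    (1 - alpha * np.fft.fft(h))))
print(np.max(np.abs(delta_g - delta_g_exact)))  # should be negligibly small
```

The agreement confirms that the interacting model is exactly solvable whenever $|\alpha\tilde{h}(k)|<1$ for all $k$.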
Assuming that $h({\bf y})$ is isotropic, the galaxy-galaxy power spectrum can be expressed as $$P_{\rm g}(k)= \frac{P_{\rm s}(k)}{|1-\alpha \tilde{h}(k)|^2},$$ where $k=|{\bf k}|$. It is clear that the kernel can imprint features into the power spectrum through the dependence on $\tilde{h}(k)$, even in the case where $P_{\rm s}(k)$ is completely flat. This means it is considerably more general than the simpler models discussed above. It possesses some features that resemble the cooperative galaxy formation model of Bower et al. (1993) but with significantly more generality. We shall illustrate some of its properties in the following sections.

Bogus Baryon Wiggles?
=====================

In this section we present an extreme example of scale–dependent bias which is based on the idea that some violent astrophysical process connected with galaxy formation (such as the ionizing radiation produced by quasar activity) could seriously influence the propensity of galaxies to form in the neighbourhood of a given object. This concept is not new (Rees 1988; Babul & White 1991), and has been recently revived in a milder form (Pritchard, Furlanetto & Kamionkowski 2006). To illustrate the effects that could arise in the galaxy power spectrum, consider the extreme example where the zone of influence of a galaxy (or quasar) has a sharp edge similar to an HII region. We can use our model to describe this situation if we adopt a kernel which has the form of a [*top hat*]{} filter, with a sharp cut off, defined by the relation $$h_{\rm T}( \vert {\bf x}-{\bf x'} \vert; R_{\rm h}) = {3\over 4 \pi R_{h}^3 } \Theta \Bigl( 1 - {\vert {\bf x}-{\bf x'} \vert \over R_{h}}\Bigr),$$ where $\Theta$ is the Heaviside step function: $\Theta(y)=0$ for $y\leq 0$ and $\Theta(y)=1$ for $y>0$. 
The form of the kernel in Fourier space is then $$\tilde{h}_{\rm T}(k; R_{h}) = {3(\sin kR_{h} - kR_{h} \cos k R_{h}) \over (kR_{h})^3}~.$$ Oscillatory features can be generated in the galaxy power spectrum by this form of interaction and with a suitable choice of scale $R_h$ they could even mimic the BAOs mentioned in the Introduction. To establish the required parameters we refer to the redshift-space power spectrum data given in Table 2 of Cole et al. (2005) for the 2dFGRS. We do not attempt to fit the small-scale clustering in this data set. This could be done by fiddling with the form of $\delta_{\rm s}$, but our interest lies here in illustrating the large–scale behaviour only. We also ignore redshift–space distortions. In Cole et al. (2005), the error bars on the spectrum are derived from the diagonal elements of the covariance matrix calculated from model lognormal density fields. The model power spectrum for these lognormal fields has $\Omega_{\rm m}h=0.168$, $\Omega_{\rm b}/\Omega_{\rm m}=0.17$ and $\sigma^{\rm g}_8=0.89$ and agrees very well with the best fit model for the overall 2dFGRS power spectrum. This model, convolved with the 2dFGRS survey window function, is also given in Table 2 of Cole et al. (2005) and plotted in Figure \[fig1\] (solid line) & Figure \[fig2\]. Using the full covariance matrix, Cole et al. (2005) find $\chi^2/{\rm d.o.f}=37/33$ for $k<0.2$ $h {\rm Mpc}^{-1}$. As this analysis is for illustrative purposes only, we do not perform a full likelihood analysis; rather we calculate the $\chi^2$ for the same model using only the error bars. In this case the fit is characterized by $\chi^2/{\rm d.o.f}=12/33$. As discussed in Section 5 of Cole et al. (2005), the convolution with the survey window function causes the errors to be correlated, resulting in a very low value of $\chi^2$. The goodness of fit does however provide a useful benchmark for our alternative explanation of the wiggles seen in $P(k)$. 
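As a quick numerical illustration of how this kernel imprints oscillations, the sketch below evaluates $P_{\rm g}(k)=P_{\rm s}(k)/|1-\alpha\tilde{h}_{\rm T}(k)|^2$ for a completely flat seed spectrum. The parameter values used are only of the order of those discussed in the text and are not a fit to any data:

```python
import numpy as np

# Sketch: a top-hat interaction kernel modulates even a featureless seed
# spectrum P_s = const, producing wiggles with spacing ~ 2*pi/Rh.
def htilde_tophat(k, Rh):
    x = k * Rh
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

alpha, Rh = 0.25, 114.0            # illustrative values (Mpc)
k = np.linspace(0.01, 0.2, 400)    # wavenumber grid, say in h/Mpc
Pg = 1.0 / np.abs(1 - alpha * htilde_tophat(k, Rh))**2   # flat P_s = 1

# local maxima of P_g mark the induced "wiggles"
peaks = (Pg[1:-1] > Pg[:-2]) & (Pg[1:-1] > Pg[2:])
print(k[1:-1][peaks])              # several features across the k range
```

Even with a featureless seed spectrum the output shows several oscillatory features, which is the qualitative point of this section.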
We now fit the same data using a top-hat kernel for our biasing model, adopting the Eisenstein & Hu (1998) transfer functions and assuming $n_{\rm s}=1$, $h=0.72$ and $\Omega_{\rm b}=0$. In other words we use an underlying cosmology without baryon oscillations and seek to explain the shape of the galaxy spectrum using only galaxy interactions. Our best fit cosmological parameters are $\Omega_{\rm m}=0.23$ and $\sigma^{\rm g}_8=0.85$, and for the bias model we get $\alpha=0.25$ and $R_h=114 \,{\rm Mpc}$. This model has $\chi^2/{\rm d.o.f}=9/33$. The value of $\chi^2$ is again very low due to correlations between the data points, but a comparison with the result of the previous paragraph, for which the same problem also holds, demonstrates that the fit is if anything marginally better for our model than for the reference model used by Cole et al. (2005). Of course one does not know for sure whether and how ionization influences galaxy formation, but this example illustrates that in principle the observed wiggles in the galaxy power spectrum could have an astrophysical rather than cosmological origin. This would pose problems for their use as cosmological probes. On the other hand, the scale required is very large. Rees (1988) pointed out that a quasar of luminosity $L_{\rm uv}$ lasting for a time $t_{\rm Q}$ produces sufficient energetic photons to ionize all the baryons within a radius $$R_h \simeq 67 \left( \frac{L_{\rm uv}}{10^{46} \, {\rm erg}\, {\rm s}^{-1}} \right)^{1/3} \left(\frac{t_{\rm Q}}{2\times 10^9 \,{\rm yrs}}\right)^{1/3} \,\mbox{Mpc}.$$ In order to be able to contribute at a redshift $z$, the ionizing photons must have been emitted in less than the lifetime of the Universe at that redshift, $t(z)$. This places a minimal requirement that $t_Q<t(z)$. 
In the concordance cosmology, $t(z=3)\simeq 2.2$ Gyr, $t(z=6)\simeq 0.95$ Gyr and $t(z=10)\simeq 0.48$ Gyr. The actual lifetime of quasars may well depend on their mass, but recent estimates suggest $t_Q\simeq 10^8$ yrs is more likely than $10^9$ yrs (McLure & Dunlop 2004). If this is the case then equation (11) implies that the corresponding value is more like $R_h\simeq 25$ Mpc; for this value of $t_Q$ the required ionization could easily have been achieved early, but the scale of the resulting wiggles would be relatively small. For $R_h\simeq 100$ Mpc one needs to push the parameters excessively hard: a high value of $t_Q>2\times 10^{9}$ yrs and a redshift of reionization $z<3$ would be necessary. This seems to be at odds with the general consensus that reionization of the Universe happened relatively early (Becker et al. 2001; Fan et al. 2002). There are other problems with this model. Quasars have a range of lifetimes and luminosities. Their radiation may also be beamed rather than isotropic. And in any case it is not known to what extent the galaxy formation process is sensitive to this form of feedback. Moreover, the baryon acoustic oscillations inferred from galaxy clustering have the same characteristic scale as that derived from cosmic microwave background observations. This would be a sheer coincidence in our model. This model is therefore unlikely to be the correct interpretation of the observed wiggles, but it does at least demonstrate that large-scale interactions can have a significant impact on the shape of the clustering power spectrum. Notice also that even if the scale $R_h$ is not sufficiently large to match the observed oscillations, any non-zero astrophysical effect could seriously degrade the ability to recover cosmological information from galaxy surveys. Mass tracers selected in some way other than counting galaxies may well display clustering that is less susceptible to this type of feedback bias. 
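The Rees (1988) scaling of equation (11) is simple enough to evaluate directly; the sketch below (illustrative only) checks the two quasar-lifetime cases discussed above at the fiducial luminosity:

```python
# Evaluate eq. (11): ionized radius around a quasar of luminosity L_uv
# (erg/s) shining for a lifetime t_Q (yr); fiducial values as in the text.
def R_h(L_uv=1e46, t_Q=2e9):
    return 67.0 * (L_uv / 1e46)**(1/3) * (t_Q / 2e9)**(1/3)  # Mpc

print(R_h(t_Q=2e9))   # 67 Mpc: the fiducial value
print(R_h(t_Q=1e8))   # ~25 Mpc: the McLure & Dunlop lifetime estimate
```

The weak cube-root dependence on $t_{\rm Q}$ is why reaching $R_h\sim 100$ Mpc requires such extreme parameter choices.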
Galaxy clusters, for example, may be detected through X-ray emission or Sunyaev-Zel’dovich measurements, both of which are sensitive to the properties of the extremely hot gas the clusters contain. If these properties vary systematically on large scales then scale-dependent bias may also apply to such objects. However, the strong non-linear merging and heating processes that create this intracluster gas are likely to swamp any primordial effects generated on smaller scales. One would therefore expect cluster correlations to be less vulnerable to astrophysical modulation than galaxy correlations; complementary observations on the same length scales could be used to identify and eliminate this source of uncertainty.

Scale-dependence versus Stochasticity
=====================================

Even if the scale and form of the interaction kernel do not produce very large scale features in the galaxy correlation function or power spectrum, it is still possible for scale–dependence to manifest itself in more subtle ways. In particular, it is possible for scale-dependence to appear as a form of stochastic bias (Dekel & Lahav 1999) even though the relationship (2) is entirely deterministic once the density field is specified. To see how this happens consider a simplified version of our general model in which the seed field $\delta_s$ is simply the matter density field $\delta_m$. Let us assume explicitly that the fields we are considering are filtered on a scale $R_0$ to represent the selection of galaxy sized objects. 
Let the scale of feedback–induced interactions be $R_{\rm F}$, so that $$\delta_{\rm m}(R_0)=\delta_{\rm g} (R_0) - \alpha \delta_{\rm g} (R_{\rm F}).$$ It is straightforward to see that $$\begin{aligned} \langle \delta_{\rm m} \delta_{\rm g} \rangle & = & \langle \delta_{\rm g} (R_0)^2 \rangle - \alpha \langle \delta_{\rm g} (R_{\rm F}) \delta_{\rm g} (R_0) \rangle \nonumber\\ & = & \langle \delta_{\rm g}^2 \rangle \left( 1-\alpha \frac{\langle \delta_{\rm g}(R_{\rm F}) \delta_{\rm g}(R_0)\rangle}{\langle \delta_{\rm g} (R_0)^2 \rangle}\right)\end{aligned}$$ and $$\begin{aligned} \langle \delta_{\rm m}^2 \rangle & = &\langle \delta_{\rm g} (R_0)^2 \rangle + \alpha^2 \langle \delta_{\rm g}(R_{\rm F})^2\rangle - 2\alpha \langle \delta_{\rm g} (R_0) \delta_{\rm g} (R_{\rm F}) \rangle\nonumber \\ &=& \langle \delta_{\rm g}^{2}\rangle \left\{ 1+ \alpha^2 \frac{\langle \delta_{\rm g}(R_{\rm F}) ^2\rangle}{\langle \delta_{\rm g} (R_0)^2 \rangle} - 2 \alpha \frac{\langle \delta_{\rm g} (R_0) \delta_{\rm g}(R_{\rm F}) \rangle}{\langle \delta_{\rm g}(R_0)^2 \rangle} \right\},\end{aligned}$$ where we have dropped the dependence on $R_0$ in the terms outside the curly brackets. 
It is useful to define the quantities $$\gamma \equiv \frac{\langle \delta_{\rm g} (R_{\rm F}) \delta_{\rm g} (R_0) \rangle}{\langle \delta_{\rm g}(R_0)^2 \rangle}$$ and $$\omega^2 \equiv \frac{\langle \delta_{\rm g} (R_{\rm F})^2 \rangle}{\langle \delta_{\rm g}^2 (R_0) \rangle},$$ so that the cross-correlation coefficient between the mass and galaxy fluctuation fields is $$r \equiv \frac{\langle \delta_{\rm m} \delta_{\rm g} \rangle}{\langle \delta_{\rm g}^2 \rangle^{1/2} \langle \delta_{\rm m}^2 \rangle^{1/2}} = \frac{1-\alpha \gamma}{(1+\alpha^2 \omega^2 - 2\alpha \gamma)^{1/2}}.$$ To provide a simple illustrative model we assume a [*Gaussian filter*]{}: $$h_{\rm G}( \vert {\bf x}-{\bf x'} \vert; R_{\rm F}) = {1 \over (2 \pi R_{\rm F}^2)^{3/2}} \exp \Bigl(- {\vert {\bf x}- {\bf x'}\vert ^2 \over 2 R_{\rm F}^2}\Bigr),$$ for which the appropriate window function is $$\tilde{h}_{\rm G} (k R_{\rm F}) = \exp \Bigl[-{(k R_{\rm F})^2 \over 2} \Bigr].$$ We then need to tackle quantities of the form $$\langle \delta_{\rm g} (R_1) \delta_{\rm g} (R_2) \rangle = \frac{1}{2\pi^2} \int dk\, k^2 P_{\rm g} (k) \exp [ - k^2(R_1^2+R_2^2)/2],$$ which can be evaluated straightforwardly if we assume, for simplicity, that the (unsmoothed) galaxy power spectrum is a power-law: $P_{\rm g}(k) \propto k^{n}$. In this case we find that $$\langle \delta_{\rm g} (R_0) \delta_{\rm g} (R_{\rm F}) \rangle = \sigma^2 \left( \frac{2R_0^2}{R_0^2 + R_{\rm F}^2}\right)^{(n+3)/2},$$ where $\sigma^2=\langle \delta_{\rm g}(R_0)^2\rangle$ is the variance of the field smoothed on the galaxy scale $R_0$. This gives $$\gamma = \left( \frac{2R_0^2}{R_0^2 + R_{\rm F}^2}\right)^{(n+3)/2}$$ and $$\omega^2 = \left( \frac{R_{0}}{R_{\rm F}} \right)^{(n+3)}.$$ Note that if $R_0=R_{\rm F}$, so that the feedback scale is no larger than a galaxy scale, then $\omega=1$, $\gamma=1$ and consequently $r=1$. If, however, $R_{\rm F}>R_0$ then $\gamma<1$. 
However, it is always true that $\omega^2 > \gamma^2$ (this is just the Cauchy–Schwarz inequality), so that $(1-\alpha\gamma)^2< 1+ \alpha^2 \omega^2 - 2 \alpha\gamma$ and consequently $r<1$. The larger the value of $R_{\rm F}$ compared to $R_{0}$, the smaller the resulting value of $r$. Assuming the fields $\delta_{\rm m}$ and $\delta_{\rm g}$ are jointly Gaussian one can express the conditional distribution of one given a specific value of the other. Suppose the (unconditional) variance of $\delta_{\rm g}$ is $\sigma^2$; then the variance after conditioning on $\delta_{\rm m}=a$, say, reduces to $\sigma^2(1-r^2)$. Only if $|r|=1$ is there no scatter in the relationship. For this reason a value of $r<1$ is usually taken to indicate the presence of stochastic bias (e.g. Tegmark & Bromley 1999), but in this case the scatter in the relationship between $\delta_{\rm m}$ and $\delta_{\rm g}$ arises from non-locality in a fully deterministic way. This suggests that considerable care needs to be exercised in the interpretation of measured values of $r$: they may be indicative of scale–dependence rather than stochastic effects. If we instead look at the galaxy and matter fields (assuming $\delta_s=\delta_m$) in Fourier space the situation is quite different. In this case, by equation (8) we get $$P_{\rm g}(k) = b^2(k) P_{\rm m}(k)$$ with $b(k)=[1-\alpha\tilde{h}(k)]^{-1}$. The cross-spectrum in Fourier space is usually defined to be $P_{\rm mg}=r(k) b(k)P_{\rm m}$ (Tegmark & Bromley 1999) for stochastic bias, with $r(k)$ playing a role analogous to the correlation coefficient discussed above. In this case, however, it reduces to $P_{\rm mg}= b(k)P_{\rm m}$, indicating a complete absence of stochasticity. The apparent stochasticity in real space is actually due to non-locality, but the model is local (and linear) in Fourier space so no stochasticity appears in this representation. This is an example of a phenomenon noted by Matsubara (1999). 
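These real-space results are easy to check numerically. The sketch below (illustrative parameter values only; the exponent $(n+3)/2$ follows from the Gaussian-smoothed power-law integral above) evaluates $r$ from $\gamma$ and $\omega^2$ and confirms that $r=1$ when the feedback scale equals the galaxy scale, and $r<1$ otherwise:

```python
import numpy as np

# Cross-correlation coefficient r between mass and galaxy fields for a
# power-law galaxy spectrum P_g(k) ~ k^n with Gaussian smoothing.
def r_coeff(n, alpha, R0, RF):
    gamma = (2 * R0**2 / (R0**2 + RF**2))**((n + 3) / 2)
    omega2 = (R0 / RF)**(n + 3)
    return (1 - alpha * gamma) / np.sqrt(1 + alpha**2 * omega2
                                         - 2 * alpha * gamma)

print(r_coeff(n=0, alpha=0.5, R0=1.0, RF=1.0))  # feedback on galaxy scale: r = 1
print(r_coeff(n=0, alpha=0.5, R0=1.0, RF=5.0))  # RF > R0 gives r < 1
```

Scanning `RF` upwards shows $r$ decreasing monotonically, i.e. apparent stochasticity growing with the range of the interaction.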
Discussion and Conclusions
==========================

In this paper we have presented a new model for scale-dependent astrophysical bias. Although it is inspired to some extent by Bower et al. (1993), this model is considerably more general and easier to use. In the absence of any more complete theory of galaxy formation we hope it will provide a useful way to parametrise the possible level and scale of interactions so that they can be determined from observations and eliminated from cosmological considerations. We illustrated the generality of this model by pushing it to an extreme and showing that it can produce features that mimic baryon oscillations. Although the effect needed is quite small in amplitude, it does require astrophysical processes to be coordinated over very large scales. This, together with the concordance between clustering observations and the cosmic microwave background, suggests that the observed wiggles have a primordial origin. Nevertheless, in the precision era, any scale dependence in clustering bias could seriously complicate cosmological parameter estimation. However, as we have argued in Section 4, different forms of mass tracer are unlikely to suffer from this bias to the same extent as galaxies. Using complementary observations should provide sufficient data to estimate the parameters in our bias model. This will not only allow us to learn whether there is significant evidence for scale-dependent bias at all but also, by marginalization, provide a way to remove this uncertainty from cosmological studies. Some of the observations will go towards estimating and eliminating a nuisance parameter rather than reducing the statistical uncertainty in interesting ones, so the existence of scale-dependent bias will degrade the cosmological value of surveys to some extent even if it can be modelled satisfactorily. 
As a second, less extreme example of our approach we showed how non–locality in the feedback relationship described by equation (2) bears many of the hallmarks of stochastic bias. In particular, although our model is deterministic once the density field is specified, it is characterized by an imperfect correlation between galaxy and mass fluctuations. The difference between our model and a truly stochastic one is that in our case the residuals are not random but correlated through the interaction terms. One might learn more from observations by looking for correlated scatter than by giving up and treating them as completely stochastic. In any case the model we have presented shows up a terminological deficiency: stochasticity and non-locality can be easily confused. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge support from PPARC grant PP/C501692/1. Babul A and White S D M 1991 [*Mon. Not. R. astr. Soc.*]{} [**253**]{} L31 Bardeen J M, Bond J R, Kaiser N and Szalay A S 1986 [*Astrophys. J.*]{} [**304**]{} 15 Becker R H et al 2001 [*Astron. J.*]{} [**122**]{} 2850 Benson A J, Cole S, Frenk C S, Baugh C M and Lacey C G 2000 [*Mon. Not. R. astr. Soc.*]{} [**311**]{} 793 Blake C and Glazebrook K 2003 [*Astrophys. J.*]{} [**594**]{} 665 Blanton M R, Cen R, Ostriker J P and Strauss M A 1999 [*Astrophys. J.*]{} [**522**]{} 590 Blanton M R, Eisenstein D H, Hogg D W and Zehavi I 2006 [*Astrophys. J.*]{} [**645**]{} 977 Bower R G, Coles P, Frenk C S and White S D M 1993 [*Astrophys. J.*]{} [**405**]{} 403 Cole S et al. 2005 [*Mon. Not. R. astr. Soc.*]{} [**362**]{} 505 Coles P 1993 [*Mon. Not. R. astr. Soc.*]{} [**262**]{} 1065 Coles P, Melott A L and Munshi D 1999 [*Astrophys. J.*]{} [**521**]{} L5 Conway E et al 2005 [*Mon. Not. R. astr. Soc.*]{} [**356**]{} 456 Cooray A and Sheth R 2002 [*Phys. Rep.*]{} [**372**]{} 1 Dekel A and Lahav O 1999 [*Astrophys. J.*]{} [**520**]{} 24 Eisenstein D J et al 2005 [*Astrophys. 
J.*]{} [**633**]{} 560 Eisenstein D J and Hu W 1998 [*Astrophys. J.*]{} [**496**]{} 605 Fan X et al. 2002 [*Astron. J.*]{} [**123**]{} 1247 Fry J N and Gaztanaga E 1993 [*Astrophys. J.*]{} [**413**]{} 447 Kaiser N 1984 [*Astrophys. J.*]{} [**284**]{} L9 Layzer D 1956 [*Astron. J.*]{} [**61**]{} 383 Matarrese S, Coles P, Lucchin F and Moscardini L 1997 [*Mon. Not. R. astr. Soc.*]{} [**286**]{} 115 Matsubara T 1999 [*Astrophys. J.*]{} [**525**]{} 543 McLure R J and Dunlop J S 2004 [*Mon. Not. R. astr. Soc.*]{} [**352**]{} 1390 Meiksin A, White M and Peacock J A 1999 [*Mon. Not. R. astr. Soc.*]{} [**304**]{} 851 Mo H and White S D M 1996 [*Mon. Not. R. astr. Soc.*]{} [**282**]{} 347 Mo H, Jing Y and White S D M 1997 [*Mon. Not. R. astr. Soc.*]{} [**284**]{} 189 Moscardini L, Coles P, Lucchin F and Matarrese S 1998 [*Mon. Not. R. astr. Soc.*]{} [**299**]{} 95 Neyrinck M C, Hamilton A J S and Gnedin N Y 2005 [*Mon. Not. R. astr. Soc.*]{} [**362**]{} 337 Norberg P et al. 2001 [*Mon. Not. R. astr. Soc.*]{} [**328**]{} 64 Peacock J A and Smith R E 2000 [*Mon. Not. R. astr. Soc.*]{} [**318**]{} 1144 Pen U 1998 [*Astrophys. J.*]{} [**504**]{} 601 Press W H and Schechter P L 1974 [*Astrophys. J.*]{} [**187**]{} 425 Pritchard J R, Furlanetto S R and Kamionkowski M 2006 astro-ph/0604358 Rees M J 1988 in [Large Scale Structures of the Universe]{}, IAU Symposium No. 130, eds Audouze J, Pelletan M-C and Szalay A S. Kluwer, Dordrecht Scherrer R J and Weinberg D H 1998 [*Astrophys. J.*]{} [**504**]{} 607 Schulz A E and White M 2006 [*Astroparticle Phys.*]{} [**25**]{} 172 Seljak U 2000 [*Mon. Not. R. astr. Soc.*]{} [**318**]{} 2003 Seo H J and Eisenstein D J 2005 [*Astrophys. J.*]{} [**633**]{} 575 Smith R E, Scoccimarro R and Sheth R K 2006 astro-ph/0609547 Smith R E, Scoccimarro R and Sheth R K 2007 astro-ph/0703620 Swanson M E C, Tegmark M, Blanton M and Zehavi I 2007 astro-ph/0702584 Tegmark M and Bromley B C 1999 [*Astrophys. 
J.*]{} [**518**]{} L69 Tegmark M and Peebles P J E 1998 [*Astrophys. J.*]{} [**500**]{} L79 Tegmark M et al. 2004 [*Astrophys. J.*]{} [**606**]{} 702 Wang Y 2006 [*Astrophys. J.*]{} [**647**]{} 1 Wild V et al. 2004 [*Mon. Not. R. astr. Soc.*]{} [**356**]{} 247 Zehavi I et al. 2002 [*Astrophys. J.*]{} [**571**]{} 172 Zheng Z and Weinberg D H 2007 [*Astrophys. J.*]{} [**659**]{} 1
[**Applications of Physics and Mathematics to Social Science**]{}

D. Stauffer\* and S. Solomon

Racah Institute of Physics, Hebrew University, IL-91904 Jerusalem, Israel

Institute for Theoretical Physics, Cologne University, D-50923 Köln, Euroland

Glossary

I. Definition

II\. Introduction

III\. Some models and concepts

IV\. Applications

V. Future Directions

[**Cellular Automata**]{} Discrete variables on a discrete lattice change in discrete time steps.

[**Ising model**]{} Neighbouring variables prefer to be the same but exceptions are possible. The probability for such exceptions is an exponential function of “temperature”.

[**Percolation**]{} Each site is randomly either occupied or empty, leading to random clusters. At the percolation threshold for the first time an infinite cluster is formed.

[**Universality**]{} Certain properties are the same for a whole set of models or of real objects.

Definition
==========

This article introduces the whole section on Social Sciences, edited by A. Nowak for this Encyclopedia, concentrating on the applications of mathematics and physics. Here under “mathematics” we include also all computer simulations if they are not taken from physics, while physics applications include simulations of models which basically existed already in physics before they were applied to social simulations. Thus obviously there is no sharp border between applications from physics and from mathematics in the sense of our definition. Also social science is not defined precisely. We will include some economics as well as some linguistics, but not social insects or fish swarms, nor human epidemics or demography. Also, we mention not only this section but also the section on agent-based modelling edited by F. Castiglione as containing articles of social interest.

Introduction
============

If mathematical/physical methods are applied to social sciences, a major problem is the mutual lack of literature knowledge. 
Take for example the Schelling model of racial segregation in cities [@schelling]. Sociologists don’t cite the better and simpler Ising model, physicists ignored the Schelling model for decades, and sociologists also ignored better sociology work [@jones]. For simulations of financial markets, many econophysicists thought that they had introduced Monte Carlo and agent-based simulations to finance, not knowing of earlier work from some forward-looking Nobel laureates in economics [@stigler; @markowitz]. For inter-community relations, already 25 centuries ago analogies with liquids were pointed out by Empedokles in Sicily [@stauffer]. More recently, Ettore Majorana [@mantegna] around 1940 suggested to apply quantum-mechanical uncertainty to socio-economic questions. With emphasis shifted to statistical physics, sociophysics and econophysics became fashionable around the change of the millennium, but continuous lines of research by some physicists started [@weidlich] already in 1971. In the same year the Journal of Mathematical Sociology started and published Schelling’s model of urban segregation [@schelling], which is a modification of the Ising magnet at zero temperature. 1982 saw the start of two other lines of research by physicists on socio-economic questions [@galam; @roehner]. Languages have been simulated on computers for decades, while the interest of physicists is more recent [@gomes; @zanette], triggered mostly by a model of language competition [@abrams]. We do not mention chemists since at present they play no major role in this field. However, the 1921 chemistry Nobel laureate F. Soddy [@soddy], to whom we owe the “isotope” concept, already worked on economic, social and political theories, and his finance work of the 1930’s was still cited in 2007. The present authors try it the other way around: first apply physics to social sciences, and then get the Nobel prize (for literature: science fiction). 
Some Models and Concepts ======================== Physicist Albert Einstein said that models should be as simple as possible, but not simpler. In this spirit we now introduce some basic physics models and concepts for readers from the social sciences. They don’t have to study physics for many years; the following examples give the spirit. All models are complex in the sense that the behaviour of large systems cannot be predicted from the properties of a single element. Cellular Automata ----------------- Mathematicians often denote cellular automata as “interacting particle systems”, but since many other models or methods in physics use interacting particles, we do not use this term here. A large $d$-dimensional lattice of $L^d$ sites carries variables $S_i \; (i=1,2,\dots L^d)$ which can be either zero or one; more generally, they are small integers between 1 and $Q > 2$. The lattice may be square (four nearest neighbours), triangular (six nearest neighbours), or simple cubic (also six nearest neighbours, but in $d=3$ dimensions); many other choices are also possible. Time $t = 1,2, ...$ increases in steps. At each time step, each $S_i(t+1)$ is calculated anew from a deterministic or probabilistic rule involving the neighbouring $S_k(t)$ of the previous time step. This way of updating is called “simultaneous” or “parallel”; one may also use sequential updating, where $S_i$ depends on the current values of $S_k$; then the order of updating is important: random sequential, or regular like a typewriter. An example is a biological infection process: each site $i$ becomes permanently infected, $S_i = 1$, if at least one of its nearest neighbours is already infected. Computers handle that efficiently if each computer word of, say, 32 bits stores 32 sites, so that 32 possible infections are treated at once by bit-by-bit logical-OR operations [@jphysa].
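The bit-parallel trick just described can be sketched in a few lines; here a Python integer plays the role of the machine word, so an entire chain of sites is updated with two shifts and one OR per step (the chain length and the single-seed start are illustrative choices, not from the text):

```python
def spread(infected, L, steps):
    """Bit-parallel infection on a chain of L sites.

    Each bit of the integer `infected` is one site; a site becomes
    (and stays) infected if it or one of its two nearest neighbours
    is infected.  One shift-and-OR per neighbour updates all L sites
    at once; a Python integer simply plays the role of the 32-bit
    word mentioned in the text."""
    mask = (1 << L) - 1                       # keep only the L lattice bits
    for _ in range(steps):
        infected |= (infected << 1) | (infected >> 1)
        infected &= mask                      # open boundaries: drop overflow
    return infected

# a single infected seed in the middle of a 33-site chain
L = 33
start = 1 << (L // 2)
after5 = spread(start, L, 5)
print(bin(after5).count("1"))    # the front grows one site per step: 11 sites
```

After 16 steps the front has reached both ends and the whole chain is infected, whatever the word length; the same two OR operations would serve a two-dimensional lattice row by row.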
Temperature ----------- We know temperature $T$ from the weather reports, but in physics it enters according to Boltzmann into the probability $$p \propto \exp(-E/k_BT) \eqno(1)$$ to observe some configuration with an energy $E$. Here $T$ is the temperature measured in Kelvin (about 273 + the Celsius or centigrade temperature), and $k_B$ the Boltzmann constant relating the scales of energy and temperature. For simplicity we now set $k_B = 1$, i.e. we measure temperature and energy in the same units. If $g$ different configurations have the same energy, then $S = \ln(g)$ is called the entropy, and the probability to observe this energy is $\propto g \exp(-E/T) = \exp(-F/T)$ with the “free energy” $F = E - TS$. In a social application we may think of peer pressure or herding: if your neighbours drink Pepsi Cola, they influence you to also drink Pepsi, even though at present you drink Coca Cola. Thus let $E$ be the number of nearest neighbours drinking Pepsi Cola, minus the number of Coke-drinking nearest neighbours. The probability for you to switch is then given by the energy difference and equals $\exp(-2E/T)$ (or 1 if $E < 0$) in the Metropolis algorithm, or $1/(1 + \exp(2E/T))$ in the Glauber or Heat Bath algorithm. In both cases there is a tendency to decrease $E$. In the limit $T = 0$ one never makes a change which increases $E$, while for small positive $T$ one increases $E$ with a low but finite probability. In the opposite limit of infinite temperature, the energy becomes unimportant and all possible configurations become equally probable. Neither zero nor infinite temperature is usually realistic. In this sense, decreasing the energy $E$ is the simplest or most plausible choice, and the temperature measures the willingness or ability to deviate from this simplest option, e.g. to withstand peer pressure. But temperature also incorporates all those random accidents of life which influence us but are not part of the social model.
For example, it may happen that there is no Pepsi Cola available even though all your neighbours drink Pepsi and you want to follow them. Investors have to make their financial choices under the influence of their clients, whose life is shaped by births, marriages, deaths, or other personal events which are not included explicitly in a financial market model. These accidents are then simulated by a finite temperature, entering the probability that one does not follow the usual rule. The ability to withstand peer pressure and the randomness of personal lives are in principle two different things, and if one wants to include them both one needs two different temperatures $T_1$ and $T_2$, which do not exist in traditional physics [@odor]. Ising Model ----------- In the model published by Ernst Ising in 1925, the variables $S_i$ are not 0 or 1, but $\pm 1$: $$E = - \sum_{i,k} S_iS_k - B \sum_i S_i \eqno(2)$$ and for $B=0$ this corresponds to the above Coke versus Pepsi example. The first summation runs over all neighbour pairs, the second over all sites. Thus if site $i$ considers changing its variable, the energy change is $\pm \Delta E = 2(\sum_k S_k + B)$ and enters through $\exp(-\Delta E/T)$ into the probabilities to flip $S_i$; now $k$ runs over the neighbours of $i$ only. (If instead of flipping one $S_i$ one wants to exchange two different variables $S_i$ and $S_j$, moving $S_i$ into site $j$ and $S_j$ into site $i$, then one has to calculate the energy changes for both sites $i$ and $j$ in this “Kawasaki” kinetics.) A computer program and pictures from its application are given elsewhere in this Encyclopedia [@stauffer]. In physics, the $S_i$ are magnetic dipole moments of the atoms, often called spins, and $B$ is proportional to the magnetic field. Usually, physicists write an exchange constant $J$ before the first sum, but we set $J = 1$ for simplicity here. The model was invented to describe ferromagnetism, as in the elements iron, cobalt or nickel.
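As an illustration of Eq. (2) and the Metropolis flip rule, here is a minimal simulation sketch; the lattice size, temperatures and number of sweeps are arbitrary demonstration choices, not values from the text:

```python
import math
import random

def metropolis_sweep(s, L, T, B=0.0):
    """One Metropolis sweep over an L x L Ising lattice with periodic
    boundaries (J = 1, k_B = 1).  Flipping s[i][j] costs
    dE = 2*s[i][j]*(sum of neighbours + B); the flip is accepted with
    probability min(1, exp(-dE/T))."""
    for i in range(L):
        for j in range(L):
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * (nb + B)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]

def magnetisation(T, L=16, sweeps=300):
    """|m| per spin after `sweeps` sweeps, starting fully ordered."""
    random.seed(1)                    # fixed seed for reproducibility
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        metropolis_sweep(s, L, T)
    return abs(sum(map(sum, s))) / L ** 2

print(magnetisation(T=1.5))   # below T_c ~ 2.27: |m| stays close to 1
print(magnetisation(T=5.0))   # above T_c: |m| is close to 0
```

Replacing the acceptance probability by $1/(1+\exp(\Delta E/T))$ turns this into the Glauber or Heat Bath variant with the same qualitative behaviour.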
Later it was found to describe liquid-vapour equilibria and other phase transitions. We know that iron at room temperature is magnetic, and this corresponds to the fact that for $0 < T < T_c$ and zero field $B$ the Ising model has the majority of its spins in one direction (either mostly +1 or mostly –1), while for $T > T_c$ half of the spins point in one and the other half in the opposite direction. The magnetisation $M = \sum_i S_i$, often normalised by the number $L^d$ of spins, is therefore an order parameter. The critical temperature $T_c$ is often named after Pierre Curie. In one dimension, we have $T_c$ = 0; in the square lattice in two dimensions we know $T_c = 2/\ln(1 + \sqrt 2)$ exactly, while on the simple cubic lattice $T_c \simeq 4.5115$ is estimated only numerically. Of course, one has generalized the model to more than nearest neighbours, to more than two states $\pm 1$ for each spin, and to disordered lattices and networks. Percolation ----------- Simpler than the Ising model but less useful is percolation theory, reviewed more thoroughly in this Encyclopedia in the section edited by M. Sahimi. Each site of a large lattice is randomly occupied with probability $p$, empty with probability $1-p$, and clusters are sets of occupied neighbouring sites. There is a percolation threshold $p_c$ such that for $p< p_c$ only finite clusters exist, for $p > p_c$ also one infinite cluster, and at $p = p_c$ even several infinite clusters may co-exist, which are fractal: the number of occupied sites belonging to the infinite clusters varies at $p_c$ as $L^D$, where $D$ is the fractal dimension. Here “infinite” means spanning from one end of the sample of $L^d$ sites to the opposite end, or increasing in average number of sites with a positive power of $L$.
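The spanning test just described is easy to code; the following sketch (lattice size and occupation probabilities are illustrative) occupies a square lattice at random and checks by breadth-first search whether a cluster connects the top row to the bottom row:

```python
import random
from collections import deque

def spans(p, L, seed=0):
    """Site percolation on an L x L square lattice: occupy each site
    with probability p, then test with a breadth-first search whether
    an occupied cluster connects the top row to the bottom row."""
    rng = random.Random(seed)
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    seen = [[False] * L for _ in range(L)]
    queue = deque((0, j) for j in range(L) if occ[0][j])
    for _, j in queue:
        seen[0][j] = True
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True                       # spanning cluster found
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < L and 0 <= b < L and occ[a][b] and not seen[a][b]:
                seen[a][b] = True
                queue.append((a, b))
    return False

# around p_c ~ 0.5927 the spanning probability jumps from ~0 to ~1
print(spans(0.45, 100))    # well below the threshold: no spanning cluster
print(spans(0.75, 100))    # well above the threshold: spanning cluster
```

Averaging `spans` over many seeds at a fixed $L$ and scanning $p$ reproduces the sharpening step in the spanning probability around the threshold quoted in the text.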
In one dimension, again one has no phase transition ($p_c = 1$); on the square lattice $p_c \simeq 0.5927462$ and on the simple cubic lattice $p_c \simeq 0.311608$ are known only numerically, with fractal dimensions of 1, 91/48 and $\simeq 2.5$ in one to three dimensions. In the resulting disordered lattices, each site has from 0 to $z$ neighbours, where $z$ is the number of neighbours in the fully occupied lattice ($p = 1$). If one neglects the possibility of cyclic links one finds $p_c = 1/(z-1)$ in this Bethe lattice or Cayley tree. Near this percolation threshold the critical exponents with which several quantities diverge or vanish are the same as in the random graphs of Erdös and Rényi. But this percolation theory was published nearly two decades earlier, in 1941, by the later chemistry Nobel laureate P. Flory. Mean Field Approximations ------------------------- What is called “mean field” is called “representative agent” theory in economics, and is widespread in chemistry, where the changes in the concentrations of various reacting compounds are approximated as functions of these time-dependent concentrations. A particularly simple example is Verhulst’s logistic equation $dx/dt = ax(1-x)$, known as Bass diffusion in economics. We now explain why this approximation is unreliable. Let us return to the above Ising model of Eq.(2) and replace the $S_k$ there by its average $<S_k> = m = M/L^d$, which is a real number between –1 and +1 instead of being just –1 or +1; $m$ is the normalised magnetisation. Then the total energy $E$ is approximated as the sum over single energies $E_i$: $$E = \sum_i E_i\,, \quad E_i = (-\sum_k <S_k> - B)S_i = - B'S_i$$ with a mean magnetic field $B' = B + \sum_k <S_k> = B + mz$, where $z$ again is the number of lattice neighbours. The system now behaves as if each spin $S_i$ is in an effective field $B'$, influenced only by the average magnetisation $m$ and no longer directly by its neighbours $S_k$.
The two possible orientations of $S_i$ have the energies $\pm B'$, giving an average $$m = <S_i> = {\rm tanh}(B'/T) = {\rm tanh}[(B + zm)/T] \eqno (3a)$$ and thus a self-consistency equation for $m$. Expanding the hyperbolic tangent into a Taylor series for small $m$ and $B$ we get $$B = (1 - z/T)m + m^3/3 + \dots \eqno (3b)$$ which gives a Curie temperature $T_c = z$, since for $T<T_c$ the magnetisation is $m = \pm [3(z/T-1)]^{1/2} \propto (T_c-T)^{1/2}.$ Similar approximations for liquid-vapour equilibria lead to the Van der Waals equation of 1872, which may be regarded as the first quantitative theory of a complex phenomenon. ($m$ there is the difference between the liquid and the vapour density.) Nowhere in Eqs. (2, 3a) have we put in that there is a phase transition to ferromagnetism; it just arises from the very simple interaction energy $S_iS_k$ between neighbouring spins, and similarly the formation of raindrops emerges from the interaction between the molecules of water vapour. The water molecule is the same H$_2$O in the vapour, the liquid or the ice phase. But this nice approximation contradicts the results mentioned above. For the chain, square and simple cubic lattices it predicts $T_c = z = 2$, 4 and 6, while the correct values are 0, 2.2, and 4.5. Particularly in one dimension it predicts a phase transition at a positive $T_c$ while no such transition is possible: $T_c=0$. This was the main result of Ernst Ising’s thesis in 1925. And even in three dimensions, where the difference in $T_c$ between 4.5 and 6 is less drastic, the above square-root law for $m$ is wrong, since $m$ varies for $T$ slightly below $T_c$ roughly as $(T_c-T)^{0.32}$. Thus mean field theory, the Van der Waals equation, and similar approximations averaging over many particles are at best qualitatively correct. They become exact when each particle interacts equally with all other particles.
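The self-consistency equation (3a) is easily solved numerically; the following sketch (with the square-lattice choice $z = 4$, $B = 0$; the iteration count is an arbitrary demonstration choice) shows how the mean field magnetisation vanishes above $T_c = z$:

```python
import math

def mean_field_m(T, z=4, iterations=200):
    """Solve the self-consistency equation m = tanh(z*m/T) of Eq. (3a)
    with B = 0 by fixed-point iteration, starting from m = 1."""
    m = 1.0
    for _ in range(iterations):
        m = math.tanh(z * m / T)
    return m

# square lattice, z = 4: mean field (wrongly) predicts T_c = z = 4
print(mean_field_m(T=2.0))    # below T_c: a nonzero magnetisation survives
print(mean_field_m(T=6.0))    # above T_c: the iteration collapses to m = 0
```

The iteration converges because the map $m \mapsto \tanh(zm/T)$ is a contraction away from $T_c$; exactly at $T_c$ convergence becomes very slow, mirroring critical slowing down.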
Analogously for percolation, Flory’s approximation of neglecting cyclic links and the Erdös-Rényi random graphs lead to results corresponding to mean field approximations and should not be relied upon in one, two, or three dimensions with links between nearest neighbours only. For cellular automata a particularly drastic failure of analogous mean field approximations (differential equations) was given by Shnerb et al [@shnerb] for a biological problem. Even simpler, many cellular automata on the square lattice lead to blinking pairs of next-nearest neighbours: at even times one site of the pair is 1 and the other is 0, while at odd times the first is 0 and the second is 1. Averaging over many sites destroys these local correlations which keep the blinking pair alive. Applications ============ A thorough review of “sociophysics” was given recently by Castellano et al [@fortunatoRMP], and a long list of references by Carbone et al [@carbone]. Some work of social scientists is reviewed by Davidsson and Verhagen in the section on agent-based simulations in sociology, while Troitzsch in this section reviews both social scientists and physicists. His book with Gilbert [@gilbert] is, of course, more complete. Thus we merely sketch here some of the areas covered in greater detail in the other articles or in the cited literature. Elections --------- ![The vote distribution in several countries and elections is a function only of the scaled variable $vQ/N$. From [@fortunato]. ](fortunato1.eps) A social scientist may be interested in predicting the fate of one particular party or candidate in one particular election, or in explaining it after the election. A physicist, accustomed to electrons, hydrogen atoms and water molecules being the same all over the world, may be more interested in finding which universal properties all elections have in common. Figure 1, kindly sent by Santo Fortunato, is an example.
Let $v$ be the number of votes which a candidate got, $Q$ the number of candidates in that election, and $N$ the total number of votes cast. Then the probability distribution $P(v,Q,N)$ for the number of votes is actually a function $f(vQ/N)$ of only one scaled variable, and that variable $vQ/N$ is the ratio of the actual number $v$ of votes to the average number $N/Q$ of votes per candidate. Various countries and various elections, all using a proportional election system, gave the same curve $f(vQ/N)$, which is a parabola on this double-logarithmic plot and thus corresponds to a log-normal distribution. In Brazil, however, where the personality of a candidate plays a major role, not only the party membership, the results were different. These authors [@fortunato] also present a model to explain the log-normal distribution. Other models for opinion dynamics are reviewed elsewhere in this Encyclopedia [@stauffer]. Financial Markets ----------------- Agent-based simulations of stock markets [@levy] are a typical example of complex systems applications: in these models it is not the single agent but the agents’ (unconscious) cooperation that produces the ups and downs on the stock market, the bubbles and the crashes. These models deal with the more or less random fluctuations, not with well-founded market changes due to new inventions or major natural catastrophes. Real markets give at each time interval a return $r$ which is the relative change of the price. Typically, an index of the whole market like the Dow Jones changes each trading day by about one percent. Much larger fluctuations are much rarer, and the probability to have a change larger than $r$ decays for large $r$ as $1/r^3$: fat tails. The sign of the change is barely predictable, but its absolute value is: volatility clustering. Thus in calm times when $|r|$ was small, tomorrow’s $|r|$ probably is also small, whereas for turbulent times with high $|r|$ in the past one should also expect a large $|r|$ tomorrow.
The daily weather behaves similarly: presumably tomorrow will be like today. Perhaps even multifractality exists in real markets, similar to hydrodynamic turbulence. A simple model, going back to Bachelier more than a century ago, would throw a coin to determine whether the market tomorrow will go up or down. This simple random-walk or diffusion model was shown by Mandelbrot in the 1960’s not to describe a real market; it lacks fat tails and volatility clustering, but may be good for monthly changes. Many better agent-based models have been invented during the last decade and reproduce these real properties, Fig. 2; the Cont-Bouchaud model is based on the above percolation theory [@cont], while the Minority Game tells you it is better not to be with the big crowd [@challet]. Languages --------- The versatility of human languages distinguishes us from the simpler communication systems of other living beings. With computers or with mathematically exact solutions [@komarova], models have been studied for the learning of a language by children and for the evolution of human languages out of simpler forms. Closer to simulations in biology, with the Darwinian selection of the fittest, are the models of competition between various languages of adult humans: will the Welsh language survive against English in Great Britain? Similar to Lotka-Volterra equations for prey and predator in biology, some nonlinear differential equations [@abrams] seem to describe the extinction of the weaker language. Better statistics are available for the size distribution of languages, where “size” is the number of people speaking this language. Here one model of de Oliveira et al found good agreement with reality, Fig. 3; other models [@langssw] were less successful, in spite of many simulations by physicists.
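As an illustration of such language-competition equations, a commonly quoted form of the Abrams-Strogatz model is $dx/dt = (1-x)\,s\,x^a - x\,(1-s)\,(1-x)^a$, where $x$ is the fraction speaking language X and $s$ its relative status; this explicit form, the fitted exponent $a \approx 1.31$, and all numerical choices below are our addition, not spelled out in the text. A simple Euler integration already shows the extinction of the weaker language:

```python
def abrams_strogatz(x0, s, a=1.31, dt=0.1, steps=5000):
    """Euler integration of dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a,
    where x is the fraction speaking language X and s its relative
    status; the competing language Y has status 1-s."""
    x = x0
    for _ in range(steps):
        x += dt * ((1 - x) * s * x ** a - x * (1 - s) * (1 - x) ** a)
    return x

# with unequal status, one of the two languages always dies out
print(abrams_strogatz(x0=0.4, s=0.6))   # high-status X takes over (x -> 1)
print(abrams_strogatz(x0=0.4, s=0.4))   # low-status X goes extinct (x -> 0)
```

The fixed points $x = 0$ and $x = 1$ are stable and the interior fixed point is unstable, so stable coexistence of both languages is impossible in this mean-field-like description.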
Future Directions ================= The future should see more work in what we have shown here through our three figures: searching for universal properties, or the lack of them, in the multitudes of models and in reality. Biology became a real science when the various living beings were classified into horses, mammals, vertebrates etc. Within each such taxonomic set all animals have certain things in common, which animals in other taxonomic sets do not share. This check for universality is different from improving our ability to ride horses. Thus making money on the stock market, or explaining the crash of 1987, is nice, but investigating the exponents of the fat tails, Fig. 2, of all markets may give us more insight into what drives a market and what differences exist between different markets. Winning one particular election and predicting the winner is important, but universal scaling properties as in Fig. 1 may help us to understand democracy better. Preventing the extinction of the French language in Canada is important for the people there, but explaining the overall statistics of languages in Fig. 3 is relevant globally. It is in these general aspects that the methods of mathematics and physics seem to be most fruitful. One specific problem is better solved by the local people who know that problem best, not by general simplified models. A useful future approach for interacting agents would be their realisation by neural network models [@wischmann]. [99]{} [**Primary literature:**]{} Schelling TC (1971) J. Math. Sociol. 1: 143 Dethlefsen E, Moody C (July 1982) Byte 7: 178; Jones FL (1985) Aust. NZ J. Sociol. 21: 431 Stigler GJ (1964) J. Business 37: 117 Kim GW, Markowitz HM (Fall 1989) J. Portfolio Management 16: 45 Stauffer D (2008) Opinion Dynamics and Sociophysics. Draft for this section in this Encyclopedia.
arXiv:0705.0891 Mantegna RN (2005) Presentation of the English translation of Ettore Majorana’s paper: “The value of statistical laws in physics and social sciences”. Quantitative Finance 5: 133 Weidlich W (2000) [*Sociodynamics; A Systematic Approach to Mathematical Modelling in the Social Sciences*]{} Harwood Academic Publishers; 2006 reprint: Dover, Mineola (New York) Galam S (2008) Int. J. Mod. Phys. C 19: issue 3; Galam S, Gefen Y, Shapir Y (1982) J. Math. Sociol. 9: 1 Roehner B, Wiese KE (1982) Environment and Planning A 14: 1449 Gomes MAF, Vasconcelos GL, Tang IJ, Tang IR (1999) Physica A 271: 489 Zanette D (2001) Adv. Complex Syst. 4: 281 Abrams DM, Strogatz SH (2003) Nature 424: 900 http://nobelprize.org/nobel\_prizes/chemistry/laureates/1921/soddy-bio.html Stauffer D (1991) J. Phys. A 24: 909 Ódor G (2008) Int. J. Mod. Phys. C 19: issue 3 Shnerb NM, Louzoun Y, Bettelheim E, Solomon S (2000) Proc. Natl. Acad. Sci. USA 97: 10322 Castellano C, Fortunato S, Loreto V (2007) arXiv:0710.3256, submitted to Reviews of Modern Physics Carbone A, Kaniadakis G, Scarfone AM (2007) Eur. Phys. J. B 57: 121 Fortunato S, Castellano C (2007) Phys. Rev. Lett. 99: 138701 Levy M, Levy H, Solomon S (2000) [*Microscopic Simulation of Financial Markets*]{}, Academic Press, New York; Samanidou E, Zschichang R, Stauffer D, Lux T (2007) Rep. Progr. Phys. 70: 404; Lux T (2007) “Stochastic behavioral asset pricing models and the stylized facts”, draft for [*Handbook of Finance*]{}, Hens T, Schenk-Hoppé K (eds) Stauffer D (2001) Adv. Complex Syst. 4: 19 Challet D, Marsili M, Zhang YC (2004) [*Minority Games*]{}, Oxford University Press, Oxford Komarova NL (2004) J. Theor. Biol. 230: 227 Schulze S, Stauffer D, Wichmann S (2007) Comm. Comput. Phys.
3: 271 de Oliveira PMC, Stauffer D, Wichmann S, Moss de Oliveira S (2007) arXiv:0709.0868 Klüver J (2008) part VI.2 in “Social-cognitive complexity”, this section in this Encyclopedia; Wischmann S, Hulse M, Knabe JF, Pasemann F (2006) Adaptive Behavior 14: 117; Stauffer D (2007) arXiv:0712.4364 Stauffer D, Moss de Oliveira S, de Oliveira PMC, Sá Martins JS (2006) [*Biology, Sociology, Geology by Computational Physicists*]{}, Elsevier, Amsterdam Billari FC, Fent T, Prskawetz A, Scheffran J (2006) [*Agent-based computational modelling*]{}, Physica-Verlag, Heidelberg Gilbert N, Troitzsch KG (2005) [*Simulation for the Social Scientist*]{}, $2^{nd}$ edition, Open University Press, Maidenhead and New York
--- abstract: 'We use the [*Spitzer*]{} SAGE survey of the Magellanic Clouds to evaluate the relationship between the 8 [$\mu$m]{} PAH emission, 24 [$\mu$m]{} hot dust emission, and [[Hii]{}]{} region radiative transfer. We confirm that in the higher-metallicity Large Magellanic Cloud, PAH destruction is sensitive to optically thin conditions in the nebular Lyman continuum: objects identified as optically thin candidates based on nebular ionization structure show 6 times lower median 8 [$\mu$m]{} surface brightness (0.18 mJy arcsec$^{-2}$) than their optically thick counterparts (1.2 mJy arcsec$^{-2}$). The 24 [$\mu$m]{} surface brightness also shows a factor of 3 offset between the two classes of objects (0.13 vs 0.44 mJy arcsec$^{-2}$, respectively), which is driven by the association between the very small dust grains and higher density gas found at higher nebular optical depths. In contrast, PAH and dust formation in the low-metallicity Small Magellanic Cloud is strongly inhibited such that we find no variation in either 8 [$\mu$m]{} or 24 [$\mu$m]{} emission between our optically thick and thin samples. This is attributable to extremely low PAH and dust production together with high, corrosive UV photon fluxes in this low-metallicity environment. The dust mass surface densities and gas-to-dust ratios determined from dust maps using [*Herschel*]{} HERITAGE survey data support this interpretation.' author: - 'M. S. Oey, J. López-Hernández, J. A. Kellar, E. W. Pellegrini, K. D. Gordon, K. E. Jameson, A. Li, S. C. Madden, M. Meixner, J. Roman-Duval, C. Bot, M. Rubio, and A. G. G. M. Tielens' title: 'Dust emission at 8 [$\mu$m]{} and 24 [$\mu$m]{} as Diagnostics of [[Hii]{}]{} Region Radiative Transfer' --- Introduction ============ The ionizing radiation from massive stars has fundamental consequences on scales ranging from individual circumstellar disks to the ionization state of the entire universe. 
On galactic scales, the escape fraction of Lyman continuum radiation from galaxies is crucial to the ionization state of the intergalactic medium and cosmic reionization of the early universe; and radiative feedback is also a major driver for the energetics and phase balance of the interstellar medium (ISM) in star-forming galaxies. Thus, determining the fate of ionizing photons from high-mass stars is critical to understanding the formation and evolution of galaxies throughout cosmic time. Within star-forming galaxies, it has long been recognized that the diffuse, warm ionized medium (WIM), which is the most massive component of ionized gas in galaxies [@walterbos98], is energized by OB stars [e.g. @haffneretal09]. The WIM is a principal component of the multi-phase ISM, and strongly prescribes galactic ecology, which drives evolutionary processes like star formation and galactic dynamics. The standard paradigm is that the WIM is powered both by ionizing radiation escaping from classical [[Hii]{}]{} regions, and by field OB stars [e.g. @oeykennicutt97; @hoopes00]. While additional ionizing sources are sometimes suggested, it is clear that only massive stars can provide enough power to generate the WIM [e.g. @reynolds84], although other mechanisms may be secondary contributors. The relative importance of optically thin [[Hii]{}]{} regions vs field star ionization of the WIM is still poorly understood. Comparison of predicted and observed [[Hii]{}]{} region luminosities in nearby galaxies had suggested that both sources are not only viable, but necessary [@oeykennicutt97; @hoopes00; @hoopesetal01]. However, modern stellar atmosphere models for massive stars [e.g. @martinsetal05; @pauldratchetal01] exhibit lower ionizing fluxes than those of the previous generation, casting doubt that a significant fraction of classical [[Hii]{}]{} regions are density-bounded [optically thin; @voguesetal08]. 
On the other hand, [@woodmathis04] find that the emission-line spectrum of the WIM is consistent with the harder spectral energy distributions (SEDs) expected from density-bounded [[Hii]{}]{} regions, and studies of radiative transfer in the global ISM suggest that ionizing radiation travels over long path lengths, on the order of hundreds of pc in the galactic plane, and 1 – 2 kpc outside the plane [e.g. @collinsrand01; @zuritaetal02; @seon09]. It is also well-known that the WIM surface brightness is highest around [[Hii]{}]{} regions [@fergusonetal96]. We recently developed the technique of ionization-parameter mapping (IPM) to more directly evaluate nebular optical depth in the Lyman continuum [@p12]. This technique uses emission-line ratio maps to determine the nebular ionization structure, and hence, infer the optical depth. For conventional, optically thick Strömgren spheres, there is a transition zone between the central, highly excited region and the neutral environment. These transition zones are characterized by a strong decrease in the ionization state, and hence, the gas ionization parameter, which is the ratio of radiation energy density to gas density. Objects that are optically thick to ionizing photons reflect stratified ionization structure, showing low-ionization envelopes around highly ionized central regions. In contrast, optically thin nebulae will exhibit weak or nonexistent lower-ionization transition zones, and thus they show high ionization projected across the entire object. These usually show irregular and disrupted morphology, which is consistent with radiation-MHD simulations by [@arthuretal11] for highly ionized [[Hii]{}]{} regions. 
This simple IPM technique allowed us to estimate the optical depths of the [[Hii]{}]{} regions in the Magellanic Clouds using [H$\alpha$]{}, \[[[Oiii]{}]{}\] $\lambda\lambda$4959,5007, and \[[[Sii]{}]{}\] $\lambda\lambda$6717,6732 data from the Magellanic Clouds Emission-Line Survey [MCELS; @smithetal05]. We were thus able to determine that optically thick nebulae dominate at low [H$\alpha$]{} luminosity, while high-luminosity objects are mostly optically thin, dominating at luminosities above $10^{37}\ \rm erg\ s^{-1}$ in both galaxies [@p12]. This implies that most of the bright [[Hii]{}]{} regions observed in star-forming galaxies are optically thin. Similarly, we found that the frequency of optically thick [[Hii]{}]{} regions strongly correlates with [[Hi]{}]{} column, although at the lowest [$N$([Hi]{})]{}, the optically thin objects dominate. Thus, despite strongly differing properties of the neutral ISM of these galaxies, the quantitative properties of the nebular radiative transfer are remarkably similar. Our results demonstrate that IPM is a vivid and powerful tool for constraining the optical depth to ionizing radiation [@p12]. However, we need to further evaluate this technique and understand it in the context of other ISM properties and diagnostics. In particular, dust properties are a significant factor in the radiative transfer of ionizing radiation, and they also offer multifaceted probes of this process. Polycyclic aromatic hydrocarbon (PAH) emission is sensitive to Lyman continuum radiation and is destroyed by it [e.g., @tielens08], while larger dust grains absorb and re-emit this radiation.
We therefore use 8 [$\mu$m]{} and 24 [$\mu$m]{} data from the [*Spitzer*]{} survey of the Magellanic Clouds, SAGE \[Surveying the Agents of Galaxy Evolution; [@meixneretal06]\], and dust maps from @gordonetal14 based on the analogous far-infrared [*Herschel*]{} survey, HERITAGE \[[*Herschel*]{} Inventory of The Agents of Galaxy Evolution; [@meixneretal13]\] to examine the Lyman continuum radiative transfer. 8 [$\mu$m]{} PAH Emission ========================= The 8 [$\mu$m]{} bandpass probes the bright, 7.7 [$\mu$m]{} and 8.6 [$\mu$m]{} PAH features, particularly ionized PAHs [e.g., @lidraine01a]. @bauschlicher08 [2009] attribute the 7.7 [$\mu$m]{} band to C-C stretch and C-H in-plane bending vibrations in small and large charged PAHs, and the 8.6 [$\mu$m]{} emission to C-H in-plane bending vibrations in large, charged, compact PAH molecules ($>70$ C atoms). In the Large Magellanic Cloud (LMC), PAH emission is typically an order of magnitude brighter than other contributions to this band in both star-forming and diffuse ISM [@bernard08]. Even in the low-metallicity SMC, spectral analysis of objects with low PAH fractions shows that these emission features still dominate the continuum [@sandstrom10]. PAHs are generally found to be anticorrelated with ionized gas, indicating that they are destroyed by ionizing radiation [e.g., @povichetal07; @pavlyuchenkovetal13]. Indeed, aromatics are a major component of the Lyman continuum opacity [@lidraine01]. We therefore expect that optically thin [[Hii]{}]{} regions should show less PAH emission in their peripheries relative to optically thick objects. Thus, the spatial distribution of PAHs near optically thin [[Hii]{}]{} regions might behave similarly to that of low-ionization atomic species. 
Therefore, mapping of 8 [$\mu$m]{} PAH emission relative to a high-ionization atomic species (e.g., \[[[Oiii]{}]{}\]) might yield results similar to ionization-parameter mapping based on a low-to-high ionization ratio map as done by [@p12]. Figure \[f\_pelleg\] shows example 8[$\mu$m]{}/\[[[Oiii]{}]{}\] ratio maps of an [[Hii]{}]{} region simulated with [Cloudy]{} [@ferland13]. We show an object ionized by an O6 V star for Lyman continuum optical depths of $\tau = 0.5$ and 20. This figure is analogous to Figure 2 of @p12, and illustrates that, in principle, 8[$\mu$m]{}/\[[[Oiii]{}]{}\] should behave similarly to \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\]. In what follows, we use the high-quality, 8 [$\mu$m]{} residual images from the SAGE survey [@gordonetal11; @meixneretal06], for which the stellar point sources were removed via PSF fitting [@sewilo09], alleviating stellar contamination. Figure \[fig\_8m2o3ratio\] (top panel) shows the 8$\mu$m/\[[[Oiii]{}]{}\] ratio map for a region in the LMC, constructed from the continuum-subtracted SAGE image and the \[[[Oiii]{}]{}\] image from the MCELS survey [@smithetal05]; white indicates high values. The apertures defining the [[Hii]{}]{} regions from [@p12] are overplotted, with green and blue showing optically thick and thin objects, respectively, as determined by IPM in that work. Figure \[fig\_8m2o3ratio\] shows that objects previously identified as optically thin tend to show less PAH emission compared to those identified as optically thick. Using the same continuum-subtracted images, we measured the 8 [$\mu$m]{} flux densities of the [[Hii]{}]{} regions using Funtools[^1] routines for ds9. This was done for all the objects catalogued as optically thick or thin, including “blister” regions, by [@p12], using the apertures defined in that work. 
These apertures are defined based on the nebular emission and ionization structure, and we note that physically associated 8 [$\mu$m]{} flux may not always correlate well with the aperture boundaries. We tried to determine a systematic method to modify the apertures to avoid this problem. However, the 8 [$\mu$m]{} spatial morphology differs strongly from that of the nebular emission and is fraught with confusion from background and neighboring emission. Thus, there is no obvious way to redefine the apertures to accurately define the boundaries between physically associated and unassociated emission for most objects. We caution that the 8 [$\mu$m]{} flux density measurements across the samples are therefore subject to larger uncertainties in terms of their association with the specified [[Hii]{}]{} regions. These uncertainties are hard to quantify, but they can be on the order of 50% for some objects, and much less for others. Figure \[fig\_my80mdists\] shows the 8 [$\mu$m]{} surface brightness distributions for the [[Hii]{}]{} regions in the LMC (metallicity $0.6Z_\odot$) and SMC ($0.25Z_\odot$; Russell & Dopita 1992), respectively. Objects identified as optically thick by [@p12] are shown with thick lines, and those identified as optically thin by thin lines. Figure \[fig\_my80mdists\] also shows the distribution of background, diffuse 8 [$\mu$m]{} emission (dashed lines) for each galaxy, defined by the regions shown in Figure \[fig\_masks8\]. It is apparent in the upper panel of Figure \[fig\_my80mdists\] that the candidate optically thick objects show more 8 [$\mu$m]{} emission than candidate optically thin ones in the LMC, which is consistent with the destruction of PAHs by the Lyman continuum radiation. This is further supported by the optically thin objects lying at the background levels.
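The surface-brightness measurements described above amount to summing the flux inside an aperture and dividing by its area. A minimal numpy sketch follows; the circular-aperture geometry and the pixel scale are illustrative assumptions, not the Funtools apertures used in the analysis:

```python
import numpy as np

def aperture_surface_brightness(image, cx, cy, r_pix, pix_scale_arcsec):
    """Mean surface brightness (image units per arcsec^2) inside a
    circular aperture of radius r_pix pixels centred on (cx, cy)."""
    image = np.asarray(image, dtype=float)
    yy, xx = np.indices(image.shape)
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r_pix ** 2
    total_flux = image[inside].sum()              # e.g. in mJy
    area_arcsec2 = inside.sum() * pix_scale_arcsec ** 2
    return total_flux / area_arcsec2

# uniform toy image: surface brightness must equal value / pixel area
img = np.full((21, 21), 0.5)                      # 0.5 mJy per pixel
sb = aperture_surface_brightness(img, 10, 10, 5, pix_scale_arcsec=2.0)
# 0.5 mJy / (2 arcsec)^2 = 0.125 mJy arcsec^-2
```

The per-object values obtained this way are what populate the surface-brightness distributions compared below.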
In contrast, the lower panel of Figure \[fig\_my80mdists\] shows that in the SMC, the 8 [$\mu$m]{} surface brightness distributions for the optically thick and thin objects are essentially the same. The overall level is also far lower: for the optically thick objects, the median 8 [$\mu$m]{} surface brightness in the LMC is 1.2 mJy arcsec$^{-2}$, while in the SMC it is only 0.18 mJy arcsec$^{-2}$. This is likely linked to the extremely low PAH emission found in low-metallicity environments [e.g., @madden06; @wu06; @engelbracht05], which is due to an actual low PAH abundance in these conditions [@draine07; @munozmateos09]. @sandstrom10 examined the spatially resolved PAH abundance across the SMC, confirming the overall low PAH fraction, but finding strong differentiation between molecular clouds and diffuse ISM, with clouds showing PAH fractions 2 – 3 times higher than diffuse gas. This resolved study points to a model in which these aromatics form within molecular clouds via photoprocessing in the mantles of larger dust grains [@greenberg00]; the PAHs are subsequently destroyed by stellar UV radiation, which is less inhibited by dust in low-metallicity environments [e.g., @madden06; @gordon08]. PAH destruction is further enhanced by their smaller average sizes, as found in the SMC by @sandstrom12. This contrasts with PAH abundance models at higher metallicity, in which additional processes contribute to PAH production and dustier environments inhibit the propagation of UV radiation [e.g., @paradis09]. The large observed variation in PAH abundances of star-forming regions in the SMC is thus modulated by their remaining molecular gas and the local UV photon flux or ionization parameter. This model is consistent with the observed presence and variation of the 2175 Å bump in the SMC B1-1 cloud [@maizapellaniz12]. If PAH production indeed depends on the existence of larger dust grains, it is necessarily much lower in metal-poor environments.
Thus, our results in Figure \[fig\_my80mdists\] can be understood to mean that the large stochastic variation in PAH abundance masks any systematic differences between optically thick and thin [[Hii]{}]{} regions. Can 8 [$\mu$m]{} PAH imaging be useful for estimating the nebular optical depth when combined with, for example, mapping in a high-ionization atomic species? This would be similar to the IPM technique based on \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\] mapping. For objects with at least LMC metallicity, the data suggest that 8 [$\mu$m]{} imaging can provide valuable information. At lower metallicity, as seen in the SMC, PAHs are not abundant enough to be used for such a diagnostic. Two example objects from the LMC are shown in Figures \[fig\_L215\] and \[fig\_L258\], which show regions MCELS-L215 (optically thick) and MCELS-L258 (optically thin) in [H$\alpha$]{}, \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\], 8[$\mu$m]{}/\[[[Oiii]{}]{}\], 24[$\mu$m]{}/\[[[Oiii]{}]{}\], and 8[$\mu$m]{}/24[$\mu$m]{}. There is similarity between the \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\] and 8[$\mu$m]{}/\[[[Oiii]{}]{}\] ratio maps, although we also see that the 8 [$\mu$m]{} emission extends beyond the nebular boundaries defined for the regions. In many cases it also appears morphologically unrelated to the [[Hii]{}]{} region, as in MCELS-L258 (Figure \[fig\_L258\]). We can therefore expect that evaluating the optical depth based only on 8[$\mu$m]{}/\[[[Oiii]{}]{}\] will not be as straightforward as when using only nebular atomic lines. We reclassified all the LMC objects by visual inspection of the regions, following the @p12 methodology, but using the 8[$\mu$m]{}/\[[[Oiii]{}]{}\] map instead of \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\], and allowing consideration of PAH emission outside the nebular boundaries specified by @p12. We also imposed threshold values of 0.5 and 0.3 in these ratio maps for the LMC and SMC, respectively, above which the objects are considered optically thick.
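The threshold step of this reclassification can be sketched as follows. Note that the actual classification also relied on visual inspection, and the aggregation statistic used here (a median over the aperture pixels, ignoring masked values) is an assumption for illustration only:

```python
import numpy as np

# per-galaxy thresholds on the 8um/[O III] ratio map, from the text
THRESHOLD = {"LMC": 0.5, "SMC": 0.3}

def classify(ratio_values, galaxy):
    """'thick' if the median 8um/[O III] ratio inside the aperture
    exceeds the per-galaxy threshold, else 'thin'.  NaN pixels
    (masked, low signal-to-noise) are ignored."""
    med = np.nanmedian(ratio_values)
    return "thick" if med > THRESHOLD[galaxy] else "thin"

c1 = classify([0.8, 0.6, np.nan, 0.7], "LMC")   # median 0.7 > 0.5
c2 = classify([0.2, 0.4, 0.25], "SMC")          # median 0.25 < 0.3
```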
Our classifications are listed in the Appendix. We then compare with the objects’ classifications by @p12 as optically thick or thin (including blister) based on \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\] maps. The sample for which this comparison can be done corresponds to almost two-thirds of the objects (256 out of 401 objects) in the LMC, since a number of objects were either not classified as optically thick or thin by [@p12] or by us, or did not correspond to adequate detection in 8 [$\mu$m]{}. We find that of the 256 objects, 185 (72%) maintain the same classifications and 71 objects (28%) switch classification from optically thick to thin (59 objects) or vice versa (12 objects). In the SMC, however, more objects change their classification (115 objects out of 189, or 61%) than remain the same (74 objects, or 39%) when evaluated with PAH emission. This again suggests that PAHs are simply not abundant enough in this galaxy to provide a useful diagnostic of radiative transfer. However, in the LMC, for objects whose classifications are consistent for both nebular and PAH-based methods, the 8 [$\mu$m]{} data can provide important confirmation. As in the LMC, we also find in the SMC that more objects switch classification from optically thick to thin (88) than vice versa (27). This trend is consistent with PAHs being a more sensitive indicator of UV flux than low-ionization atomic species. As discussed by @p12, although it usually indicates optically thick conditions, the presence of a low-ionization envelope is also seen in some optically thin objects, especially those with softer ionizing sources. The nebular-based classifications therefore might discriminate at somewhat higher optical depths than the PAH-based ones. 
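The bookkeeping behind the percentages quoted above can be reproduced directly from the switch counts:

```python
# Cross-tabulation of the nebular ([S II]/[O III]) and PAH (8um/[O III])
# classifications, using the counts quoted in the text.
def agreement(n_total, thick_to_thin, thin_to_thick):
    switched = thick_to_thin + thin_to_thick
    same = n_total - switched
    return (same, switched,
            round(100 * same / n_total), round(100 * switched / n_total))

lmc = agreement(256, 59, 12)    # 185 same (72%), 71 switched (28%)
smc = agreement(189, 88, 27)    # 74 same (39%), 115 switched (61%)
```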
More data are needed to determine how much of the discrepancy between the methods is due to this effect, and how much is due to errors caused by PAH spatial distribution, background confusion, and lower spatial resolution in the 8 [$\mu$m]{} image, as well as misclassifications from the nebular lines. 24 [$\mu$m]{} Hot Dust Emission =============================== Very small dust grains within [[Hii]{}]{} regions absorb energetic photons produced by the massive stars and re-emit this energy in the 24 [$\mu$m]{} band, which is an indicator of hot dust [e.g., @draineli07]. Hence 24 [$\mu$m]{} emission has been used as a tracer of obscured star formation [e.g., @calzettietal07]. Optically thick objects, with higher gas-to-photon densities, might be expected to have more dust and thus correspondingly stronger 24 [$\mu$m]{} emission. However, we note that these dust grains, which are on average larger than PAHs, are not as easily destroyed by UV radiation. Thus, they tend to associate with individual dense knots, and also remain somewhat more uniformly distributed in the star-forming regions than PAHs. This is seen in the spatial distribution of 24 [$\mu$m]{} emission in Figures \[fig\_L215\] and \[fig\_L258\]. We measure the 24 [$\mu$m]{} surface brightnesses for our sample objects in the same way as for the 8 [$\mu$m]{} emission. The 24 [$\mu$m]{} data are not continuum-subtracted, since there is no significant stellar continuum contributing to this band. Figure \[fig\_my24mdist\] shows the 24 [$\mu$m]{} surface brightness distributions for the LMC (top) and SMC (bottom). We see that, as expected, optically thick regions in the LMC have higher 24 [$\mu$m]{} surface brightness than optically thin ones. The median values are 0.44 and 0.13 mJy arcsec$^{-2}$ for the thick and thin regions, respectively, in this galaxy.
However, for the SMC, the 24 [$\mu$m]{} surface brightness distributions are essentially the same for the optically thick and thin objects (Figure \[fig\_my24mdist\]). As in the case of the 8 [$\mu$m]{} emission, this is likely due to the low SMC metallicity and hence low dust content, as well as generally lower ISM density relative to the LMC. The mean 24 [$\mu$m]{} surface brightness for the thick and thin regions in the SMC is about 0.05 mJy arcsec$^{-2}$, an order of magnitude lower than the values for the LMC. We do note that the diffuse background is still slightly lower than in the [[Hii]{}]{} regions. Dust Mass ========= We interpret our findings above to suggest that the SMC is simply too metal-poor to sustain enough dust, both PAHs and larger grains, to generate differential trends between optically thick and thin [[Hii]{}]{} regions as seen in a more metal-rich environment like the LMC (Figures \[fig\_my80mdists\] and \[fig\_my24mdist\]). To evaluate this possibility, we use the dust map constructed by @gordonetal14 to measure the integrated dust masses using the same method as before. In the SMC, 129 (63%) of 203 objects are detected, whereas in the LMC 220 (84%) of 262 objects are detected in the dust maps. For objects with detections, Figure \[fig\_dustmassdist\] shows the distribution of dust mass surface density [$\Sigma_d$]{} for the optically thick and thin objects in each galaxy, analogous to the earlier distribution plots. The top panel of Figure \[fig\_dustmassdist\] indeed confirms that optically thick objects in the LMC have 1.7 times higher median [$\Sigma_d$]{} than their optically thin counterparts; the median [$\Sigma_d$]{} are $5.0\times10^{-3}$ and $3.0\times10^{-3}\ \rm M_\odot\ pc^{-2}$, respectively. In contrast, there is no differentiation between optically thick and thin objects in the SMC: $1.1\times10^{-3}$ and $1.2\times10^{-3}\ \rm M_\odot\ pc^{-2}$, respectively.
This value may well correspond to a diffuse background emission, and Figure \[fig\_dustmassdist\] may imply that optically thin [[Hii]{}]{} regions have negligible [$\Sigma_d$]{}. These trends are further confirmed by the gas-to-dust ratios (GDR) obtained in the same apertures. We computed these using the GDR maps of @romanduval14, where the dust surface density is derived from the HERITAGE data used above [@gordonetal14], and the gas surface density includes both [[Hi]{}]{} [@kim03; @stanimirovic99] and H$_2$, inferred from CO [@wong11; @mizuno01]. We adopt the maps with CO-to-H$_2$ conversion factors of $X_{\rm CO,20} = 2$ and 10 in the LMC and SMC, respectively [@bolatto13]. Figure \[fig\_gdr\] shows the distribution in GDR for the optically thick and thin objects in both Magellanic Clouds, analogous to our previous figures, along with the diffuse emission. We see that, in the LMC, optically thin objects tend to have higher GDR than the optically thick objects, although the effect is not dramatic. The mean values for thin and thick objects are 265 and 243, respectively, and the optically thin distribution is again intermediate between the optically thick objects and diffuse gas, as seen in Figures \[fig\_my80mdists\] and \[fig\_my24mdist\]. This behavior is consistent with the conventional correlation between dust and optically thick conditions. As before, the behavior is different in the SMC, now with the optically thick objects showing significantly higher GDRs than the optically thin objects: the mean values are 1701 and 959 for the two samples, respectively. Figure \[fig\_gdr\] shows that the optically thin distribution peaks at similar GDR as the diffuse gas, supporting our premise that the dust abundance in these objects is similar to that of the diffuse ISM, since both are governed by destruction from the UV interstellar radiation field [@madden06; @gordon08]. 
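A sketch of how such a GDR map is assembled from the ingredients named above. The conversion constant assumes $\Sigma_{\rm H_2} = 2 m_{\rm H} N({\rm H_2})$ with no helium correction, and the input values are illustrative, not taken from the actual maps:

```python
import numpy as np

MSUN_PC2_PER_1E20 = 1.6   # ~Msun pc^-2 per 1e20 H2 molecules cm^-2 (2 m_H, no He)

def gdr_map(sigma_hi, i_co, sigma_dust, x_co_20):
    """Gas-to-dust ratio map: (Sigma_HI + Sigma_H2) / Sigma_dust.

    sigma_hi, sigma_dust in Msun pc^-2; i_co in K km/s;
    x_co_20 is X_CO in units of 1e20 cm^-2 (K km/s)^-1
    (2 for the LMC and 10 for the SMC in the text)."""
    sigma_h2 = MSUN_PC2_PER_1E20 * x_co_20 * np.asarray(i_co, dtype=float)
    gas = np.asarray(sigma_hi, dtype=float) + sigma_h2
    return gas / np.asarray(sigma_dust, dtype=float)

g = gdr_map(sigma_hi=[10.0], i_co=[1.0], sigma_dust=[0.05], x_co_20=2)
# (10 + 1.6*2*1) / 0.05 = 264, of order the LMC mean GDRs (~240-270) quoted above
```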
The higher GDR for optically thick objects may be attributable to the fact that these [[Hii]{}]{} regions are also subject to UV radiation, but must contain the higher gas masses necessary to remain optically thick. Conclusion ========== We have examined the 8 [$\mu$m]{} PAH and 24 [$\mu$m]{} hot dust emission associated with [[Hii]{}]{} regions in the Magellanic Clouds to evaluate how the emission in these bands relates to the nebular optical depth in the Lyman continuum. Specifically, we examined IR emission and dust properties derived from the SAGE and HERITAGE surveys of the Magellanic Clouds associated with [[Hii]{}]{} regions that were classified by @p12 as candidate optically thick and thin objects. Since PAHs are easily destroyed by UV radiation, in principle, nebular optically thin conditions may be confirmed by low peripheral PAH abundance. We find that the use of PAHs as a diagnostic for nebular conditions is compromised by the strongly non-uniform spatial distribution of dust relative to the ionized gas. Nevertheless, for metallicities allowing significant dust formation, as in the LMC, optically thick [[Hii]{}]{} regions clearly show much higher 8 [$\mu$m]{} surface brightness, with a median value about 6 times higher than for optically thin objects. The lower 8 [$\mu$m]{} emission in optically thin objects is unlikely to be due to lower heating rates since on average the stellar ionizing fluxes are higher in optically thin objects [@p12]. Thus, the 8 [$\mu$m]{} emission can offer important supporting diagnostic data on optical depth at higher metallicities. In contrast to the LMC, we find no differentiation in the low PAH levels seen in the optically thick and thin nebulae of the SMC. These results are consistent with the model of @sandstrom10 in which low-metallicity PAH abundance is regulated by low production rates in molecular clouds and high destruction rates by stellar UV radiation.
This dominates the variations in PAH abundances of star-forming regions and masks any differentiation due to optical depth effects. Thus, at this much lower metallicity, it appears that PAHs are simply too underabundant to serve as diagnostics for Lyman continuum opacity. The very small dust grains that produce the 24 [$\mu$m]{} emission are more resilient to UV radiation and well known to correlate with star-forming regions. We confirm that it is associated with star formation, having more uniform morphological correspondence to luminous [[Hii]{}]{} regions and star-forming knots. We again find that the optically thick [[Hii]{}]{} regions show a significant offset, a factor of about 3, in median 24 [$\mu$m]{} surface brightness relative to the optically thin objects in the LMC. However, the offset here is due to the association with denser gas in optically thick regions, rather than destruction in optically thin regions. As with the PAH emission, there is no discernible difference with nebular optical depth in the SMC, again attributable to low dust abundance. Thus, we find that the low metallicity in the SMC apparently inhibits the formation of PAHs and dust such that we cannot use the 8 [$\mu$m]{} and 24 [$\mu$m]{} emission as diagnostics of nebular radiative transfer. This is further confirmed by inspection of the dust mass surface densities, showing no significant difference between the optically thick and thin objects in the SMC. In contrast, the LMC shows that the optically thick objects have higher median dust mass surface density by a factor of 1.7 compared to the optically thin objects, and the median GDR similarly is 1.8 times higher. This contrast in PAH diagnostics is consistent with the suggestions of a transition in ISM dust conditions at metallicities just above the SMC value [@draine07; @engelbracht05], such that the PAH contribution to dust mass drops precipitously in metal-poor environments. 
For our purposes, the decrease in 24 [$\mu$m]{}-emitting hot dust also precludes the use of this emission as a useful diagnostic of nebular conditions in these environments. Hence, our findings suggest that at higher metallicities, the 8 [$\mu$m]{} PAH and 24 [$\mu$m]{} hot dust emission can offer useful diagnostics of [[Hii]{}]{} region radiative transfer. We do caution that there is significant overlap in the distributions of properties between the optically thick and thin objects. Much of this degeneracy is due to the fact that optical depth is not a binary classification, but rather, a continuous quantity, and efforts to bin objects into two categories will necessarily cause overlap in the distributions. We further caution that the optical depth classifications of @p12 have a large degree of subjectivity, as do our reclassifications based on the 8[$\mu$m]{}/\[[[Oiii]{}]{}\] maps in Section 2. As stressed by [@p12], IPM can only offer a first-order estimate of optical depth for a single ratio map, and so classifications of individual objects should be regarded as tentative. The 8 [$\mu$m]{} and 24 [$\mu$m]{} emission can therefore provide valuable additional diagnostics when combined with the nebular emission-line ratio maps. As discussed in Section 2, since PAHs are more sensitive to UV radiation than atomic species, they seem to be sensitive to a somewhat higher optical depth threshold. This work was supported by the National Science Foundation, grant AST-1210285. M.R. acknowledges support from CONICYT (Chile) through FONDECYT grant No. 1140839 and partial support through project BASAL PFB-06. We also thank the anonymous referee for helpful comments. Arthur, S. J., Henney, W. J., Mellema, G., de Colle, F., & V[á]{}zquez-Semadeni, E. 2011, , 414, 1747 Bauschlicher, C. W., Peeters, E., & Allamandola, L. J. 2009, 697, 311 Bauschlicher, C. W., Peeters, E., & Allamandola, L. J. 2008, 678, 316 Bernard, J.-P., et al. 
2008, , 136, 919 Bolatto, A., D., Wolfire, M., & Leroy, A. K. 2013, 51, 207 Calzetti, D., Kennicutt, R. C., Engelbracht, C. W., et al. 2007, , 666, 870 Collins, J. A., & Rand, R. J. 2001, , 551, 57 Draine, B. T., & Li, A. 2007, , 657, 810 Draine, B. T., Dale, D. A., Bendo, K. D., et al. 2007, , 663, 866 Engelbracht, C. W., Gordon, K. D., Rieke, G. H., Werner, M. W., Dale, D. A., & Latter, W. B. 2005, , 628, L29 Ferguson, A. M. N., Wyse, R. F. G., & Gallagher, J. S. 1996, , 112, 2567 Ferland, G. J., Porter, R. L., van Hoof, P. A. M., Williams, R. J. R., Abel, N. P., Lykins, M. L., Shaw, G., Henney, W. J., & Stancil, P. C. 2013, RMxAA 49, 137 Gordon, K. D., Engelbracht, C. W., Rieke, G. H., Misselt, K. A., Smith, J.-D. T., & Kennicutt, R. C. 2008, 682, 336 Gordon, K. D., Meixner, M., Meade, M. R., et al. 2011, , 142, 102 Gordon, K. D., Roman-Duval, J., Bot, C., et al. 2014, , 797, 85 Greenberg, J. M., Gillette, J. S., Muñoz Caro, G. M., et al. 2000, , 531, L71 Haffner, L. M., Dettmar, R.-J., Beckman, J. E., et al. 2009, Reviews of Modern Physics, 81, 969 Hoopes, C. G., & Walterbos, R. A. M. 2000, , 541, 597 Hoopes, C. G., Walterbos, R. A. M., & Bothun, G. D. 2001, , 559, 878 Kim, S., Staveley-Smith, L., Dopita, M. A., Sault, R. J., Freeman, K. C, Lee, Y., & Chu, Y.-H. 2003, , 148, 473 Li, A., & Draine, B. T. 2001, , 550, 214L Li, A., & Draine, B. T. 2001, , 554, 778 Madden, S. C., Galliano, F., Jones, A. P., & Sauvage, M. 2006, , 446, 877 Maí z-Apellániz, J. & Rubio, M. 2012, , 541, A54 Martins, F., Schaerer, D., & Hillier, D. J. 2005, , 436, 1049 Meixner, M., Gordon, K. D., Indebetouw, R., et al. 2006, , 132, 2268 Meixner, M., Panuzzo, P., Roman-Duval, J., et al. 2013, , 146, 62 Mizuno, N., Rubio, M., Mizuno, A., Yamaguchi, R., Onishi, T., & Fukui, Y. 2001, , 53, 45 Muñoz-Mateos, J. C., Gil de Paz, A., Boissier, S., et al. 2009, , 701, 1965 Oey, M. S., & Kennicutt, R. C., Jr. 1997, , 291, 827 Paradis, D., Reach, W. T., Bernard, J.-P., et al. 
2009, , 138, 196 Pauldrach, A. W. A., Hoffmann, T. L., & Lennon, M. 2001, , 375, 161 Pavlyuchenkov, Y. N., Kirsanova, M. S., & Wiebe, D. S. 2013, Astronomy Reports, 57, 57 Pellegrini, E. W., Oey, M. S., Winkler, P. F., et al. 2012, , 755, 40 Povich, M. S., Stone, J. M., Churchwell, E., et al. 2007, , 660, 346 Reynolds, R. J. 1984, , 282, 191 Roman-Duval, J., Gordon, K. D., Meixner, M., et al. 2014, , 797, 86 Russell, S. C. & Dopita, M. A. 1992, , 384, 508 Sandstrom, K., M., Bolatto, A. D., Draine, B. T., Bot, C., & Stanimirović, S. 2010, , 715, 701 Sandstrom, K., M., Bolatto, A. D., Bot, C., et al. 2012, , 744, 20 Seon, K.-I. 2009, , 703, 1159 Sewilo, M., et al. 2009, “SAGE Data Products Description,” http://irsa.ipac.caltech.edu/data/SPITZER/SAGE/doc/SAGESpecDataDelivery-v3.pdf Smith, R. C., Points, S., Chu, Y.-H., et al. 2005, Bulletin of the American Astronomical Society, 37, 145.01 Stanimirović, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J., & Snowden, S. L. 1999, , 302, 417 Tielens, A. G. G. M. 2008, ARAA 46, 289 Voges, E. S., Oey, M. S., Walterbos, R. A. M., & Wilkinson, T. M. 2008, , 135, 1291 Walterbos, R. A. M. 1998, Publications of the ASA, 15, 99 Wong, T., Hughes, A., Ott, J., et al. 2011, , 197, 16 Wood, K., & Mathis, J. S. 2004, , 353, 1126 Wu, Y., Charmandaris, V., Hao, L., Brandl, B. R., Bernard-Salas, J., Spoon, H. W. W., & Houck, J. R. 2006, , 639, 157 Zurita, A., Beckman, J. E., Rozas, M., & Ryder, S. 2002, , 386, 801 As described in Section 2, we classified all the MCELS objects as optically thick or thin, based on the 8[$\mu$m]{}/\[[[Oiii]{}]{}\] ratio map. The classifications were evaluated by J.L.-H. 
Objects marked with asterisks indicate ones for which our classifications differ from those of @p12, which were based on \[[[Sii]{}]{}\]/\[[[Oiii]{}]{}\] ratio maps.\ Our LMC classifications are as follows.\ Optically thick: MCELS-L4, L6, L8, L11, L13\*, L15\*, L25, L28, L29\*, L32, L33, L35, L47, L54, L60, L65\*, L69, L70, L73, L78, L93, L95, L96, L108\*, L125, L127, L130, L131, L132, L134, L135\*, L136, L140, L143, L144, L149, L162, L173, L181, L188, L192, L193, L194, L197, L201\*, L204, L206, L208, L212, L213, L215, L216, L218, L219, L222, L226, L227, L229, L230, L237, L238, L244, L251, L255\*, L257, L261\*, L264, L268, L274, L278, L285, L286, L290, L292, L304, L310, L311, L318, L320, L332, L334, L335, L336, L339, L340, L341, L342, L343\*, L345, L348, L352\*, L353\*, L354, L355, L357, L369, L372, L374, L377, L382, L384, L385, L389, L390, L391, L393, L400.\ Optically thin: MCELS-L1, L2, L3, L5\*, L9\*, L10, L12, L14\*, L16, L17\*, L18\*, L20, L21, L22\*, L23\*, L24, L27, L34\*, L36\*, L38, L39, L40, L42, L43\*, L44, L45, L48, L49, L52\*, L55\*, L56, L58, L59, L61\*, L63, L67, L71, L72\*, L74\*, L75\*, L77, L79\*, L80\*, L86, L92, L97\*, L98\*, L99, L101, L102, L103, L104\*, L106, L107\*, L109\*, L114\*, L118\*, L119, L121, L122\*, L128\*, L137, L138\*, L141\*, L146, L147, L148, L150, L151, L152, L155, L157\*, L163, L165, L167\*, L168, L169, L170\*, L171, L174, L175\*, L176, L177, L180, L182, L184, L191, L200, L202, L203\*, L207\*, L209, L210\*, L211, L217\*, L223, L231, L232, L239, L240, L241, L242, L248, L250\*, L252\*, L253, L254, L258, L259, L260, L267, L277\*, L284\*, L288\*, L295, L300\*, L302, L303, L305, L306, L307, L315\*, L316\*, L319\*, L321\*, L323, L325\*, L326, L328, L333, L337\*, L338, L344\*, L346\*, L347\*, L351\*, L356, L361\*, L362\*, L365, L367, L373, L379, L380\*, L386, L394\*, L395\*, L396, L401\*.\ Our SMC classifications are as follows.\ Optically thick:\ MCELS-S1\*, S4, S6\*, S7\*, S9, S10\*, S14\*, S27, S32, S33, S34, S42, S47, S71, S80, 
S81, S85, S86, S92, S93\*, S96\*, S97, S101, S104\*, S105\*, S107, S113\*, S115, S119, S123, S126\*, S131\*, S132\*, S139, S140\*, S142, S143\*, S149, S151\*, S157\*, S161, S162\*, S164, S166, S167, S169, S170, S172\*, S173, S175\*, S176, S177\*, S178, S179, S183\*, S184, S185\*, S187\*, S188, S189, S192\*, S196, S198, S204\*, S206\*, S208.\ Optically thin:\ MCELS-S2\*, S3\*, S5, S8\*, S15\*, S16, S17\*, S18\*, S19\*, S20\*, S22\*, S23\*, S24\*, S25\*, S26\*, S28\*, S29\*, S30\*, S31\*, S35\*, S36, S37, S38\*, S39\*, S40\*, S43\*, S44\*, S45\*, S46, S48\*, S49\*, S51\*, S52\*, S54\*, S55\*, S56\*, S57\*, S59\*, S60\*, S61, S62, S63, S64\*, S65, S66, S67, S68\*, S70\*, S72\*, S73, S74\*, S77\*, S78\*, S79\*, S82, S83\*, S84\*, S87\*, S88\*, S89\*, S90\*, S91\*, S94, S95\*, S98, S99, S102\*, S103\*, S106\*, S108, S109, S110\*, S111, S112\*, S114, S116\*, S117\*, S121\*, S124, S125, S127, S128\*, S130\*, S133\*, S134, S135\*, S137\*, S138, S141\*, S144\*, S145, S146\*, S147\*, S148, S150\*, S152, S153, S154, S155\*, S156\*, S158\*, S159, S160, S168\*, S171\*, S174\*, S180\*, S181\*, S182, S186\*, S190\*, S191\*, S195\*, S197\*, S199, S200\*, S207\*, S209\*, S210\*, S211\*, S212\*, S213, S214\*. [^1]: http://hea-www.harvard.edu/RD/funtools/
--- abstract: 'The unit of quantum information is the qubit, a vector in a two-dimensional Hilbert space. On the other hand, quantum hardware often operates in two-dimensional subspaces of vector spaces of higher dimensionality. The presence of higher quantum states may affect the accuracy of quantum information processing. In this Letter we show how to cope with [*quantum leakage*]{} in devices based on small Josephson junctions. While the presence of higher charge states of the junction reduces the fidelity during gate operations we demonstrate that errors can be minimized by appropriately designing and operating the gates.' address: | $^{(1)}$Dipartimento di Metodologie Fisiche e Chimiche (DMFCI), Università di Catania, viale A.Doria 6, I-95125 Catania, Italy\ $^{(2)}$Dipartimento di Scienze Fisiche ed Astronomiche (DSFA), Università di Palermo, via Archirafi 36, I-90123 Palermo, Italy\ $^{(3)}$Istituto Nazionale per la Fisica della Materia (INFM), Unità di Catania e Palermo\ author: - 'Rosario Fazio$^{(1,3)}$, G. Massimo Palma$^{(2,3)}$, and Jens Siewert$^{(1,3)}$' title: Fidelity and leakage of Josephson qubits --- The most widely accepted paradigm of quantum computation describes quantum information processing in terms of quantum gates whose input and output are two-state quantum systems called qubits [@Deutsch95]. Quantum Computation (QC) is performed by means of a controllable unitary evolution of the qubits [@Ekert96]. Due to the intrinsic quantum parallelism, problems which are intractable on classical computers can be solved efficiently by using quantum algorithms. Probably the most striking example is the factorization of large numbers [@Shor94]. Parallel to the development of the theory of quantum information there has been an increasing interest in finding physical systems where quantum computation could be implemented.
In an (almost) ideal situation one should identify a suitable set of two-level systems (sufficiently decoupled from any source of decoherence [@Zurek91]) with some controllable couplings among them needed to realize single-qubit and two-qubit operations. These requirements are sufficient to implement any computational task [@UniversalQC]. Various physical systems have been suggested for the implementation of quantum algorithms, [*e.g.*]{} ion traps [@Cirac95], QED cavities [@Turchette95] and NMR [@Gershenfeld97]. The quest for large-scale integrability and flexibility in the design has very recently stimulated an increasing interest in the field of nanostructures. Up to now promising proposals are based on small-capacitance Josephson junctions [@Shnirman97; @Averin98; @Makhlin99; @Ioffe99; @Mooij99], coupled quantum dots [@Loss98; @Zanardi98] and phosphorus dopants in silicon crystals [@Kane98]. The experiments on the superposition of charge states in Josephson junctions [@Matters95; @Bouchiat98] and the recent achievements in controlling the coherent evolution of quantum states in a Cooper pair box [@Nakamura99] render superconducting nanocircuits interesting candidates to implement solid-state quantum computers. Physical realizations of QC are never completely decoupled from the environment. Since decoherence will ultimately limit the performance of a quantum computer, a lot of attention is being devoted to this problem. Besides decoherence, for each proposed scheme a detailed analysis of the errors induced by the gate operations themselves is crucial in order to assess their reliability and the feasibility of fault-tolerant quantum computation [@Preskill97; @Kitaev97]. Errors may occur for a variety of reasons. An obvious example is fluctuations in the control parameters of the gate, which act as random noise and thus affect the unitarity of the time evolution.
Alternatively, gate operations can change the coupling of the qubits to the environment (even if this coupling is negligible during storage periods), thereby enhancing decoherence. All these error sources can be analyzed by properly modelling the qubit-environment coupling. However, there are errors which are not due to (or cannot be described in terms of) the action of an external environment. Rather, they are inherent in the design of the gate. In this Letter we consider one (intrinsic) source of error in gate operations which is common to several of the proposed solid-state implementations, the [*quantum leakage*]{}. It occurs when the computational space is a subspace of a larger Hilbert space. This is the case, [*e.g.*]{}, when the information is encoded in trapped ions or in charge (or flux) states of devices based on Josephson junctions (or SQUIDs). We start by introducing a general scheme to characterize the leakage and then we focus on devices based on small-capacitance tunnel junctions. Our analysis applies to the situation illustrated in Fig. \[fig1\]. The two low-energy states constitute the computational Hilbert space. The system, however, can leak out to the higher states. If the energy difference between the low-lying and the excited states is large compared to the other energy scales of the system (as in Refs. [@Shnirman97; @Averin98; @Makhlin99; @Ioffe99; @Mooij99]) the probability to leak out is small. One might wonder whether it is necessary to discuss this effect at all. As we will see, the consequences of leakage are more severe than a simple estimate of energy scales might suggest. The presence of states outside the computational space modifies the time evolution of the qubit states compared to the idealized design. The ideal unitary gate operation $U_I$ is obtained by switching on a suitable Hamiltonian $H_I$ which couples the desired computational states in a controlled way for a time $t_0$.
By choosing $t_0$ one can implement the desired gate operation. In reality, however, the dynamics of the system is governed by a unitary operator $U_R$ which acts on the full Hilbert space. Since information is being processed within the computational subspace, the output is related to the input state via the map $\Pi U_{R}(t) \Pi$, where $\Pi$ is the projection operator on the computational space. One is interested in optimizing the real gate operation in order to get as close as possible to the ideal $U_I$. In general the “best” operation may require a time $t\neq t_0$ as all the system eigenenergies are modified by the states outside the computational subspace. Therefore we use the time $t$ as a parameter to optimize the given computational step. We characterize the performance of real gates by the fidelity ${\cal F}$ and the probability of leakage ${\cal L}(t)$ defined as $${\cal F} = 1 - \frac{1}{2}\mbox{min}_{\{{\mbox t}\}} \| U_{I}(t_0) - \Pi U_{R}(t) \Pi \| \label{fidelity}$$ $${\cal L}(t) = 1 -\mbox{min}_{\psi} \langle \psi |U^{\dagger}_R(t)\Pi U_R(t) |\psi\rangle \label{leakage}$$ In Eq. (\[fidelity\]) we make use of the operator norm defined as $\| D \| = \mbox{Sup}_{\psi} | D |\psi\rangle | = \mbox{Sup}_{\psi} \sqrt{\langle \psi |D^{\dagger}D |\psi\rangle}$ over the vectors $\{ |\psi\rangle : \langle \psi|\psi\rangle =1\}$ of the computational subspace. This definition implies that $\| D\| = \sqrt{\lambda_M}$, where $\lambda_M$ is the largest eigenvalue of $D^{\dagger}D$. As in the case of the minimal fidelity [@Schumacher96] this definition gives estimates for the worst case. The definition given in Eq. (\[fidelity\]) can therefore be regarded as a prescription for how to optimize the gate design (note that the fidelity defined in Eq. (\[fidelity\]) does not depend on the time $t$). As mentioned before, the existence of states other than the computational ones has two main consequences for the qubit dynamics.
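On the computational subspace, the fidelity defined above reduces to one minus half the largest singular value of the $2\times 2$ difference block, minimized over the operation time. A minimal numerical sketch for a toy three-level system follows; the parameter values and the leakage-level structure are illustrative assumptions:

```python
import numpy as np

def evolve(H, t):
    """U(t) = exp(-i H t) for a Hermitian H (hbar = 1), via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

def fidelity(H_ideal, H_real, t0, t_grid):
    """1 - (1/2) min_t || U_I(t0) - P U_R(t) P ||, with the operator norm
    taken as the largest singular value of the 2x2 projected block."""
    UI = evolve(H_ideal, t0)
    best = min(np.linalg.svd(UI - evolve(H_real, t)[:2, :2],
                             compute_uv=False)[0]
               for t in t_grid)
    return 1.0 - 0.5 * best

# toy model: a sigma_x coupling in the 2d computational subspace, plus a
# weakly coupled leakage level a distance ~2*Ech away in energy
EJ, Ech = 0.1, 1.0
H_ideal = -0.5 * EJ * np.array([[0, 1], [1, 0]], dtype=complex)
H_real = np.array([[0, -EJ / 2, 0],
                   [-EJ / 2, 0, -EJ / 2],
                   [0, -EJ / 2, 2 * Ech]], dtype=complex)
t0 = np.pi / EJ                      # ideal NOT gate (up to a phase)
F = fidelity(H_ideal, H_real, t0, np.linspace(0.9 * t0, 1.1 * t0, 401))
# F is close to, but below, unity: the leakage level shifts the
# eigenenergies and steals amplitude from the computational block
```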
There is a nonzero probability of leakage, measured by ${\cal L}(t)$, and a modification of the eigenenergies and eigenstates of the real system. The latter effect turns out to be an important source of gate errors. In order to study the phenomena related to leakage quantitatively we apply Eqs. (\[fidelity\]), (\[leakage\]) to Josephson junction qubits in the charge regime as proposed in Refs. [@Shnirman97; @Makhlin99]. A similar analysis can be carried out, with appropriate changes of parameters, for all other cases where leakage is present. In Refs. [@Shnirman97; @Makhlin99] the qubit is implemented using nanocircuits of Josephson junctions. The corresponding Hamiltonian for one- and two-qubit operations can be written as $$H_R = \sum_{i=1,2 } \left[ E_{\rm ch} (n_{i}-n_{x,i})^2 - E_{J} \cos \phi_{i} \right] \nonumber \\ + E_{L} \left( \sin \phi _1 + \sin \phi _2 \right)^2 \label{HamiltonianR}$$ In the first term $E_{\rm ch}$ is the charging energy. The second and the third terms represent the Josephson tunneling (associated with the energy $E_J$) and the inductive coupling of strength $E_L$ [@footnote1], which make single- and two-qubit operations possible. Both $E_J$ and $E_L$ are assumed to be much smaller than the charging energy. The offset charge $n_{x,i}$ can be controlled by an external gate voltage. The phases $\phi_i$ and the number of Cooper pairs $n_i$ are canonically conjugate variables $[\phi_{i},n_{j}]= \,i \; \delta_{ij}$ [@footnote2]. At temperatures much lower than the charging energy, for $n_{x,i} \sim 1/2$ the two charge states $n_{i}=0,1$ are nearly degenerate. They represent the states $|0 \rangle $, $|1 \rangle $ of the qubit (see Fig. \[fig1\]). 
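To see these orders of magnitude concretely, the charge Hamiltonian above can be truncated to a few charge states and diagonalized numerically. The following is a hedged sketch (not the authors' code): it keeps six charge states $n=-2,\ldots,3$ for a single qubit with $E_L=0$, sets $E_{\rm ch}=1$ and $E_J/E_{\rm ch}=0.02$ (a typical ratio for such junctions), and evaluates the worst-case probability of leaving the computational subspace, Eq. (\[leakage\]), at the degeneracy point $n_x=1/2$.

```python
import numpy as np

# Truncated charge basis n = -2..3; the qubit lives on n = 0, 1.
# Units: E_ch = 1; the ratio E_J/E_ch = 0.02 is an illustrative choice.
E_ch, E_J, n_x = 1.0, 0.02, 0.5
ns = np.arange(-2, 4)
H = np.diag(E_ch * (ns - n_x) ** 2).astype(complex)
for k in range(len(ns) - 1):
    H[k, k + 1] = H[k + 1, k] = -E_J / 2   # -E_J cos(phi) hops n -> n +/- 1

evals, V = np.linalg.eigh(H)
comp = [2, 3]                              # positions of n = 0 and n = 1

def leakage(t):
    """Worst-case probability of leaving the qubit space at time t."""
    U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
    B = U[np.ix_(comp, comp)]              # the 2x2 block  Pi U_R(t) Pi
    return 1.0 - np.linalg.eigvalsh(B.conj().T @ B).min()

ts = np.linspace(0.0, 50.0, 4000)
L_max = max(leakage(t) for t in ts)
# Perturbation theory suggests leakage of order (E_J/E_ch)^2, i.e. ~1e-4 here.
print(L_max)
```

The scan over $t$ reproduces the expected $(E_J/E_{\rm ch})^2$ scale: the qubit states couple to the neighboring charge states $n=-1$ and $n=2$ across a gap of order $2E_{\rm ch}$, so the admixture probability oscillates at that frequency with an amplitude of order $10^{-4}$ for these parameters.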
In the computational Hilbert space the ideal evolution of the system is governed by the Hamiltonian $$H_I = \sum_{i=1,2 } \left[ \Delta E_{{\rm ch},i} \sigma _{z,i} - \frac{E_{J}}{2} \sigma_{x,i} \right] - \frac{E_{L}}{2} \sigma_{y,1} \sigma_{y,2} \label{HamiltonianI}$$ where $\sigma$ are Pauli matrices and $\Delta E_{{\rm ch},i} = E_{\rm ch}(n_{x,i} - 1/2)$. The different time evolution due to $H_R$ and to $H_I$ causes an error [*in the gate operation*]{}. We note that leakage is also present during idle periods of the gates. However, here we only discuss the errors during gate operations. Single-qubit gate operations can be implemented, [*e.g.*]{}, by suddenly switching the offset charge to the degeneracy point $n_x =1/2$ where the charge states $|0\rangle $ and $ |1\rangle$ are strongly mixed by the Josephson coupling [@footnote3]. Whereas in the ideal setup this coupling mixes only the states $|0\rangle$ and $|1\rangle$, in the real qubit all charge states are involved. The evolution in the computational subspace for a time interval $t$ is described by the operator ($\hbar =1$) $$\Pi U_R(t) \Pi = \sum_n e^{-i E_n t } \Pi |\Phi_n \rangle \langle \Phi_n | \Pi$$ where $\Pi = |0\rangle \langle 0 | + |1\rangle \langle 1 | $ is the projector on the computational subspace and $|\Phi_n \rangle$ are the eigenstates with energies $E_n$ of the Hamiltonian $H_R$ (here $|\Phi_n\rangle$ can be expressed in terms of Mathieu functions). By evaluating the leakage according to Eq. (\[leakage\]) we obtain $$\begin{aligned} {\cal L}(t) &=& 1 - \mbox{min}_{\pm} \mid \sum_{n;m=0,1} (\pm )^{m} \langle 0 \mid \Phi_n\rangle \langle \Phi_n \mid m\rangle e^{-iE_nt}\mid ^2 \nonumber \\ & \sim & \frac{E_J^2}{8E_{\rm ch}^2} [1 - \mbox{min}_{\pm } \cos ((2 E_{\rm ch}\pm E_J/2)\, t) ] \;\; . \label{onebitleakage}\end{aligned}$$ The order of magnitude $(E_J/E_{\rm ch})^2$ can be understood immediately by regarding the coupling to higher charge states as a perturbation to the ideal system of Eq. 
(\[HamiltonianI\]). The fidelity has to be limited by the leakage since it describes the length of the projection of the true state at time $t$ onto the ideal state at $t_0$. There is another effect contributing to the loss of fidelity: the presence of higher charge states renormalizes the energy eigenvalues, thus leading to a frequency mismatch between ideal and real time evolution. However, due to the symmetry of the system and the fact that $E_J$ is the only coupling energy to the states outside the computational subspace, there is a simple way to cure this problem. Let us consider a $\pi$-rotation. The optimal gate is obtained by changing the operation time to $t_0^{\star}=\pi/\Delta E$ where $\Delta E$ is the energy splitting between the two lowest eigenstates (as opposed to the time $t_0 = \pi/E_J$ in the ideal system). The value of the fidelity is then given by $$\begin{aligned} {\cal F} & = & 1 - \frac{1}{2}\left| \sum_{n;m=0,1} \langle 0 \mid \Phi_n\rangle \langle \Phi_n \mid m \rangle e^{-iE_nt_0^\star} -i \right| \nonumber \\ & \sim & 1 - \frac{1}{32}\frac{E_J^2}{E_{\rm ch}^2} \sqrt { 2 + 2 \mid \sin (2\pi E_{\rm ch}/E_J)\mid } \label{onebitfidel} \end{aligned}$$ We mention that the error accumulates linearly with the number of operations. For typical parameters of Josephson junctions $E_J/E_{\rm ch} \sim 0.02$ one finds that after about $10^4$ operations the loss of fidelity becomes of order unity. Among the many possibilities for the elementary two-qubit operation, choosing a particular one may be a non-trivial step in the course of implementing quantum hardware. Due to the universality of quantum computation [@UniversalQC] one is free to use any generic $4\times 4$ unitary matrix as a two-qubit gate. From our point of view a choice is optimal if it avoids errors stemming from a discrepancy between the ideal gate and the way it is implemented. Therefore, in the following we assume that the Hamiltonian as introduced in Eq. 
(\[HamiltonianI\]) $$H_I = \left( \begin{array}{cccc} 2\Delta E_{{\rm ch}} & -E_J/2 & -E_J/2 & E_L/2 \\ -E_J/2 & 0 & -E_L/2 & -E_J/2 \\ -E_J/2 & -E_L/2 & 0 & -E_J/2 \\ E_L/2 & -E_J/2 & -E_J/2 & -2\Delta E_{{\rm ch}} \end{array} \right)\ \ \label{matHI}$$ [*generates*]{} the ideal two-bit gate. In Eq. (\[matHI\]) we have used the basis $\{ |00\rangle, |01\rangle, |10\rangle, |11\rangle \}$ (which is obtained as the direct product of the states introduced previously). The typical scale for the operation time $t_0$ is on the order of $1/E_L$ [@Shnirman97; @Makhlin99]. In complete analogy with the one-bit gate we find that the leakage is of the same order for the two-qubit operation: $${\cal L}\ \propto\ \max \left\{ \left(\frac{E_J}{E_{\rm ch}}\right)^2, \left(\frac{E_L}{E_{\rm ch}}\right)^2 \right\}$$ (the numerical coefficient is larger than in the one-bit case because there are more charge states outside the computational subspace directly coupled to the qubit states either by $E_L$ or $E_J$). The situation for the fidelity, however, is different. In order to estimate $\cal F$ we consider a perturbative expansion of $D^{\dagger}D$, where $D=U_I(t_0)-\Pi U_R(t)\Pi$, up to second order in $E_J/E_{\rm ch}$ and $E_L/E_{\rm ch}$. The eigenvalues of this matrix are of second order in these ratios and involve the mismatch between the eigenvalues $E_n$ of $H_R$ and $E_{n}^{(0)}$ of $H_I$. It turns out that due to the presence of several energy scales the frequency mismatch between real and ideal time evolution cannot be compensated for by adjusting the operation period. The leading terms of the fidelity can be written as $${\cal F} \simeq 1 - \frac{1}{2}\left( a \frac{E_J^2}{E_L E_{\rm ch}} + b \frac{E_L}{E_{\rm ch}}\right)\ \ , \label{twobitfidel}$$ where $a$ and $b$ are coefficients which depend on the particular choice of $n_{x,i}$ and $t_0$. In Fig. \[fig2\] we show the numerical results for $n_x=1/4$ and $t_0=\pi/E_L$. The loss of fidelity (the term in parentheses in Eq. 
(\[twobitfidel\])) is proportional to $t_0$. The maximum (the best operation one can achieve) scales linearly with $E_J/E_{\rm ch}$. This should be contrasted with the one-bit case where it scales quadratically. We mention that we have chosen the definitions for the leakage and the fidelity describing the “worst case” in order to avoid a dependence of the discussion on the preparation of the initial state. One could wonder whether the “generic case” is much more robust with respect to leakage. It is easy to convince oneself, by checking various choices of initial states, that the loss of fidelity is indeed of the order of the worst-case estimates. In conclusion, starting from given gate operations we have discussed their optimal implementation in real systems. We have shown that leakage limits the number of operations which can be performed reliably both for one- and two-qubit gates. For one-bit gates one can correct leakage errors by changing the operation time. We have pointed out that with respect to fidelity it may be appropriate to choose as the elementary two-qubit gate the one determined by the implementation. Fig. \[fig2\] shows the central result of this work: although leakage causes an inevitable loss of fidelity for two-qubit operations, this loss can be minimized by an appropriate choice of the device parameters. Finally we mention that one can speculate about correction procedures for errors caused by leakage. It should be possible to check during the computation whether leakage has occurred. This should be done by measuring the system [*only*]{} if it is outside the computational subspace. One can imagine realizing a low-sensitivity SET transistor which is able to measure the system only if the charge is outside a specified window. The authors would like to thank A.K. Ekert, G. Falci, R. Jozsa and Y. Makhlin for helpful discussions. This work was supported in part by the European TMR Research Network under contracts ERB 4061PL95-1412 and FMRX-CT-97-0143. D. 
Deutsch, Proc. R. Soc. London A [**400**]{}, 97 (1985). A. Ekert and R. Jozsa, Rev. Mod. Phys., 733 (1996). P.W. Shor, in [*Proc. of the 35th Annual Symposium on Foundations of Computer Science*]{}, (IEEE Computer Society, Los Alamitos, CA, 1994), p. 124. G.M. Palma, K.-A. Suominen and A.K. Ekert, Proc. Roy. Soc. London A [**452**]{}, 567 (1996); W. Zurek, Physics Today [**44**]{}, 36 (1991). A. Barenco, Proc. R. Soc. London A [**449**]{}, 679 (1995); D. Deutsch, A. Barenco, A. Ekert, Proc. R. Soc. London A [**449**]{}, 669 (1995); H. Weinfurter, Europhys. Lett. [**25**]{}, 559 (1994); S. Lloyd, Phys. Rev. Lett. [**75**]{}, 346 (1995); A. Barenco, C.H. Bennett, R. Cleve, D. DiVincenzo, N. Margolus, P.W. Shor, T. Sleator, J. Smolin, and H. Weinfurter, Phys. Rev. A [**52**]{}, 3457 (1995). J.I. Cirac and P. Zoller, Phys. Rev. Lett., 4091 (1995). Q.A. Turchette, C.J. Hood, W. Lange, H. Mabuchi, and H.J. Kimble, Phys. Rev. Lett. [**75**]{}, 4710 (1997). N.A. Gershenfeld and I.L. Chuang, Science [**275**]{}, 350 (1995). A. Shnirman, G. Schön and Z. Hermon, Phys. Rev. Lett. [**79**]{}, 2371 (1997). D.V. Averin, Sol. State Comm. [**105**]{}, 659 (1998). Y. Makhlin, G. Schön and A. Shnirman, Nature [**398**]{}, 305-307 (1999). L.B. Ioffe, V.B. Geshkenbein, M.V. Feigelman, A.L. Faucher, and G. Blatter, Nature [**398**]{}, 679 (1999). J.E. Mooij, T.P. Orlando, L. Tian, C. van der Wal, L. Levitov, S. Lloyd, and J.J. Mazo, unpublished. D. Loss and D. DiVincenzo, Phys. Rev. A [**57**]{}, 120 (1998). P. Zanardi and F. Rossi, Phys. Rev. Lett. [**81**]{}, 4752 (1998). B. Kane, Nature [**393**]{}, 133 (1998). M. Matters, W. Elion, and J.E. Mooij, Phys. Rev. Lett. [**75**]{}, 721 (1995). V. Bouchiat, D. Vion, P. Joyez, D. Esteve, and M. Devoret, Physica Scripta [**T76**]{}, 165 (1998). Y. Nakamura, Yu.A. Pashkin, J.S. Tsai, Nature [**398**]{}, 786 (1999). J. Preskill in [*Introduction to Quantum Computation and Information*]{}, H.-K. Lo, S. Popescu and T. Spiller Eds., p. 
213 (World Scientific, New Jersey 1998), (quant-ph/9712048). A.Yu. Kitaev, quant-ph/9707021. B. Schumacher, Phys. Rev. A [**54**]{}, 2614 (1996). The coupling $E_L$ is also proportional to the Josephson coupling energy. For our purposes we can assume that it is an independent energy scale in the problem. The model ignores quasiparticle tunneling, which is suppressed if the temperature is much lower than the BCS superconducting gap. Alternatively, the qubit can be set to the degeneracy point adiabatically. In this paper we do not examine the various possibilities to realize one- and two-qubit gates. The qualitative results do not change, although it may be more convenient, as far as leakage is concerned, to choose a particular scheme.
--- abstract: 'A graph $G$ is called well-covered if all maximal independent sets of vertices have the same cardinality. A simplicial complex $\Delta$ is called pure if all of its facets have the same cardinality. Let $\mathcal G$ be the class of graphs with some disjoint maximal cliques covering all vertices. In this paper, we prove that for any simplicial complex or any graph, there is a corresponding graph in the class $\mathcal G$ with the same well-coveredness property. Then some necessary and sufficient conditions are presented to recognize quickly whether a graph in the class $\cal G$ is well-covered. For this characterization, we use an algebraic interpretation in terms of zero-divisor elements of the edge rings of graphs.' author: - | Rashid Zaare-Nahandi\ Institute for Advanced Studies in Basic Sciences (IASBS),\ Zanjan 45195, Iran\ E-mail: rashidzn@iasbs.ac.ir title: 'Pure simplicial complexes and well-covered graphs' --- Introduction ============ A graph $G$ is said to be well-covered (or unmixed) if all maximal independent sets of vertices have the same cardinality. These graphs were introduced by M. D. Plummer [@16] in 1970. Although the recognition problem of well-covered graphs in general is Co-NP-complete ([@19]), it is characterized for certain classes of graphs. For instance, claw-free well-covered graphs [@22], well-covered graphs of girth at least 5 [@7], (4-cycle, 5-cycle)-free graphs [@8] or chordal graphs [@18] are all recognizable in polynomial time. Excellent surveys of works on well-covered graphs are given in Plummer [@17] and Hartnell [@10]. Let $G$ be a graph with no loops or multiple edges. Denote the set of vertices of $G$ by $V(G)$ and the set of edges by $E(G)$. A subset $A$ of $V(G)$ is called an independent set if there is no edge between vertices of $A$. Denote the cardinality of the largest independent set in $G$ by $\alpha(G)$. A subset $C$ of $V(G)$ is called a clique if any two vertices in $C$ are adjacent. 
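On small graphs these definitions can be checked by exhaustive enumeration. The following sketch (exponential time, for illustration only; the helper names are our own) lists the maximal independent sets of a graph and tests well-coveredness directly from the definition:

```python
from itertools import combinations

def maximal_independent_sets(vertices, edges):
    """Brute force over all vertex subsets; fine for tiny examples."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    def independent(S):
        return all(b not in adj[a] for a, b in combinations(S, 2))
    ind = [set(S) for r in range(len(vertices) + 1)
           for S in combinations(vertices, r) if independent(S)]
    # keep only the sets not properly contained in another independent set
    return [S for S in ind if not any(S < T for T in ind)]

def is_well_covered(vertices, edges):
    sizes = {len(S) for S in maximal_independent_sets(vertices, edges)}
    return len(sizes) == 1

# C_4 is well-covered ({1,3} and {2,4} both have size 2); the path P_3 is
# not ({2} and {1,3} are maximal independent sets of different sizes).
print(is_well_covered([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))  # True
print(is_well_covered([1, 2, 3], [(1, 2), (2, 3)]))                     # False
```

Here $\alpha(G)$ is simply the largest size occurring among the maximal independent sets returned by the first function.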
Let $A$ and $B$ be subsets of $V(G)$. We say $A$ dominates $B$ if for any vertex $v$ in $B$, $v$ is in $A$ or there is at least one vertex in $A$ adjacent to $v$. The set $A$ is called a vertex cover of $G$ if every edge of $G$ has at least one endpoint in $A$. A vertex cover is called minimal if no proper subset of it is a vertex cover. A subset of $E(G)$ is called a matching if no two edges in this set share a vertex. A matching is called a perfect matching if it covers all vertices of $G$. Let $[n] = \{1,2,\ldots,n\}$. A (finite) simplicial complex $\Delta$ on $n$ vertices is a collection of subsets of $[n]$ such that the following conditions hold:\ a) $\{i\}\in\Delta$ for each $i\in [n]$,\ b) if $E\in\Delta$ and $F\subseteq E$, then $F\in\Delta$.\ An element of $\Delta$ is called a face and a maximal face with respect to inclusion is called a facet. The set of all facets is denoted by $\mathcal{F}(\Delta)$. The dimension of a face $F\in\Delta$ is defined to be $|F|-1$ and the dimension of $\Delta$ is the maximum of the dimensions of its faces. A simplicial complex is called pure if all of its facets have the same dimension. For more details on simplicial complexes see [@Stan]. Let $G$ be a graph. The set of all independent sets of vertices of $G$ is a simplicial complex, because any single vertex is independent and any subset of an independent set is again independent. We assume that the empty set is also an independent set. This simplicial complex is called the independence complex of $G$ and is denoted by $\Delta_G$. With the above definitions, a graph $G$ is well-covered if and only if the complex $\Delta_G$ is pure. Let $\Delta$ be a simplicial complex on the vertex set $[n]$. The barycentric subdivision of $\Delta$, denoted by $\mbox{bs}(\Delta)$, is a simplicial complex with vertex set consisting of all nonempty faces of $\Delta$. 
A face in $\mbox{bs}(\Delta)$ consists of comparable vertices, that is, two vertices lie in a face in $\mbox{bs}(\Delta)$ if one is a subset of the other. In other words, facets of $\mbox{bs}(\Delta)$ are maximal chains of faces of $\Delta$ considered as a poset with respect to inclusion order. It is easy to see that the minimal non-faces of $\mbox{bs}(\Delta)$ are subsets of $\Delta$ with exactly two non-comparable elements. Therefore, $\mbox{bs}(\Delta)$ is an independence complex of a graph. In fact, this graph is the non-comparability graph of $\Delta$. Vertices of the graph are nonempty faces of $\Delta$ and two vertices are adjacent if their corresponding faces are not comparable. This graph is denoted by $G(\Delta)$. It is known that the dimension (and many other invariants) of a simplicial complex and its barycentric subdivision are equal ([@BW] and [@KW]). In particular, a simplicial complex $\Delta$ is pure if and only if its barycentric subdivision is pure, which is equivalent to saying that the graph $G(\Delta)$ is well-covered. Well-covered graphs with clique covers ====================================== Let $\cal G$ be the class of graphs such that for each $G\in \cal G$ there are $k=\alpha(G)$ cliques in $G$ covering all its vertices. Let $G\in \cal G$ and $Q_1,\ldots,Q_k$ be cliques such that $V(Q_1)\cup\cdots\cup V(Q_k)=V(G)$. In this case, we may take $Q'_1=Q_1$, and for $i=2,\ldots,k$, $Q'_i$ the induced graph on the vertices $V(Q_i)\setminus (V(Q_1)\cup\cdots\cup V(Q_{i-1}))$. Then, $Q'_1,\ldots,Q'_k$ are $k$ disjoint cliques covering all vertices of $G$. We call such a set of cliques a basic clique cover of the graph $G$. Therefore, any graph in the class $\cal G$ has a basic clique cover. Note that $k=\alpha(G)$ is the smallest possible number of cliques in a clique cover of $G$. It is not true that every graph has a basic clique cover. 
For example, a cycle of length 4 has a basic clique cover consisting of 2 cliques, but a cycle of length 5 does not have any basic clique cover. \[class\] Let $\Delta$ be a simplicial complex. Then, $G(\Delta)$ is in the class $\cal G$. Moreover, $\Delta$ is pure if and only if $G(\Delta)$ is well-covered. [*Proof*]{}. Note that no two distinct faces in $\Delta$ with the same dimension are comparable. Therefore, for each $i$, $0\leq i\leq \dim(\Delta)$, if $\Delta(i)$ is the set of all faces of $\Delta$ with dimension $i$, then no two faces in this set are comparable and the corresponding vertices in the graph $G(\Delta)$ form a clique. These cliques are disjoint and cover all vertices of $G(\Delta)$. In fact, the set of these cliques is a basic clique cover of $G(\Delta)$. The last statement is clear. $\Box$ Now, we give some criteria equivalent to the well-coveredness of graphs in the class $\cal G$. \[main\] Let $G$ be a graph in the class $\cal G$ with a basic clique cover $Q_1, \ldots, Q_k$. Then $G$ is well-covered if and only if for each $i$, $1\leq i\leq k$, if $A\subseteq V(G)\setminus Q_i$ dominates $Q_i$, then $A$ is not an independent set. [*Proof.*]{} Assume that $G$ is well-covered. Let $1\leq i\leq k$ be given and $A\subseteq V(G)\setminus Q_i$ be a dominating set of $Q_i$. If $A$ is independent, then there is a maximal independent set $B$ containing $A$. But, $B\cap Q_i=\varnothing$, because any vertex of $Q_i$ is adjacent to some vertex in $A\subseteq B$. On the other hand, $B$ has at most one element in common with each $Q_j$, $j\neq i$. Therefore, $|B|<k$, which contradicts the well-coveredness of $G$. Conversely, let $A$ be a maximal independent set. Then $|A\cap Q_i|\leq 1$ for each $1\leq i\leq k$ and $|A|\leq k$. The claim follows if one shows $|A|=k$. 
So, assume $A\cap Q_i=\varnothing$ for some $i$. Then, by the assumption, $A$ does not dominate $Q_i$, which means that there exists a $v \in Q_i$ not adjacent to any vertex of $A$. By maximality of $A$, $v\in A$ and hence $|A\cap Q_i|=1$, which contradicts the assumption. So, finally, one gets $|A\cap Q_i|=1$ for every $i$ and the claim follows. $\Box$ \[partite\] Let $G$ be an $s$-partite well-covered graph such that all maximal cliques are of size $s$. Then all parts have the same cardinality and there is a perfect matching between each two parts. [*Proof.*]{} Let the $s$ parts of $G$ be $V_1, \ldots, V_s$. Let $1\leq i\leq s$ and $v\in V_i$. Each vertex belongs to some maximal clique and each maximal clique intersects each part in exactly one vertex. Therefore, the vertex $v$ is adjacent to some vertices in each part $V_j$, $1\leq j\leq s$, $j\neq i$. Then the part $V_i$ is a maximal independent set because for each vertex outside $V_i$, there is an edge connecting it to some vertex in $V_i$. The graph $G$ is well-covered; therefore, all parts have the same cardinality. Let $1\leq i < j\leq s$ be two given integers. Let $A\subseteq V_i$ be a nonempty set and $N_j(A)$ be the set of all vertices in $V_j$ adjacent to some vertices in $A$. Suppose $|N_j(A)|< |A|$. There is no edge between $A$ and $V_j\setminus N_j(A)$. Therefore, $A\cup (V_j\setminus N_j(A))$ is an independent set and its size is strictly greater than the size of $V_j$, which contradicts the well-coveredness of $G$. Therefore, $|N_j(A)| \geq |A|$ for each nonempty subset $A$ of $V_i$. Therefore, by Hall's theorem [@Hall], there is a set of distinct representatives (SDR) for the set $\{N_j(\{v\}) : v\in V_i\}$, which is a perfect matching between $V_i$ and $V_j$. $\Box$ #### Example. It is not true that in a well-covered graph $G$, there are $\alpha(G)$ maximal cliques covering $G$. For instance, consider any cycle $C_n$ for odd $n$. 
In this case, $\alpha(C_n)=\frac{n-1}{2}$ and any $\frac{n-1}{2}$ cliques, which are edges, cannot cover all vertices. Also, the above statement is not true in the class of all well-covered $s$-partite graphs. For instance, consider the following graph, which is 3-partite and well-covered with maximal independent sets of size 2. However, there are no two maximal cliques covering $V(G)$. $$\unitlength=1cm \begin{picture}(-4,-1)(3,1.5) \put(0,0.5){\circle*{0.1}} \put(1,0){\circle*{0.1}} \put(2,0.5){\circle*{0.1}} \put(0,2){\circle*{0.1}} \put(1,1.5){\circle*{0.1}} \put(2,2){\circle*{0.1}} \put(0,0.5){\line(2,-1){1}} \put(0,0.5){\line(1,0){2}} \put(0,0.5){\line(1,1){1}} \put(0,0.5){\line(4,3){2}} \put(1,0){\line(2,1){1}} \put(1,0){\line(1,2){1}} \put(2,0.5){\line(-1,1){1}} \put(0,2){\line(1,0){2}} \put(0,2){\line(2,-1){1}} \end{picture}$$ Studying many examples motivates the following conjecture. #### Conjecture. Let $G$ be an $s$-partite well-covered graph with all maximal cliques of size $s$. Then, $G$ is in the class $\cal G$. At the end of this section, we restate the result of Ravindra about well-covered bipartite graphs. [[@24]]{} Let $G$ be a bipartite graph with no vertex of degree zero. Then, $G$ is well-covered if and only if there is a perfect matching and, for each $\{x,y\}$ in this matching, the induced subgraph on $N[\{x,y\}]$ is a complete bipartite graph. [*Proof.*]{} Let $G$ be well-covered. By Proposition \[partite\], both parts have the same cardinality and there is a perfect matching in $G$. Moreover, the edges in the matching form a basic clique cover of $G$. Let $\{x,y\}$ be an edge in the matching. By Theorem \[main\], $G$ is well-covered if and only if any dominating set of $\{x,y\}$ is dependent. The last statement is equivalent to saying that any vertex in $N(\{x\})$ is adjacent to any vertex in $N(\{y\})$, i.e., the induced subgraph on $N[\{x,y\}]$ is a complete bipartite graph. 
$\Box$ An algebraic interpretation =========================== There is an interesting algebraic interpretation of the well-coveredness of graphs in the class $\cal G$, which we state in this section. First we recall some definitions in commutative algebra. Let $G$ be a graph with vertex set $\{v_1, \ldots, v_n\}$ and $K$ be a field. In the polynomial ring $K[x_1, \ldots, x_n]$, let $I(G)$ be the ideal generated by all monomials of the form $x_ix_j$ such that $v_i$ and $v_j$ are adjacent in $G$. This ideal is called the edge ideal of the graph $G$ and the quotient ring $R(G)=K[x_1,\ldots,x_n]/ I(G)$ is called the edge ring of $G$. This ring was introduced by R. Villarreal [@Vil] and has been extensively studied by several mathematicians. Let $R$ be a commutative ring. An element $a\neq 0$ in $R$ is called a zero-divisor if there is a nonzero element $b\in R$ such that $ab=0$. An ideal in $R$ is called a monomial ideal if it can be generated by a set of monomials. For example, the edge ideal of a graph is a monomial ideal. In a ring of polynomials, it is well known and easy to check that a polynomial $f$ belongs to a monomial ideal if and only if each monomial of $f$ belongs to the ideal. If the monomial ideal is also square-free, then a monomial in $K[x_1,\ldots,x_n]$ belongs to $I$ if and only if its square-free part (its radical) belongs to $I$. As an example of a zero-divisor, let $R(G)$ be the edge ring of a graph $G$. Let $v_i$ be adjacent to $v_j$ in $G$. The elements $x_i$ and $x_j$ are not zero in $R(G)$ but $x_ix_j=0$. Here, with abuse of notation, we write $x_i$ for its image in $R(G)$. A term ordering on $K[x_1,\ldots,x_n]$ is a linear order $\preceq$ on the set of terms $\{x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}\ : \ a_i\in {\Bbb Z}_{\geq 0}, i=1,2,\ldots n \}$, such that for all terms $\alpha, \alpha_1, \alpha_2$ the following conditions hold:

- if $\alpha_1 \preceq \alpha_2$ then $\alpha_1\alpha\preceq\alpha_2\alpha$;

- $1\preceq\alpha$. 
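As a toy illustration of these two axioms (a sketch of our own; the tuple encoding is not from the text), identify a term $x_1^{a_1}\cdots x_n^{a_n}$ with its exponent vector. Python's built-in tuple comparison is then exactly the lexicographic order with $x_1\succ x_2\succ\cdots\succ x_n$:

```python
def lex_leq(a, b):
    """alpha_1 <= alpha_2 in lex order (x1 > x2 > ... > xn); a and b are
    exponent tuples of equal length.  Python compares tuples entry by
    entry from the left, which is precisely lexicographic comparison."""
    return a <= b

def times(a, b):
    """Multiply two terms, i.e. add their exponent vectors."""
    return tuple(x + y for x, y in zip(a, b))

one = (0, 0, 0)                               # the term 1 in three variables
a1, a2, m = (0, 1, 0), (1, 0, 0), (0, 2, 5)   # x2,  x1,  x2^2 x3^5
print(lex_leq(one, a1))                       # 1 <= alpha for every term
print(lex_leq(a1, a2),                        # x2 <= x1, and multiplying
      lex_leq(times(a1, m), times(a2, m)))    # by m preserves the order
```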
Lexicographic, degree lexicographic and degree reverse lexicographic orders are examples of term orderings. There is a rich literature on term orderings; for instance, see [@Kr]. \[linearzero\] Let $K$ be a field and $I\subseteq K[x_1,\ldots,x_n]$ be an ideal generated by square-free monomials. Let $f$ be a nonzero linear polynomial in $R=K[x_1,\ldots,x_n]/I$. Then, $f$ is a zero-divisor in $R$ if and only if there is a nonzero square-free monomial $m\in R$ such that $mf=0$. [*Proof.*]{} Let $f$ be a zero-divisor in $R$. Then there is a nonzero polynomial $g$ in $R$ such that $fg=0$. We may rearrange variables such that $f=x_1+a_2x_2+\cdots+a_sx_s$, $a_j\in K$. Let $\prec$ be the lexicographic order on terms of $K[x_1,\ldots,x_n]$ with respect to $x_1\succ x_2 \succ \cdots \succ x_n$. Let $g=m_1 + m_2 + \cdots + m_t$ be the decomposition of $g$ into nonzero monomials such that $m_1 \succ m_2 \succ\cdots \succ m_t$. Then, in $fg$, the monomial $x_1m_1$ is strictly greater than all other monomials. Therefore, $x_1m_1$ must be zero in $R$. The ideal $I$ is square-free and $x_1m_1\in I$; therefore, we may assume that $x_1\nmid m_1$. By the lexicographic order, we have $x_1\nmid m_i$ for all $1\leq i\leq t$. On the other hand, $fg-x_1m_1\in I$. The greatest term of $fg-x_1m_1$ is $x_1m_2$, and then $x_1m_2\in I$ and $fg-(x_1m_1+x_1m_2)\in I$. Continuing this process, we have $x_1m_i\in I$ for all $1\leq i\leq t$ and therefore, $fg-x_1g\in I$. In the polynomial $fg-x_1g$, the greatest term is $x_2m_1$, which must be in $I$. Similarly, $x_2m_i\in I$ for all $1\leq i\leq t$. Finally, we get $x_im_j\in I$ for each $1\leq i\leq s$ and $1\leq j\leq t$. This means that $m_if\in I$ for each $1\leq i\leq t$. In particular, $m_1f\in I$, and because $I$ is square-free and $f$ is linear, we may take $m_1$ to be square-free. The converse is trivial by definition. $\Box$\ Note that in the above lemma, assuming that $I$ is square-free is essential. 
Because, for example, in $K[x_1,x_2]$ let $I=\langle x_1^3, x_2^3\rangle$. Then, $(x_1-x_2)(x_1^2+x_1x_2+x_2^2)\in I$; that is, $(x_1-x_2)$ is a zero-divisor in $K[x_1,x_2]/I$, but there is no nonzero square-free monomial annihilating $(x_1-x_2)$. \[zerodiv\] Let $G$ be a graph in the class $\cal G$ and $\alpha(G)=k$. Let $Q_1, \ldots, Q_k$ be a basic clique cover of $G$. Consider $$\theta_i=\sum_{v_j \in Q_i} x_j, \ \ \ \ \ i=1,\ldots,k.$$ Then, $G$ is well-covered if and only if for each $i=1,\ldots,k$, the polynomial $\theta_i$ is not a zero-divisor in the ring $R(G)$. [*Proof.*]{} Let $\theta_i$ be a zero-divisor in $R(G)$. By Lemma \[linearzero\], the polynomial $\theta_i$ is a zero-divisor in $R(G)$ if and only if there is a nonzero square-free monomial $m$ in $R(G)$ such that $m\theta_i=0$, or equivalently $m\theta_i\in I(G)$. The ideal $I(G)$ is a monomial ideal; hence, for each $v_j$ in $Q_i$, we have $mx_j\in I(G)$. Let $m=x_{i_1}\cdots x_{i_r}$ and $A=\{v_{i_1}, \ldots, v_{i_r}\}$. Then, $mx_j\in I(G)$ means that there is a vertex $v_{i_l}$ in $A$ such that $v_{i_l}$ is adjacent to $v_j$. This means that the set $A$ is a dominating set of $Q_i$. On the other hand, if $v_j$ is in $A\cap Q_i$, then $x_j\theta_i = x_j^2$ in $R(G)$ and there is $v_{i_l}$ in $A$ adjacent to $v_j$; therefore $m=0$ in $R(G)$, which is a contradiction. Therefore $A\subseteq V(G)\setminus V(Q_i)$. Note that $A$ is independent if and only if $m$ is not zero in $R(G)$. Now, Theorem \[main\] implies that if $\theta_i$ is a zero-divisor in $R(G)$ for some $1\leq i\leq k$, then, $G$ is not well-covered. Conversely, if $G$ is not well-covered then, again by Theorem \[main\], there is an independent set $\{v_{i_1}, \ldots, v_{i_r}\}\subseteq V(G)\setminus V(Q_i)$ which dominates $Q_i$ for some $1\leq i\leq k$. In this case, $m=x_{i_1}\cdots x_{i_r}$ is a nonzero monomial in $R(G)$ such that $m\theta_i=0$ and $\theta_i$ is a zero-divisor. This completes the proof. 
$\Box$\ Let $G$ be a graph in the class $\cal G$. Then, by Theorem \[zerodiv\], $G$ is well-covered if and only if each polynomial $\theta_i$ is a non-zero-divisor in the ring $R(G)$. On the other hand, the set of all zero-divisors of $R(G)$ is the union of all minimal primes of the ideal $I(G)$. Minimal primes of $I(G)$ correspond to minimal vertex covers of $G$. Therefore, checking the well-coveredness of the graph $G$ is equivalent to checking, for each $i$, $1\leq i\leq k$, whether the set of vertices of $Q_i$ is part of a minimal vertex cover of $G$. But, this is a simple task: it is enough to check that the set of vertices of $Q_i$ is a minimal vertex cover of the induced subgraph of $G$ on $N(Q_i)$, which can be done in polynomial time. Therefore, we have proved the following. The well-coveredness of a graph in the class $\cal G$ can be checked in polynomial time. We know that an arbitrary graph $G$ is well-covered if and only if the corresponding graph $G(\Delta_G)$ is well-covered. The graph $G(\Delta_G)$ is in the class $\cal G$ and its well-coveredness can be checked in polynomial time. But, this does not completely solve the problem of checking well-coveredness of graphs, because passing from $G$ to $G(\Delta_G)$ cannot be done in polynomial time. In fact, the graph $G(\Delta_G)$ has a huge number of vertices in comparison with $G$. The next natural question is when a graph in the class $\cal G$ is Cohen-Macaulay. With the notations above, Cohen-Macaulayness of $G$ is equivalent to the regularity of the sequence $\theta_1, \theta_2, \ldots, \theta_k$ in $R(G)$. This means that $\theta_1$ is not a zero-divisor in $R(G)$ and, for $i=2, \ldots, k$, the element $\theta_i$ is not a zero-divisor in $R(G)/\langle \theta_1, \ldots, \theta_{i-1}\rangle$. Therefore, one can say that if $G$ is Cohen-Macaulay, then $G\setminus Q_i$ is Cohen-Macaulay for each $1\leq i\leq k$. 
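For small graphs, the zero-divisor criterion of Theorem \[zerodiv\] can be tested directly in its combinatorial form: $\theta_i$ is a zero-divisor exactly when some independent set $A\subseteq V(G)\setminus Q_i$ dominates $Q_i$, in which case $m=\prod_{v_a\in A}x_a$ annihilates $\theta_i$. The following brute-force sketch (exponential time, for illustration only; the polynomial-time test would instead restrict attention to $N(Q_i)$) checks this on two tiny examples:

```python
from itertools import combinations

def theta_is_zero_divisor(vertices, edges, Q):
    """True iff some independent A outside Q dominates Q, i.e. the
    square-free monomial supported on A kills theta = sum of x_j, j in Q."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    outside = [v for v in vertices if v not in Q]
    for r in range(1, len(outside) + 1):
        for A in combinations(outside, r):
            independent = all(b not in adj[a] for a, b in combinations(A, 2))
            dominates = all(any(q in adj[a] for a in A) for q in Q)
            if independent and dominates:
                return True
    return False

def well_covered_via_thetas(vertices, edges, basic_cover):
    return not any(theta_is_zero_divisor(vertices, edges, Q)
                   for Q in basic_cover)

# C_4 with basic cover {1,2},{3,4}: well-covered.  P_3 with cover {1,2},{3}:
# theta_2 = x_3 is annihilated by m = x_2 (edge 2-3), so not well-covered.
print(well_covered_via_thetas([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)],
                              [[1, 2], [3, 4]]))   # True
print(well_covered_via_thetas([1, 2, 3], [(1, 2), (2, 3)],
                              [[1, 2], [3]]))      # False
```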
It is well-known that a simplicial complex $\Delta$ is Cohen-Macaulay if and only if the graph $G(\Delta)$ is Cohen-Macaulay ([@Stan]). Therefore, to check Cohen-Macaulayness of all simplicial complexes and all graphs, it is enough to check Cohen-Macaulayness of all graphs in the class $\cal G$. [99]{} F. Brenti and V. Welker, $f$-vectors of barycentric subdivisions, [*Math. Z.*]{} [**259**]{} (4) (2008) 849-865. A. Finbow, B. Hartnell and R. Nowakowski, A characterization of well-covered graphs of girth 5 or greater, [*J. Combin. Theory Ser. B*]{} [**57**]{} (1993) 44-68. A. Finbow, B. Hartnell and R. Nowakowski, A characterization of well-covered graphs that contain neither 4-nor 5-cycles, [*J. Graph Theory*]{} [**18**]{} (1994) 713-721. P. Hall, On representatives of subsets, [*J. London Math. Soc.*]{} [**10**]{} (1935) 26-30. B. L. Hartnell, Well-Covered Graphs, [*J. Combin. Math. Combin. Comput.*]{} [**29**]{} (1999) 107-115. M. Kreuzer and L. Robbiano, [*Computational Commutative Algebra I*]{}, Springer-Verlag, 2000. M. Kubitzke and V. Welker, The multiplicity conjecture for barycentric subdivisions, [*Comm. Algebra*]{} [**36**]{} (11) (2008) 4223-4248. M. D. Plummer, Some covering concepts in graphs, [*J. Combin. Theory*]{} [**8**]{} (1970) 91-98. M. D. Plummer, Well-covered graphs: a survey, [*Quaestiones Math.*]{} [**16**]{} (1993) 253-287. E. Prisner, J. Topp and P. D. Vestergaard, Well-covered simplicial, chordal and circular arc graphs, [*J. Graph Theory*]{} [**21**]{} (1996) 113-119. B. Randerath and L. Volkmann, A characterization of well-covered block-cactus graphs, [*Australas. J. Combin.*]{} [**9**]{} (1994) 307-314. G. Ravindra, Well covered graphs, [*J. Combin. Inform. System Sci.*]{} [**2**]{} (1977) 20-21. R.S. Sankaranarayana and L.K. Stewart, Complexity results for well-covered graphs, [*Networks*]{} [**22**]{} (1992) 247-262. R. Stanley, [*Combinatorics and Commutative Algebra*]{}, 2nd ed., Progress in Math., Birkhauser, 1996. R. H. 
Villarreal, Cohen-Macaulay graphs, [*Manuscripta Math*]{}. [**66**]{} (1990) 277-293.
Vortices in superconductors represent an ideal system in which to study the effect of quenched disorder on elastic media. The competition between the flux-line interactions, which order the vortex lattice, and the defects in the sample, which disorder the vortex lattice, produces a remarkable variety of collective behavior [@Review1]. One prominent example is the peak effect in low temperature superconductors, which appears near $H_{c2}$ when a transition from an ordered to a strongly pinned disordered state occurs in the vortex lattice [@Kes2; @Shobo3; @Chaddah4; @Henderson5; @Andrei6; @Xiao7; @Paltiel8]. In high temperature superconductors, particularly BSCCO samples, a striking “second peak” phenomenon is observed in which a dramatic increase in the critical current occurs for increasing fields. It has been proposed that this is an order-disorder or 3D to 2D transition [@Giamarchi9; @Cubitt10; @Tamegai11; @Glazman12; @Feigelman13]. Recently there has been renewed interest in transient effects, which have been observed in voltage response versus time curves in low temperature superconductors [@Henderson5; @Andrei6; @Xiao7; @Frindt14]. In these experiments the voltage response increases or decays with time, depending on how the vortex lattice was prepared. The existence of transient states suggests that the disordered phase can be [*supercooled*]{} into the ordered region, producing an increasing voltage response, whereas the ordered phase may be [*superheated*]{} into the disordered region, giving a decaying response. In addition to transient effects, pronounced memory effects and hysteretic V(I) curves have been observed near the peak effect in low temperature materials [@Kes2; @Chaddah4; @Henderson5; @Andrei6; @Xiao7]. Xiao [*et al.*]{} [@Xiao7] have shown that transient behavior can lead to a strong dependence of the critical current on the current ramp rate.
Recent neutron scattering experiments in conjunction with ac shaking have provided more direct evidence of supercooling and superheating near the peak effect [@Ling15]. Experiments on BSCCO have revealed that the high field disordered state can be supercooled to fields well below the second peak line [@Konczykowski16]. Furthermore, transport experiments in BSCCO have shown metastability in the zero-field-cooled state near the second peak as well as hysteretic V(I) curves [@Portier17] and transient effects [@Giller18]. Hysteretic and memory effects have also been observed near the second peak in YBCO [@Kokkaliaris19; @Bekeris20; @Esquinazi21]. The presence of metastable states and superheating/supercooling effects strongly suggests that the order-disorder transitions in these different materials are [*first order*]{} in nature. The many similarities also point to a universal behavior between the peak effect of low temperature superconductors and the peak effect and second peak effect of high temperature superconductors. A key question in all these systems is the nature of the [*microscopic dynamics*]{} of the vortices in the transient states; particularly, whether plasticity or the opening of flowing channels is involved [@Andrei6]. The recent experiments have made it clear that a proper characterization of the static and dynamic phase diagrams must take into account these metastable states, and therefore an understanding of these effects at a microscopic level is crucial. Despite the growing amount of experimental work on metastability and transient effects in vortex matter, these effects have not yet been investigated numerically. In this work we present the first numerical study of metastability and transient effects in vortex matter near a disorder driven transition. We demonstrate that the simulations reproduce many experimental observations, including superheating and supercooling effects, and then link these to the underlying microscopic vortex behavior.
We consider magnetically interacting pancake vortices driven through quenched point disorder. As a function of interlayer coupling or applied field the model exhibits a sharp 3D (ordered phase) to 2D (disordered phase) disorder-driven transition [@FirstPaper22]. By supercooling the disordered phase or superheating the ordered phase, we find increasing or decreasing transient voltage response curves, depending on the amplitude of the drive pulse and the proximity to the disordering transition. In the supercooled transient states a growing ordered channel of flowing vortices forms. No channels form in the superheated region; instead the ordered state is homogeneously destroyed. We observe memory effects when a sequence of pulses is applied, as well as ramp rate dependence and hysteresis in the V(I) curves. The critical current we obtain depends on how the system is prepared. We consider a 3D layered superconductor containing an equal number of pancake vortices in each layer, interacting magnetically. We neglect the Josephson coupling, which is a reasonable approximation for highly anisotropic materials. The overdamped equation of motion for vortex $i$ at $T=0$ is $ {\bf f}_{i} = -\sum_{j=1}^{N_{v}}\nabla_{i} {\bf U}(\rho_{ij},z_{ij}) + {\bf f}_{i}^{vp} + {\bf f}_{d} = {\bf v}_{i}$. The total number of pancakes is $N_{v}$, and $\rho_{ij}$ and $z_{ij}$ are the distances between vortex $i$ and vortex $j$ in cylindrical coordinates. We impose periodic boundary conditions in the $x$ and $y$ directions and open boundaries in the $z$ direction.
The magnetic interaction energy between pancakes is [@Clem22a; @Brandt23] $$\begin{aligned} {\bf U}(\rho_{ij},0)=2d\epsilon_{0} \left((1-\frac{d}{2\lambda})\ln{\frac{R}{\rho}} +\frac{d}{2\lambda} E_{1}\right) \ , \nonumber\end{aligned}$$ $$\begin{aligned} {\bf U}(\rho_{ij},z)=-s_{m}\frac{d^{2}\epsilon_{0}}{\lambda} \left(\exp(-z/\lambda)\ln\frac{R}{\rho}+ E_{2}\right) \ , \nonumber\end{aligned}$$ where $R = 22.6\lambda$, the maximum in-plane distance, $E_{1} = \int^{\infty}_{\rho} d\rho^{\prime} \exp(-\rho^{\prime}/\lambda)/\rho^{\prime}$, $E_{2} = \int^{\infty}_{\rho} d\rho^{\prime} \exp(-\sqrt{z^{2}+\rho^{\prime 2}}/\lambda)/\rho^{\prime}$, $\epsilon_{0} = \Phi_{0}^{2}/(4\pi\xi)^{2}$, $d=0.005\lambda$ is the interlayer spacing, $\lambda$ is the London penetration depth and $\xi=0.01\lambda$ is the coherence length. When the magnetic field $H$ increases, the distance $\rho$ between pancakes in the same plane decreases, but the distance $d$ between planes is unchanged. Thus we model $H$ by scaling the strength of the in- and inter-plane interactions via the prefactor $s_{m}$, such that $s_m \propto 1/{H}$. We denote the coupling strength at which the sharp 3D-2D transition occurs as $s_{m}^{c}$. We model the pinning as $N_p$ short range attractive parabolic traps that are randomly distributed in each layer. The pinning interaction is ${\bf f}_{i}^{vp} = -\sum_{k=1}^{N_{p}}(f_{p}/\xi_{p}) ({\bf r}_{i} - {\bf r}_{k}^{(p)})\Theta( (\xi_{p} - |{\bf r}_{i} - {\bf r}_{k}^{(p)} |)/\lambda)$, where the pin radius $\xi_{p}=0.2\lambda$, the pinning force is $f_{p}=0.02f_{0}^{*}$, and $f_{0}^{*}=\epsilon_{0}/\lambda$. Throughout this work we will use 16 layers in a $16\lambda \times 16\lambda$ system with a vortex density of $n_v = 0.35/\lambda^2$ and a pin density of $n_p = 1.0/\lambda^{2}$ in each of the layers. There are 80 vortices per layer, giving a total of 1280 pancake vortices.
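As a concrete illustration of this type of dynamics, here is a minimal single-layer sketch (our own toy code, not the authors' simulation): one overdamped Euler step with periodic boundaries, attractive parabolic pins, and a uniform drive, with the full pancake interaction replaced by a crude short-range $1/\rho$ repulsion. A helper also evaluates a $C_z$-style inter-layer alignment measure, where, as our own reading of the step function, misalignments larger than $a_0/2$ are clipped to the maximal value.

```python
import numpy as np

# Toy sketch of the overdamped dynamics v_i = f_i at T = 0 (single layer).
# Parameter values echo the text (xi_p = 0.2 lambda, f_p = 0.02 f_0*); the
# vortex-vortex force is a crude short-range repulsion, NOT the full pancake
# interaction, so this only illustrates the structure of the model.

def euler_step(r, pins, f_drive, L=16.0, xi_p=0.2, f_p=0.02, dt=0.01):
    """Advance vortex positions r (shape (N, 2)) by one overdamped step."""
    d = r[:, None, :] - r[None, :, :]
    d -= L * np.round(d / L)                      # periodic boundaries
    rho = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(rho, np.inf)                 # no self-interaction
    f_vv = (d / rho[..., None] ** 2).sum(axis=1)  # repulsion ~ 1/rho

    dp = r[:, None, :] - pins[None, :, :]
    dp -= L * np.round(dp / L)
    inside = np.linalg.norm(dp, axis=-1) < xi_p
    f_vp = -(f_p / xi_p) * (dp * inside[..., None]).sum(axis=1)  # attractive

    v = f_vv + f_vp + np.array([f_drive, 0.0])    # overdamped: v = f
    return (r + dt * v) % L

def z_correlation(positions, a0):
    """C_z-style alignment of pancakes in adjacent layers; `positions` has
    shape (layers, N, 2). Misalignments beyond a0/2 are clipped to 1."""
    dr = np.linalg.norm(positions[1:] - positions[:-1], axis=-1)
    return 1.0 - np.mean(np.minimum(dr / (a0 / 2.0), 1.0))
```

A perfectly coupled stack of layers gives $C_z = 1$, while strongly misaligned (decoupled) layers drive $C_z$ towards 0.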
For sufficiently strong disorder, the vortices in this model show a sharp 3D-2D decoupling transition as a function of $s_m$ or $H$ [@FirstPaper22]. A dynamic 2D-3D transition can also occur [@FirstPaper22]. In the inset of Fig. \[fig:trans\](b) we show the critical current $f_c$ and $z$-axis correlation $C_z$ as a function of interlayer coupling $s_m$, illustrating that a sharp transition from ordered 3D flux lines to disordered, decoupled 2D pancakes occurs at $s_{m}^{c}=1.2$. Here $f_c$ is obtained by measuring $V_x = (1/N_v)\sum_{1}^{N_v}{v_x}$ and identifying the drive $f_d$ at which $V_x > 0.0005$, while $C_z = 1 - \left<(|{\bf r}_{i,L} - {\bf r}_{i,L+1}|/(a_0/2)) \ \Theta(a_0/2 - |{\bf r}_{i,L} - {\bf r}_{i,L+1}|)\right>$, where $a_0$ is the vortex lattice constant. The ordered phase has a much lower critical current, $f_{c}^{o} = 0.0008f_{0}^{*}$, than the disordered phase, $f_{c}^{do} = 0.0105f_{0}^{*}$. To observe transient effects, we supercool the lattice by annealing the system at $s_{m} < s_{m}^{c}$ into a disordered, decoupled configuration. Starting from this state, at $t=0$ we set $s_{m} > s_{m}^{c}$ such that the pancakes would be ordered and coupled at equilibrium, apply a fixed drive $f_d$ and observe the time-dependent voltage response $V_x$. In Fig. \[fig:trans\](a) we show $V_x$ for several different drives $f_d$ for a sample with $s_{m} = 2.0$ in a state prepared at $s_{m} = 0.5$. For $f_{d} < 0.0053f_{0}^{*}$ the system remains pinned in a decoupled disordered state. For $f_{d} > 0.0053 f_{0}^{*}$ a time dependent increasing response occurs. $V_x$ does not rise instantly but only after a specific waiting time $t_{w}$. The rate of increase in $V_x$ grows as the amplitude of $f_{d}$ increases. As shown in Fig. \[fig:trans\](c), $C_z$ exhibits the same behavior as $V_x$. In Fig. \[fig:trans\](b) we show a superheated system prepared at $s_m=2.0$ in the ordered state, and set to $s_m = 0.7$ at $t=0$.
Here we find a large initial $V_x$ response that decays. With larger $f_{d}$ the decay [*takes an increasingly long time*]{}. The time scale for the decay is much [*shorter*]{} than the time scale for the increasing response in Fig. \[fig:trans\](a). In the inset of Fig. \[fig:ramp\](b) we demonstrate the presence of a [*memory*]{} effect by abruptly shutting off $f_d$. The vortex motion stops and when $f_d$ is re-applied $V_x$ resumes at the same point. We find such memory on both the increasing and decreasing response curves. The response curves and memory effect seen here are very similar to those observed in experiments [@Andrei6]. In Fig. \[fig:cool\] we show the vortex positions and trajectories in the supercooled sample with $s_m=2.0$ from Fig. \[fig:trans\](a) for $f_{d} = 0.007 f_{0}^{*}$ for different times. In Fig. \[fig:cool\](a) at $t = 2500$ the initial state is disordered. In Fig. \[fig:cool\](b) at $t = 7500$ significant vortex motion occurs through the [*nucleation*]{} of a single channel of moving vortices, which forms during the waiting time $t_w$. Vortices outside the channel remain pinned. In Fig. \[fig:cool\](c) at $t=12500$ the channel is wider, and vortices inside the channel are ordered and have recoupled. The pinned vortices remain in the disordered state. During the transient motion there is a [*coexistence*]{} of ordered and disordered states. If the drive is shut off the ordered domain is pinned but remains ordered, and when the drive is re-applied the ordered domain moves again. In Fig. \[fig:cool\](d) for $t = 20000$ almost all of the vortices have reordered and the channel width is the size of the sample. Thus in the supercooled case we observe [*nucleation*]{} of a microscopic transport channel, followed by [*expansion*]{} of the channel. The vortex positions and trajectories for a superheated sample with $s_m = 0.7$ and $f_d = 0.006f_{0}^{*}$, as in Fig. \[fig:trans\](b), are shown in Fig. \[fig:heat\](a-d). In Fig. 
\[fig:heat\](a) the initial vortex state is ordered. In Fig. \[fig:heat\](b-d) the vortex lattice becomes disordered and pinned in a [*homogeneous*]{} manner rather than through nucleation. Each vortex line is decoupled by the point pinning as it moves until the entire line dissociates and is pinned. We next consider the effect of changing the rate $\delta f_{d}$ at which the driving force is increased on $V(I)$ in both superheated and supercooled systems. Fig. \[fig:ramp\](a) shows $V_x$ versus $f_d$, which is analogous to a V(I) curve, for the supercooled system at $s_m=2.0$ prepared in a disordered state. $V_x$ remains low during a fast ramp, when the vortices in the strongly pinned disordered state cannot reorganize into the more ordered state. There is also considerable hysteresis since the vortices reorder at higher drives producing a higher value of $V_x$ during the ramp-down. For the slower ramp the vortices have time to reorganize into the weakly pinned ordered state, and remain ordered, producing [*no hysteresis*]{} in V(I). In a superheated sample, the reverse behavior occurs. Fig. \[fig:ramp\](b) shows V(I) curves at different $\delta f_{d}$ for a system with $s_m = 0.7$ prepared in the ordered state. Here, the fast ramp has a [*higher*]{} value of $V_x$ corresponding to the ordered state while the slow ramp has a low value of $V_x$. During a slow initial ramp in the superheated state the vortices gradually disorder through rearrangements but there is no net vortex flow through the sample. Such a phase was proposed by Xiao [*et al.*]{} [@Xiao7] and seen in recent experiments on BSCCO samples [@Konczykowski16]. At the slower $\delta f_{d}$, we find [*negative*]{} $dV/dI$ characteristics which resemble those seen in low- [@Frindt14; @Borka24] and high- [@deGroot25] temperature superconductors. 
Here, $V(I)$ initially increases as the vortices flow in the ordered state, but the vortices decouple as the lattice moves, increasing $f_c$ and dropping $V(I)$ back to zero, resulting in an N-shaped characteristic. To demonstrate the effect of vortex lattice disorder on the critical current, in Fig. 4 we plot the equilibrium $f_c$ along with $f_c$ obtained for the supercooled system, in which each sample is prepared in a state with $s_{m} = 0.5$, and then $s_m$ is raised to a new value above $s_{m}^{c}$ before $f_c$ is measured. The disorder in the supercooled state produces a value of $f_c$ between the two extrema observed in the equilibrium state. Note that the sharp transition in $f_c$ associated with equilibrium systems is now smooth. Our simulation does not contain a surface barrier which can inject disorder at the edges. Such an effect is proposed to explain experiments in which AC current pulses induce an increasing response as the vortices reorder but DC pulses produce a decaying response [@Andrei6; @Paltiel8]. We observe no difference between AC and DC drives. In low temperature superconductors, a rapid increase in $z$-direction vortex wandering occurs simultaneously with vortex disordering [@Ling15], suggesting that the change in $z$-axis correlations may be crucial in these systems as well. Our results, along with recent experiments on layered superconductors, suggest that the transient response seen in low temperature materials should also appear in layered materials. In summary we have investigated transient and metastable states near the 3D-2D transition by supercooling or superheating the system. We find voltage-response curves and memory effects that are very similar to those observed in experiments, and we identify the microscopic vortex dynamics associated with these transient features. 
In the supercooled case the vortex motion occurs through nucleation of a channel of ordered moving vortices followed by an increase in the channel width over time. In the superheated case the ordered phase homogeneously disorders. We also demonstrate that the measured critical current depends on the vortex lattice preparation and on the current ramp rate. We acknowledge helpful discussions with E. Andrei, S. Bhattacharya, X. Ling, Z. Xiao, and E. Zeldov. This work was supported by CLC and CULAR (LANL/UC) by NSF-DMR-9985978, and by the Director, Office of Adv. Scientific Comp. Res., Div. of Math., Information and Comp. Sciences, U.S. DoE contract DE-AC03-76SF00098. G. Blatter [*et al.*]{}, Rev. Mod. Phys. [**66**]{}, 1125 (1994). R. Wordenweber, P.H. Kes, Phys. Rev. B [**34**]{}, 494 (1986). S. Bhattacharya and M.J. Higgins, Phys. Rev. Lett. [**70**]{}, 2617 (1993); M.J. Higgins and S. Bhattacharya, Physica C [**257**]{}, 232 (1996). G. Ravikumar [*et al.*]{}, Phys. Rev. B [**57**]{}, R11069 (1998); S.S. Banerjee [*et al.*]{}, [*ibid.*]{} [**58**]{}, 995 (1998); [**59**]{}, 6043 (1999). W. Henderson [*et al.*]{}, Phys. Rev. Lett. [**77**]{}, 2077 (1996). W. Henderson [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 2352 (1998). Z.L. Xiao, E.Y. Andrei and M.J. Higgins, Phys. Rev. Lett. [**83**]{}, 1664 (1999); and to be published. Y. Paltiel [*et al.*]{}, Nature [**403**]{}, 398 (2000). T. Giamarchi and P. Le Doussal, Phys. Rev. B [**55**]{}, 6577 (1997). R. Cubitt [*et al.*]{}, Nature (London) [**365**]{}, 407 (1993). T. Tamegai [*et al.*]{}, Physica C [**213**]{}, 33 (1993). L.I. Glazman and A.E. Koshelev, Phys. Rev. B [**43**]{}, 2835 (1991); L.L. Daemen [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 1167 (1993); Phys. Rev. B [**47**]{}, 11291 (1993). M.V. Feigel’man, V.B. Geshkenbein, and A.I. Larkin, Physica C [**167**]{}, 177 (1990); V.M. Vinokur, P.H. Kes, and A.E. Koshelev, [*ibid.*]{} [**168**]{}, 29 (1990); V.B. Geshkenbein and A.I.
Larkin, [*ibid.*]{} [**167**]{}, 177 (1990); G. Blatter [*et al.*]{}, Phys. Rev. B [**54**]{}, 72 (1996); A.E. Koshelev and V.M. Vinokur, [*ibid.*]{} [**57**]{}, 8026 (1998). R.F. Frindt, D.J. Huntley, and J. Kopp, Sol. St. Commun. [**11**]{}, 135 (1972). X.S. Ling [*et al.*]{}, to be published. C.J. van der Beek [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 4196 (2000). B. Sas [*et al.*]{}, Phys. Rev. B [**61**]{}, 9118 (2000). D. Giller [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 3698 (2000). S. Kokkaliaris [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 5116 (1999). S.O. Valenzuela and V. Bekeris, Phys. Rev. Lett. [**84**]{}, 4200 (2000). A.V. Pan and P. Esquinazi, to be published. C.J. Olson [*et al.*]{}, preprint. J.R. Clem, Phys. Rev. B [**43**]{}, 7837 (1991). E.H. Brandt, Rep. Prog. Phys. [**58**]{}, 1465 (1995). S. Borka [*et al.*]{}, Sov. J. Low Temp. Phys. [**3**]{}, 347 (1977). A.A. Zhukov [*et al.*]{}, Phys. Rev. B [**61**]{}, R886 (2000).
--- abstract: 'We show that the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{} scaling relations observed from the local to the high-$z$ Universe can be largely or even [*entirely*]{} explained by a non-causal origin, i.e. they do not imply the need for any physically coupled growth of black hole and bulge mass, for example through feedback by active galactic nuclei (AGN). [ Provided some physics for the absolute normalisation,]{} the creation of the scaling relations can be fully explained by the hierarchical assembly of black hole and stellar mass through galaxy merging, from an initially uncorrelated distribution of BH and stellar masses in the early Universe. We show this with a suite of dark matter halo merger trees for which we make assumptions about (uncorrelated) black hole and stellar mass values at early cosmic times. We then follow the halos in the presence of global star formation and black hole accretion recipes that (i) work without any coupling of the two properties per individual galaxy and (ii) correctly reproduce the observed star formation and black hole accretion rate density in the Universe. With disk-to-bulge conversion in mergers included, our simulations even create the observed slope of $\sim$1.1 for the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation at $z=0$. This also implies that AGN feedback is not a required (though still a possible) ingredient in galaxy evolution. In light of this, other mechanisms that can be invoked to truncate star formation in massive galaxies are equally justified.' author: - 'Knud Jahnke, Andrea V. 
Macciò' title: 'The non-causal origin of the black hole–galaxy scaling relations' --- Introduction {#sec:intro} ============ About a decade ago tight correlations between galaxy properties and those of central supermassive black holes (BHs) were empirically established: BH masses scale with the luminosity, the mass and the velocity dispersion of their host galaxies’ bulges [@mago98; @ferr00; @gebh00; @trem02; @mclu02; @marc03; @haer04; @guel09; @jahn09b; @merl10]. These correlations were quickly interpreted to yield two important implications: (i) If these correlations were to exist for a random set of galaxies, then every galaxy should contain a supermassive BH [e.g. @korm95]. (ii) If every galaxy contains a BH and given the observed scaling relations, the evolution of galactic bulges and central BHs should be coupled by a physical mechanism (the ‘co-evolution’ picture). The former statement appears to be true at least above some mass threshold, and introduced a new ingredient, the central BH, in galaxy evolution. The latter statement is based on the vast energy available from accreting black holes, providing the easiest conceivable mechanism to physically couple BH and bulge properties despite the difference in linear scales. Coupling only a few percent of this energy in an “AGN feedback” [@silk98] to the gas within the galaxy would have vast implications for the temperature and structure of the surrounding interstellar medium. Ad hoc models [@gran04; @dima05; @crot06] were very successful in generating feedback loops that involve the energy from AGNs for quenching star formation (SF) and fueling of the AGN themselves.
Different incarnations of AGN feedback are in principle able to not only couple SF and BH accretion, but to simultaneously fix a number of existing problems in galaxy evolution, namely an overproduction of massive galaxies in semi-analytic models as well as the inability to truncate SF fast enough to reproduce the observed color–magnitude bimodality of galaxies [@bald04]. This motivated the inclusion of AGN feedback in a yet-to-be-determined form and physical description as the driving force behind the BH–bulge scaling relations. Although [ the actual effectiveness and impact of]{} at least “quasar mode” feedback models [ are still unclear]{}, the interpretation of the scaling relations as a physically coupled evolution is largely assumed to be correct and continues to provide the basis for many studies. As an example, the evolution of the scaling relations [@treu04; @peng06a; @peng06b; @treu07; @schr08; @jahn09b; @merl10; @benn10; @deca10b] is investigated in order to constrain the physical drivers behind co-evolution and the growth mechanisms of BHs. An alternative origin of the scaling relations ---------------------------------------------- @peng07 demonstrated a thought experiment for the potential origin of a [$M_\mathrm{BH}$]{}–[$M_\mathrm{bulge}$]{}-relation without a physical coupling, but as the result of a statistical convergence process. In short, he showed that in principle, arbitrary distributions of [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{}-ratios in the early universe converge towards a linear relation through the process of galaxy merging. In this central-limit-theorem view, a large number of mergers will average out the extreme values of [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} towards the ensemble average. What was deliberately left open in this experiment was whether there are enough major galaxy mergers in the history of an actual galaxy in order to drive this process far enough.
In the present study we pick up this thought by following realistic ensembles of dark matter halos through cosmic time. Our immediate aim is to test whether the simple assembly of galaxies and their BHs according to a $\Lambda$CDM merger tree is able to produce BH–galaxy scaling relations from initial conditions at early times, where [$M_\mathrm{BH}$]{} and [$M_\mathrm{bulge}$]{} (or [$M_\mathrm{*}$]{}) were completely uncorrelated per individual galaxy. We circumvent the inherent problems and degrees of freedom of a full semi-analytic model by not trying to simultaneously solve the problem of SF truncation or correct BH or stellar mass function, but restrict the question solely to the genesis of the scaling relations. As an input for SF and BH accretion we use observed relations, but prevent any recipes that im- or explicitly couple BH and stellar mass growth [*per individual galaxy*]{}. Numerical simulations and merger trees {#sec:trees} ====================================== In this paper we use the Lagrangian code [pinocchio]{} [@mona02] to construct high resolution $\Lambda$CDM merger trees [for a comparison between Nbody codes and [pinocchio]{} see @li07]. The simulation has a box size of 100 Mpc and $1500^3$ particles; this ensures a very high mass resolution, $m_p=1.01\times 10^7$ [$M_\odot$]{}. The construction of merger trees is straightforward with [pinocchio]{}: the code outputs a halo mass every time a merger occurs, i.e., when a halo with more than 10 particles merges with another halo. From an initial $6.5\times10^6$ halos, we obtain a resulting sample of 10932 halos with $M> 10^{11}$ [$M_\odot$]{} at $z=0$. When two halos merge, the less massive one can either survive and continue to orbit within the potential well of the larger halo until $z=0$, or merge with the central object.
The averaging process described above will only apply to this second category of halos (the ones that actually merge); we then adopt the dynamical friction formula presented in @boyl08 to compute the fate of a halo. [ The orbital parameters of the halo are extracted from suitable distributions that reproduce the results of Nbody simulations as described in @mona07.]{} If the dynamical friction time is less than the Hubble time at that redshift, we consider [ the halo to merge at a time $t=t_{dyn}$]{}; if it is longer, the satellite halo is removed from our catalog. Each halo in our sample at $z=0$ has formed by at least 200 mergers and the most massive ones have had more than $5\times 10^4$ encounters. Creating scaling relations: averaging and mass function build up by hierarchical merging {#sec:zero} ======================================================================================== The main message of this work is to demonstrate which effect merging over cosmic time has on an ensemble of halos with initially uncorrelated [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{} values. These values change their distribution and converge towards a linear relation by $z=0$ – in the absence of SF, BH accretion, disk-to-bulge conversion and hence any physical connection between the two masses. [ For this task we follow dark matter halos through their assembly chain. We assign a stellar and a BH mass to each dark matter halo once its mass becomes larger than 10$^8$ [$M_\odot$]{}[^1]; the corresponding redshift in the following is called $z_f$.]{} We set our initial guesses for [$M_\mathrm{*}$]{} and [$M_\mathrm{BH}$]{} as a fixed fraction of the dark matter mass plus a (large) random scatter. We used $M_*/M_{\rm dm}=10^{-3}$ and $M_{\rm BH}/M_{\rm DM}=10^{-7}$ for the initial ratio; the scatter is taken from a logarithmically flat distribution of [ 3 dex for the two quantities]{} (blue squares in Figs. \[fig:zero\] [ and \[fig:zero2\]]{}).
We have no knowledge of any realistic seed mass scatter, but take 4 orders of magnitude variations in the [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{}-ratio as a proxy for “uncorrelated”. [ Empirical constraints on the possible parameter space for seed black hole mass do not seem to support seeds more massive than $\sim$$10^5$ [$M_\odot$]{}[@volo09]; towards lower masses, a few solar-mass BHs are clearly being produced by stars. Whether this matches the true distribution of seed masses is not important; simply taking a large range that is currently not ruled out represents a rather conservative starting point for our demonstration.]{} Halos are then propagated along the merger tree to $z=0$ (the red points in Figs. \[fig:zero\] [ and \[fig:zero2\]]{}). When two halos merge according to our dynamical friction formula, we set the resulting stellar and BH masses equal to the sum of the individual masses before the merger [@volo03]. [ The final mass in [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{} as well as the corresponding normalization is determined simply by the sum of the individual halos contributing to a final halo.]{} ![ \[fig:zero\] Changes of [$M_\mathrm{BH}$]{} vs. [$M_\mathrm{*}$]{} from an initially uncorrelated (within 4 dex in each parameter) distribution at high $z$ (blue points) to $z=0$ purely by mass assembly along the merger trees, i.e. without SF, BH accretion and disk-to-bulge mass conversion. A very tight correlation of slope 1.0 is created by the merging, with smaller scatter for higher masses, which experienced more mergers. The black line is the observed local [$M_\mathrm{BH}$]{}-[$M_\mathrm{*}$]{}-relation from @haer04 with slope=1.12. ](Macc0.new.all.ps){width="\columnwidth"} Fig. \[fig:zero\] shows that the hierarchical formation of galaxies provides a strong inherent driver from the uncorrelated initial distribution to a linear relation[^2].
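The convergence mechanism can be illustrated with a few lines of Monte Carlo (our own toy, not the merger-tree machinery used here): seed masses are drawn log-flat and uncorrelated, random pairs are summed generation after generation, and the scatter of $\log(M_\mathrm{BH}/M_*)$ collapses.

```python
import numpy as np

# Toy central-limit demonstration: uncorrelated, log-flat seed masses,
# merged pairwise for several generations; masses simply add in a merger.
# The 3 dex seed ranges echo the text; the absolute values are arbitrary.

def merge_generations(n_seeds=4096, n_generations=8, seed=42):
    rng = np.random.default_rng(seed)
    m_star = 10.0 ** rng.uniform(5.0, 8.0, n_seeds)
    m_bh = 10.0 ** rng.uniform(1.0, 4.0, n_seeds)
    scatter = [np.std(np.log10(m_bh / m_star))]
    for _ in range(n_generations):
        order = rng.permutation(m_star.size)     # random merger pairings
        half = m_star.size // 2
        m_star = m_star[order][:half] + m_star[order][half:2 * half]
        m_bh = m_bh[order][:half] + m_bh[order][half:2 * half]
        scatter.append(np.std(np.log10(m_bh / m_star)))
    return scatter
```

Starting from a dispersion of roughly 1.2 dex, the scatter in the log mass ratio shrinks to a small fraction of its initial value after a handful of merger generations, without any coupling between the two masses.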
[ This effect is independent of the chosen initial conditions as Figure \[fig:zero2\] demonstrates, where completely different initial conditions result in a relation with the same slope and very similar scatter.]{} ![image](Macc0new2.ps){width="80.00000%"} [ This experiment shows that the dominating structural parts of the observed [$M_\mathrm{BH}$]{}-[$M_\mathrm{*}$]{}-scaling relation – i.e. (1) the existence of such a correlation, (2) that it extends over several orders of magnitude in mass, (3) the fact that the slope is near unity, and (4) an increasing scatter to lower masses – can be explained by this physics-, feedback- and coupling-free process. A slope$\sim$1 scaling relation does not need any physical interaction of galaxy and black hole. In the next sections we will show that this holds also when adding “2nd order” effects like actual star formation and black hole accretion, as well as disk–bulge conversion.]{} Adding star formation, black hole accretion, and disk-to-bulge conversion {#sec:one} ========================================================================= [ So far we have demonstrated that merging alone is the basic mechanism to create a [$M_\mathrm{BH}$]{}–[$M_\mathrm{*}$]{} scaling relation. However we need to add a number of ingredients to our model: Placing all mass already at high redshift is not conservative, since all of the stellar and BH mass is then subject to the full merger averaging process. In the actual universe we know that the majority of BH and stellar mass in the universe was created after $z\sim 6$ [@solt82; @hopk06], and hence experiences fewer merger generations. Moreover pure merging produces a monotonic relation between [$M_\mathrm{DM}$]{} and [$M_\mathrm{*}$]{} as shown in the upper panel of Figure \[fig:ben\], which is at odds with empirical results.
Since SF and BH accretion density depend on redshift, we will add approximately the right amount of SF and BH growth at the right cosmic times, and thus at the right place in time with respect to the merger cascade. The goal of this exercise is not to create a full semi-analytic model of galaxy formation, but to test which effect realistic assumptions about mass growth have on the resulting [$M_\mathrm{BH}$]{}–[$M_\mathrm{*}$]{} scaling relation.]{} To construct SF and BH accretion recipes we will use three observed relations as input: (i) The halo occupation distribution, i.e. the relation between dark matter halo mass [$M_\mathrm{DM}$]{} and [$M_\mathrm{*}$]{} [@most10], (ii) the Lilly-Madau relation for the evolution of SF rate [@hopk06], and (iii) the evolution of bolometric luminosity of AGN [@hopk07]. We want to explicitly note that by using these relations, we do not force any coupling of [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{} or [$M_\mathrm{bulge}$]{} [*per individual galaxy*]{}, but all recipes only relate to ensemble averages. Any potential implicit couplings only act on the ensemble and not on an individual galaxy, hence they cannot induce a correlation in the (originally uncorrelated) data points. ![ The relation between [$M_\mathrm{*}$]{} and [$M_\mathrm{DM}$]{} at $z=0$. The black line shows the predictions from @most10, the red points the results from our simulations. Upper: Only merging but no SF taken into account. Lower: With our SF recipe implemented. []{data-label="fig:ben"}](Ben3.ps){width="\columnwidth"} Star Formation {#sec:sf} -------------- Up to now, at redshift $z=0$ the stellar mass of our galaxies is simply given by the sum of all stellar masses of its $j$-progenitors $M^0_*(i)=\sum M_*(z_{f})(j)$, masses that, as explained, were originally drawn from a random distribution. As shown in the upper panel of Fig.
\[fig:ben\], for a given halo mass the stellar mass obtained in this way is too low when compared to the empirical expectations from Halo Occupation Distribution (HOD) models [e.g. @most10]. This is because the total galaxy is more than the sum of its seeds and we have neglected star formation so far. On the other hand, Fig. \[fig:ben\] tells us exactly how much stellar mass each halo is missing. Hence we take this constraint, the HOD results, to fix the stellar mass produced through star formation: $$M^*_{SF}(i)=M^*(M_{dm}(i))-M^*_0(i) \label{eq:sfr1}$$ where $M^*(M_{dm}(i))$ is the expected stellar mass for the $i$-th halo with dark matter mass $M_{dm}(i)$ as predicted by the HOD model presented in @most10. Now we need to distribute this stellar mass from SF along time, i.e. among all progenitors of galaxy $i$ along the merger tree. We do that according to the following formula that gives the stellar mass produced through SF for the $j$-th halo in the merger tree of the final halo $i$: $$M^{SF}_j = A \times M_{*}^q(j) \times LT(j) \times f(z_f(j),z_m(j)). \label{eq:sfr2}$$ The constant $A$ is fixed by the requirement that $M^*_{SF}(i)=\sum_j M^{SF}_j$ and is the same for all progenitors. The time $LT$ is the lifetime of a halo, defined as the time between the formation redshift ($z_f$: when $M_\mathrm{DM}>10^8$ [$M_\odot$]{}) and the moment $z_m$, when it merges with a more massive halo, which is not necessarily the main branch of the merger tree. With this definition we assume that galaxies are able to actively form stars only when they are the central object within their host halo. The function $f$ is used to give different weights to the life time $LT$ at high and low redshift; in this way, for a given life time, a galaxy will produce more (less) stars at high (low) redshift, according to the Lilly-Madau plot[^3].
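The bookkeeping of Eqs. (\[eq:sfr1\]) and (\[eq:sfr2\]) amounts to a weighted redistribution with a single normalization constant $A$. A minimal sketch, with all numbers (seed masses, lifetimes, weights, the HOD target) invented for the example:

```python
import numpy as np

# Hypothetical progenitor table for one z=0 halo: seed stellar masses,
# lifetimes LT, and weights f = integral of SFR(z) over each lifetime.
m_seed = np.array([1e7, 5e7, 2e8, 1e9])   # M_sun (illustrative)
lt     = np.array([0.5, 1.0, 2.0, 4.0])   # Gyr (illustrative)
f_w    = np.array([3.0, 2.5, 1.5, 0.8])   # arbitrary units (illustrative)
q      = 0.8                              # SSFR mass-dependence exponent

# Total SF mass demanded by the HOD constraint, Eq. (sfr1) -- here a
# made-up number standing in for M*(M_dm) - M*_0:
m_sf_total = 5e9

# Eq. (sfr2): M_SF_j = A * M_*^q(j) * LT(j) * f_j, with A fixed so the
# per-progenitor contributions sum to the HOD-imposed total.
unnormalized = m_seed**q * lt * f_w
A = m_sf_total / unnormalized.sum()
m_sf = A * unnormalized

print(m_sf, m_sf.sum())
```

By construction the sum reproduces the HOD target exactly; how it is split among progenitors is controlled by the seed masses, lifetimes, and the redshift weights.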
We define $f(z_1,z_2)$ as the integral between $z_1$ and $z_2$ of the assumed star formation rate (SFR): $$f(z_1,z_2) = \int_{z_1}^{z_2} {\rm SFR}(z)\, {\rm d}z.$$ In our reference model we assume for the redshift evolution of the SFR the functional form suggested by [@hopk06], namely the form listed in Table \[tab:mods\] for a modified Salpeter IMF [see @hopk06 for more details]. Finally, the factor $M_{*}^q(j)$ takes care of the observed mass-dependence of specific star formation rates; we fixed the exponent $q=0.8$ [e.g. @dadd07; @bouc09]. In the Appendix \[sec:test\] we will present results for different choices for $q$ and $SFR(z)$.

Let us summarize once more our parametrization for star formation: When halo $j$ appears at $z_f(j)$ it gets an initial stellar mass ($M_*(z_{f})(j)$) from a random distribution as described in Section \[sec:zero\]. Then it will “produce” its own stars ($M^{SF}_j$) until it is accreted onto, and becomes part of, a more massive halo. During its lifetime it will also accrete stellar mass from merging lower-mass haloes. If halo $j$ merges with the central galaxy it will add a fraction of its stellar mass to the bulge of the central galaxy as described below in Section \[sec:bd\]. If halo $j$ merges with a more massive halo ($k$) before merging with the central halo it will give all its stellar mass to halo $k$ and cease to exist as a halo of its own.

Disk to bulge conversion {#sec:bd}
------------------------

We assume that all stellar mass produced through star formation goes into the disk component of each halo and then apply a recipe to convert part of this disk mass to bulge mass as a consequence of mergers. The amount of disk-to-bulge mass conversion depends on a multitude of parameters such as the mass ratio, gas fraction, and orbital parameters of the merger, which are impossible to implement in our context.
Instead we follow a simpler recipe inspired by the numerical results of @hopk09, solely depending on the stellar mass ratio of the two merging partners: (i) the bulge masses of the main halo and the satellite are simply co-added, (ii) the disk mass of the satellite fully goes into the resulting bulge, and (iii) a fraction of the main halo disk, [ directly]{} proportional to the mass ratio, also gets converted into bulge mass. With this recipe we are able to approximately reproduce the ratio of bulge to total mass observed in the local universe.

Black hole accretion {#sec:bha}
--------------------

Since the mechanisms of BH accretion continue to be unclear, we assume a simple recipe: the BH will double its mass in a stochastic way on a characteristic time scale $\tau$. This will happen for all $j$ progenitors of halo $i$. Similar to what we have done for star formation, we link the number of doublings of a given halo to its life time and we weight this time with a function $g$ similar to the function $f$ in Section \[sec:sf\]. In practice the number of mass doublings of the $j$-th black hole in the merger tree of halo $i$ is given by the expression: $$N_{\mathrm{doub,}j} = LT(j) \times g(z_f(j),z_m(j)) /\tau, \label{eq:bha}$$ where $LT$ has the same meaning as in Eq. \[eq:sfr2\].
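A sketch of Eq. (\[eq:bha\]) combined with the weighting function $g$ and the stochastic doubling described in the text. The lifetime, redshifts, seed mass, and the value of $\tau$ below are illustrative only ($\tau$ here also absorbs the un-normalized units of $g$):

```python
import numpy as np

rng = np.random.default_rng(2)

def agn_l(z):
    # Double power law for the AGN bolometric luminosity density,
    # the fit to Hopkins et al. (2007) quoted in the text.
    z = np.asarray(z, dtype=float)
    return np.where(z < 1.7,
                    10.0 ** (2.02 * np.log10(z) + 7.83),
                    10.0 ** (-2.09 * np.log10(z) + 8.78))

def g(z1, z2, n=2000):
    # Trapezoidal integral of AGN_L from the lower to the higher redshift.
    zz = np.linspace(z1, z2, n)
    fz = agn_l(zz)
    return float((0.5 * (fz[1:] + fz[:-1]) * np.diff(zz)).sum())

# Eq. (bha) for one progenitor (all numbers illustrative):
lt, z_m, z_f = 3.0, 1.0, 4.0   # lifetime [Gyr], merger and formation redshift
tau = 1.5e8                     # illustrative; absorbs the units of LT * g
n_doub = int(round(lt * g(z_m, z_f) / tau))

# Stochastic element: each epoch doubles the BH mass only if a flat
# [0, 1] draw exceeds 0.5.
m_bh = 1e5                      # seed mass [M_sun] (illustrative)
realized = int((rng.random(n_doub) > 0.5).sum())
m_bh_final = m_bh * 2.0 ** realized
print(n_doub, realized, m_bh_final)
```

On average half of the candidate epochs fire, so a main-branch halo with a handful of doubling epochs grows its BH by roughly an order of magnitude, while short-lived side branches barely grow at all.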
For black hole accretion we choose as the weighting function $g(z_1,z_2)$ the integral between $z_1$ and $z_2$ of the bolometric luminosity of AGN ($AGN_L(z)$): $$g(z_1,z_2) = \int_{z_1}^{z_2} AGN_L(z)\, {\rm d}z.$$ In our reference model the redshift evolution of the AGN bolometric luminosity is modeled with a double power law aimed at reproducing the results of @hopk07 [Figure 8]: $$\log\left(\frac{AGN_L(z)}{L_\odot \mathrm{Mpc}^{-3}}\right) = \left\{ \begin{array}{ll} 2.02 \cdot \log(z) + 7.83 & \textrm{for $z<1.7$} \\ -2.09 \cdot \log(z) + 8.78 & \textrm{for $z\ge 1.7$} \end{array} \right.$$ Similarly to the $f$ function, $g$ can be used to allow for higher accretion rates at high redshift compared to low redshift for a fixed halo life time. The characteristic time $\tau$ is chosen in order to match the normalization of the observed [$M_\mathrm{BH}$]{}–[$M_\mathrm{bulge}$]{}-scaling relation at $z=0$ for $\log(M_\mathrm{BH})=7$; we obtained $\tau=1.9 \times 10^9$ yr. As a stochastic element in BH growth we draw, for each of the $N_{\mathrm{doub,}j}$ events, a random number from a flat distribution in the range \[0,1\] and effectively double the BH mass only if this random number is $> 0.5$. For the main branch the number of doublings is of the order of 6–8, while it is in the range 0–3 in the other branches of the tree. In the Appendix \[sec:test\] we will present results for different choices of the parameters involved in equation \[eq:bha\].

![image](Macc2.new.ps){width="48.00000%"} ![image](MbMbh.all.ps){width="48.00000%"}

Resulting scaling relations {#sec:results}
---------------------------

The [$M_\mathrm{bulge}$]{}–[$M_\mathrm{BH}$]{}-distribution at $z=0$ with the above recipes added is shown in Figure \[fig:MbMbh1\], with the observed values overplotted. Compared to the pure merger assembly in Figures \[fig:zero\] and \[fig:zero2\] we note a substantially increased scatter – closer to the observed one – and a steeper than linear relation.
Still, despite both the shift of SF and BH growth to later times and the random elements of SF and BH accretion, a clear correlation is produced. The effects of SF, BH accretion and disk-to-bulge conversion do not destroy the correlations but only induce a “2nd order” modification of them. Over time a mass function is built up and the scatter decreases substantially from the initial 4 dex, particularly in the $\log(M_\mathrm{bulge})>10$ regime. The simulated relation is very similar to the observed points: it reproduces the slope $>$1 almost perfectly, even the fact that at the high mass end the observed points lie above the mean slope – and this without adjustable parameters beyond the normalization. [ Also, the higher scatter at low masses can be seen in both simulations and observations; the “cloud” above the $z=0$ relation in our model data represents disk galaxies with small bulge masses as observed by @gree08. As can be seen in Figure \[fig:mstellar\] in the Appendix, these objects would still lie on the local mass scaling relation, but with their total and not their bulge mass.]{} We want to stress that the results in Figure \[fig:MbMbh1\] do not depend on our parametrization of the SF rate or BH accretion. For example, changing the functional form of $f$ or $g$ in Eqs. \[eq:sfr2\] and \[eq:bha\] only marginally affects the scatter of the simulated [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}–relation and leaves the slope unchanged. The same is true for the other parameters described in the previous sections; see the Appendix for details. There is a sole exception to this, the assumed initial seed masses: the larger the initial [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{}, the less mass has to be created by SF and BH accretion. [ In this way less mass is entering the halos at later times and more mass is subject to the full cascade of mergers, which will lead]{} to a smaller scatter in the scaling relations at $z=0$.
Discussion {#sec:discussion}
==========

We showed above that all basic properties of the BH–bulge mass scaling relation in the local Universe – a relation between properties of individual galaxies – are produced naturally by the merger-driven assembly of bulge and BH mass, and without any coupling of SF and BH mass growth per individual galaxy. The convergence power of galaxy merging is very strong for a realistic halo merger history, even with a correct placement of SF and BH accretion along cosmic time. This means that the mechanism @peng07 sketched in his thought experiment works also in a realistic Universe – there is enough merging occurring in the universe, and hence (most of) the scaling relations can be entirely explained without any physical mechanism that directly couples [$M_\mathrm{bulge}$]{} and [$M_\mathrm{BH}$]{} growth for a given object.

Implicit coupling of black hole and galaxy?
-------------------------------------------

The scaling relations are produced naturally in our toy model – but does this preclude any implicit coupling of [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{} by design? The most obvious features to be discussed in this respect are (a) the shape of the halo occupation distribution and (b) the question of which mechanism determines the relative value of BH accretion to SF, hence the absolute normalisation of the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation.

\(a) HOD shape: The HOD was empirically inferred and shows that the ratio of stellar to dark matter mass is not constant but is a function of mass itself. Towards the massive end stellar mass does not increase in parallel to the DM; star formation appears suppressed. Its impact on the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation is the slight curvature in Figure \[fig:MbMbh1\] with the noticeable upturn at the massive end – consistent with the observations.
The HOD shape represents a long-known feature of galaxy formation and was “fixed” in models by the introduction of a quenching mechanism, suppressing SF above some mass, often by the inclusion of AGN feedback recipes. In our toy model, however, we do not make specific assumptions about which mechanism produces the HOD we use as a constraint. We do not tie BH accretion or a merger event to the suppression of SF – it can be suppressed by any mechanism, e.g. a modified SN feedback recipe or gravitational heating by infalling clumps of matter. The latter mechanisms are completely independent of BH accretion and, while modifying the HOD, by nature cannot have an impact on the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation. Even if AGN feedback were the source for shaping the HOD, this would only be the cause of the second-order shape deviation from a linear slope, not of the existence of the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation itself.

\(b) Absolute normalisation: Our initial model just propagates the (rather ad hoc) seed masses in stars and BH to $z=0$, i.e. the normalisation of the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation at $z=0$ is directly set by the ensemble mean ratio of seed masses. Since most of the stellar and BH mass in reality is produced by SF and BH accretion later on, the high-$z$ ratio is in fact unconnected to the $z=0$ normalisation. In our simulation we set the normalisation by requesting a match of our simulation results with the empirical [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation at [$M_\mathrm{*}$]{}=$10^{10}$[$M_\odot$]{}. Arguments have been brought forward that the actual normalisation must be the result of a regulatory feedback loop involving BH accretion and SF. This is an attractive scenario, as it would explain both the creation of the scaling relations and their normalisation.
Models of this kind have been implemented in several semianalytic models of galaxy formation, in all cases with free parameters that actually control the absolute normalisation of the resulting scaling relations, set to ad hoc values to again match this and other observations. Since we show in this paper that (most aspects of) the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-scaling relations are created automatically by hierarchical assembly, these feedback models actually [ seem to]{} achieve too much – the creation of a certain [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} ratio for [*each galaxy*]{} and at all times, which as a conspiracy would come on top of the formation path of the scaling relations demonstrated here. This said, we want to sketch the outline of an alternative scenario that could well be responsible for the absolute normalisation but has not yet been explored: The main ingredients for both SF and BH accretion are (i) gas, and (ii) a trigger to form stars or to bring gas down to the BH. For both stars and BHs the amount of growth is basically a product of the two ingredients. At early times gas was ample and the number of both galaxy mergers and gas disk instabilities was high until the peak of activity around $z=2$. The triggering mechanisms subsequently decreased with the decreasing number of mergers and reduced gas reservoirs, either in number or duration or both. If both SF and BH accretion were to be ruled by a set of random triggering mechanisms and the specific gas fraction in a galaxy, then very high-$z$ galaxies might exhibit a strong variance in their [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{}-distribution as well as in their (instantaneous and also time-averaged) ratio of BH accretion to SF, with a dependency on the actual total mass, morphology, gas mass, trigger type, environment, etc.
However, as a cosmic mean there will be a [*global*]{} [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} value, just as an average over the efficiency of the spectrum of random triggering mechanisms and realised conditions to produce new stars or BH mass – a number which could also change with time. The hierarchical assembly of galaxies and its averaging mechanism now relieves us from having to search for a regulatory mechanism [*per galaxy*]{}, since with each galaxy merger the spectrum of [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} values will increasingly average out. The fact that the slowly starting depletion of gas reservoirs in galaxies at $z\sim2$ is accompanied by, and not preceded by, a slowly decreasing merger rate has the consequence that at higher redshifts there were enough mergers to average out the extreme [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} systems – at lower $z$, with a decaying gas reservoir and merger rate and the transition to a “secular universe” [see e.g. @cist11], the prerequisites for producing new extreme values become less and less frequently fulfilled. The ensemble converges towards the observed [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation at $z=0$. Recently, first nested multi-scale simulations of gas inflow into the very centers of galaxies have been successful [@hopk10] and give a first impression of how in principle random instabilities, strongly depending on the actual conditions in the galaxy, can create gas inflow into the very center and the fuelling of either the BH, SF or both. It is mechanisms of this kind that will create a certain mean value of [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{} averaged over all galaxies, a value which can be vastly different for an individual galaxy. How different is so far unclear, and whether this mechanism for BH fuelling precludes “runaway” growth of BHs to 100- or 1000-fold in a single instance, though unlikely, remains to be seen.
All of this can in principle be realized without any AGN feedback – though it does not rule it out – and has the freedom to have a mass- or environment-dependent efficiency component, as we also observe a non-unity slope of the $z=0$ scaling relation. Coming back to the initial question, we conclude that there is no necessity for our toy model to make any implicit assumptions about a physical [$M_\mathrm{BH}$]{}–[$M_\mathrm{*}$]{}-coupling.

Further consequences of this mechanism
--------------------------------------

Our mechanism is also able to explain other observational results that are often used as evidence to support the picture of AGN feedback.

\(a) One of them is the potentially different [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relations in galaxies with classical and pseudo-bulges [@gree08; @gado09]. Pseudo-bulges are thought to be formed through secular processes, rather than major merging [@korm04]. This has the immediate implication that the bulge has taken a different long- or mid-term assembly route compared to the BH, hence it is actually not expected that galaxies with pseudo-bulges obey the same [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation as those with classical ones.

\(b) @hopk09 found that the stellar mass inside a very small radius near the black hole, [$M_\mathrm{*}$]{}($<R$), comparable to the BH’s sphere of influence, shows a much larger scatter than the scatter in [$M_\mathrm{BH}$]{}/[$M_\mathrm{*}$]{}. They argue that this is an indication that gas was transported to near the BH, to form said stars, but that this had apparently no impact on the scatter in [$M_\mathrm{BH}$]{}, hence a self-regulating mechanism should have been at work. With our model this, too, can be simply explained: the BH and bulge of a galaxy take part in the same long-term assembly and averaging cascade, hence their values correlate and the scatter in their ratio is small. The central density of stars on the other hand does not.
It is governed by more short-term gas inflow, star formation and redistribution mechanisms, hence it is expected that it does not correlate well with the overall mass of the bulge or the BH.

\(c) Following this line of argument, we in principle expect a correlation with [$M_\mathrm{BH}$]{} for any (mass) parameter that is subject to the same $\Lambda$CDM assembly chain. This includes the halo mass, to some extent the total mass of the galaxy [ (for both see Appendix \[sec:stellardm\] and Figure \[fig:mstellar\])]{}, but also e.g. the total mass of globular clusters in a galaxy, which recently has been found empirically by @burk10.

\(d) On the other hand, our model is too basic to be able to reproduce higher-order effects. A number of studies [@alle07; @hopk07; @feol09; @feol10] have suggested that the most fundamental relation with [$M_\mathrm{BH}$]{} involves neither [$M_\mathrm{bulge}$]{} nor [$\sigma_\mathrm{bulge}$]{} but rather the galaxy binding energy or potential well depth. What these studies actually find are residual correlations in the scaling relations that are in some way related to the compactness of a bulge/spheroid at a given [$M_\mathrm{BH}$]{}. This can be the effective radius or binding energy or any other quantity that is explicitly or implicitly a measure of compactness. Since bulge mass depends just linearly on the progenitor mass ratios, the exact size, compactness, or binding energy of a spheroid produced in a merger will depend non-linearly on other parameters like gas fraction, merger orbit, etc. These parameters could easily be responsible for the residual correlation found in the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relation as they are measures of the specific short- or mid-term merger history of each galaxy, while [$M_\mathrm{bulge}$]{} is a long-term integral. Our deliberately simple toy model itself cannot make any statements on this issue as it does not trace e.g. galaxy scale radii.
Interpreting the above points with respect to the relevance of AGN feedback, we find no strong argument for AGN feedback as a [*necessary*]{} mechanism at work. However, this does not mean that AGN feedback does not exist. It only means that AGN feedback is still a [*possible*]{} mechanism involved in parts of galaxy evolution, which might be important e.g. for subclasses of the galaxy population[^4]. This implies that other means of energy (or momentum) injection that have been proposed to quench SF in massive galaxies [e.g. @deke06; @khoc08; @lofa09] are viable options. In other words, all other mechanisms that can be invoked to truncate star formation in massive galaxies appear to be equally justified.

The comparison with previous work
---------------------------------

Our result is different from those of all previous studies, and with more than a decade of semianalytic models including BHs having passed, one of the obvious questions is: Why have others not found this before? The closest studies to ours are a short proceedings contribution by @gask10, which is basically paraphrasing @peng07, and the work by @hirc10. The latter is a very systematic assessment of how the scaling relation scatter evolves under the influence of galaxy merging. However, their initial setup in all cases was an existing correlation of [$M_\mathrm{BH}$]{} and [$M_\mathrm{*}$]{}, and due to their specific goal they deliberately ignored the influence of SF and BH accretion. Other recent studies investigating this subject either explicitly include AGN feedback [e.g. @crot06; @robe06; @boot09; @joha09; @shan09] or couple the merging scenario with a self-regulation prescription for BH growth [e.g. @kauf00; @volo09]. In retrospect it is quite obvious why they did not find our results earlier, as most of these models were developed for galaxy evolution in general.
Once BHs entered the equations, they added terms of (regulated) BH growth and, when evaluating the generated [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}-relations, noted that their prescription managed to produce a decent match to the empirical relation. The interpretation that this meant the AGN feedback or coupling prescriptions were correct now requires re-evaluation from our new point of view – it was the simple result of hierarchical assembly at work.

Impact on evolution studies
---------------------------

In the last decade a substantial number of attempts were made to measure local and higher-redshift scaling relations of [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{}, [$M_\mathrm{BH}$–$\sigma_\mathrm{bulge}$]{}, or [$M_\mathrm{BH}$–$L_\mathrm{bulge}$]{} [@treu04; @peng06a; @peng06b; @woo06; @treu07; @schr08; @jahn09b; @merl10; @benn10; @deca10b] and to interpret them with respect to the mechanisms that couple BH growth and their impact on galaxy formation. What implications does the non-causal origin of the scaling relations have for these results? If the mean cosmic [$M_\mathrm{*}$]{} and [$M_\mathrm{BH}$]{} actually evolved similarly, our results explain at least a part of the [*bulge*]{} scaling relation evolution for galaxies with substantial disk components: it is the simple conversion of disk to bulge mass in galaxy mergers. What still remains interesting and needs to be substantiated is how at high redshifts the relation between [$M_\mathrm{BH}$]{} and total [$M_\mathrm{*}$]{} (or even [$M_\mathrm{bulge}$]{} for bulge-dominated galaxies) evolves [e.g. @walt04]. This would continue to predict a substantial early BH growth – with corresponding implications for BH feeding models. One other aspect that could serve as a diagnostic is the evolution of the scaling relation scatter. When coupled with predictions of BH and stellar mass assembly from a proper model, the scatter can be used to study e.g. the distribution of seed [$M_\mathrm{BH}$]{} at early times.
We will follow up on this issue in a future publication. [62]{} natexlab\#1[\#1]{} , M. C., & [Richstone]{}, D. O. 2007, , 665, 120 , I. K., [Glazebrook]{}, K., [Brinkmann]{}, J., [Ivezi[ć]{}]{}, [Ž]{}., [Lupton]{}, R. H., [Nichol]{}, R. C., & [Szalay]{}, A. S. 2004, , 600, 681 , V. N., [Treu]{}, T., [Woo]{}, J., [Malkan]{}, M. A., [Le Bris]{}, A., [Auger]{}, M. W., [Gallagher]{}, S., & [Blandford]{}, R. D. 2010, , 708, 1507 , P. N., [Kaiser]{}, C. R., [Heckman]{}, T. M., & [Kauffmann]{}, G. 2006, , 368, L67 , C. M., & [Schaye]{}, J. 2009, , 398, 53 , N., [Dekel]{}, A., [Genzel]{}, R., [Genel]{}, S., [Cresci]{}, G., [Forster Schreiber]{}, N. M., [Shapiro]{}, K. L., [Davies]{}, R. I., & [Tacconi]{}, L. 2009, ArXiv e-prints , M., [Ma]{}, C., & [Quataert]{}, E. 2008, , 383, 93 , A., & [Tremaine]{}, S. 2010, , 720, 516 , M., [Jahnke]{}, K., [Inskip]{}, K. J., [Kartaltepe]{}, J., [Koekemoer]{}, A. M., [Lisker]{}, T., [Robaina]{}, A. R., [Scodeggio]{}, M., [Sheth]{}, K., [Trump]{}, J. R., [Andrae]{}, R., [Miyaji]{}, T., [Lusso]{}, E., [Brusa]{}, M., [Capak]{}, P., [Cappelluti]{}, N., [Civano]{}, F., [Ilbert]{}, O., [Impey]{}, C. D., [Leauthaud]{}, A., [Lilly]{}, S. J., [Salvato]{}, M., [Scoville]{}, N. Z., & [Taniguchi]{}, Y. 2011, , 726, 57 , D. J., [Springel]{}, V., [White]{}, S. D. M., [De Lucia]{}, G., [Frenk]{}, C. S., [Gao]{}, L., [Jenkins]{}, A., [Kauffmann]{}, G., [Navarro]{}, J. F., & [Yoshida]{}, N. 2006, , 365, 11 , E., [Dickinson]{}, M., [Morrison]{}, G., [Chary]{}, R., [Cimatti]{}, A., [Elbaz]{}, D., [Frayer]{}, D., [Renzini]{}, A., [Pope]{}, A., [Alexander]{}, D. M., [Bauer]{}, F. E., [Giavalisco]{}, M., [Huynh]{}, M., [Kurk]{}, J., & [Mignoli]{}, M. 2007, , 670, 156 , R., [Falomo]{}, R., [Treves]{}, A., [Labita]{}, M., [Kotilainen]{}, J. K., & [Scarpa]{}, R. 2010, , 402, 2453 , A., & [Birnboim]{}, Y. 2006, , 368, 2 , T., [Springel]{}, V., & [Hernquist]{}, L. 2005, , 433, 604 , A. C., [Sanders]{}, J. S., [Taylor]{}, G. B., [Allen]{}, S. W., [Crawford]{}, C. 
S., [Johnstone]{}, R. M., & [Iwasawa]{}, K. 2006, , 366, 417 , A., & [Mancini]{}, L. 2009, , 703, 1502 , A., [Mancini]{}, L., [Marulli]{}, F., & [van den Bergh]{}, S. 2010, General Relativity and Gravitation, 57 Ferrarese, L., & Merrit, D. 2000, , 539, L9 , D. A., & [Kauffmann]{}, G. 2009, , 399, 621 , C. M. 2010, ArXiv e-prints Gebhardt, K., Bender, R., Bower, G., Dressler, A., Faber, S. M., Filippenko, A. V., Green, R., Grillmair, C., Ho, L. C., Kormendy, J., Lauer, T. R., Magorrian, J., Pinkney, J., Richstone, D., & Tremaine, S. 2000, , 539, L13 , S., [Smol[č]{}i[ć]{}]{}, V., [Finoguenov]{}, A., [Boehringer]{}, H., [B[î]{}rzan]{}, L., [Zamorani]{}, G., [Oklop[č]{}i[ć]{}]{}, A., [Pierini]{}, D., [Pratt]{}, G. W., [Schinnerer]{}, E., [Massey]{}, R., [Koekemoer]{}, A. M., [Salvato]{}, M., [Sanders]{}, D. B., [Kartaltepe]{}, J. S., & [Thompson]{}, D. 2010, , 714, 218 , G. L., [De Zotti]{}, G., [Silva]{}, L., [Bressan]{}, A., & [Danese]{}, L. 2004, , 600, 580 , J. E., [Ho]{}, L. C., & [Barth]{}, A. J. 2008, , 688, 159 , K., [Richstone]{}, D. O., [Gebhardt]{}, K., [Lauer]{}, T. R., [Tremaine]{}, S., [Aller]{}, M. C., [Bender]{}, R., [Dressler]{}, A., [Faber]{}, S. M., [Filippenko]{}, A. V., [Green]{}, R., [Ho]{}, L. C., [Kormendy]{}, J., [Magorrian]{}, J., [Pinkney]{}, J., & [Siopis]{}, C. 2009, , 698, 198 , N., & [Rix]{}, H.-W. 2004, , 604, L89 , M., [Khochfar]{}, S., [Burkert]{}, A., [Naab]{}, T., [Genel]{}, S., & [Somerville]{}, R. 2010, ArXiv e-prints , A. M., & [Beacom]{}, J. F. 2006, , 651, 142 , P. F., [Bundy]{}, K., [Croton]{}, D., [Hernquist]{}, L., [Keres]{}, D., [Khochfar]{}, S., [Stewart]{}, K., [Wetzel]{}, A., & [Younger]{}, J. D. 2009, submitted to MNRAS, arxiv/0906.5357 , P. F., [Hernquist]{}, L., [Cox]{}, T. J., [Robertson]{}, B., & [Krause]{}, E. 2007, , 669, 67 , P. F., & [Quataert]{}, E. 
2010, , 407, 1529 , K., [Bongiorno]{}, A., [Brusa]{}, M., [Capak]{}, P., [Cappelluti]{}, N., [Cisternas]{}, M., [Civano]{}, F., [Colbert]{}, J., [Comastri]{}, A., [Elvis]{}, M., [Hasinger]{}, G., [Ilbert]{}, O., [Impey]{}, C., [Inskip]{}, K., [Koekemoer]{}, A. M., [Lilly]{}, S., [Maier]{}, C., [Merloni]{}, A., [Riechers]{}, D., [Salvato]{}, M., [Schinnerer]{}, E., [Scoville]{}, N. Z., [Silverman]{}, J., [Taniguchi]{}, Y., [Trump]{}, J. R., & [Yan]{}, L. 2009, , 706, L215 , P. H., [Burkert]{}, A., & [Naab]{}, T. 2009, , 707, L184 Kauffmann, G., & Haehnelt, M. 2000, , 311, 576 , S., & [Ostriker]{}, J. P. 2008, , 680, 54 , J., & [Kennicutt]{}, Jr., R. C. 2004, , 42, 603 , J., & [Richstone]{}, D. 1995, , 33, 581 , Y., [Mo]{}, H. J., [van den Bosch]{}, F. C., & [Lin]{}, W. P. 2007, , 379, 689 , B., [Monaco]{}, P., [Vanzella]{}, E., [Fontanot]{}, F., [Silva]{}, L., & [Cristiani]{}, S. 2009, , 399, 827 , A. V., [Kang]{}, X., [Fontanot]{}, F., [Somerville]{}, R. S., [Koposov]{}, S., & [Monaco]{}, P. 2010, , 402, 1995 Magorrian, J., Tremaine, S., Richstone, D., Bender, R., Bower, G., Dressler, A., Faber, S. M., Gebhardt, K., Green, R., Grillmair, C., Kormendy, J., & Lauer, T. 1998, , 115, 2285 , A., & [Hunt]{}, L. K. 2003, , 589, L21 McLure, R. J., & Dunlop, J. S. 2002, , 331, 795 , A., [Bongiorno]{}, A., [Bolzonella]{}, M., [Brusa]{}, M., [Civano]{}, F., [Comastri]{}, A., [Elvis]{}, M., [Fiore]{}, F., [Gilli]{}, R., [Hao]{}, H., [Jahnke]{}, K., [Koekemoer]{}, A. M., [Lusso]{}, E., [Mainieri]{}, V., [Mignoli]{}, M., [Miyaji]{}, T., [Renzini]{}, A., [Salvato]{}, M., [Silverman]{}, J., [Trump]{}, J., [Vignali]{}, C., [Zamorani]{}, G., [Capak]{}, P., [Lilly]{}, S. J., [Sanders]{}, D., [Taniguchi]{}, Y., [Bardelli]{}, S., [Carollo]{}, C. 
M., [Caputi]{}, K., [Contini]{}, T., [Coppa]{}, G., [Cucciati]{}, O., [de la Torre]{}, S., [de Ravel]{}, L., [Franzetti]{}, P., [Garilli]{}, B., [Hasinger]{}, G., [Impey]{}, C., [Iovino]{}, A., [Iwasawa]{}, K., [Kampczyk]{}, P., [Kneib]{}, J., [Knobel]{}, C., [Kova[č]{}]{}, K., [Lamareille]{}, F., [Le Borgne]{}, J., [Le Brun]{}, V., [Le F[è]{}vre]{}, O., [Maier]{}, C., [Pello]{}, R., [Peng]{}, Y., [Perez Montero]{}, E., [Ricciardelli]{}, E., [Scodeggio]{}, M., [Tanaka]{}, M., [Tasca]{}, L. A. M., [Tresse]{}, L., [Vergani]{}, D., & [Zucca]{}, E. 2010, , 708, 137 , P., [Fontanot]{}, F., & [Taffoni]{}, G. 2007, , 375, 1189 , P., [Theuns]{}, T., & [Taffoni]{}, G. 2002, , 331, 587 , B. P., [Somerville]{}, R. S., [Maulbetsch]{}, C., [van den Bosch]{}, F. C., [Macci[ò]{}]{}, A. V., [Naab]{}, T., & [Oser]{}, L. 2010, , 710, 903 , C. Y. 2007, , 671, 1098 , C. Y., [Impey]{}, C. D., [Ho]{}, L. C., [Barton]{}, E. J., & [Rix]{}, H.-W. 2006, , 640, 114 , C. Y., [Impey]{}, C. D., [Rix]{}, H.-W., [Kochanek]{}, C. S., [Keeton]{}, C. R., [Falco]{}, E. E., [Leh[á]{}r]{}, J., & [McLeod]{}, B. A. 2006, , 649, 616 , B., [Hernquist]{}, L., [Cox]{}, T. J., [Di Matteo]{}, T., [Hopkins]{}, P. F., [Martini]{}, P., & [Springel]{}, V. 2006, , 641, 90 , M., [Wisotzki]{}, L., & [Jahnke]{}, K. 2008, , 478, 311 , F., [Weinberg]{}, D. H., & [Miralda-Escud[é]{}]{}, J. 2009, , 690, 20 , J., & [Rees]{}, M. J. 1998, , 331, L1 , A. 1982, , 200, 115 , S., [Gebhardt]{}, K., [Bender]{}, R., [Bower]{}, G., [Dressler]{}, A., [Faber]{}, S. M., [Filippenko]{}, A. V., [Green]{}, R., [Grillmair]{}, C., [Ho]{}, L. C., [Kormendy]{}, J., [Lauer]{}, T. R., [Magorrian]{}, J., [Pinkney]{}, J., & [Richstone]{}, D. 2002, , 574, 740 , T., [Malkan]{}, M. A., & [Blandford]{}, R. D. 2004, , 615, L97 , T., [Woo]{}, J.-H., [Malkan]{}, M. A., & [Blandford]{}, R. D. 2007, , 667, 117 , M., [Haardt]{}, F., & [Madau]{}, P. 2003, , 582, 559 , M., & [Natarajan]{}, P. 
2009, , 400, 1911 , F., [Carilli]{}, C., [Bertoldi]{}, F., [Menten]{}, K., [Cox]{}, P., [Lo]{}, K. Y., [Fan]{}, X., & [Strauss]{}, M. A. 2004, , 615, L17 , J.-H., [Treu]{}, T., [Malkan]{}, M. A., & [Blandford]{}, R. D. 2006, , 645, 900 Testing the model: Are the parameter choices special? {#sec:test} ===================================================== [ccccclc]{} A & HB06 & 0.8 & H07 & Y & M$_{G1}$/M$_{G2}$ & B08\ B & [**const.**]{} & 0.8 & H07 & Y & M$_{G1}$/M$_{G2}$ & B08\ C & [**const.**]{} & 0.8 & [**const.**]{} & Y & M$_{G1}$/M$_{G2}$ & B08\ D & HB06 & 0.8 & H07 & [**N**]{} & M$_{G1}$/M$_{G2}$ & B08\ E & HB06 & 0.8 & H07 & Y & M$_{G1}$/M$_{G2}$ & [**B08 $\times 5$**]{}\ F & HB06 & 0.8 & H07 & Y & [**(M$_{G1}$/M$_{G2}$)$^{1/2}$** ]{} & B08\ G & HB06 & 0.8 & H07 & Y & [**(M$_{G1}$/M$_{G2}$)$^2$** ]{} & B08\ H & HB06 & [**1.0**]{} & H07 & Y & M$_{G1}$/M$_{G2}$ & B08\ I & HB06 & [**2.0**]{} & H07 & Y & M$_{G1}$/M$_{G2}$ & B08\ L & HB06 & [**3.5**]{} & H07 & Y & M$_{G1}$/M$_{G2}$ & B08\ \[tab:mods\] ![Models A–D; see Table \[tab:mods\] for model parameters.[]{data-label="fig:modelAD"}](MbMbh400new.ps){width="80.00000%"} ![Models E–G; see Table \[tab:mods\] for model parameters. The reference model A is shown again as a comparison.[]{data-label="fig:modelAG"}](MbMbh400dyn.AG.ps){width="80.00000%"} ![Models H–L; see Table \[tab:mods\] for model parameters. The reference model A is shown again as a comparison.[]{data-label="fig:modelAL"}](MbMbh400dyn.AL.ps){width="80.00000%"} Few (free) parameters enter in our parametrization of star formation, black hole accretion and bulge to disk conversion. In this section we want to explore different choices with respect to our reference model and test their impact on the final results. All different models are listed in Table \[tab:mods\]. Figure \[fig:modelAD\] shows the effect of varying our ‘weighting’ functions. In model B we set $SFR(z)=1.0$, e.g. 
we assume a constant star formation rate as a function of redshift; in model C we also set $AGN_L(z)=1.0$. The resulting [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{} relations are indistinguishable from those of the original model (A). The bottom right panel of Figure \[fig:modelAD\] (model D) shows an even tighter [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{} correlation than our reference model. This is because in model D we remove the stochasticity in the BH mass doubling, forcing all BHs to double their mass every $\tau$ Gyr. This increases the number of doublings in merger-tree branches with short lifetimes, increasing the fraction of BH mass that is accreted through mergers and is hence subject to the central limit theorem. In Figure \[fig:modelAG\] we check the effect of our parametrization of dynamical friction (model E, where the dynamical friction time is multiplied by five) and of our disk-to-bulge conversion, assuming that a fraction of the disk mass proportional to the square root (model F) or to the square (model G) of the merger ratio is promoted into the bulge. Finally, models H–L (Figure \[fig:modelAL\]) test the importance of the dependence of the SFR on stellar mass. In these models we vary the exponent $q$ in equation \[eq:sfr2\]: models H and I do not show any appreciable variation when compared with model A. Model L, which has a wholly unrealistic value of $q$, is the only model in which we were able to break the $M_\mathrm{bulge}$–$M_\mathrm{BH}$ correlation. This result can easily be understood as follows: if star formation is too strong a function of stellar mass, then the vast majority of stars form in the main branch of the tree, which hosts (by definition) the most massive progenitor of our galaxy. This implies that the bulk of the stellar mass is not subject to any averaging process, and hence the central limit theorem does not apply.
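The central-limit argument can be illustrated with a toy calculation (a hypothetical sketch, not part of the model code; the intrinsic scatter `sigma0` is an assumed value): if the final $\log(M_\mathrm{BH}/M_\mathrm{bulge})$ of a galaxy is an average over $N$ merged progenitors, each carrying an independent scatter $\sigma_0$, the residual scatter shrinks as $\sigma_0/\sqrt{N}$; when most stars form in a single branch, $N$ is effectively 1 and no averaging occurs.

```python
import numpy as np

def scatter_after_mergers(n_prog, sigma0=0.6, n_gal=50_000, seed=0):
    """Scatter (dex) of the mean log-ratio of n_prog merged progenitors,
    each drawn with an (assumed) intrinsic scatter sigma0."""
    rng = np.random.default_rng(seed)
    log_ratios = rng.normal(0.0, sigma0, size=(n_gal, n_prog))
    # each mock galaxy averages the log-ratios of its progenitors
    return float(log_ratios.mean(axis=1).std())

for n in (1, 4, 16, 64):
    print(n, round(scatter_after_mergers(n), 3))  # scatter ~ sigma0 / sqrt(n)
```

With 64 merger generations the scatter has dropped by a factor of eight, which is why models with many mergers converge onto a tight relation regardless of the detailed recipes.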
Moreover, given the artificially high fraction of stars produced within the central galaxy, there are no major mergers; this explains why in model L we do not obtain any bulge more massive than $4\times 10^{10}$ [$M_\odot$]{}. All other models are indistinguishable from the reference model A; this underlines once more the converging power of galaxy mergers and shows that the actual implementations of star formation, dynamical friction, BH accretion and bulge formation have only secondary effects.

Scaling relations with stellar mass and halo mass {#sec:stellardm}
=================================================

Hierarchical assembly produces correlations between any two parameters that take part in this cascade. The main focus of this paper lies on the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{} relation, but [$M_\mathrm{BH}$]{} also correlates with total stellar mass and dark matter halo mass. This is shown in the two panels of Figure \[fig:mstellar\], which include the recipes for BH accretion. The clearest and simplest correlation is the one with [$M_\mathrm{DM}$]{} (right panel), which in the simple framework of this toy model has a slope very close to unity, with some scatter and no bending. The relation between [$M_\mathrm{BH}$]{} and total stellar mass (including the SF recipe), shown in the left panel of Figure \[fig:mstellar\], is at the massive end largely identical to the [$M_\mathrm{BH}$–$M_\mathrm{bulge}$]{} view, because most galaxies there have had a sufficient number of minor and major mergers to make them bulge dominated. At the low-mass end the scatter is still larger than at high masses, owing to the smaller number of (averaging) past mergers for each halo, but it is somewhat smaller than for bulge mass, since the extra random element from disk-to-bulge conversion is absent.
This also explains why the plume of seemingly overmassive [$M_\mathrm{BH}$]{} systems at low masses, visible in Figure \[fig:MbMbh1\], is absent here: those systems turn out to be disk galaxies with small bulges and normally sized black holes for their total stellar mass.

![image](MsMbh.ps){width="48.00000%"} ![image](MdmMbh.ps){width="48.00000%"}

[^1]: We picked this mass since at lower masses halos likely do not form stars at all [@macc10]. The most massive progenitors of $z=0$ galaxies form, according to this definition, in the range $z=15$–$17$, while low-mass satellites can form as late as $z\sim3$.

[^2]: The convergence is in fact too strong (see the next sections), as the scaling relation it produces by $z=0$ is much tighter than the observed 0.3 dex scatter. In principle the scatter has a $\sqrt{N}$ dependence on the number $N$ of merger generations, but the relation is complicated by the different masses of the merging components and the different merging times across the tree.

[^3]: We note that we do not include a different shape of this star formation history as a function of mass. As the Appendix shows, this has no effect on the results.

[^4]: Undoubtedly, at least “radio mode” feedback [@crot06] is actually observed in some massive clusters [e.g. @fabi06; @best06] and possibly at the group level [@giod10].
[**Onsets of avalanches in the BTW model**]{}

[Ajanta Bhowal$^*$]{}

[*Institute for Theoretical Physics*]{}\
[*University of Cologne, D-50923 Cologne, Germany*]{}

[**Abstract:**]{} The onsets of toppling and dissipation in the BTW model are studied by computer simulation. The distributions of these two onset times and their dependence on the system size are also studied. Simple power-law dependences of the two times on the system size are observed and the exponents are estimated. The fluctuation of the (spatially) averaged height in the subcritical region is studied and is observed to increase very rapidly near the SOC point.

There exist extended driven dissipative systems in nature which show self-organised criticality (SOC). This phenomenon is characterised by spontaneous evolution into a steady state which shows long-range spatial and temporal correlations. The concept of SOC was introduced by Bak et al. in terms of a simple cellular automaton model \[1\]. The steady-state dynamics of the model shows power laws in the probability distributions for the occurrence of relaxation clusters (avalanches) of a certain size, area, lifetime, etc. Extensive work has been done so far to study the properties of the model in the steady SOC state \[2-10\]. Using the commutative property of the particle-addition operator, this model has been solved exactly \[2\]. Several properties of the critical state, e.g. the entropy, height correlations and height probabilities, have been calculated analytically \[2-3\]. The critical exponents, however, have not been calculated analytically, and hence extensive numerical efforts have been made to estimate the various exponents \[4-7\]. The values of the critical exponents for the size and lifetime distributions of avalanches starting at the boundary have been calculated \[8\]. Recently, the avalanche exponents were estimated using a renormalization scheme \[9\].
Very few efforts have been made to study the systematic evolution of the system towards the SOC state. In the subcritical region the response to a pulsed addition of particles has been studied, and it has been observed \[11\] that the ratio of the response time ($\delta t$) to the perturbation time ($\Delta t$) diverges as the system approaches the critical state (i.e., $R={\delta t \over \Delta t} \sim (z_c -z)^{-\gamma}$), which implies critical slowing down in this model. Grassberger and Manna \[12\] also anticipated this kind of scaling behaviour ($\langle s\rangle, \langle t\rangle \sim (z_c -z)^{-\beta}$), obtained only in two dimensions. In this paper we focus on the subcritical region of the time evolution of the BTW model. We study how the onset time of toppling and that of dissipation (escape of particles through the boundary) vary with the system size, and we obtain the distributions of these two onset times. We also study the growth of the fluctuations of the height variable (averaged over all sites) as the system attains the critical state. The BTW model is a lattice automaton which exhibits the essential dynamics of a system that evolves spontaneously into a critical state. We consider a two-dimensional square lattice of size $L \times L$. The model is defined as follows: with each site $(i,j)$ of the lattice a variable (the so-called height) $z(i,j)$ is associated, which can take non-negative integer values. In every time step, one particle is added to a randomly chosen site according to $$z(i,j)=z(i,j)+1.$$ If at any site the height variable reaches or exceeds a critical value $z_m$ (i.e., if $z(i,j) \geq z_m$), that site becomes unstable and relaxes by toppling.
As an unstable site topples, the value of the height variable of that site is decreased by 4 units and that of each of its four neighbouring sites is increased by unity (local conservation), i.e., $$z(i,j)=z(i,j)-4$$ $$z(i, j \pm 1) = z(i , j \pm 1) +1 ~~{\rm and}~~ z(i \pm 1,j) = z(i\pm 1 ,j ) +1$$ for $z(i,j) \geq z_m$. Each boundary site is attached to an additional site which acts as a sink. We use open boundary conditions, so that the system can dissipate through the boundary. In our simulation we take $z_m = 4$. The main investigations of this paper can be divided as follows: \(1) Studies of the onset times of toppling and dissipation, where we let the system evolve under the BTW dynamics (Eqs. 1–3) starting from an initial condition in which all sites have $z=0$. As time evolves, the height at different sites first increases due to the random addition of particles. As soon as the height at any site reaches (or exceeds) the maximum value ($z_m = 4$), that site topples. We call the time at which toppling starts in the system the onset time of toppling, $T^t_o$. In most cases the interior sites topple first, and only after some time does toppling occur at the boundary sites. The system starts to dissipate (through the boundary) as soon as any boundary site topples. The time taken by the system to start dissipating is called the onset time of dissipation, $T^d_o$. The toppling of a boundary site (which starts the dissipation) can occur in either of two ways: (i) via a primary avalanche initiated at that boundary site, or (ii) via a secondary avalanche following a primary avalanche initiated at an interior site of the lattice. These two onset times may change appreciably as one changes the random sequence of particle additions; as a result, both may have wide distributions.
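The dynamics of Eqs. (1)–(3), together with the two onset times defined above, can be sketched as follows (an illustrative reimplementation, not the author's code; `btw_onset_times` and its arguments are names chosen here):

```python
import numpy as np

def btw_onset_times(L, z_m=4, seed=0):
    """Evolve an L x L BTW sandpile from z=0 with open boundaries.
    Return (T_t, T_d): the first time step at which any site topples,
    and the first time step at which a grain leaves through the boundary."""
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    T_t = T_d = None
    t = 0
    while T_d is None:
        t += 1
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1                      # Eq. (1): random particle addition
        while True:                       # relax until the lattice is stable
            unstable = np.argwhere(z >= z_m)
            if unstable.size == 0:
                break
            if T_t is None:
                T_t = t                   # onset of toppling
            for a, b in unstable:
                z[a, b] -= 4              # Eq. (2)
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:
                        z[na, nb] += 1    # Eq. (3)
                    elif T_d is None:
                        T_d = t           # grain lost to the sink: onset of dissipation
    return T_t, T_d
```

Averaging `btw_onset_times` over many seeds and several values of $L$ yields the sample statistics discussed in the following paragraphs.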
We have studied here the statistical distributions of these two times, $T^t_o$ and $T^d_o$, as well as their dependence on the system size. \(2) Studies of the growth of fluctuations near the SOC point: the average (spatial) value of $z$, i.e., $$\bar z= (1/N)\sum_{i=1}^N z_i ~~~~~~~~~~~~~(N=L^2),$$ increases almost linearly with time in the subcritical region; in the critical state $\bar z$ then becomes steady apart from some fluctuations \[4\]. Grassberger and Manna \[12\] have studied the system-size dependence of the fluctuation of $\bar z$ in the critical state. Here we study how the fluctuation of $\bar z$ changes as the system approaches SOC for a particular size ($L=100$). By the fluctuation of $\bar z$ we mean the standard deviation of $\bar z$, at a particular time, over different random samples, i.e., $$\delta z= \sqrt {{1\over N_s} \sum_{l=1}^{N_s}(\bar z_l -\langle\bar z \rangle )^2 },$$ where $\bar z_l$ is the value of $\bar z$ for the $l$-th random sample and $\langle\bar z\rangle$ is the average of $\bar z$ over the $N_s$ random samples, i.e., $$\langle\bar z \rangle= (1/N_s)\sum_{l=1}^{N_s} \bar z_l .$$ In our simulation, for a fixed system size ($L = 100$), the distributions of $T^t_o$ and $T^d_o$ are obtained from $10^4$ different samples. Figure 1 shows the distribution of the onset time of toppling ($D(T^t_o)$) and that of the onset time of dissipation ($D(T^d_o)$). The onset times for toppling and for dissipation vary over a wide range, but they have well-defined symmetric distributions. The widths of the two distributions differ appreciably: since the number of boundary sites is much smaller than the number of interior sites, the width of the distribution of $T_o^d$ is much larger than that of $T_o^t$.
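The fluctuation measure $\delta z$ defined above is simply the population standard deviation of $\bar z$ over the $N_s$ samples; a minimal sketch (the function name is chosen here for illustration):

```python
import numpy as np

def height_fluctuation(zbar_samples):
    """delta z: standard deviation of the spatially averaged height
    over N_s independent random samples, as defined in the text."""
    zbar = np.asarray(zbar_samples, dtype=float)
    return float(np.sqrt(np.mean((zbar - zbar.mean()) ** 2)))
```

Evaluating this at a fixed evolution time $t$, over samples that differ only in their random addition sequence, gives one point of the $\delta z(t)$ curve.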
The (average) onset times for toppling and for dissipation depend on the linear size $L$ of the system. Log-log plots of these two onset times are shown in Fig. 2. The simulation results show that the dependences are of power-law type, and the exponents are estimated within limited accuracy: $T^t_o \propto L^{a}$ and $T^d_o \propto L^{b}$, with $a \sim 1.48$ and $b \sim 1.70$. These data are obtained by averaging over 100 different samples for $30\leq L\leq 300$. The growth of the fluctuation of $\bar z$ with the evolution time in the subcritical region is plotted in Fig. 3: the fluctuation increases very sharply near the critical (SOC) point. The fluctuations of $\bar z$ are calculated for $L = 100$ using 400 different random samples. We have studied here when toppling and dissipation start in the BTW model. Both onset times have well-defined symmetric distributions and depend on the system size through power laws with different exponents. From Fig. 1 we see that the maximum value of the onset time for dissipation is of the order of $L^2$, at which time the average value of the height variable is of order unity (i.e., $\bar z \sim 1$). Thus there is very little chance that the first toppling of a boundary site occurs through a secondary avalanche triggered by an avalanche initiated at an interior site; almost all boundary-site topplings at the onset of dissipation are due to avalanches initiated at the boundary itself. In this sense, the onset of dissipation can be considered as the onset of toppling of a boundary site (due to an avalanche initiated at that boundary site). It would also be interesting to know whether the system-size dependences of these two onset times show similar power laws in the subcritical region of other SOC models.
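The exponents $a$ and $b$ quoted above can be extracted from the averaged onset times by a straight-line fit in log-log space; a minimal sketch with synthetic data (the data below are illustrative, generated with a known exponent, not the measured values):

```python
import numpy as np

def powerlaw_exponent(sizes, times):
    """Least-squares slope of log(time) versus log(size)."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    return float(slope)

# synthetic onset times generated with exponent 1.48 (illustrative only)
L_vals = np.array([30.0, 60.0, 100.0, 200.0, 300.0])
T_vals = 0.8 * L_vals ** 1.48
print(powerlaw_exponent(L_vals, T_vals))  # recovers the input exponent 1.48
```

Applied to sample-averaged $T^t_o(L)$ and $T^d_o(L)$ from a simulation, the same fit yields estimates of $a$ and $b$.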
The author is grateful to the Institute for Theoretical Physics for giving her the opportunity to use the computer at the University of Cologne, Germany. [99]{} P. Bak, C. Tang and K. Wiesenfeld, Phys. Rev. Lett. [**59**]{} (1987) 381; Phys. Rev. A [**38**]{} (1988) 364. D. Dhar, Phys. Rev. Lett. [**64**]{} (1990) 1613; S. N. Majumdar and D. Dhar, J. Phys. A [**24**]{} (1991) L357. E. V. Ivashkevich, J. Phys. A [**27**]{} (1994) 3643; V. B. Priezzhev, J. Stat. Phys. [**74**]{} (1994) 955. S. S. Manna, J. Stat. Phys. [**59**]{} (1990) 509; Physica A [**179**]{} (1991) 249. V. B. Priezzhev, D. V. Ktitarev and E. V. Ivashkevich, Phys. Rev. Lett. [**76**]{} (1996) 2093. A. Ben-Hur and O. Biham, Phys. Rev. E [**53**]{} (1996) R1317. S. Lübeck and K. D. Usadel, Phys. Rev. E [**55**]{} (1997) 4095. E. V. Ivashkevich, J. Phys. A [**27**]{} (1994) L585. L. Pietronero, A. Vespignani and S. Zapperi, Phys. Rev. Lett. [**72**]{} (1994) 1690; E. V. Ivashkevich, Phys. Rev. Lett. [**76**]{} (1996) 3368. A. Bhowal, Physica A [**247**]{} (1997) 327. M. Acharyya and B. K. Chakrabarti, Physica A [**224**]{} (1996) 254. P. Grassberger and S. S. Manna, J. Phys.
(Paris) [**51**]{} (1990) 1091

[**Fig. 1**]{}. The unnormalized distributions of the onset time of toppling ($T^t_o$) and of the onset time of dissipation ($T^d_o$).
(1500,900)(0,0) =cmr10 at 10pt (176.0,113.0) ------------------------------------------------------------------------ (154,113)[(0,0)\[r\][100]{}]{} (1416.0,113.0) ------------------------------------------------------------------------ (176.0,190.0) ------------------------------------------------------------------------ (1426.0,190.0) ------------------------------------------------------------------------ (176.0,235.0) ------------------------------------------------------------------------ (1426.0,235.0) ------------------------------------------------------------------------ (176.0,266.0) ------------------------------------------------------------------------ (1426.0,266.0) ------------------------------------------------------------------------ (176.0,291.0) ------------------------------------------------------------------------ (1426.0,291.0) ------------------------------------------------------------------------ (176.0,311.0) ------------------------------------------------------------------------ (1426.0,311.0) ------------------------------------------------------------------------ (176.0,328.0) ------------------------------------------------------------------------ (1426.0,328.0) ------------------------------------------------------------------------ (176.0,343.0) ------------------------------------------------------------------------ (1426.0,343.0) ------------------------------------------------------------------------ (176.0,356.0) ------------------------------------------------------------------------ (1426.0,356.0) ------------------------------------------------------------------------ (176.0,368.0) ------------------------------------------------------------------------ (154,368)[(0,0)\[r\][1000]{}]{} (1416.0,368.0) ------------------------------------------------------------------------ (176.0,444.0) ------------------------------------------------------------------------ (1426.0,444.0) 
------------------------------------------------------------------------ (176.0,489.0) ------------------------------------------------------------------------ (1426.0,489.0) ------------------------------------------------------------------------ (176.0,521.0) ------------------------------------------------------------------------ (1426.0,521.0) ------------------------------------------------------------------------ (176.0,546.0) ------------------------------------------------------------------------ (1426.0,546.0) ------------------------------------------------------------------------ (176.0,566.0) ------------------------------------------------------------------------ (1426.0,566.0) ------------------------------------------------------------------------ (176.0,583.0) ------------------------------------------------------------------------ (1426.0,583.0) ------------------------------------------------------------------------ (176.0,598.0) ------------------------------------------------------------------------ (1426.0,598.0) ------------------------------------------------------------------------ (176.0,611.0) ------------------------------------------------------------------------ (1426.0,611.0) ------------------------------------------------------------------------ (176.0,622.0) ------------------------------------------------------------------------ (154,622)[(0,0)\[r\][10000]{}]{} (1416.0,622.0) ------------------------------------------------------------------------ (176.0,699.0) ------------------------------------------------------------------------ (1426.0,699.0) ------------------------------------------------------------------------ (176.0,744.0) ------------------------------------------------------------------------ (1426.0,744.0) ------------------------------------------------------------------------ (176.0,776.0) ------------------------------------------------------------------------ (1426.0,776.0) 
------------------------------------------------------------------------ (176.0,800.0) ------------------------------------------------------------------------ (1426.0,800.0) ------------------------------------------------------------------------ (176.0,821.0) ------------------------------------------------------------------------ (1426.0,821.0) ------------------------------------------------------------------------ (176.0,838.0) ------------------------------------------------------------------------ (1426.0,838.0) ------------------------------------------------------------------------ (176.0,852.0) ------------------------------------------------------------------------ (1426.0,852.0) ------------------------------------------------------------------------ (176.0,865.0) ------------------------------------------------------------------------ (1426.0,865.0) ------------------------------------------------------------------------ (176.0,877.0) ------------------------------------------------------------------------ (154,877)[(0,0)\[r\][100000]{}]{} (1416.0,877.0) ------------------------------------------------------------------------ (176.0,113.0) ------------------------------------------------------------------------ (176,68)[(0,0)[10]{}]{} (176.0,857.0) ------------------------------------------------------------------------ (366.0,113.0) ------------------------------------------------------------------------ (366.0,867.0) ------------------------------------------------------------------------ (477.0,113.0) ------------------------------------------------------------------------ (477.0,867.0) ------------------------------------------------------------------------ (555.0,113.0) ------------------------------------------------------------------------ (555.0,867.0) ------------------------------------------------------------------------ (616.0,113.0) ------------------------------------------------------------------------ (616.0,867.0) 
------------------------------------------------------------------------ (666.0,113.0) ------------------------------------------------------------------------ (666.0,867.0) ------------------------------------------------------------------------ (708.0,113.0) ------------------------------------------------------------------------ (708.0,867.0) ------------------------------------------------------------------------ (745.0,113.0) ------------------------------------------------------------------------ (745.0,867.0) ------------------------------------------------------------------------ (777.0,113.0) ------------------------------------------------------------------------ (777.0,867.0) ------------------------------------------------------------------------ (806.0,113.0) ------------------------------------------------------------------------ (806,68)[(0,0)[100]{}]{} (806.0,857.0) ------------------------------------------------------------------------ (996.0,113.0) ------------------------------------------------------------------------ (996.0,867.0) ------------------------------------------------------------------------ (1107.0,113.0) ------------------------------------------------------------------------ (1107.0,867.0) ------------------------------------------------------------------------ (1185.0,113.0) ------------------------------------------------------------------------ (1185.0,867.0) ------------------------------------------------------------------------ (1246.0,113.0) ------------------------------------------------------------------------ (1246.0,867.0) ------------------------------------------------------------------------ (1296.0,113.0) ------------------------------------------------------------------------ (1296.0,867.0) ------------------------------------------------------------------------ (1338.0,113.0) ------------------------------------------------------------------------ (1338.0,867.0) 
------------------------------------------------------------------------ (1375.0,113.0) ------------------------------------------------------------------------ (1375.0,867.0) ------------------------------------------------------------------------ (1407.0,113.0) ------------------------------------------------------------------------ (1407.0,867.0) ------------------------------------------------------------------------ (1436.0,113.0) ------------------------------------------------------------------------ (1436,68)[(0,0)[1000]{}]{} (1436.0,857.0) ------------------------------------------------------------------------ (176.0,113.0) ------------------------------------------------------------------------ (1436.0,113.0) ------------------------------------------------------------------------ (176.0,877.0) ------------------------------------------------------------------------ (806,23)[(0,0)[$L$]{}]{} (176.0,113.0) ------------------------------------------------------------------------ (477,255) (555,304) (616,334) (666,361) (708,396) (745,413) (777,435) (806,455) (917,511) (996,567) (1057,602) (1107,633) (477,254) (477.00,254.58)(0.836,0.500)[751]{} ------------------------------------------------------------------------ (477.00,253.17)(628.405,377.000)[2]{} ------------------------------------------------------------------------ (477,318)[(0,0)[$+$]{}]{} (555,375)[(0,0)[$+$]{}]{} (616,411)[(0,0)[$+$]{}]{} (666,452)[(0,0)[$+$]{}]{} (708,480)[(0,0)[$+$]{}]{} (745,504)[(0,0)[$+$]{}]{} (777,529)[(0,0)[$+$]{}]{} (806,545)[(0,0)[$+$]{}]{} (917,619)[(0,0)[$+$]{}]{} (996,672)[(0,0)[$+$]{}]{} (1057,714)[(0,0)[$+$]{}]{} (1107,758)[(0,0)[$+$]{}]{} (477,319) (477.00,319.58)(0.728,0.500)[863]{} ------------------------------------------------------------------------ (477.00,318.17)(628.585,433.000)[2]{} ------------------------------------------------------------------------ [**Fig. 2**]{}. 
The onset times, for toppling ($T^t_o$) ($\Diamond$) and dissipation ($T^d_o$) ($+$), are plotted against the system size $L$ in log-log scales. The solid lines are linear best fit. (1500,900)(0,0) =cmr10 at 10pt (220.0,113.0) ------------------------------------------------------------------------ (220.0,113.0) ------------------------------------------------------------------------ (220.0,113.0) ------------------------------------------------------------------------ (198,113)[(0,0)\[r\][0]{}]{} (1416.0,113.0) ------------------------------------------------------------------------ (220.0,240.0) ------------------------------------------------------------------------ (198,240)[(0,0)\[r\][0.5]{}]{} (1416.0,240.0) ------------------------------------------------------------------------ (220.0,368.0) ------------------------------------------------------------------------ (198,368)[(0,0)\[r\][1]{}]{} (1416.0,368.0) ------------------------------------------------------------------------ (220.0,495.0) ------------------------------------------------------------------------ (198,495)[(0,0)\[r\][1.5]{}]{} (1416.0,495.0) ------------------------------------------------------------------------ (220.0,622.0) ------------------------------------------------------------------------ (198,622)[(0,0)\[r\][2]{}]{} (1416.0,622.0) ------------------------------------------------------------------------ (220.0,750.0) ------------------------------------------------------------------------ (198,750)[(0,0)\[r\][2.5]{}]{} (1416.0,750.0) ------------------------------------------------------------------------ (220.0,877.0) ------------------------------------------------------------------------ (198,877)[(0,0)\[r\][3]{}]{} (1416.0,877.0) ------------------------------------------------------------------------ (220.0,113.0) ------------------------------------------------------------------------ (220,68)[(0,0)[0]{}]{} (220.0,857.0) 
------------------------------------------------------------------------ (423.0,113.0) ------------------------------------------------------------------------ (423,68)[(0,0)[5000]{}]{} (423.0,857.0) ------------------------------------------------------------------------ (625.0,113.0) ------------------------------------------------------------------------ (625,68)[(0,0)[10000]{}]{} (625.0,857.0) ------------------------------------------------------------------------ (828.0,113.0) ------------------------------------------------------------------------ (828,68)[(0,0)[15000]{}]{} (828.0,857.0) ------------------------------------------------------------------------ (1031.0,113.0) ------------------------------------------------------------------------ (1031,68)[(0,0)[20000]{}]{} (1031.0,857.0) ------------------------------------------------------------------------ (1233.0,113.0) ------------------------------------------------------------------------ (1233,68)[(0,0)[25000]{}]{} (1233.0,857.0) ------------------------------------------------------------------------ (1436.0,113.0) ------------------------------------------------------------------------ (1436,68)[(0,0)[30000]{}]{} (1436.0,857.0) ------------------------------------------------------------------------ (220.0,113.0) ------------------------------------------------------------------------ (1436.0,113.0) ------------------------------------------------------------------------ (220.0,877.0) ------------------------------------------------------------------------ (828,23)[(0,0)[$t$]{}]{} (220.0,113.0) ------------------------------------------------------------------------ (228,113) (236,113) (244,113) (252,113) (259,113) (265,116) (271,116) (277,116) (283,116) (289,116) (295,116) (301,116) (307,117) (313,117) (319,117) (325,119) (331,119) (338,120) (344,120) (350,121) (356,123) (362,124) (368,124) (374,126) (380,128) (386,129) (392,130) (398,132) (404,133) (411,136) (417,138) (423,140) (429,142) 
(435,142) (441,143) (447,145) (453,145) (459,147) (465,147) (471,149) (477,149) (483,150) (490,151) (496,153) (502,157) (508,157) (514,159) (520,163) (526,165) (532,165) (538,166) (544,170) (550,171) (556,174) (563,178) (569,179) (575,182) (581,182) (587,184) (593,188) (599,187) (605,191) (607,193) (609,194) (611,196) (613,195) (615,198) (617,199) (619,200) (621,200) (623,203) (625,201) (627,202) (629,206) (631,208) (633,212) (635,213) (637,213) (640,214) (642,214) (644,214) (646,215) (648,214) (650,216) (652,218) (654,218) (656,218) (658,218) (660,219) (662,221) (664,221) (666,223) (668,225) (670,225) (672,225) (674,226) (676,228) (678,229) (680,228) (682,229) (684,228) (686,227) (688,227) (690,227) (692,230) (694,232) (696,235) (698,239) (700,239) (702,241) (704,243) (706,243) (708,247) (710,246) (712,247) (715,247) (717,248) (719,251) (721,250) (723,251) (725,252) (727,253) (729,253) (731,257) (733,257) (735,257) (737,257) (739,257) (741,261) (743,259) (745,263) (747,266) (749,265) (751,269) (753,270) (755,270) (757,269) (759,272) (761,273) (763,278) (765,282) (767,284) (769,287) (771,289) (773,290) (775,293) (777,295) (779,298) (781,299) (783,299) (785,303) (787,305) (789,310) (792,312) (794,314) (796,324) (798,325) (800,326) (802,326) (804,330) (806,329) (808,328) (810,331) (812,330) (814,331) (816,333) (818,336) (820,335) (822,339) (824,340) (826,341) (828,341) (830,345) (832,347) (834,351) (836,351) (838,351) (840,356) (842,363) (844,364) (846,367) (848,373) (850,373) (852,377) (854,380) (856,381) (858,385) (860,387) (862,386) (864,389) (867,384) (869,381) (871,380) (873,382) (875,384) (877,382) (879,385) (881,383) (883,393) (885,394) (887,394) (889,392) (891,397) (893,397) (895,401) (897,400) (899,402) (901,403) (903,403) (905,402) (907,410) (909,411) (911,425) (913,427) (915,436) (917,443) (919,442) (921,445) (923,450) (925,445) (927,444) (929,453) (931,456) (933,454) (935,465) (937,470) (939,477) (941,472) (944,476) (946,478) (948,480) (950,475) (952,471) 
(954,475) (956,475) (958,485) (960,487) (962,484) (964,486) (966,492) (968,500) (970,506) (972,509) (974,515) (976,525) (978,525) (980,529) (982,539) (984,542) (986,549) (988,555) (990,560) (992,565) (994,570) (996,574) (998,570) (1000,586) (1002,592) (1004,593) (1006,611) (1008,620) (1010,627) (1012,641) (1014,657) (1016,659) (1019,666) (1021,678) (1023,674) (1025,694) (1027,695) (1029,706) (1031,722) (1033,730) (1035,739) (1037,746) (1039,758) (1041,774) (1043,768) (1045,792) (1047,800) (1049,807) (1051,830) (1053,838) (1055,848) (1057,846) (1059,847) (1061,841) (1063,870) (222,114)[(0,0)[$+$]{}]{} (224,116)[(0,0)[$+$]{}]{} (226,117)[(0,0)[$+$]{}]{} (228,118)[(0,0)[$+$]{}]{} (230,119)[(0,0)[$+$]{}]{} (232,121)[(0,0)[$+$]{}]{} (234,122)[(0,0)[$+$]{}]{} (236,123)[(0,0)[$+$]{}]{} (238,124)[(0,0)[$+$]{}]{} (240,126)[(0,0)[$+$]{}]{} (242,127)[(0,0)[$+$]{}]{} (244,128)[(0,0)[$+$]{}]{} (246,130)[(0,0)[$+$]{}]{} (248,131)[(0,0)[$+$]{}]{} (250,132)[(0,0)[$+$]{}]{} (252,133)[(0,0)[$+$]{}]{} (254,135)[(0,0)[$+$]{}]{} (256,136)[(0,0)[$+$]{}]{} (259,137)[(0,0)[$+$]{}]{} (261,138)[(0,0)[$+$]{}]{} (263,140)[(0,0)[$+$]{}]{} (265,141)[(0,0)[$+$]{}]{} (267,142)[(0,0)[$+$]{}]{} (269,144)[(0,0)[$+$]{}]{} (271,145)[(0,0)[$+$]{}]{} (273,146)[(0,0)[$+$]{}]{} (275,147)[(0,0)[$+$]{}]{} (277,149)[(0,0)[$+$]{}]{} (279,150)[(0,0)[$+$]{}]{} (281,151)[(0,0)[$+$]{}]{} (283,152)[(0,0)[$+$]{}]{} (285,154)[(0,0)[$+$]{}]{} (287,155)[(0,0)[$+$]{}]{} (289,156)[(0,0)[$+$]{}]{} (291,158)[(0,0)[$+$]{}]{} (293,159)[(0,0)[$+$]{}]{} (295,160)[(0,0)[$+$]{}]{} (297,161)[(0,0)[$+$]{}]{} (299,163)[(0,0)[$+$]{}]{} (301,164)[(0,0)[$+$]{}]{} (303,165)[(0,0)[$+$]{}]{} (305,166)[(0,0)[$+$]{}]{} (307,168)[(0,0)[$+$]{}]{} (309,169)[(0,0)[$+$]{}]{} (311,170)[(0,0)[$+$]{}]{} (313,172)[(0,0)[$+$]{}]{} (315,173)[(0,0)[$+$]{}]{} (317,174)[(0,0)[$+$]{}]{} (319,175)[(0,0)[$+$]{}]{} (321,177)[(0,0)[$+$]{}]{} (323,178)[(0,0)[$+$]{}]{} (325,179)[(0,0)[$+$]{}]{} (327,180)[(0,0)[$+$]{}]{} (329,182)[(0,0)[$+$]{}]{} 
(331,183)[(0,0)[$+$]{}]{} (333,184)[(0,0)[$+$]{}]{} (336,186)[(0,0)[$+$]{}]{} (338,187)[(0,0)[$+$]{}]{} (340,188)[(0,0)[$+$]{}]{} (342,189)[(0,0)[$+$]{}]{} (344,191)[(0,0)[$+$]{}]{} (346,192)[(0,0)[$+$]{}]{} (348,193)[(0,0)[$+$]{}]{} (350,194)[(0,0)[$+$]{}]{} (352,196)[(0,0)[$+$]{}]{} (354,197)[(0,0)[$+$]{}]{} (356,198)[(0,0)[$+$]{}]{} (358,200)[(0,0)[$+$]{}]{} (360,201)[(0,0)[$+$]{}]{} (362,202)[(0,0)[$+$]{}]{} (364,203)[(0,0)[$+$]{}]{} (366,205)[(0,0)[$+$]{}]{} (368,206)[(0,0)[$+$]{}]{} (370,207)[(0,0)[$+$]{}]{} (372,208)[(0,0)[$+$]{}]{} (374,210)[(0,0)[$+$]{}]{} (376,211)[(0,0)[$+$]{}]{} (378,212)[(0,0)[$+$]{}]{} (380,214)[(0,0)[$+$]{}]{} (382,215)[(0,0)[$+$]{}]{} (384,216)[(0,0)[$+$]{}]{} (386,217)[(0,0)[$+$]{}]{} (388,219)[(0,0)[$+$]{}]{} (390,220)[(0,0)[$+$]{}]{} (392,221)[(0,0)[$+$]{}]{} (394,222)[(0,0)[$+$]{}]{} (396,224)[(0,0)[$+$]{}]{} (398,225)[(0,0)[$+$]{}]{} (400,226)[(0,0)[$+$]{}]{} (402,228)[(0,0)[$+$]{}]{} (404,229)[(0,0)[$+$]{}]{} (406,230)[(0,0)[$+$]{}]{} (408,231)[(0,0)[$+$]{}]{} (411,233)[(0,0)[$+$]{}]{} (413,234)[(0,0)[$+$]{}]{} (415,235)[(0,0)[$+$]{}]{} (417,236)[(0,0)[$+$]{}]{} (419,238)[(0,0)[$+$]{}]{} (421,239)[(0,0)[$+$]{}]{} (423,240)[(0,0)[$+$]{}]{} (425,242)[(0,0)[$+$]{}]{} (427,243)[(0,0)[$+$]{}]{} (429,244)[(0,0)[$+$]{}]{} (431,245)[(0,0)[$+$]{}]{} (433,247)[(0,0)[$+$]{}]{} (435,248)[(0,0)[$+$]{}]{} (437,249)[(0,0)[$+$]{}]{} (439,250)[(0,0)[$+$]{}]{} (441,252)[(0,0)[$+$]{}]{} (443,253)[(0,0)[$+$]{}]{} (445,254)[(0,0)[$+$]{}]{} (447,256)[(0,0)[$+$]{}]{} (449,257)[(0,0)[$+$]{}]{} (451,258)[(0,0)[$+$]{}]{} (453,259)[(0,0)[$+$]{}]{} (455,261)[(0,0)[$+$]{}]{} (457,262)[(0,0)[$+$]{}]{} (459,263)[(0,0)[$+$]{}]{} (461,264)[(0,0)[$+$]{}]{} (463,266)[(0,0)[$+$]{}]{} (465,267)[(0,0)[$+$]{}]{} (467,268)[(0,0)[$+$]{}]{} (469,270)[(0,0)[$+$]{}]{} (471,271)[(0,0)[$+$]{}]{} (473,272)[(0,0)[$+$]{}]{} (475,273)[(0,0)[$+$]{}]{} (477,275)[(0,0)[$+$]{}]{} (479,276)[(0,0)[$+$]{}]{} (481,277)[(0,0)[$+$]{}]{} (483,278)[(0,0)[$+$]{}]{} 
(485,280)[(0,0)[$+$]{}]{} (488,281)[(0,0)[$+$]{}]{} (490,282)[(0,0)[$+$]{}]{} (492,284)[(0,0)[$+$]{}]{} (494,285)[(0,0)[$+$]{}]{} (496,286)[(0,0)[$+$]{}]{} (498,287)[(0,0)[$+$]{}]{} (500,289)[(0,0)[$+$]{}]{} (502,290)[(0,0)[$+$]{}]{} (504,291)[(0,0)[$+$]{}]{} (506,292)[(0,0)[$+$]{}]{} (508,294)[(0,0)[$+$]{}]{} (510,295)[(0,0)[$+$]{}]{} (512,296)[(0,0)[$+$]{}]{} (514,298)[(0,0)[$+$]{}]{} (516,299)[(0,0)[$+$]{}]{} (518,300)[(0,0)[$+$]{}]{} (520,301)[(0,0)[$+$]{}]{} (522,303)[(0,0)[$+$]{}]{} (524,304)[(0,0)[$+$]{}]{} (526,305)[(0,0)[$+$]{}]{} (528,306)[(0,0)[$+$]{}]{} (530,308)[(0,0)[$+$]{}]{} (532,309)[(0,0)[$+$]{}]{} (534,310)[(0,0)[$+$]{}]{} (536,312)[(0,0)[$+$]{}]{} (538,313)[(0,0)[$+$]{}]{} (540,314)[(0,0)[$+$]{}]{} (542,315)[(0,0)[$+$]{}]{} (544,317)[(0,0)[$+$]{}]{} (546,318)[(0,0)[$+$]{}]{} (548,319)[(0,0)[$+$]{}]{} (550,320)[(0,0)[$+$]{}]{} (552,322)[(0,0)[$+$]{}]{} (554,323)[(0,0)[$+$]{}]{} (556,324)[(0,0)[$+$]{}]{} (558,326)[(0,0)[$+$]{}]{} (560,327)[(0,0)[$+$]{}]{} (563,328)[(0,0)[$+$]{}]{} (565,329)[(0,0)[$+$]{}]{} (567,331)[(0,0)[$+$]{}]{} (569,332)[(0,0)[$+$]{}]{} (571,333)[(0,0)[$+$]{}]{} (573,334)[(0,0)[$+$]{}]{} (575,336)[(0,0)[$+$]{}]{} (577,337)[(0,0)[$+$]{}]{} (579,338)[(0,0)[$+$]{}]{} (581,339)[(0,0)[$+$]{}]{} (583,341)[(0,0)[$+$]{}]{} (585,342)[(0,0)[$+$]{}]{} (587,343)[(0,0)[$+$]{}]{} (589,345)[(0,0)[$+$]{}]{} (591,346)[(0,0)[$+$]{}]{} (593,347)[(0,0)[$+$]{}]{} (595,348)[(0,0)[$+$]{}]{} (597,350)[(0,0)[$+$]{}]{} (599,351)[(0,0)[$+$]{}]{} (601,352)[(0,0)[$+$]{}]{} (603,353)[(0,0)[$+$]{}]{} (605,355)[(0,0)[$+$]{}]{} (607,356)[(0,0)[$+$]{}]{} (609,357)[(0,0)[$+$]{}]{} (611,359)[(0,0)[$+$]{}]{} (613,360)[(0,0)[$+$]{}]{} (615,361)[(0,0)[$+$]{}]{} (617,362)[(0,0)[$+$]{}]{} (619,364)[(0,0)[$+$]{}]{} (621,365)[(0,0)[$+$]{}]{} (623,366)[(0,0)[$+$]{}]{} (625,367)[(0,0)[$+$]{}]{} (627,369)[(0,0)[$+$]{}]{} (629,370)[(0,0)[$+$]{}]{} (631,371)[(0,0)[$+$]{}]{} (633,372)[(0,0)[$+$]{}]{} (635,374)[(0,0)[$+$]{}]{} (637,375)[(0,0)[$+$]{}]{} 
(640,376)[(0,0)[$+$]{}]{} (642,378)[(0,0)[$+$]{}]{} (644,379)[(0,0)[$+$]{}]{} (646,380)[(0,0)[$+$]{}]{} (648,381)[(0,0)[$+$]{}]{} (650,383)[(0,0)[$+$]{}]{} (652,384)[(0,0)[$+$]{}]{} (654,385)[(0,0)[$+$]{}]{} (656,386)[(0,0)[$+$]{}]{} (658,388)[(0,0)[$+$]{}]{} (660,389)[(0,0)[$+$]{}]{} (662,390)[(0,0)[$+$]{}]{} (664,392)[(0,0)[$+$]{}]{} (666,393)[(0,0)[$+$]{}]{} (668,394)[(0,0)[$+$]{}]{} (670,395)[(0,0)[$+$]{}]{} (672,397)[(0,0)[$+$]{}]{} (674,398)[(0,0)[$+$]{}]{} (676,399)[(0,0)[$+$]{}]{} (678,400)[(0,0)[$+$]{}]{} (680,402)[(0,0)[$+$]{}]{} (682,403)[(0,0)[$+$]{}]{} (684,404)[(0,0)[$+$]{}]{} (686,405)[(0,0)[$+$]{}]{} (688,407)[(0,0)[$+$]{}]{} (690,408)[(0,0)[$+$]{}]{} (692,409)[(0,0)[$+$]{}]{} (694,411)[(0,0)[$+$]{}]{} (696,412)[(0,0)[$+$]{}]{} (698,413)[(0,0)[$+$]{}]{} (700,414)[(0,0)[$+$]{}]{} (702,416)[(0,0)[$+$]{}]{} (704,417)[(0,0)[$+$]{}]{} (706,418)[(0,0)[$+$]{}]{} (708,419)[(0,0)[$+$]{}]{} (710,421)[(0,0)[$+$]{}]{} (712,422)[(0,0)[$+$]{}]{} (715,423)[(0,0)[$+$]{}]{} (717,424)[(0,0)[$+$]{}]{} (719,426)[(0,0)[$+$]{}]{} (721,427)[(0,0)[$+$]{}]{} (723,428)[(0,0)[$+$]{}]{} (725,430)[(0,0)[$+$]{}]{} (727,431)[(0,0)[$+$]{}]{} (729,432)[(0,0)[$+$]{}]{} (731,433)[(0,0)[$+$]{}]{} (733,435)[(0,0)[$+$]{}]{} (735,436)[(0,0)[$+$]{}]{} (737,437)[(0,0)[$+$]{}]{} (739,438)[(0,0)[$+$]{}]{} (741,440)[(0,0)[$+$]{}]{} (743,441)[(0,0)[$+$]{}]{} (745,442)[(0,0)[$+$]{}]{} (747,443)[(0,0)[$+$]{}]{} (749,445)[(0,0)[$+$]{}]{} (751,446)[(0,0)[$+$]{}]{} (753,447)[(0,0)[$+$]{}]{} (755,448)[(0,0)[$+$]{}]{} (757,450)[(0,0)[$+$]{}]{} (759,451)[(0,0)[$+$]{}]{} (761,452)[(0,0)[$+$]{}]{} (763,454)[(0,0)[$+$]{}]{} (765,455)[(0,0)[$+$]{}]{} (767,456)[(0,0)[$+$]{}]{} (769,457)[(0,0)[$+$]{}]{} (771,459)[(0,0)[$+$]{}]{} (773,460)[(0,0)[$+$]{}]{} (775,461)[(0,0)[$+$]{}]{} (777,462)[(0,0)[$+$]{}]{} (779,464)[(0,0)[$+$]{}]{} (781,465)[(0,0)[$+$]{}]{} (783,466)[(0,0)[$+$]{}]{} (785,467)[(0,0)[$+$]{}]{} (787,469)[(0,0)[$+$]{}]{} (789,470)[(0,0)[$+$]{}]{} (792,471)[(0,0)[$+$]{}]{} 
(794,472)[(0,0)[$+$]{}]{} (796,474)[(0,0)[$+$]{}]{} (798,475)[(0,0)[$+$]{}]{} (800,476)[(0,0)[$+$]{}]{} (802,477)[(0,0)[$+$]{}]{} (804,479)[(0,0)[$+$]{}]{} (806,480)[(0,0)[$+$]{}]{} (808,481)[(0,0)[$+$]{}]{} (810,483)[(0,0)[$+$]{}]{} (812,484)[(0,0)[$+$]{}]{} (814,485)[(0,0)[$+$]{}]{} (816,486)[(0,0)[$+$]{}]{} (818,488)[(0,0)[$+$]{}]{} (820,489)[(0,0)[$+$]{}]{} (822,490)[(0,0)[$+$]{}]{} (824,491)[(0,0)[$+$]{}]{} (826,493)[(0,0)[$+$]{}]{} (828,494)[(0,0)[$+$]{}]{} (830,495)[(0,0)[$+$]{}]{} (832,496)[(0,0)[$+$]{}]{} (834,498)[(0,0)[$+$]{}]{} (836,499)[(0,0)[$+$]{}]{} (838,500)[(0,0)[$+$]{}]{} (840,501)[(0,0)[$+$]{}]{} (842,503)[(0,0)[$+$]{}]{} (844,504)[(0,0)[$+$]{}]{} (846,505)[(0,0)[$+$]{}]{} (848,506)[(0,0)[$+$]{}]{} (850,508)[(0,0)[$+$]{}]{} (852,509)[(0,0)[$+$]{}]{} (854,510)[(0,0)[$+$]{}]{} (856,511)[(0,0)[$+$]{}]{} (858,513)[(0,0)[$+$]{}]{} (860,514)[(0,0)[$+$]{}]{} (862,515)[(0,0)[$+$]{}]{} (864,516)[(0,0)[$+$]{}]{} (867,518)[(0,0)[$+$]{}]{} (869,519)[(0,0)[$+$]{}]{} (871,520)[(0,0)[$+$]{}]{} (873,521)[(0,0)[$+$]{}]{} (875,523)[(0,0)[$+$]{}]{} (877,524)[(0,0)[$+$]{}]{} (879,525)[(0,0)[$+$]{}]{} (881,526)[(0,0)[$+$]{}]{} (883,528)[(0,0)[$+$]{}]{} (885,529)[(0,0)[$+$]{}]{} (887,530)[(0,0)[$+$]{}]{} (889,531)[(0,0)[$+$]{}]{} (891,533)[(0,0)[$+$]{}]{} (893,534)[(0,0)[$+$]{}]{} (895,535)[(0,0)[$+$]{}]{} (897,537)[(0,0)[$+$]{}]{} (899,538)[(0,0)[$+$]{}]{} (901,539)[(0,0)[$+$]{}]{} (903,540)[(0,0)[$+$]{}]{} (905,542)[(0,0)[$+$]{}]{} (907,543)[(0,0)[$+$]{}]{} (909,544)[(0,0)[$+$]{}]{} (911,545)[(0,0)[$+$]{}]{} (913,546)[(0,0)[$+$]{}]{} (915,548)[(0,0)[$+$]{}]{} (917,549)[(0,0)[$+$]{}]{} (919,550)[(0,0)[$+$]{}]{} (921,551)[(0,0)[$+$]{}]{} (923,553)[(0,0)[$+$]{}]{} (925,554)[(0,0)[$+$]{}]{} (927,555)[(0,0)[$+$]{}]{} (929,556)[(0,0)[$+$]{}]{} (931,558)[(0,0)[$+$]{}]{} (933,559)[(0,0)[$+$]{}]{} (935,560)[(0,0)[$+$]{}]{} (937,561)[(0,0)[$+$]{}]{} (939,563)[(0,0)[$+$]{}]{} (941,564)[(0,0)[$+$]{}]{} (944,565)[(0,0)[$+$]{}]{} (946,566)[(0,0)[$+$]{}]{} 
[**Fig. 3**]{}. The time variations of $<z>$ ($+$) and $\delta z \times 10^3$ ($\Diamond$). At SOC $<z>$ = 2.124 (Ref. \[4\]).
--- abstract: 'We have used the latest HI observations of the Small Magellanic Cloud (SMC), obtained with the Australia Telescope Compact Array and the Parkes telescope, to re-examine the kinematics of this dwarf, irregular galaxy. A large velocity gradient is found in the HI velocity field, with a significant symmetry in iso-velocity contours, suggestive of differential rotation. A comparison of HI data with the predictions from tidal models for the SMC evolution suggests that the central region of the SMC corresponds to the central, disk- or bar-like, component left from the rotationally supported SMC disk prior to its last two encounters with the Large Magellanic Cloud. In this scenario, the velocity gradient is expected as a left-over from the original, pre-encounter, angular momentum. We have derived the HI rotation curve and the mass model for the SMC. This rotation curve rises rapidly to about 60 km s$^{-1}$ up to the turnover radius of $\sim3$ kpc. A stellar mass-to-light ratio of about unity is required to match the observed rotation curve, suggesting that a dark matter halo is not needed to explain the dynamics of the SMC. A set of derived kinematic parameters agrees well with the assumptions used in tidal theoretical models that led to a good reproduction of observational properties of the Magellanic System. The dynamical mass of the SMC, derived from the rotation curve, is $2.4\times10^{9}$ M$_{\odot}$.' author: - 'S. Stanimirović' - 'L. Staveley-Smith' - 'P. A. Jones' title: A New Look at the Kinematics of Neutral Hydrogen in the Small Magellanic Cloud --- Introduction ============ The Small Magellanic Cloud (SMC) is a nearby[^1], gas-rich, dwarf irregular galaxy. Its morphology, dynamics, and evolution are highly complex, and must have been heavily influenced by gravitational interactions with the nearby Large Magellanic Cloud (LMC) and the Galaxy [@Mary-nature]. 
As a dwarf irregular galaxy, the SMC is different from our own Galaxy in many respects, having a lower heavy element abundance, significantly lower dust content, and a consequently stronger interstellar radiation field [@Stanimirovic99; @Stanimirovic01]. Many recent high resolution studies have shown that dwarf galaxies in general have a very dynamic interstellar medium (ISM), structured mainly by star formation and its aftermath. The most obvious examples of this star-formation activity are the numerous expanding shells of gas [@Puche92; @Staveleyetal97; @Kim99; @Stanimirovic99; @Walter99]. Dwarf galaxies are also usually found to have large, and dynamically important, halos of dark matter [@Mateo98]. The relative roles of galactic rotation, pressure support, dark matter, and magnetic fields in the 3-D structure and dynamics of these galaxies are still open questions. Furthermore, it is not yet clear whether different stages of galactic evolution are marked by fundamentally different phenomena, and whether these phenomena are selective with respect to spatial scales. It has been known for some time that the morphology and kinematics of the SMC traced by different stellar populations show very different properties [@Gardiner92]. This has been one of the major reasons to start thinking about forces other than gravity that may have played a significant role in the formation and evolution of the whole Magellanic System. The questions of morphology and kinematics are closely related to the long-standing and greatly controversial questions of the SMC’s 3-D structure and depth along the line-of-sight. As the results of several new optical and near-IR surveys become available [@Zaritsky00; @Cioni00; @Maragoudaki01], revealing the structural evolution of the SMC, as well as new N-body simulations of the dynamical evolution of the SMC [@Yoshizawa03], we find it timely to re-examine the morphology and kinematics of the SMC as traced by neutral hydrogen (HI). 
The mapping of HI in the SMC has a long and productive history. After the pioneering work by @KerrHindmanRobinson54 and @Hindmanetal63, @Hindman67 was the first to notice the velocity gradient in the SMC and to model its rotation curve, followed by @BajajaLoiseau82. However, the velocity field of the SMC is far from a simple text-book example. The HI profiles, often complex and with multiple peaks, have caused much controversy in the past, being interpreted as due to either expanding shells of gas or spatially separate systems [@Hindman67; @MathewsonFordVisvanathan88; @Martin89]. An improvement by a factor of ten in spatial resolution, relative to previous surveys with the Parkes telescope, was achieved in the HI survey by @Staveleyetal97, using the Australia Telescope Compact Array (ATCA). These new high resolution data showed that much of the HI profile complexity lies in the huge number ($\sim$ 500) of expanding shells. The new high resolution HI observations were complemented with new low resolution observations obtained with the Parkes telescope to provide information over a continuous range of spatial scales from 30 pc to 4 kpc [@Stanimirovic99]. The aim of this paper is to re-examine the HI kinematics of the SMC, as viewed from high resolution observations, and compare it with predictions from tidal theoretical models. We start by summarizing the HI observations in Section \[s:hi-data\]. The morphology and kinematics of the SMC from the HI distribution, as well as viewed using other tracers, are discussed in Section \[s:overview\]. The SMC 3-D structure and line-of-sight depth are reviewed briefly in Section \[s:3D-structure\]. We then compare HI data with predictions from several tidal models in Section \[s:theoretical-models\]. The rotation curve and mass model of the SMC are investigated in Section \[s:rotation-curve\]. Finally, a summary and concluding remarks are given in Section \[s:summary\]. 
HI data {#s:hi-data} =======

| Property | Value |
|---|---|
| **Radio, measured:** | |
| RA (J2000)$^{a}$ | 01$^{\rm h}$ 05$^{\rm m}$ |
| Dec (J2000)$^{a}$ | $-72^{\circ}$ $25'$ |
| Systemic velocity$^{b}$, $V_{\rm sys}$ (Gal.) | 24 km s$^{-1}$ |
| Systemic velocity$^{b}$, $V_{\rm sys}$ (Hel.) | 160 km s$^{-1}$ |
| HI mass$^{c}$, $M_{\rm HI}$ | $4.2 \times 10^{8}$ M$_{\odot}$ |
| **Radio, model-dependent:** | |
| Inclination$^{d}$, $i$ | $40^{\circ}\pm20^{\circ}$ |
| Kinematic position angle$^{d}$, PA | $40^{\circ}$ |
| $V_{\rm max}^{d}$ | 60 km s$^{-1}$ |
| $R_{\rm max}^{d}$ | 2.5 – 3 kpc |
| $M_{\rm dyn}^{d}$ | $2.4\times10^{9}$ M$_{\odot}$ |
| **Optical:** | |
| Extinction$^{e}$, $E_{\rm B-V}$ | $\sim0.05$–0.25 mag |
| Total visible magnitude$^{f}$, $V_{\rm T}$ | $+2.4$ mag |
| Absolute visible magnitude$^{f}$, $M_{\rm V}$ | $-16.5$ |
| Visible luminosity$^{f}$, $L_{\rm V}$ | $3.1\times10^{8}$ L$_{\odot}$ |
| $M_{\rm HI}/L_{\rm V}$ | 1.4 M$_{\odot}$/L$_{\odot}$ |
| $(B-V)_{\rm T}^{0}$ | 0.41 |
| $(U-B)_{\rm T}^{0}$ | $-0.23$ |

The HI in the SMC was observed with the ATCA[^2], a radio interferometer, in a mosaicing mode (Staveley-Smith et al. 1997). Observations of the same area were also obtained with the 64-m Parkes telescope. The two sets of observations were then combined (Stanimirović et al. 1999), resulting in the final HI data cube with an angular resolution of 98 arcsec, a velocity resolution of 1.65 km s$^{-1}$, and a 1-$\sigma$ brightness-temperature sensitivity of 1.3 K over the full range of spatial scales between 30 pc and 4 kpc. The area covered by these observations is RA $00^{\rm h} 30^{\rm m}$ to $01^{\rm h} 30^{\rm m}$ and Dec $-71^{\circ}$ to $-75^{\circ}$ (J2000), over a velocity range of 90 to 215 km s$^{-1}$. For details about the ATCA and Parkes observations, data processing, and data combination (short-spacings correction) see Staveley-Smith et al. (1997) and Stanimirović et al. (1999). Structure and kinematics of the SMC {#s:overview} =================================== HI distribution --------------- Fig. \[f:HI-density\] shows the integrated HI column-density distribution. 
The large-scale HI morphology of the SMC is quite irregular and does not show obvious symmetry. The most prominent features are the elongation from the north-east to the south-west and the V-shaped concentration at the east (RA 01$^{\rm h}$ 14$^{\rm m}$, Dec $-73^{\circ}$ 15$'$, J2000). These are usually referred to as the ‘bar’ and the Eastern Wing, although their dynamical importance is still not well understood. A ‘bridge’ of gas appears to connect the ‘bar’ and the Wing (note that this is not the Magellanic Bridge that connects the SMC and the LMC), while the arm-like extension of the ‘bar’ towards the north-east is also prominent. On smaller scales and looking at different velocity channels (see Stanimirović et al. 1999), the HI distribution appears very complex and frothy, being dominated by numerous expanding shells, filaments and arcs. The total estimated mass of the HI, after correction for self-absorption, is $4.2 \times 10^{8}$ M$_{\odot}$ (Stanimirović et al. 1999). A summary of radio and optical properties of the SMC is given in Table \[t:summary\]. The HI column density distribution is super-imposed on the V-band optical image of the SMC, kindly provided to us by M. Bessell, in Fig. \[f:v-HI\]. In general, in the ‘bar’ and the Eastern Wing the stellar and HI distributions correlate well; however, the HI is significantly more extended towards the south-west, the south-east, and the north-west. HI kinematics {#s:profile-analysis} ------------- The intensity-weighted mean velocity along each line-of-sight, or the first moment map, was derived from the full resolution HI data cube (corrected for short spacings), and is shown in Fig. \[f:1mom-mean\]. A large velocity gradient, from 91 km s$^{-1}$ in the south-west to 200 km s$^{-1}$ in the north-east, can be seen. The iso-velocity contours show some large-scale symmetry suggestive of differential rotation for the main gaseous body of the SMC. 
However, clear distortions are visible in the north-west, most likely corresponding to several shells and filamentary features aligned into a chimney-like structure at velocity $\sim$ 123 km s$^{-1}$ (Fig. 2 in Stanimirović et al. 1999), and in the south-east, towards the Eastern Wing region, where again a supergiant shell was found (494A; see Stanimirović et al. 1999). These perturbations form an S-shaped feature perpendicular to the direction of the main velocity gradient (as shown in Fig. \[f:1mom-mean\] with a dashed line). The second moment map, or the intensity-weighted velocity dispersion, is shown in Fig. \[f:dispersion-mean\]. This velocity dispersion varies from $\sim$5 to 40 km s$^{-1}$ across the SMC. The regions with higher dispersion appear to be associated with the positions of the three largest supergiant shells (SGSs). The region around RA $01^{\rm h}$ $00^{\rm m}$, Dec $-71^{\circ} 30'$ (J2000) has the lowest dispersion, with a mean value of $\sim$10 km s$^{-1}$. As this region corresponds to the receding hemisphere of the SGS 304A (Stanimirović et al. 1999), the lower velocity dispersion may be explained by the fact that most of the approaching hemisphere of SGS 304A is missing on the north side. We have investigated two effects that could significantly influence the observed velocity field shown in Fig. \[f:1mom-mean\]: (i) the effect of the proper motion of the SMC, and (ii) the effect of multiple-peak velocity profiles.\
(i) The binary motion of the SMC around the Galaxy, combined with its large angular extent, could make a significant contribution to the observed velocity field, as was shown for the case of the LMC by @LuksRohlfs92 and @Kim98. This contribution consists of the projection of the transverse velocity of the SMC’s center of gravity on the line-of-sight at each position. In the case of a large angular extent on the sky, this projection can vary greatly across the field of observation and cause a significant change in the observed velocity field. 
To correct for this effect, we used values for the SMC’s proper motion and the heliocentric transverse velocity predicted by numerical simulations by @Gardiner94, which are in agreement with the estimates based on a combination of proper motion measurements by @Kroupa94 and @KroupaBastian97. At any position across the SMC, the proper-motion corrected heliocentric velocity was obtained by subtracting the projection of the heliocentric transverse velocity of the center of the SMC on the line-of-sight from the observed radial heliocentric velocity. The values for the conversion between the observed and the proper-motion corrected heliocentric velocity range between $-23$ km s$^{-1}$, in the north-west, and 28 km s$^{-1}$, in the south-east. The correction for the proper motion, however, did not make as big a difference as in the case of the LMC [@LuksRohlfs92]. The reason is that the angular extent of the SMC is less than half that of the LMC. In addition, the direction of the SMC’s motion is perpendicular to the observed velocity gradient, resulting in the gradient being almost unaffected by the proper motion. \(ii) Velocity profiles in the SMC are usually very complex, having double or even multiple-peak components. Since the intensity-weighted mean velocity (Fig. \[f:1mom-mean\]) is biased towards the velocity component with higher intensity, it is not necessarily the best representation of complex velocity profiles. To avoid this bias, we found, for each line-of-sight, the minimum and maximum velocity for which the intensity is 5% of the peak value for that line-of-sight. The mean of the minimum and maximum velocity, determined in this way, was then taken to be our new velocity estimate. Since such an analysis requires a high signal-to-noise ratio, we only used the low resolution Parkes data cube. The resultant velocity field has a more regular ‘spider’ pattern than the mean one; however, irregularities are still present in the north and close to the Eastern Wing region. 
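The 5% cut-off estimate described above can be sketched in a few lines. This is a minimal illustration, not the survey pipeline: the function name and the synthetic double-peaked profile (component velocities and widths loosely inspired by the values quoted later in the text) are hypothetical.

```python
import numpy as np

def cutoff_velocity(vel, spectrum, frac=0.05):
    """Mean of the minimum and maximum velocities at which the
    intensity reaches `frac` of the profile peak (5% cut-off)."""
    above = spectrum >= frac * spectrum.max()
    v_sel = vel[above]
    return 0.5 * (v_sel.min() + v_sel.max())

# Synthetic double-peaked profile (illustrative only): the
# intensity-weighted mean is pulled toward the stronger peak,
# while the cut-off estimate sits between the two components.
vel = np.linspace(90.0, 215.0, 500)                       # km/s
prof = (np.exp(-0.5 * ((vel - 137.0) / 8.0) ** 2)
        + 0.4 * np.exp(-0.5 * ((vel - 174.0) / 8.0) ** 2))

weighted = np.sum(vel * prof) / np.sum(prof)              # first-moment value
v_est = cutoff_velocity(vel, prof)
print(f"cut-off: {v_est:.1f} km/s, first moment: {weighted:.1f} km/s")
```

For this profile the cut-off velocity lands roughly midway between the two peaks, a few km s$^{-1}$ above the first-moment value, which is the bias-reducing behaviour the method is designed for.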
We also investigated the existence of a bimodal velocity field, by modeling each velocity profile in the Parkes data cube with one or a superposition of two independent Gaussian functions. As a result, two separate velocity structures were identified, with central velocities corresponding to the low and high velocity entities of 137 km s$^{-1}$ and 174 km s$^{-1}$, respectively. The position-velocity diagrams show almost parallel velocity fields for these two components. However, in many cases the velocity fields intersect, suggesting that the two separate velocity components may be a consequence of the statistical treatment of complex HI profiles, with no substantial physical meaning. The proper-motion corrected velocity field in the Galactocentric velocity reference frame, derived from the Parkes HI data cube using the 5% cut-off described in (ii), is shown in Fig. \[f:gal\_velfield\]. A clear velocity gradient is noticeable from $-15$ km s$^{-1}$, in the south-west, to 53 km s$^{-1}$, in the north-east. The positions of the three largest shells in the SMC are also overlaid in Fig. \[f:gal\_velfield\], suggesting that supershells 304A and 494A may still be responsible for some of the perturbations in the velocity field. Some perturbations are also visible in the north-west. Comparison with other tracers {#s:other-tracers} ----------------------------- ### SMC morphology from stellar populations It has been known for some time that young and old stellar populations in the SMC have significantly different spatial distributions (e.g. @Gardiner92). Several recent stellar surveys have especially enhanced our knowledge about the different morphological properties of different stellar populations. The recent stellar survey of the central SMC ($4^{\circ}\times4^{\circ}$) by @Zaritsky00 showed that while young stars (with ages $\la 200$ Myr) exhibit an irregular distribution, similar to that seen in HI, the old stellar population (with ages $\ga 1$ Gyr) shows a regular, undisturbed, and almost spheroidal distribution. 
Similar results were reached from the [DENIS]{} survey by @Cioni00 based on $IJK_{s}$ IR bands. These studies also pointed out that the ‘bar’ and the Eastern Wing region may not be distinct dynamical entities, but could be a result of recent hydrodynamic interactions between the LMC and SMC’s gaseous components. @Maragoudaki01 further investigated the dynamical origin of the ‘bar’ by using isodensity contour maps of stars with different ages. They found similar results concerning old stellar populations. In addition, their data show that the ‘bar’ and the Eastern Wing region were already prominent features some 0.3–0.4 Gyr ago; however, it is not clear whether this is a temporary extended region of star formation or a genuine, dynamical feature. ### SMC kinematics from stellar populations {#s:other-tracers-kinematics} The intermediate age and old stellar populations, traced by planetary nebulae [@Dopita85b] and carbon stars [@Hardy89; @Hatzidimitriou97; @Hatzidimitriou99; @Kunkel00], do not show signs of rotation. The red horizontal branch clump stars in the north-east show a velocity gradient of 70 km s$^{-1}$, which was interpreted as due to a distance spread of 10 kpc along the line-of-sight (Hatzidimitriou et al. 1993). The young stellar population, traced by HI shells, shows a clear velocity gradient from the south-west to the north-east [@Staveleyetal97], while Cepheids may have a similar trend with velocity [@MathewsonFordVisvanathan88]. All populations show similar mean heliocentric velocity and velocity dispersion regardless of age and location [@Hatzidimitriou99; @Staveleyetal97]. From an analysis of 150 carbon stars, @Kunkel00 estimated the inclination of the SMC orbital plane at $i=73^{\circ}\pm4^{\circ}$ relative to the plane of the sky. They found that carbon stars are associated with more negative velocity and denser HI, while low density HI seems to be devoid of carbon stars. 
This suggests that non-gravitational forces must be acting on the gaseous component to provide its separation from the stellar systems. Similar estimates for the SMC’s inclination were reached using Cepheids by @Caldwell86, $i=70^{\circ} \pm 3^{\circ}$, and by @Groenewegen00, $i=68^{\circ} \pm 2^{\circ}$. The 3-D structure of the SMC {#s:3D-structure} ============================ Previous observational models ----------------------------- The 3-D structure of the SMC has been a matter of great controversy in the past. Since the early mapping of HI, complex HI profiles have pointed to the existence of unusual motions of the gas [@Hindman67]. @Hindman67 also suggested the first model of the SMC, as a flattened disk with three supergiant shells in the main body. From the analysis of the radial velocity distributions of HI, stars, HII regions, and planetary nebulae, @MathewsonFord84 and Mathewson, Ford & Visvanathan (1986, 1988) suggested the ‘two separate entities’ model for the SMC, whereby the SMC is, along most of its angular extent, broken into two velocity subsystems (the Small Magellanic Cloud Remnant, SMCR, and the Mini Magellanic Cloud, MMC). In this model, the SMCR and MMC are separated in velocity by $\sim$ 40 km s$^{-1}$, and have their own nebular and stellar populations. The reason proposed for this disruption of the SMC was its close encounter with the LMC some 2$\times$10$^{8}$ yr ago. A slight modification of the ‘two separate entities’ model was suggested by @Torres87 and @Martin89, who found four, instead of two, different velocity components. @Caldwell86 also used Cepheids to suggest the ‘bar and three arms’ geometrical model of the SMC. The central bar is very elongated (5-to-1) and is seen edge-on. A distant arm of material is pulled from the center of the SMC to the west, while two sides of the bar, located in the north-east and the south-west, are considered as two near arms. 
Depth along the line-of-sight {#s:smc-depth} ----------------------------- The question of the 3-D structure of the SMC is closely related to the long-standing and greatly controversial issue of the SMC’s depth along the line-of-sight. By measuring distances and radial velocities of 161 Cepheids, @MathewsonFordVisvanathan86 found a great depth along the line-of-sight of $\sim$30 kpc, and a distance gradient from the north-east to the south-west. The large depth of the SMC was supported later by @MathewsonFordVisvanathan88, using Cepheids again, and by @Hatzidimitriou89 and @Hatzidimitriou93 from a study of the intermediate-age (halo) population. @Welch87 pointed out that the determination of the Cepheid distances in @MathewsonFordVisvanathan86 suffers from a few possible problems: an inconsistent correction to mean magnitude, the assumption of zero intrinsic scatter for the period-luminosity (P-L) relation, and sample inhomogeneity. @Welch87 concluded that the SMC does not extend in depth beyond its tidal radius (4 – 9 kpc). @Martin89 confirmed that the young stars in the SMC lie within a depth of $<10$ kpc. On the other hand, @Groenewegen00 used near-infrared observations of Cepheids, which are less affected by reddening and metallicity, and derived a depth for the SMC of 14 kpc, assuming that the P-L relation has the same scatter for both the LMC and the SMC. Although Cepheids are used as the main distance indicators, the P-L relation requires an independent zero-point calibration, which is very difficult to achieve, and could suffer from a significant metallicity dependence. Another source of error in the distance determination, which has not previously been fully appreciated, is differential extinction toward the SMC. While reddening of Cepheids in the SMC has been extensively studied (see @Welch87 for a summary), a constant value has often been applied for distance determination. 
For example, @MathewsonFordVisvanathan86 found no sign of differential reddening and assumed a uniform correction of 0.06 mag for their sample of 161 Cepheids. Recently, @Zaritsky02 determined the extinction map across the SMC, using the Magellanic Clouds Photometric Survey, and showed that extinction varies both spatially across the SMC and with stellar population. They found that young, hot stars have an average extinction 0.3 mag higher than old, cool stars, and that young stars show a significant increase in extinction along the main SMC ridge, from the north-east to the south-west. @Zaritsky02 found $E_{\rm B-V} \sim$ 0.05 – 0.25 mag. For comparison, the mean value for the interstellar reddening previously measured by @Caldwell85 was $E_{\rm B-V}=(0.054 \pm 0.021)$ mag, while @Sasselov97 found a slightly higher value of $E_{\rm B-V} = (0.125 \pm 0.009)$ mag. The differential extinction across the SMC has a significant influence on the distance estimate. Differential reddening toward Cepheids in the south-west part of the SMC is $\sim0.2$ mag higher than toward those in the north-east. This results in the interstellar infrared absorption being larger in the south-west by $\sim$0.4 mag. Hence, the Cepheid distances estimated by @MathewsonFordVisvanathan88 could be over-estimated by up to 18%. This corresponds to about 10 kpc at a distance of 60 kpc, and suggests that the correction for interstellar absorption can significantly influence the distance determination, and easily bring the depth of the SMC within its tidal radius (4 – 9 kpc). Comparison with theoretical models {#s:theoretical-models} ================================== There are two families of theoretical models, based on either a tidal or a ram pressure scenario, that try to reproduce the observational features in the Magellanic System caused by interactions between the SMC, LMC, and the Galaxy. 
The 3-D structure and kinematics of the SMC were particularly addressed in tidal models by @Gardiner94, @Gardiner96, and @Yoshizawa03. The recent work by @Yoshizawa03 provides, to date, the most detailed model for the evolution of the SMC. We summarize here results from these tidal simulations and their major predictions concerning the structure and kinematics of the SMC. It is important to note that none of the theoretical models so far predicts the bimodal velocity distribution throughout the main gaseous body of the SMC. This gives support to the idea that most, if not all, of the observed line-splitting comes from the combined effects of numerous expanding shells (Staveley-Smith et al. 1997). Model predictions ----------------- The above three models come from N-body simulations of the gravitational interactions in the Galaxy-LMC-SMC system, aimed at reproducing the observed gas distribution in the Magellanic System, primarily that of the Magellanic Stream and Bridge. While @Gardiner94 modeled the SMC as a single-component disk-like system, in simulations by @Gardiner96 the SMC was represented by a two-component particle system consisting of a nearly spherical halo and a rotationally supported disk, with a disk-to-halo mass ratio of 1:1. This high mass ratio resulted in the original disk quickly becoming unstable, and being transformed into a bar-like structure. The tidal model by @Yoshizawa03 included, for the first time, gas dynamics and star formation processes, while also representing the SMC as a rotationally supported exponential disk with a nearly spherical halo, with a disk-to-halo mass ratio of 3:7. The significantly lower disk mass resulted in the disk being stable against bar instabilities. Following typical findings for Magellanic-type galaxies, a slowly rising rotation curve was assumed in both models, with a turnover radius of 3.5 kpc and a maximum rotation velocity of 50 km s$^{-1}$. 
The best spatial orientation of the original SMC disk was determined in @Gardiner96 to have an inclination $i=45^{\circ}$ and major axis $\theta=230^{\circ}$, in order to match the observed gas and stellar distributions. These values were adopted by @Yoshizawa03 and the simulations were repeated for different star formation parameters. The major and common result of all three models is that the current 3-D structure of the SMC is composed of a central, disk- or bar-like, component and two tidal, spiral-arm-like tails. These tidal tails extend into the Magellanic Bridge, were formed 200 Myr ago, and are seen in both gas and stellar components. In particular, simulations by @Yoshizawa03 show a great morphological change of the initial SMC gas disk: from its original size of about 10 kpc in diameter, the disk shrank, forming first the Magellanic Stream and the Leading Arm (about 1.5 Gyr ago), and then later the two tidal arms that form the Magellanic Bridge (about 200 Myr ago). At the end of the simulation the gas disk is still present but is now significantly smaller in size, approximately by a factor of 2.5. The current SMC consists of the following components. 1. The left-over [**disk-like component**]{}, located at 0$^{\rm h} 15^{\rm m}<$RA$<1^{\rm h}$, is not greatly elongated along the line-of-sight, with the stellar disk having a higher distance dispersion than the gas disk. Kinematically, the gas disk has a significant velocity gradient, from $\sim$80 to 180 km s$^{-1}$. The left-over stellar disk, on the other hand, shows high dispersion in the velocity field but no significant velocity gradient. In simulations by @Gardiner96, this is a bar-like feature, slightly bigger and centered more westward, covering 0$^{\rm h} 30^{\rm m}<$RA$<1^{\rm h} 30^{\rm m}$, with a slightly wider velocity range. 2. The [**eastern tail**]{}, starting around RA $\sim1^{\rm h}$, extending into the Magellanic Bridge, and covering a distance range from 55 to 40 kpc. 
This feature starts at a heliocentric velocity of about 170 km s$^{-1}$ and gradually decreases to about 120 km s$^{-1}$ at RA $>2^{\rm h}$. It is seen in both gas and stars, but the gas tail is more extended. 3. The [**western tail**]{}, starting around RA $\sim0.5^{\rm h}$ and covering a distance range from 55 to almost 80 kpc. Kinematically, this feature starts westward from the main disk-like component at a velocity of about 80 km s$^{-1}$, but then turns east, passing the main component with increasing velocity all the way to $>250$ km s$^{-1}$ around RA $>2^{\rm h}$. The stellar western tail follows the gas one and is less extended. Hence, the two tidal tails contribute mostly to the SMC’s elongation along the line-of-sight, while the SMC disk-like component is not significantly elongated ($\sim$5 kpc). The best simulation by @Yoshizawa03 reproduces a large number of observational properties in the Magellanic System, suggesting that most of them may be predominantly of tidal origin. Very importantly, a good morphological and kinematic reproduction of the Magellanic Stream with almost no stars was achieved for the first time, by using a very compact initial configuration of the SMC’s stellar disk. The major shortcoming of the model is its failure to reproduce the gas masses and mass ratios in the Magellanic Stream, Bridge, and the SMC. This may be related to the need for a more massive initial disk of the SMC, or a different halo-to-disk mass ratio, and requires further investigation. In addition, as pointed out by @Putman03, a number of detailed features related to the Magellanic Stream (e.g. its double-helix appearance and numerous small clouds surrounding its main filaments) need to be explained. It has also been shown that forces other than gravity must play a significant role in the kinematics of the SMC [@Kunkel00; @Zaritsky00]. The results of tidal numerical simulations show that the current central gaseous body of the SMC ($\sim4$ degrees in extent) has a significant velocity gradient. 
@Gardiner96 explained this gradient as being due to the elongation of the SMC’s bar-like component along the line-of-sight. However, a significant angular momentum left from before the last two close encounters with the LMC is expected to be present, if the pre-encounter SMC disk had an internal spin. The N-body simulations by @Mayer01 showed that in general disk-like dwarf irregular galaxies undergo a great morphological transformation due to close encounters with the more massive Milky Way. While dwarfs lose their mass through tidal stripping, their angular momentum gets gradually removed over a period of 6–10 Gyr. This transformation is gradual, and on short timescales ($\sim2$ Gyr) a significant fraction of the angular momentum ($>$80%) is still preserved. Focusing especially on the SMC case, a similar conclusion was reached by @Kunkel00, who found that it is extremely hard for the SMC to lose its pre-encounter angular momentum during an encounter with the LMC. Comparison with HI ------------------ In Section \[s:profile-analysis\] we showed that the HI distribution contains a large velocity gradient and some signatures of ordered, systematic motions. This agrees with theoretical simulations, which show that the current central gaseous body of the SMC should still possess angular momentum left over from before the last two close encounters with the LMC. This angular momentum should be less significant in the case of older stellar populations, as has indeed been found from observations (see Section \[s:other-tracers\] for a summary). We now compare RA-velocity slices through the HI data cube, shown in Fig. \[f:ra\_vel1\], with the theoretical predictions, shown in Fig. 17 of @Yoshizawa03 and Fig. 10 of @Gardiner96. The area enclosed in our observations contains only the central region of the SMC ($4.5^{\circ}\times4.5^{\circ}$). Images in Fig. \[f:ra\_vel1\] show coherent large-scale features with a wealth of small-scale structure. 
Many of the features have already been interpreted as expanding shells of gas (Staveley-Smith et al. 1997; Stanimirović et al. 1999). At a low Dec of $\sim-74^{\circ}$, the two most dominant features are the supergiant shells 37A (RA 01$^{\rm h}$ 25$^{\rm m}$) and 494A (00$^{\rm h}$ 35$^{\rm m}$). These shells were investigated in Stanimirović et al. (1999). Around Dec $\sim-73^{\circ}20'$, Fig. \[f:ra\_vel1\] shows a bar-like HI emission feature stretching over the RA range 00$^{\rm h}$ 40$^{\rm m}$ to $\sim$00$^{\rm h}$ 55$^{\rm m}$ and having a velocity gradient from $\sim$120 to $\sim$170 km s$^{-1}$. This feature continues eastward as a complex network of filaments and shell-like features, culminating around Dec $\sim-72^{\circ}37'$ with two long filaments centered at heliocentric velocities of 130 and 170 km s$^{-1}$. The coherent appearance of these features traced throughout the data cube was interpreted as the low-Dec hemisphere of the supergiant shell 304A (see Stanimirović et al. 1999). At a very high Dec of $\sim-71^{\circ}55'$, a high-velocity component is apparent, at about 200 km s$^{-1}$. We do not find obvious features that could correspond to the predicted eastern and western tails within the region covered by our observations. To search for potential tidal tails running out of the central SMC main body, an investigation of a larger area is essential; this will be possible in the near future with the new observations by @Bruns02 and @Muller03, but is beyond the scope of this paper. Concerning the position and velocity range of the central component, our observations are in better agreement with the @Gardiner96 model predictions: the coherent structure seen from RA 00$^{\rm h}$ 30$^{\rm m}$ to 01$^{\rm h}$ 30$^{\rm m}$ corresponds to the area where the central bar-like feature predicted by the model should be found, and its velocity span from 100 to 200 km s$^{-1}$ is very close to the predicted velocity range for the bar-like component.
The @Yoshizawa03 model predictions for the left-over disk component cover a smaller area and velocity range than what is found in the observations.

Rotation curve and mass modeling of the SMC {#s:rotation-curve}
===========================================

As discussed in the previous section, the central $4^{\circ} \times 4^{\circ}$ of the SMC, covered by the HI observations, most likely corresponds to the left-over part of the original SMC’s gaseous, rotationally supported disk. This disk should contain significant signatures of angular momentum left over from the pre-encounter disk. Motivated by this, we proceed to derive the rotation curve from the observed HI velocity field, and the mass model, assuming that the entire velocity gradient is due to rotation.

Tilted ring analysis
--------------------

The velocity field derived in Section \[s:profile-analysis\], and shown in Fig. \[f:gal\_velfield\], was used to define the major and minor kinematic axes, and the apparent kinematic center at $\sim$ RA $01^{\rm h} 05^{\rm m}$, Dec $-72^{\circ} 25'$ (J2000), with a systemic velocity of $\sim$20 km s$^{-1}$ in the galactocentric reference frame. The position angle (PA) of the major kinematic axis is around $50^{\circ}$. The tilted ring algorithm [rocur]{} in the [aips]{} package was then used to derive the HI rotation curve at different radii from the center. A fairly standard procedure [@Meurer98; @Bureau02] was applied to estimate a set of kinematic parameters that represents the observed velocity field well at all radii. It was found that the solutions for the inclination vary greatly, especially for the outer radii. In the end, the value $i=40^{\circ}\pm20^{\circ}$ was adopted. The newly determined least-squares solution for the systemic velocity was found to be 24 km s$^{-1}$. Finally, the solutions for the rotational velocity and position angle were determined and are shown in Fig. \[f:pa-vrot\]. The position angle varies systematically from $30^{\circ}$ to $50^{\circ}$ (as shown in the top panel of Fig. \[f:pa-vrot\]).
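The tilted-ring fit rests on the projection of circular motion onto the line of sight, $v_{\rm los} = v_{\rm sys} + v_{\rm rot}\cos\theta\sin i$, with $\theta$ the azimuthal angle in the ring plane measured from the major axis. As an illustration of this geometry (a sketch only, not the [aips]{} [rocur]{} implementation):

```python
import numpy as np

def ring_velocity(v_sys, v_rot, theta, inclination_deg):
    """Line-of-sight velocity of a flat, circular ring:
    v_sys + v_rot * cos(theta) * sin(i), theta in radians measured
    from the kinematic major axis in the ring plane."""
    i = np.radians(inclination_deg)
    return v_sys + v_rot * np.cos(theta) * np.sin(i)

# On the major axis (theta = 0), with the values adopted in the text
# (v_sys ~ 24 km/s galactocentric, v_rot ~ 40 km/s, i ~ 40 deg):
v_major = ring_velocity(24.0, 40.0, 0.0, 40.0)   # ~ 49.7 km/s
```

On the minor axis ($\theta=\pi/2$) the model returns the systemic velocity, which is how the fit separates $v_{\rm sys}$ from $v_{\rm rot}$.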
Using the mean PA, the global rotation curve and the separate curves for the receding and approaching sides of the velocity field were derived, as shown in Fig. \[f:pa-vrot\] (bottom panel). The global rotation curve shows a slow rise up to $R\sim3$ kpc, where it reaches the maximum velocity of $\sim$40 km s$^{-1}$. The receding and approaching curves are significantly different, which is not surprising since the velocity field is quite asymmetric (Fig. \[f:gal\_velfield\]). The receding curve is less smooth and could have a local maximum around 1 kpc from the center of rotation. The maximum rotation velocity found in previous studies [@Hindman67; @LoiseauBajaja81] was $\sim$36 km s$^{-1}$. The line-of-sight HI velocity dispersion distribution shown in Fig. \[f:dispersion-mean\] pointed to a high velocity dispersion across most of the SMC. The mean value, $\sigma_{\rm HI}=(22\pm2)$ km s$^{-1}$, is significantly higher than what is found for spiral galaxies, or even other dwarf galaxies [@Meurer98; @Cote00]. It is also large compared to the rotational velocity (Fig. \[f:pa-vrot\]). If interpreted as being entirely due to random motions, rather than bulk motions along the line-of-sight, such high values of $\sigma_{\rm HI}$ suggest that turbulence in the ISM of the SMC has an important influence on the system dynamics. To account for this dynamical support we estimated the asymmetric drift correction. The usual prescription for determining this correction [@Meurer96] was followed: we derived the azimuthally averaged HI surface brightness and velocity dispersion profiles (for deprojected circular annuli with a mean position angle of $40^{\circ}$ and an inclination of $40^{\circ}$), and assumed a constant vertical scale height. In general, the derived asymmetric drift correction is quite significant and ranges from 0, for the inner radii, to $\sim 40$ km s$^{-1}$ for the outer radii. The observed rotation curve corrected for the pressure support is presented in Fig. \[f:rotmod-final\] (crosses).
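For a constant vertical scale height, a common form of the asymmetric drift correction in the spirit of the @Meurer96 prescription is $v_c^2 = v_{\rm rot}^2 - \sigma^2\,{\rm d}\ln(\Sigma\sigma^2)/{\rm d}\ln R$ (a sketch of this standard form; the exact expression applied to the data may differ in detail):

```python
import numpy as np

def asymmetric_drift_corrected(r_kpc, v_rot, sigma, surf_dens):
    """Correct an observed rotation curve for pressure support, assuming a
    constant vertical scale height:
        v_c^2 = v_rot^2 - sigma^2 * dln(Sigma * sigma^2)/dlnR.
    Velocities in km/s, radii in kpc; all inputs are arrays of equal length."""
    lnr = np.log(r_kpc)
    ln_p = np.log(surf_dens * sigma**2)      # log of the 'pressure' term
    dlnp_dlnr = np.gradient(ln_p, lnr)       # logarithmic slope (non-uniform grid ok)
    vc2 = v_rot**2 - sigma**2 * dlnp_dlnr
    return np.sqrt(np.clip(vc2, 0.0, None))
```

Since $\Sigma\sigma^2$ typically falls with radius, the logarithmic slope is negative and the corrected curve lies above the observed one, as found in the text.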
The difference is significant for $R>0.5$ kpc, with the corrected curve being higher by $\sim$10 km s$^{-1}$ than the observed one.

The mass model
--------------

A two-component mass model was fitted to the corrected rotation curve. \(a) To derive the deprojected rotation curve due to the potential resulting from the neutral gas alone, the radial HI surface density profile was used in the [gipsy]{}’s [rotmod]{} task [@gipsy]. An exponential density law was assumed for the vertical disk distribution, with a scale height of 1 kpc. An indication of the large scale height was given by the typical size of large HI shells in the SMC, as well as by previous estimates based on the velocity dispersion and an average surface density of matter for the case of an isothermal disk. The resultant rotation curve ($V_{\rm g}$) is shown in Fig. \[f:rotmod-final\] with a dotted line. \(b) To derive the deprojected rotation curve due to the optical surface density distribution, the V-band image (shown in Fig. \[f:v-HI\]) was used. This image was first smoothed to 20 arcmin resolution and scaled to match the total luminosity of the SMC in the V-band of $3.1\times10^{8}$ L$_{\odot}$ [@RC3]. The rotation curve arising from the stellar potential alone ($V_{\ast}$) was estimated from the V-band stellar density profile assuming, to first approximation, that $M_{\ast}/L_{\rm V}=1$ (see the curve shown as a dash-dotted line in Fig. \[f:rotmod-final\]). The best fit of the total predicted rotational velocity, $\sqrt{V_{\rm g}^{2}+V_{\ast}^{2}}$, to the observed rotational velocity corrected for the pressure support was obtained for $M_{\ast}/L_{\rm V}=1.02$ (solid line in Fig. \[f:rotmod-final\]). The two-component mass model appears to fit the observed rotational velocities quite well, suggesting that no additional component, such as a dark halo, is needed to explain the rotation of the SMC.
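The scaling of the stellar curve amounts to a one-parameter fit: with $V_{\ast}$ computed for $M_{\ast}/L_{\rm V}=1$, the model is $V_{\rm tot}(M_{\ast}/L)=\sqrt{V_{\rm g}^{2}+(M_{\ast}/L)\,V_{\ast}^{2}}$. A minimal grid-search sketch of this fit (illustrative only, not the fitting code actually used):

```python
import numpy as np

def fit_mass_to_light(v_obs, v_gas, v_star_unit):
    """Find the stellar M/L that best fits an observed rotation curve,
    where v_star_unit is the stellar curve computed for M/L = 1:
        v_model = sqrt(v_gas^2 + (M/L) * v_star_unit^2).
    Simple chi^2 minimization on a grid; velocities in km/s."""
    grid = np.linspace(0.1, 3.0, 581)        # M/L values in steps of 0.005
    chi2 = [np.sum((v_obs - np.sqrt(v_gas**2 + ml * v_star_unit**2))**2)
            for ml in grid]
    return grid[int(np.argmin(chi2))]
```

On a synthetic curve built with $M_{\ast}/L=1$, the grid search recovers unity to within the grid spacing.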
The rotation velocity of the stellar component alone implies a total stellar mass of the SMC of $1.8\times10^{9}$ M$_{\odot}$ within a radius of 3 kpc. From Section \[s:hi-data\], and after correction for neutral He, the mass of HI$+$He is $5.6\times10^{8}$ M$_{\odot}$, almost a third of the stellar mass within the same radius. The total mass of the SMC implied by the rotation curve is thus $2.4\times10^{9}$ M$_{\odot}$. This is almost twice the $1.5\times10^{9}$ M$_{\odot}$ derived by @Hindman67 within the slightly smaller radius of 2.6 kpc, and is similar to the initial mass of the SMC assumed by @Gardiner94 (about $2\times10^{9}$ M$_{\odot}$) and by @Yoshizawa03 ($3\times10^{9}$ M$_{\odot}$).

### The stellar $M/L$ from population synthesis models

Stellar population synthesis (SPS) models provide an independent way to estimate the stellar mass-to-light ratio. @Bell01 used simplified spiral galaxy evolution models, based on several different SPS models, to investigate trends of the stellar $M/L$ with galaxy properties, such as colors and gas fraction. They found good agreement with observational data and showed that the trend of $M/L$ with color, $\log_{10} (M/L)=a_{\lambda}+b_{\lambda}\times$Color, is robust with respect to the SPS models but is dependent on the type of initial mass function (IMF) used. They tabulate the coefficients $a_{\lambda}$ and $b_{\lambda}$ for different colors, stellar metallicities, SPS models, and IMFs. We have used coefficients for the case of a stellar metallicity appropriate for the SMC, which is about 25% of the solar abundance, based on the SPS model by [@Bruzual03], for a Salpeter IMF scaled by a factor of 0.7 [@Bell01], and the [@Schmidt59] star formation law. The stellar $M/L_{\rm V}$ was then estimated using the broad band colors given in Table 1, $B-V=0.41$ and $B-R=0.7$. We get $M/L_{\rm V}=0.8\pm0.2$, which agrees with the value obtained from the HI rotation curve.
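As a rough consistency check on the total mass quoted above, a spherical point-mass estimate $M(<R)=V^{2}R/G$ (not the full disk decomposition actually used) with the corrected curve reaching $\sim$60 km s$^{-1}$ near 3 kpc gives:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19       # meters per kiloparsec
MSUN = 1.989e30      # kilograms per solar mass

def enclosed_mass_msun(v_kms, r_kpc):
    """Spherical dynamical-mass estimate M(<R) = V^2 R / G, in solar masses."""
    return (v_kms * 1e3) ** 2 * r_kpc * KPC / G / MSUN

# ~60 km/s at R = 3 kpc gives ~2.5e9 Msun, close to the 2.4e9 Msun
# obtained from the full rotation-curve analysis.
m_dyn = enclosed_mass_msun(60.0, 3.0)
```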
Critical density for star formation
-----------------------------------

We now go a step further and address the gravitational stability of the current gaseous disk of the SMC and its possible consequences for star formation. We estimated the disk stability parameter, $Q$ [@Toomre64], using the mean velocity dispersion, the azimuthally averaged HI surface brightness, and the angular velocity derived from the rotation velocity. According to Toomre’s stability criterion, the disk is stable for $Q>1$, while regions with $Q \la 1$ are unstable and subject to possible fragmentation, which can lead to new star formation [@BinneyTremaine]. The estimated values of $Q$, as a function of radius, are shown in Fig. \[f:Q\]. The plot shows that most of the gaseous disk, for $R<2.1$ kpc, is unstable and, most likely, undergoing current star formation. The unstable region appears to be quite extended, reaching almost the cut-off H$\alpha$ radius, $R({\rm H}_{\alpha})\approx2.6$ kpc (based on H$\alpha$ observations by [@Kennicuttetal95]), which delineates the parts of the gaseous disk under the most recent star formation. We also determined the so-called critical surface density, defined as the surface density needed to stabilize the disk (i.e. for $Q=1$). Both the observed ($\Sigma_{\rm HI}$) and critical ($\Sigma_{\rm c}$) surface densities are shown in Fig. \[f:stability\] (top panel). The ratio of the observed to critical surface density ($\alpha=\Sigma_{\rm HI}/\Sigma_{\rm c}=1/Q$) is shown in the same figure (bottom panel). For the unstable part of the disk, $\alpha$ varies between 1.0 and 2.0, while it has a constant value of $\alpha=0.74\pm0.04$ for the stable part of the disk. This agrees with the empirical relationship between the star formation threshold and the disk stability found by @Kennicutt89 for a sample of normal galaxies, whereby most of the star formation takes place for $\alpha>0.67$ (and hence for $Q<1.5$).
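For a gas disk the stability parameter is $Q=\sigma\kappa/(\pi G\Sigma)$, with $\kappa$ the epicyclic frequency, and the critical surface density follows from setting $Q=1$. A sketch with illustrative placeholder numbers (the flat-curve approximation $\kappa=\sqrt{2}\,V/R$ and the example values below are not the profiles used in the text):

```python
import numpy as np

G = 6.674e-11                                 # m^3 kg^-1 s^-2
KPC = 3.086e19                                # m per kpc
MSUN_PER_PC2 = 1.989e30 / (3.086e16) ** 2     # kg m^-2 per (Msun/pc^2)

def kappa_flat(v_kms, r_kpc):
    """Epicyclic frequency for a flat rotation curve: kappa = sqrt(2) V / R."""
    return np.sqrt(2.0) * v_kms * 1e3 / (r_kpc * KPC)

def toomre_q(sigma_kms, kappa, surf_msun_pc2):
    """Toomre stability parameter for a gas disk: Q = sigma kappa / (pi G Sigma)."""
    return sigma_kms * 1e3 * kappa / (np.pi * G * surf_msun_pc2 * MSUN_PER_PC2)

def critical_density_msun_pc2(sigma_kms, kappa):
    """Surface density that makes Q = 1: Sigma_c = sigma kappa / (pi G)."""
    return sigma_kms * 1e3 * kappa / (np.pi * G) / MSUN_PER_PC2
```

By construction, evaluating $Q$ at the critical density returns unity, and $\alpha=\Sigma_{\rm HI}/\Sigma_{\rm c}=1/Q$ as in the text.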
Summary and concluding remarks {#s:summary}
==============================

We have used the latest HI observations, obtained with the ATCA and the Parkes telescope, to re-examine the kinematics of the SMC. The HI velocity field, derived in Section \[s:profile-analysis\], shows a large velocity gradient from the south-west to the north-east. The iso-velocity contours of this velocity field show some symmetry, suggestive of differential rotation. Some large-scale distortions in this velocity field are easily visible, but could be related to the positions of several supergiant shells. In contrast to the HI distribution, the old stellar populations appear to have a spheroidal spatial distribution and a total absence of rotation. In Section \[s:3D-structure\], we summarized previous observational models of the SMC and the highly controversial issue of its possibly large depth along the line-of-sight. We cautioned that the differential extinction across the SMC, revealed recently by @Zaritsky02, could be a significant, previously unappreciated source of overestimated stellar distances, and therefore of the SMC’s depth. In Section \[s:theoretical-models\], we summarized several tidal models that are concerned with the 3-D structure and kinematics of the SMC. In order to reproduce the observational characteristics of the Magellanic System, these models needed to assume the existence of angular momentum in the pre-encounter SMC (about 1.5–2 Gyr ago). The models predict that the current, left-over material from the pre-encounter SMC disk is a disk-like or bar-like feature, about 4 kpc in extent, with a velocity gradient of about 100 km s$^{-1}$. This velocity gradient is at least partially due to the original angular momentum, as it is very difficult to lose the original spin during a galaxy encounter [@Mayer01; @Kunkel00]. The elongation along the line-of-sight of the pre-encounter SMC disk may also be partially responsible.
It is, however, not clear from the models what the relative contributions of these two effects to the predicted velocity gradient are. In addition, theoretical models predict a higher velocity dispersion for the post-encounter stellar disk, without a significant velocity gradient; this agrees with observations of older stellar populations summarized in Section \[s:other-tracers-kinematics\]. While we do not find evidence for the existence of the tidal tails predicted by the models, the central region of the SMC covered in these observations most likely corresponds to the central component left over from the pre-encounter SMC gaseous (rotationally supported) disk. The observed HI velocity gradient agrees well with the model predictions. We then proceeded to derive its rotation curve and mass model in Section \[s:rotation-curve\]. A set of kinematic parameters derived from the tilted ring analyses agrees extremely well with the assumptions used in theoretical models that led to a good reproduction of the observational properties of the Magellanic System. The HI rotation curve derived in Section \[s:rotation-curve\] rapidly rises to about 60 km s$^{-1}$ up to the turnover radius of $\sim3$ kpc. A stellar mass-to-light ratio of about unity was required to scale the stellar component of the SMC’s rotation curve to match the observed rotation curve. This suggests that a dark matter halo is not needed to explain the dynamics of the SMC. This is a surprising result, as dwarf irregular galaxies are often found to be dark matter dominated, and could be related to the mechanism of tidal stripping, as indicated by [@Mayer01]. The total dynamical mass of the SMC derived from the rotation curve is $2.4\times10^{9}$ M$_{\odot}$, three quarters of which is due to the stellar mass alone. We also derived Toomre’s disk stability parameter $Q$, which shows that almost all of the SMC disk is in the unstable regime.
The gravitationally stable part, on the other hand, has a ratio of the observed to critical surface density of $\alpha=0.74$, which is in excellent agreement with the empirical stability threshold found by @Kennicutt89. All of the above suggests that the HI distribution is capable of providing valuable information about the SMC dynamics. Although very disturbed, it still contains imprints of the original system prior to the latest encounters with the LMC and the Galaxy. The HI data discussed here comprise only the central $4.5^{\circ}\times4.5^{\circ}$ of the SMC. It is very important to compare tidal theoretical models with data comprising a larger area around the SMC to search for signatures of the proposed tidal tails. This will be possible in the near future using the Parkes Multibeam HI observations of the whole Magellanic System [@Bruns00], as well as the ATCA and Parkes observations of the Magellanic Bridge [@Muller03]. The HI SMC data set is available from the ATNF SMC web page (http://www.atnf.csiro.au/research/smc\_h1/).

Acknowledgments {#acknowledgments .unnumbered}
===============

We are very grateful to Mike Bessell for providing us with the V-band optical image of the SMC prior to publication. We thank Greg Bothun for sending us the H$\alpha$ image. We thank John Dickey, Jacco van Loon, and Mary Putman for stimulating and fruitful discussions, and an anonymous referee for insightful comments. This work was supported in part by NSF grants AST-0097417 and AST-9981308.

Bajaja, E. & Loiseau, N. 1982, A&AS, 75, 251
Bell, E. F. & de Jong, R. S. 2001, ApJ, 550, 212
Binney, J. & Tremaine, S. 1987, Galactic Dynamics (Princeton, New Jersey: Princeton University Press), 362
Brüns, C., Kerp, J., & Staveley-Smith, L. 2002, in ASP Conf. Ser. 276: Seeing Through the Dust: The Detection of HI and the Exploration of the ISM in Galaxies, 365
Brüns, C., Kerp, J., & Staveley-Smith, L. 2000, in ASP Conf. Ser. 218: Mapping the Hidden Universe: The Universe behind the Milky Way - The Universe in HI, 349
Bruzual, G. & Charlot, S. 2003, MNRAS, 344, 1000
Bureau, M. & Carignan, C. 2002, AJ, 123, 1316
Côté, S., Carignan, C., & Freeman, K. C. 2000, AJ, 120, 3027
Caldwell, J. A. R. & Coulson, I. M. 1985, MNRAS, 212, 879
—. 1986, MNRAS, 218, 223
Cioni, M.-R. L., Habing, H. J., & Israel, F. P. 2000, A&A, 358, 9
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Buta, R. J., Paturel, G., & Fouque, P. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer-Verlag)
Dopita, M. A., Ford, H. C., Lawrence, C. J., & Webster, B. L. 1985, ApJ, 296, 390
Gardiner, L. T. & Hatzidimitriou, D. 1992, MNRAS, 257, 195
Gardiner, L. T. & Noguchi, M. 1996, MNRAS, 278, 191
Gardiner, L. T., Sawa, T., & Fujimoto, M. 1994, MNRAS, 266, 567
Groenewegen, M. A. T. 2000, A&A, 363, 901
Hardy, E., Suntzeff, N. B., & Azzopardi, M. 1989, ApJ, 344, 210
Hatzidimitriou, D. 1999, in IAU Symp. 190: New Views of the Magellanic Clouds, 299
Hatzidimitriou, D., Cannon, R. D., & Hawkins, M. R. S. 1993, MNRAS, 261, 873
Hatzidimitriou, D., Croke, B. F., Morgan, D. H., & Cannon, R. D. 1997, A&AS, 122, 507
Hatzidimitriou, D. & Hawkins, M. R. S. 1989, MNRAS, 241, 667
Hindman, J. V. 1967, Aust. J. Phys., 20, 147
Hindman, J. V., McGee, R. X., Carter, A. W. L., Holmes, E. C. J., & Beard, M. 1963, Aust. J. Phys., 16, 552
Kennicutt, R. C., Bresolin, F., Bomans, D. J., Bothun, G. D., & Thompson, I. B. 1995, AJ, 109, 594
Kennicutt, R. C. J. 1989, ApJ, 344, 685
Kerr, F. J., Hindman, J. V., & Robinson, B. J. 1954, Aust. J. Phys., 7, 297
Kim, S., Dopita, M. A., Staveley-Smith, L., & Bessell, M. S. 1999, AJ, 118, 2797
Kim, S., Staveley-Smith, L., Dopita, M. A., Freeman, K. C., Sault, R. J., Kesteven, M. J., & McConnell, D. 1998, ApJ, 503, 674
Kroupa, P. & Bastian, U. 1997, New Astronomy, 2, 77
Kroupa, P., Röser, S., & Bastian, U. 1994, MNRAS, 266, 412
Kunkel, W. E., Demers, S., & Irwin, M. J. 2000, AJ, 119, 2789
Loiseau, N. & Bajaja, E. 1981, RevMexAA (Serie de Conferencias), 6, 55
Luks, T. & Rohlfs, K. 1992, A&A, 263, 41
Maragoudaki, F., Kontizas, M., Morgan, D. H., Kontizas, E., Dapergolas, A., & Livanou, E. 2001, A&A, 379, 864
Martin, N., Maurice, E., & Lequeux, J. 1989, A&A, 215, 219
Mateo, M. 1998, in The Magellanic Clouds and Other Dwarf Galaxies, Proceedings of the Bonn/Bochum-Graduiertenkolleg Workshop, held at the Physikzentrum Bad Honnef, Germany, January 19-22, 1998, ed. T. Richtler & J. Braun (Shaker Verlag, Aachen), 53–66
Mathewson, D. S. & Ford, V. L. 1984, in Structure and Evolution of the Magellanic Clouds, ed. S. van den Bergh & K. de Boer, IAU Symposium No. 108 (Reidel, Dordrecht), 125
Mathewson, D. S., Ford, V. L., & Visvanathan, N. 1986, ApJ, 301, 664
—. 1988, ApJ, 333, 617
Mayer, L., Governato, F., Colpi, M., Moore, B., Quinn, T., Wadsley, J., Stadel, J., & Lake, G. 2001, ApJ, 559, 754
Meurer, G. R., Carignan, C., Beaulieu, S. F., & Freeman, K. C. 1996, AJ, 111, 1551
Meurer, G. R., Staveley-Smith, L., & Killeen, N. E. B. 1998, MNRAS, 300, 705
Muller, E., Staveley-Smith, L., Zealey, W., & Stanimirović, S. 2003, MNRAS, 339, 105
Puche, D., Westpfahl, D., & Brinks, E. 1992, AJ, 103, 1841
Putman, M. E., Gibson, B. K., Staveley-Smith, L., Banks, G., Barnes, D. G., Bhatal, R., Disney, M. J., Ekers, R. D., Freeman, K. C., Haynes, R. F., Henning, P., Jerjen, H., Kilborn, V., Koribalski, B., Knezek, P., Malin, D. F., Mould, J. R., Oosterloo, T., Price, R. M., Ryder, S. D., Sadler, E. M., Stewart, I., Stootman, F., Vaile, R. A., Webster, R. L., & Wright, A. E. 1998, Nature, 394, 752
Putman, M. E., Staveley-Smith, L., Freeman, K. C., Gibson, B. K., & Barnes, D. G. 2003, ApJ, 586, 170
Sasselov, D. D., Beaulieu, J. P., Renault, C., Grison, P., Ferlet, R., Vidal-Madjar, A., Maurice, E., Prevot, L., Aubourg, E., Bareyre, P., Brehin, S., Coutures, C., Delabrouille, N., de Kat, J., Gros, M., Laurent, B., Lachieze-Rey, M., Lesquoy, E., Magneville, C., Milsztajn, A., Moscoso, L., Queinnec, F., Rich, J., Spiro, M., Vigroux, L., Zylberajch, S., Ansari, R., Cavalier, F., Moniez, M., Gry, C., Guibert, J., Moreau, O., & Tajhmady, F. 1997, A&A, 324, 471
Schmidt, M. 1959, ApJ, 129, 243
Stanimirović, S. & Lazarian, A. 2001, ApJ, 551, L53
Stanimirović, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J., & Snowden, S. L. 1999, MNRAS, 302, 417
Staveley-Smith, L., Sault, R. J., Hatzidimitriou, D., Kesteven, M. J., & McConnell, D. 1997, MNRAS, 289, 225
Toomre, A. 1964, ApJ, 139, 1217
Torres, G. & Carranza, G. J. 1987, MNRAS, 226, 513
van der Hulst, J. M., Terlouw, J. P., Begeman, K., Zwitser, W., & Roelfsema, P. R. 1992, in ASP Conference Series, Vol. 25, Astronomical Data Analysis Software and Systems I, ed. D. M. Worrall, C. Biemesderfer, & J. Barnes (San Francisco: Astronomical Society of the Pacific), 131
Walter, F. 1999, Proc. Astron. Soc. Aust., 16, 106
Welch, D. L., McLaren, R. A., Madore, B. F., & McAlary, C. W. 1987, ApJ, 321, 162
Westerlund, B. E. 1997, The Magellanic Clouds (Cambridge, United Kingdom: Cambridge University Press), 32
Yoshizawa, A. M. & Noguchi, M. 2003, MNRAS, 339, 1135
Zaritsky, D., Harris, J., Grebel, E. K., & Thompson, I. B. 2000, ApJ, 534, 53
Zaritsky, D., Harris, J., Thompson, I. B., Grebel, E. K., & Massey, P. 2002, AJ, 123, 855

[^1]: Throughout this paper we assume a distance to the SMC of 60 kpc [@Westerlund97].

[^2]: The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
[**Minimal Neutrino Beta Beam for Large $\boldsymbol{\theta_{13}}$**]{}

Abstract

We discuss the minimum requirements for a neutrino beta beam if $\theta_{13}$ is discovered by an upcoming reactor experiment, such as Double Chooz or Daya Bay. We require that both the neutrino mass hierarchy and leptonic CP violation can be measured to competitive precisions with a single-baseline experiment in the entire remaining $\theta_{13}$ range. We find that for very high isotope production rates, such as might be possible using a production ring, a [($^8$B,$^8$Li)]{} beta beam with a $\gamma$ as low as 60 could already be sufficient to perform all of these measurements. If only the often-used nominal source luminosities can be achieved, then, for example, a [($^{18}$Ne, $^6$He)]{} beta beam from Fermilab to a possibly existing water Cherenkov detector at Homestake with $\gamma \sim 190-350$ (depending on the Double Chooz best-fit) could outperform practically any other beam technology, including wide-band beam and neutrino factory.

Introduction
============

In elementary particle physics, the main motivation to push to higher energies is the search for physics beyond the standard model. So far, there has been some evidence for such physics, such as the presence of dark matter or the observation of neutrino oscillations, which requires a non-vanishing neutrino mass. It is therefore important to understand these indications of new physics very carefully. In neutrino oscillation physics, the so-called solar and atmospheric oscillation parameters have been measured to high precisions, see, [[*e.g.*]{}]{}, [Ref.]{} [@GonzalezGarcia:2007ib]. However, we only have an upper bound for the reactor mixing angle $\theta_{13}$, and we do not know the mass ordering (normal or inverted), the absolute neutrino mass scale, or the nature of the neutrino mass (Dirac or Majorana). Furthermore, there may be (Dirac) CP violation in the lepton sector, which is described by ${\delta_\mathrm{CP}}$.
For example, a detection of leptonic CP violation together with a $0\nu\beta\beta$ signal, which indicates that neutrinos are mostly Majorana particles, will motivate leptogenesis as a mechanism to produce the dominance of matter over antimatter in the early universe. In addition, a determination of $\theta_{13}$ and the mass ordering will help our understanding of stellar evolution [@Dighe:2007ks], and these parameters turn out to be excellent discriminators for neutrino mass models including grand unified theories [@Albright:2006cw]. Therefore, future neutrino oscillation experiments may use high energy neutrino beams over long distances to study the remaining unknown oscillation parameters $\theta_{13}$, $\mathrm{sgn}({\Delta m_{31}^2})$ (which we call mass hierarchy), and ${\delta_\mathrm{CP}}$, while nuclear physics experiments, such as $0\nu\beta\beta$ decay and tritium endpoint measurements, will probe absolute neutrino mass scale and the nature of the neutrino mass. An early determination of $\theta_{13}$ might already be possible by upcoming reactor experiments, such as Double Chooz or Daya Bay [@Ardellier:2006mn; @Guo:2007ug]. For the beam experiments, there are, in principle, different approaches depending on the magnitude of $\theta_{13}$. Superbeams, such as T2K or NO$\nu$A [@Itow:2001ee; @Ayres:2004js], or upgrades thereof, are based on neutrino production by pion or kaon decays using a high intensity proton beam on a target. This technique works especially well for large $\theta_{13}$, where the backgrounds are of little relevance. Potential future neutrino factories [@Geer:1998iz; @Apollonio:2002en; @ids] use neutrino production by muon decays. They are discovery machines with an excellent reach in $\theta_{13}$. A beam production technique, which is intimately connected to nuclear physics, is used by so-called beta beams [@Zucchelli:2002sa; @Mezzetto:2003ub; @Autin:2002ms; @Bouchez:2003fy; @Lindroos:2003kp]. 
For these beams, unstable nuclei, such as from the pairs [($^8$B,$^8$Li)]{} or [($^{18}$Ne, $^6$He)]{}, decay in the straight sections of a storage ring to produce an electron-flavor-clean $(\nu_e,\bar{\nu}_e)$ neutrino beam. There are two key components for such an experiment: a high intensity ion source, characterized by the number of produced ions per time frame, and a sufficiently large accelerator to boost the ions to higher energies, characterized by the boost factor $\gamma$. For the source, several approaches have been studied in the literature. For example, the ISOL (Isotope Separation On-Line) technique [@EURISOL] could also be used for a wider range of nuclear physics, which means that there will be many synergies between the neutrino oscillation and nuclear physics programs. In addition, the direct production method with a storage ring, which has been proposed for [($^8$B,$^8$Li)]{} [@Mori:2005zz; @Rubbia:2006pi; @Rubbia:2006zv], might lead to even higher source luminosities than originally anticipated. For the accelerator, either an existing machine might be used (such as the CERN-SPS or the Tevatron), or a new one might be built. The difference between the [($^8$B,$^8$Li)]{} and [($^{18}$Ne, $^6$He)]{} ion pairs lies in their endpoint energies $E_0$: since the peak of the spectrum is approximately given by $E_\nu \sim E_0 \cdot \gamma$, a lower $\gamma$ might be sufficient if a higher $E_0$ can be used, as for [($^8$B,$^8$Li)]{} compared to [($^{18}$Ne, $^6$He)]{}. However, a lower $\gamma$ means worse beam collimation, which leads to lower event rates. The interplay between isotope pair, $\gamma$, and ion source luminosity is therefore non-trivial [@Agarwalla:2008gf].
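The scaling $E_\nu \sim E_0 \cdot \gamma$ already shows why a high-endpoint isotope tolerates a low boost. With approximate endpoint energies of roughly $E_0 \sim 14$ MeV for $^8$B decay and $\sim 3.4$ MeV for $^{18}$Ne decay (ballpark values assumed for illustration, not quoted in this text), one can sketch:

```python
def peak_energy_gev(gamma, e0_mev):
    """Approximate peak of the boosted beta-beam spectrum, E_nu ~ gamma * E0."""
    return gamma * e0_mev / 1000.0

# A gamma = 60 8B beam and a gamma = 350 18Ne beam reach comparable
# peak neutrino energies despite very different boosts:
e_b8 = peak_energy_gev(60, 14.0)     # 0.84 GeV
e_ne18 = peak_energy_gev(350, 3.4)   # ~1.19 GeV
```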
Beta beams have been studied in specific scenarios from low to very high $\gamma$’s [@BurguetCastell:2003vv; @BurguetCastell:2005pa; @Agarwalla:2005we; @Campagne:2006yx; @Donini:2006dx; @Donini:2006tt; @Agarwalla:2006vf; @Agarwalla:2007ai; @Coloma:2007nn; @Jansson:2007nm; @Meloni:2008it; @Agarwalla:2008ti], and there has been a green-field optimization to push the sensitivities for small $\theta_{13}$ [@Huber:2005jk; @Agarwalla:2008gf]. In almost all cases, the luminosities and $\gamma$’s are more or less arbitrarily chosen from the physics point of view, being in many cases determined rather by technical boundary conditions. However, in the context of alternative superbeams and neutrino factories, the neutrino oscillation physics case of a beta beam might actually be defined by a (large) $\theta_{13}$ signal of the upcoming beam or reactor experiments, such as Double Chooz or Daya Bay. In this work, we therefore discuss the [*minimal*]{} requirements for a beta beam to outperform any of its alternatives if Double Chooz finds $\theta_{13}$. For example, it is as yet unclear whether the $\gamma \simeq 350$ in [Ref.]{} [@BurguetCastell:2005pa], which has an excellent performance, is really the minimal allowable setup. Compared to the small $\theta_{13}$ case, in which one optimizes the $\theta_{13}$ reach to be as good as possible, the definition of the minimum wish list from the physics point of view is rather straightforward:

- $5\sigma$ independent confirmation of ${\sin^22\theta_{13}}>0$
- $3\sigma$ determination of the mass hierarchy (MH) for [*any*]{} (true) ${\delta_\mathrm{CP}}$
- $3\sigma$ establishment of CP violation (CPV) for 80% of all (true) ${\delta_\mathrm{CP}}$ in the [*entire remaining allowed $\theta_{13}$ range*]{}

Note that we do not know the (true) ${\delta_\mathrm{CP}}$ which nature has implemented, which significantly affects the sensitivities.
Therefore, we follow a low-risk strategy and postulate that our experiment works for any value of this parameter. The only exception is the fraction of ${\delta_\mathrm{CP}}$ for CPV: since ${\delta_\mathrm{CP}}=0$ and $\pi$ are both CP conserving, one cannot measure CPV for every ${\delta_\mathrm{CP}}$. The arbitrarily chosen fraction of 80% can be motivated by a competitive precision compared to a neutrino factory [@ids], or, as we will see later, compared to many other facilities. But what does “minimal” mean in terms of technical effort for a beta beam? Certainly, minimal refers to using only one baseline. For a given detector, minimal refers to a yet-to-be-defined product between accelerator cost ($\propto \gamma$) and ion source intensity. We study the minimal effort for the above measurements in terms of this product quantitatively, and we discuss the dependence on the isotopes and detector technology used.

  ${\sin^22\theta_{13}}$ best-fit   $90\%$ CL range   $3\sigma$ range   Zero excl. at
  --------------------------------- ----------------- ----------------- ---------------
  0.04                              0.019 - 0.063     0.002 - 0.082     $3.2 \sigma$
  0.08                              0.060 - 0.102     0.043 - 0.121     $6.4 \sigma$
  0.12                              0.100 - 0.142     $\ge$ 0.084       $9.7 \sigma$

  : \[tab:dchooz\] Several best-fit values for Double Chooz (first column), and the allowed range for ${\sin^22\theta_{13}}$ (second, third columns). The fourth column gives the exclusion power of ${\sin^22\theta_{13}}=0$. Simulation from [Ref.]{} [@Huber:2006vr] for 3 years of far detector operation and 1.5 years of near detector operation.

Method
======

We assume that Double Chooz finds ${\sin^22\theta_{13}}$, and we require that the above conditions are met for [*any*]{} ${\sin^22\theta_{13}}$ within the 90% CL allowed region of Double Chooz ([[*cf.*]{}]{} [Tab.]{} \[tab:dchooz\] for several simulated best-fit values). Note that the current bound on ${\sin^22\theta_{13}}$ is $0.157$ at $3\sigma$ [@GonzalezGarcia:2007ib].
We use [($^{18}$Ne, $^6$He)]{} and [($^8$B,$^8$Li)]{} as possible isotope pairs, with $1.1 \cdot 10^{18}$ ($\nu_e$) and $2.9 \cdot 10^{18}$ ($\bar{\nu}_e$) useful ion decays per year, respectively, which are the nominal isotope decay rates often chosen in the literature [@Terranova:2004hu]. For the sake of simplicity, we operate each ion at the [*same*]{} $\gamma$ for neutrinos and antineutrinos for five years, [[*i.e.*]{}]{}, we assume a total running time of ten years. As detectors, we use a 100 kt Totally Active Scintillating Detector (TASD), which could be replaced by a liquid argon detector for a similar performance, and a 500 kt water Cherenkov detector (WC); see [Refs.]{} [@Huber:2002mx; @Huber:2005jk] for simulation details. Note that for large ${\sin^22\theta_{13}}$, the cuts for both detectors should be chosen for high efficiency rather than low backgrounds, because in this limit backgrounds are less relevant. We use $\gamma \lesssim 500$ as the allowed $\gamma$ range, unless [($^8$B,$^8$Li)]{} is combined with the WC detector, where we use $\gamma \lesssim 150$ to avoid poorly predictable detector behavior due to too large neutrino energies. Our simulations use the GLoBES software [@Huber:2004ka; @Huber:2007ji] with the current best-fit values and solar oscillation parameter uncertainties from [Ref.]{} [@GonzalezGarcia:2007ib], as well as a 2% error on the matter density profile. For the sake of simplicity, we use a normal simulated mass hierarchy. The uncertainty on the atmospheric oscillation parameters is simulated by the inclusion of 10 years of T2K disappearance data. In some cases, we will discuss our results as a function of the [*luminosity scaling factor*]{} $\mathcal{L}$, which scales the product of useful ion decays per year $\times$ running time $\times$ detector mass $\times$ detection efficiency.
Thus, $\mathcal{L}=1$ corresponds to our nominal luminosity, whereas $\mathcal{L}=5$ corresponds to, for example, scaling up the detector mass by a factor of two and the source luminosity by a factor of $2.5$. ![\[fig:lgamma1\]Discovery of $\theta_{13}$ (dark/blue), a normal MH (medium gray/red), and CPV (light gray/yellow) as a function of baseline $L$ and boost factor $\gamma$. Sensitivity is given within the shaded regions at the $5\sigma$ CL for $\theta_{13}$ (for all values of true ${\delta_\mathrm{CP}}$), at the $3\sigma$ CL for the MH (for all values of true ${\delta_\mathrm{CP}}$), and at the $3\sigma$ CL for CPV (for at least 80% of all possible true ${\delta_\mathrm{CP}}$). The minimal possible $\gamma$, as well as the minimal $\gamma$’s for specific baselines, are marked. The figure is computed for the WC detector and [($^{18}$Ne, $^6$He)]{}, and ${\sin^22\theta_{13}}=0.08$ (best-fit) from [Tab.]{} \[tab:dchooz\].](lgamma1){width="10cm"} Results ======= For a given $\mathcal{L}$, isotope pair, and detector, the minimal effort is determined by the minimal $\gamma$ for [*any*]{} baseline $L$. Therefore, we need to perform an optimization in the $L$-$\gamma$ plane, as we illustrate in [[Fig.]{} \[fig:lgamma1\]]{} for [($^{18}$Ne, $^6$He)]{} to the WC detector and the Double Chooz best-fit ${\sin^22\theta_{13}}=0.08$. In this figure, sensitivity is given in the shaded regions to the corresponding performance indicators in the entire ${\sin^22\theta_{13}}$ range remaining after Double Chooz. The minimal possible $\gamma$, for which our conditions are fulfilled, is marked by the horizontal line. It is limited by the MH measurement from the left, and by the CPV measurement from the bottom. This means that the MH measurement leads to a sharp constraint $L \gtrsim L_{\mathrm{min}} \simeq 500 \, \mathrm{km}$, whereas the CPV measurement requires $\gamma \gtrsim 160$. 
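The optimization just described (scanning the $L$-$\gamma$ plane and, for each baseline, taking the smallest $\gamma$ for which all criteria hold) can be sketched in a few lines. The sensitivity predicate below is a placeholder standing in for the full GLoBES simulation; only the scan logic is illustrative:

```python
def minimal_gamma(has_sensitivity, baselines, gammas):
    """For each baseline L, find the smallest gamma for which all performance
    criteria hold, then return the minimum over all baselines (or None)."""
    best = None
    for L in baselines:
        for g in sorted(gammas):
            if has_sensitivity(L, g):
                best = g if best is None else min(best, g)
                break  # gammas are sorted, so the first hit is minimal for this L
    return best

# Placeholder criteria mimicking the shape of the shaded regions in the figure:
# the mass hierarchy needs L >~ 500 km, CP violation needs gamma >~ 160.
toy_criteria = lambda L, g: (L >= 500) and (g >= 160)
assert minimal_gamma(toy_criteria, baselines=[300, 500, 730, 1290],
                     gammas=range(100, 501, 10)) == 160
```

In the real analysis `toy_criteria` would be replaced by the outcome of the event-rate simulation for the chosen isotope pair and detector.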
The figure illustrates what is characteristic of a large fraction of the parameter space: the baseline window for the minimal $\gamma$ is rather sharp and requires a fine-tuning of the detector location. Therefore, we focus on a set of longer, fixed baselines in the following, which allow for stable predictions. For some of these, the minimal $\gamma$’s are illustrated by the arrows.

![\[fig:lumiscale\] Minimal $\gamma$ as a function of the luminosity scaling factor $\mathcal{L}$ for different isotope pair-detector combinations (in steps of 0.25 in $\mathrm{log}_{10} \mathcal{L}$). Here $L=1290 \, \mathrm{km}$ and the Double Chooz best-fit ${\sin^22\theta_{13}}=0.08$ from [Tab.]{} \[tab:dchooz\] are chosen.](lscale){width="10cm"}

In order to compare different detector technologies and isotope pairs as a function of $\mathcal{L}$, we show in [[Fig.]{} \[fig:lumiscale\]]{} the minimal $\gamma$ for $L=1290 \, \mathrm{km}$ (fixed) and a Double Chooz best-fit ${\sin^22\theta_{13}}=0.08$ as an example. The absence of a symbol in this figure means that we have not found a setup which satisfies our criteria in the indicated $\gamma$ ranges. Obviously, our chosen nominal luminosity $\mathcal{L}=1$ is sufficiently large for [($^{18}$Ne, $^6$He)]{}, but for [($^8$B,$^8$Li)]{}, $\mathcal{L} \gtrsim 5$ is required, [[*i.e.*]{}]{}, [($^8$B,$^8$Li)]{} cannot be used at the nominal luminosity $\mathcal{L}=1$ to fulfill our requirements. We have tested that this conclusion holds irrespective of the discussed ${\sin^22\theta_{13}}$ case, detector, and baseline. We furthermore find that the WC detector outperforms the TASD because of the larger detector mass. As far as the different isotope pairs are concerned, the minimal possible $\gamma$ for [($^{18}$Ne, $^6$He)]{} becomes asymptotically limited for large $\mathcal{L}$ by neutrino energies too low to allow for sufficient matter effects.
This means that for large $\mathcal{L}$, the MH measurement limits the [($^{18}$Ne, $^6$He)]{} setups, whereas the [($^8$B,$^8$Li)]{} setups allow for a lower $\gamma$.

  **Setup** $\downarrow$ / **Baseline** \[km\] $\rightarrow$        730       810      1050      1290       730       810      1050      1290       730       810      1050      1290
  ------------------------------------------------------------ -------- --------- --------- --------- --------- --------- --------- --------- --------- --------- --------- ---------
  **Beta beams**
  [($^{18}$Ne, $^6$He)]{} to WC, $\mathcal{L}=1$                 **220**       230       290       350   **200**       210       240       230   **190**       200       220   **190**
  [($^{18}$Ne, $^6$He)]{} to TASD, $\mathcal{L}=1$                     -   **300**       370       430   **300**       310       340       380   **320**   **320**       340       380
  [($^{18}$Ne, $^6$He)]{} to WC, $\mathcal{L}=5$                 **190**   **190**   **190**       230   **140**   **140**   **140**   **140**   **140**   **140**   **140**   **140**
  [($^{18}$Ne, $^6$He)]{} to TASD, $\mathcal{L}=5$               **200**   **200**       220       230       180       180   **170**       180       180       170   **160**       170
  [($^8$B,$^8$Li)]{} to WC, $\mathcal{L}=5$                            -         -   **100**       130    **80**    **80**       100       110    **90**    **90**       100       110
  [($^8$B,$^8$Li)]{} to TASD, $\mathcal{L}=5$                          -         -   **150**       190         -         -   **190**   **190**         -         -         -   **310**
  [($^8$B,$^8$Li)]{} to WC, $\mathcal{L}=10$                      **70**    **70**        90       110    **60**        70        80        90    **60**    **60**        70        80
  [($^8$B,$^8$Li)]{} to TASD, $\mathcal{L}=10$                         -   **100**       130       140   **110**   **110**       120       130   **120**   **120**   **120**       130
  **Superbeam upgrades**
  T2KK from [Ref.]{} [@Barger:2007jq]
  NO$\nu$A\* from [Ref.]{} [@Barger:2007jq]
  WBB-120$_S$ from [Ref.]{} [@Barger:2007jq]
  **Neutrino factories**
  IDS-NF 1.0 from [Ref.]{} [@ids]
  Low-E NF from [Ref.]{} [@Huber:2007uj]
  **Hybrids**
  NF-SB from [Ref.]{} [@Huber:2007uj]

  : \[tab:baseres\] Minimal $\gamma$ for the listed setups at specific baselines (columns), for the three Double Chooz ${\sin^22\theta_{13}}$ best-fit cases of [Tab.]{} \[tab:dchooz\] (column groups, from left to right: 0.04, 0.08, 0.12). Bold entries mark, for each setup, the minimal $\gamma$ within a column group; “-” means that no working setup was found.

We show in [Tab.]{} \[tab:baseres\] the minimal $\gamma$ (rounded up to the next 10) to measure all of the discussed performance indicators at specific baselines (in columns) for the given setups and Double Chooz ${\sin^22\theta_{13}}$ best-fit cases, where $\mathcal{L}$ is the luminosity scaling factor. The chosen baselines correspond to CERN-LNGS or FNAL-Soudan (730 km), FNAL-Ash River (810 km), CERN-Boulby or JHF-Korea (1050 km), and FNAL-Homestake (1290 km). Obviously, the minimal $\gamma$ depends on the ${\sin^22\theta_{13}}$ case, which will be known after Double Chooz, and on the baseline, which depends on the accelerator and detector locations. Therefore, once Double Chooz has found ${\sin^22\theta_{13}}$, one can easily read off the minimal $\gamma$ from this table. If [($^8$B,$^8$Li)]{} can be used at reasonably high source luminosities ($\mathcal{L}=10$), $\gamma$ can be as low as about 60. If, however, [($^{18}$Ne, $^6$He)]{} is used at a lower luminosity, a $\gamma$ of at least 190 will be required. In addition, [Tab.]{} \[tab:baseres\] lists a number of superbeam upgrade and neutrino factory setups tested with the same criteria, same simulated values, and same ${\sin^22\theta_{13}}$ cases, with the details given in the respective references. From this comparison, it is clear that almost none of the simulated alternatives can satisfy our criteria for any value of ${\sin^22\theta_{13}}$.
However, if, for example, ${\sin^22\theta_{13}}=0.08$, T2KK, a wide-band beam (WBB-120$_S$, in this case using a 100 kt LArTPC), or a low energy neutrino factory might be used. The only setup in this list which can measure all of the performance indicators for all values of ${\sin^22\theta_{13}}$ is the NF-SB hybrid from [Ref.]{} [@Huber:2007uj]. It combines a superbeam with a low energy neutrino factory beam directed towards the same detector at a distance of about $1\,250 \, \mathrm{km}$. From this comparison to alternative setups, it should be clear that the fraction of ${\delta_\mathrm{CP}}$ of 80%, which we have initially used for CPV, is a good benchmark value at the edge of what the alternative setups can achieve.

Summary and conclusions
=======================

We have studied the minimal requirements for a single-baseline beta beam experiment for large ${\sin^22\theta_{13}}$. We have assumed that Double Chooz finds ${\sin^22\theta_{13}}$, and we have required that the next generation long-baseline experiment measure the mass hierarchy and CP violation at $3\sigma$ in the entire remaining ${\sin^22\theta_{13}}$ allowed region. We have demonstrated that the minimal beta beam baseline is about $500 \, \mathrm{km}$. For any fixed baseline longer than this threshold, we have determined the minimal allowable $\gamma$. Let us now conclude by geographical region, basing the discussion on [Tab.]{} \[tab:baseres\]. For Europe, the CERN-SPS might be used as an accelerator. The baseline to Frejus is not sufficient for a beta beam due to small matter effects. However, CERN-LNGS or CERN-Boulby can be used. If the SPS is not upgraded, [($^8$B,$^8$Li)]{} must be used at a high ion source luminosity $\mathcal{L} \gtrsim 5$, which might be achievable using a production ring. If the SPS can be upgraded, [($^{18}$Ne, $^6$He)]{} at a lower ion source luminosity can be used as well.
A $\gamma$ as low as about 200 could be sufficient for large ${\sin^22\theta_{13}}$ (for both ions), whereas a $\gamma$ as high as 350 might not be necessary [@BurguetCastell:2005pa]. In addition, DESY might be used as a beta beam source, which opens new possibilities as long as $L \gtrsim 700 \, \mathrm{km}$. For the US, baselines such as FNAL-Soudan, FNAL-Ash River, or FNAL-Homestake are perfect for a beta beam experiment irrespective of the discussed ${\sin^22\theta_{13}}$ case. Since the Tevatron allows for higher $\gamma$’s than the SPS, [($^{18}$Ne, $^6$He)]{} might be used at our nominal ion source luminosity. For example, if a beta beam is directed towards a possibly existing large water Cherenkov detector at the Homestake mine, a $\gamma$ as low as 190 could be sufficient. Compared to a wide band beam, which is limited by the proton intensity and target power, the $\gamma$ can be chosen high enough to allow for all measurements in any discussed ${\sin^22\theta_{13}}$ case. For Japan, a baseline to Korea is perfectly suited for a beta beam, while the T2K baseline of 295 km is too short. Compared to its alternatives, a beta beam might be the most flexible approach to measure all remaining quantities for large ${\sin^22\theta_{13}}$. For a given ion pair, source luminosity, and ${\sin^22\theta_{13}}$ case, we obtain a certain minimal $\gamma$ which allows us to measure all remaining performance indicators with sufficient precision. The resulting minimal $\gamma$’s required to outperform almost any alternative superbeam or neutrino factory setup are not unrealistically high. Therefore, there might be a clear neutrino oscillation physics case for the beta beam if ${\sin^22\theta_{13}}$ turns out to be large. In addition, synergies with nuclear physics applications may make a beta beam the most attractive alternative.
For example, a low $\gamma$ beta beam (or an off-axis beta beam) could be used to obtain complementary information on neutrino-nucleus interactions, which might be even relevant for $0\nu\beta\beta$ experiments [@Volpe:2003fi; @Serreau:2004kx].

[**Acknowledgments.** ]{} I would like to acknowledge support from the Emmy Noether program of Deutsche Forschungsgemeinschaft.
---
abstract: 'The aim of this study is to improve the prediction of near-wall mean streamwise velocity profile $U^+$ by using a simple method. The $U^+$ profile is obtained by solving the momentum equation which is written as an ordinary differential equation. An eddy viscosity formulation based on a near-wall turbulent kinetic energy $k^+$ function (R. Absi, Analytical solutions for the modeled $k$-equation, ASME J. Appl. Mech. **75**, 044501, 2008) and the van Driest mixing length equation (E.R. van Driest, On turbulent flow near a wall, J. Aero. Sci. **23**, 1007, 1956) is used. The parameters obtained from the $k^+$ profiles are used for the computation of $U^+$ (variables with the superscript of $+$ are those nondimensionalized by the wall friction velocity $u_\tau$ and the kinematic viscosity $\nu$). Comparisons with DNS data of fully-developed turbulent channel flows for $109 < Re_{\tau} < 2003$ show good agreement (where $Re_{\tau}$ denotes the friction Reynolds number defined by $u_\tau$, $\nu$ and the channel half-width $\delta$).'
author:
- Rafik Absi
title: A simple eddy viscosity formulation for turbulent boundary layers near smooth walls
---

**NOMENCLATURE**

$A_k^+$, $A_l^+$, $B$, $C$, $C_{\nu}$ = coefficients\
$k$ = turbulent kinetic energy\
$l_m$ = mixing length\
$P$ = pressure\
$Re_{\tau}$ = friction Reynolds number\
$x$, $y$ = coordinates in respectively the streamwise and wall normal directions\
$U$, $V$ = mean velocity components respectively in the $x$ and $y$ directions\
$u_\tau$ = wall friction velocity\
$\delta$ = channel half-width\
$\kappa$ = Kármán constant ($\approx 0.4$)\
$\nu$ = kinematic viscosity\
$\nu_t$ = eddy viscosity\
$\rho$ = density\
$\tau$ = shear stress\
All variables with the superscript of $+$ are those nondimensionalized by $u_\tau$ and $\nu$

Introduction
============

Turbulent flows are significantly affected by the presence of walls [@Hinze].
Successful predictions of turbulence models used for wall-bounded turbulent flows depend on an accurate description of the flow in the near-wall region. Numerous experiments on fully-developed turbulent channel flows show that the near-wall region can be subdivided into three layers: a viscous sublayer (for a distance from the wall $y^+ < 5$), where the mean velocity $U^+$ can be approximated by $U^+ = y^+$ and the turbulent kinetic energy $k^+$ by a quadratic variation $k^+ \approx y^{+ 2}$ [@Hanjalic]; a fully-turbulent or outer layer (for $y^+ > 30$ up to an upper limit), where $U^+$ can be correctly approximated by the logarithmic profile [@Tennekes] and $k^+$ by an exponentially decaying function [@AbsiJAM]; and, between these two layers, a buffer layer, where $k^+$ can be accurately predicted by an analytical function [@AbsiJAM]. The aim of this Note is to improve the prediction of $U^+$ by using a simple and accurate method. The $U^+$ profile will be obtained from the resolution of the momentum equation. An eddy viscosity formulation based on a near-wall turbulent kinetic energy $k^+$ function [@AbsiJAM], which was validated against DNS data for $109 < Re_{\tau} < 642$ and $y^+ < 20$, and the van Driest mixing length equation will be used. The values of $U^+$ and $k^+$ at an upper limit of the buffer layer could be used as boundary conditions for a turbulence closure model applied in the outer layer.\
The test case is the fully developed plane channel flow, which is considered to be the simplest and most idealized boundary layer flow. Reynolds number effects on wall turbulence have been investigated by many experimental and computational studies. A review of turbulence closure models for wall-bounded shear flows was presented by Patel *et al.* (1985) [@Patel], and experiments in the range $190 < Re_{\tau} < 1900$ were performed by Wei and Willmarth (1989) [@Wei] to investigate the effects of the Reynolds number very near the wall.
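The classical two-layer picture recalled above can be written down directly; a minimal sketch (the log-law constants $2.5$ and $5.0$ are the ones quoted in the figure captions below, and returning `None` in the buffer layer simply signals that a dedicated model is needed there):

```python
import math

def u_plus_two_layer(y_plus):
    """Classical near-wall description of the mean velocity: linear in the
    viscous sublayer (y+ < 5), logarithmic in the outer layer (y+ > 30)."""
    if y_plus < 5.0:
        return y_plus                            # U+ = y+
    if y_plus > 30.0:
        return 2.5 * math.log(y_plus) + 5.0      # U+ = 2.5 ln(y+) + 5.0
    return None  # buffer layer: neither limit is accurate there

assert u_plus_two_layer(3.0) == 3.0
assert abs(u_plus_two_layer(100.0) - 16.513) < 1e-3
```

The gap left by the `None` branch is exactly what the analytical buffer-layer treatment of this Note is meant to fill.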
There are several DNS studies of plane channel flows which have improved our knowledge of the boundary layer dynamics. The DNS was performed at $Re_{\tau} = 180$ by Kim *et al.* (1987) [@Kim], up to $Re_{\tau} = 590$ by Moser *et al.* (1999) [@Moser], up to $Re_{\tau} = 642$ by Iwamoto *et al.* (2002) [@Iwamoto], up to $Re_{\tau} = 950$ by del Álamo *et al.* (2004) [@delAlamo], and recently at $Re_{\tau} = 2003$ by Hoyas and Jiménez (2006) [@Hoyas].

Model equations
===============

We consider a steady uniform fully developed plane channel flow (i.e. the flow between two infinitely large plates, fig. \[fig:Sketch\]), where $x$ and $y$ are respectively the coordinates in the streamwise and wall normal directions and the corresponding mean velocity components are respectively $U$ and $V$. The channel half width is $\delta$ (which can represent the boundary layer thickness), and the flow is driven by a pressure gradient in the streamwise direction.

![\[fig:Sketch\] Sketch of the flow geometry for plane channel flow. ](Fig0.jpg){width="10cm" height="6cm"}

Momentum equation
-----------------

DNS data (Fig. \[fig:UpVp\]) [@Iwamoto] for $109 < Re_{\tau} < 642$ show that $V \approx 0$ for $y^+ < 20$. By taking $V = 0$, the streamwise momentum equation becomes $$\begin{aligned} (1 / \rho) \partial_x P = \partial_y \left( (\nu + \nu_t) \partial_y U \right) \label{NS3}\end{aligned}$$ where $\nu_t$ is the eddy viscosity, $P$ the pressure and $\rho$ the density. With the shear stress $\tau$, we write Eq. (\[NS3\]) as $$\begin{aligned} \partial_x P = \partial_y \tau \label{NS4}\end{aligned}$$ where $\tau = \rho \: (\nu + \nu_t) \partial_y U$. For a constant $\partial_x P$, by integrating Eq. (\[NS4\]) between $\tau(y=0) = \tau_w$ and $\tau(y=\delta) = 0$, we obtain $\tau_w = - \delta \: \partial_x P$ and therefore $u_{\tau} = \sqrt{(\delta/\rho) (- \partial_x P)}$.

![\[fig:UpVp\] DNS data [@Iwamoto] of mean velocity profiles for $109 < Re_{\tau} < 642$.
Bottom figure, $v_{mean}^+(y^+)=V^+(y^+)$; Top figure, $u_{mean}^+(y^+)=U^+(y^+)$, dash-dotted lines, $U^+ = y^+$ and $U^+ = 2.5 \: ln(y^+)+5.0$ (figure from [@Iwamoto2]). ](UpVp.jpg){width="8cm" height="12cm"}

Integrating Eq. (\[NS4\]) between $y$ and $\delta$ gives the linear shear-stress profile $\tau = \tau_w \left( 1 - y/\delta \right)$; with $\tau = \rho \: (\nu + \nu_t) \partial_y U$ this yields $$\begin{aligned} \frac{d U}{d y} = \frac{u_{\tau}^2}{\nu + \nu_t} \: \left(1 - \frac{y}{\delta}\right) \label{NS5}\end{aligned}$$ Or, in wall units, $$\begin{aligned} \displaystyle \frac{d U^+}{d y^+} = \frac{1} {1 + \nu_t^+} \: \left( 1 - \frac{y^+} {Re_{\tau}} \right) \label{NSF}\end{aligned}$$ where $U^+ = U / u_{\tau}$, $y^+ = y \: u_{\tau} / \nu$ and $\nu_t^+ = \nu_t / \nu$. The resolution of the ordinary differential equation (\[NSF\]) needs the dimensionless eddy viscosity $\nu_t^+$.

A near-wall eddy viscosity formulation
--------------------------------------

The eddy viscosity is given by $$\begin{aligned} \displaystyle \nu_t = C_{\nu} \sqrt{k} \: l_m \label{nutu}\end{aligned}$$ where $l_m$ is the mixing length and $C_{\nu}$ a coefficient. On the one hand, the mixing length is given by the van Driest equation $$\begin{aligned} \displaystyle l_m = \kappa y \left( 1 - e^{\displaystyle - y^+ / A_l^+} \right) \label{lmVD}\end{aligned}$$ where $\kappa$ is the Kármán constant ($\approx 0.4$) and $A_l^+ = 26$. We write $\nu_t^+$ from equations (\[nutu\]) and (\[lmVD\]) as $$\begin{aligned} \displaystyle \nu_t^+ = \frac{\nu_t}{\nu} = C_{\nu} \sqrt{k^+} \kappa \frac{y u_{\tau}}{\nu} \left( 1 - e^{\displaystyle - y^+ / A_l^+}\right) = C_{\nu} \sqrt{k^+} \: l_m^+ \label{nutplus}\end{aligned}$$ where $k^+ = k / u_{\tau}^2$ and $l_m^+ = \kappa y^+ \left( 1 - e^{\displaystyle - y^+ / A_l^+}\right)$.

![\[fig:kpyp\] Turbulent kinetic energy $k^+(y^+)$ for different Reynolds numbers (for $y^+ < 20$). Symbols, DNS data. $Re_{\tau} = 150$, diamonds [@Iwamoto], dash-dotted line Eq. (\[kpluswall\]) with $A_k^+=8$ and $B=0.116$; $Re_{\tau} = 395$, squares [@Iwamoto], dashed line Eq.
(\[kpluswall\]) with $A_k^+=8$ and $B=0.132$; $Re_{\tau} = 642$, circles [@Iwamoto], solid line Eq. (\[kpluswall\]) with $A_k^+=8$ and $B=0.14$; $Re_{\tau} = 2003$, $\times$ [@Hoyas], dashed line Eq. (\[kpluswall\]) with $A_k^+=8$ and $B=0.158$; Thin dashed line, $k^+=0.1 y^{+ 2}$. ](Figkpyp2.jpg){width="12cm" height="10cm"}

On the other hand, from the modeled $k$-equation, we developed a function for $k^+$ for $y^+ < 20$ [@AbsiJAM]. For steady channel flows, we write the $k$-equation as $\displaystyle \partial_y \left( \nu_t \partial_y k \right) = - \left(G + \partial_y \left( \nu \: \partial_y k \right) - \epsilon \right)$, where $G$ and $\epsilon$ are respectively the energy production and dissipation. With an approximation for the right-hand side as $\left(G + d_y \left( \nu \: d_y k \right) - \epsilon \right) \approx 1 / y^2$ and by integrating, we obtained [@AbsiJAM] $$\begin{aligned} \displaystyle k^+ = B \: y^{+ 2 C} \: e^{\displaystyle( - y^+ / A_k^+)} \label{kplus}\end{aligned}$$ where $A_k^+$, $B$ and $C$ are coefficients. Examination of Eq. (\[kplus\]) against DNS data of channel flows shows that for $y^+ \leq 20$, $C = 1$, $A_k^+ = 8$ and $B$ is $Re_{\tau}$-dependent [@AbsiJAM]. We therefore write $k^+$ for $y^+ \leq 20$ as $$\begin{aligned} \displaystyle k^+ = B \; y^{+ 2} \; e^{\displaystyle( - y^+ / A_k^+)} \label{kpluswall}\end{aligned}$$ Table \[tab:table1\] gives the values of $B(Re_{\tau})$ obtained from Eq. (\[kpluswall\]) and DNS data [@Iwamoto], [@Hoyas], [@AbsiJAM]. We propose the following function, Eq. (\[B\]), for the coefficient $B$ $$\begin{aligned} B(Re_{\tau}) = C_{B1} ln(Re_{\tau}) + C_{B2} \label{B}\end{aligned}$$ where $C_{B1}$ and $C_{B2}$ are constants. The calibration (Fig. 4) gives $C_{B1} = 0.0164$ and $C_{B2} = 0.0334$.

  $Re_{\tau}$   109    150     298     395     642    2003
  ------------- ------ ------- ------- ------- ------ -------
  $B$           0.11   0.116   0.127   0.132   0.14   0.158

  : \[tab:table1\] Values of coefficient $B(Re_{\tau})$ obtained from Eq.
(\[kpluswall\]) and DNS data. ![\[fig:Usol\] Dependency of the coefficient B on the Reynolds number $Re_{\tau}$. o, values obtained from DNS data; Curve, proposed function (10). ](Fig4.jpg){width="12cm" height="10cm"} We noticed that the series expansion of the exponential in Eq. (\[kpluswall\]) at the first order gives $k^+=B y^{+ 2}-(B / A_k^+) y^{+ 3}$. This equation is similar to the approximation deduced from the continuity equation and the no-slip condition [@Hanjalic] (page 608). However, the quadratic variation of $k$ (first term in the right-hand side) is valid only in the immediate vicinity of the wall ($y^+ < 5$). Eq. (\[kpluswall\]) is therefore a more general and more accurate solution (Fig. \[fig:kpyp\]).\ With Eq. (\[kpluswall\]), we write the dimensionless eddy viscosity as $$\begin{aligned} \displaystyle \nu_t^+ = C_{\nu} \: \kappa \: B^{0.5} \: y^{+ 2} \: e^{\displaystyle - y^+ / (2 A_k^+)} \left( 1 - e^{\displaystyle - y^+ / A_l^+}\right) \label{nutplus2}\end{aligned}$$ Results and discussions ======================= ![\[fig:Usol\] Mean streamwise velocity profile $U^+(y^+)$ for $Re_{\tau} = 642$. o, DNS data [@Iwamoto]. Curves: bold red solid line, solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]) ($C_{\nu}=0.3$, $A_l^+=26$, $A_k^+=8$ and $B=0.14$); dashed line, $U^+ = y^+$; dash-dotted line, $U^+ = 2.5 \: ln(y^+)+5.0$. ](FigUsol.jpg){width="12cm" height="10cm"} \(a) ![\[fig:Usol3\] Mean streamwise velocity profiles $U^+(y^+)$ for $Re_{\tau} \geq 395$. Symbols, DNS data. Curves, Thin dashed line, $U^+ = y^+$. Red solid lines, solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]) ($A_l^+=26$, $A_k^+=8$); (a) $Re_{\tau} = 395$, squares [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.132$); (b) $Re_{\tau} = 642$, circles [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.14$); (c) $Re_{\tau} = 2003$, $\times$ [@Hoyas], solid line ($C_{\nu}=0.3$, $B=0.158$). 
](FigU400.jpg "fig:"){width="7cm" height="7cm"} (b) ![\[fig:Usol3\] Mean streamwise velocity profiles $U^+(y^+)$ for $Re_{\tau} \geq 395$. Symbols, DNS data. Curves, Thin dashed line, $U^+ = y^+$. Red solid lines, solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]) ($A_l^+=26$, $A_k^+=8$); (a) $Re_{\tau} = 395$, squares [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.132$); (b) $Re_{\tau} = 642$, circles [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.14$); (c) $Re_{\tau} = 2003$, $\times$ [@Hoyas], solid line ($C_{\nu}=0.3$, $B=0.158$). ](FigU650.jpg "fig:"){width="7cm" height="7cm"}\ (c) ![\[fig:Usol3\] Mean streamwise velocity profiles $U^+(y^+)$ for $Re_{\tau} \geq 395$. Symbols, DNS data. Curves, Thin dashed line, $U^+ = y^+$. Red solid lines, solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]) ($A_l^+=26$, $A_k^+=8$); (a) $Re_{\tau} = 395$, squares [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.132$); (b) $Re_{\tau} = 642$, circles [@Iwamoto], solid line ($C_{\nu}=0.3$, $B=0.14$); (c) $Re_{\tau} = 2003$, $\times$ [@Hoyas], solid line ($C_{\nu}=0.3$, $B=0.158$). ](FigU2000.jpg "fig:"){width="7cm" height="7cm"} \(a) ![\[fig:Usol2\] Mean streamwise velocity profiles $U^+(y^+)$ for $Re_{\tau} < 395$. Symbols, DNS data; Curves, Thin dashed line, $U^+ = y^+$; Red solid lines, solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]) ($A_l^+=26$, $A_k^+=8$); (a) $Re_{\tau} = 109$, $+$ [@Iwamoto], solid line ($C_{\nu}=0.2$, $B=0.11$); (b) $Re_{\tau} = 150$, diamonds [@Iwamoto], solid line ($C_{\nu}=0.25$, $B=0.116$); ](FigU110.jpg "fig:"){width="7cm" height="7cm"} (b) ![\[fig:Usol2\] Mean streamwise velocity profiles $U^+(y^+)$ for $Re_{\tau} < 395$. Symbols, DNS data; Curves, Thin dashed line, $U^+ = y^+$; Red solid lines, solution of Eq. (\[NSF\]) with Eq. 
(\[nutplus2\]) ($A_l^+=26$, $A_k^+=8$); (a) $Re_{\tau} = 109$, $+$ [@Iwamoto], solid line ($C_{\nu}=0.2$, $B=0.11$); (b) $Re_{\tau} = 150$, diamonds [@Iwamoto], solid line ($C_{\nu}=0.25$, $B=0.116$); ](FigU150.jpg "fig:"){width="7cm" height="7cm"} Predicted mean streamwise velocity $U^+(y^+)$ profiles are obtained from Eq. (\[NSF\]) and Eq. (\[nutplus2\]). Figure (\[fig:Usol\]) presents the mean streamwise velocity profile $U^+(y^+)$ for $Re_{\tau} = 642$. The solution of Eq. (\[NSF\]) with Eq. (\[nutplus2\]), where $A_l^+=26$, $A_k^+=8$, $B=0.14$ and $C_{\nu}=0.3$, is compared to DNS data [@Iwamoto]. The predicted $U^+(y^+)$ profile shows good agreement with DNS data. Values of $A_k^+=8$ and $B=0.14$ are those of the $k^+$ profile (Fig. \[fig:kpyp\]). In order to verify the dependency of the coefficient $C_{\nu}$ on the Reynolds number $Re_{\tau}$, we present predicted $U^+(y^+)$ profiles for different $Re_{\tau}$ (Fig \[fig:Usol3\]). Profiles of figure (\[fig:Usol3\]) for $Re_{\tau} = 395$, $Re_{\tau} = 642$ and $Re_{\tau} = 2003$ was obtained with $C_{\nu}=0.3$ and values of $A_k^+=8$ and $B$ from the $k^+$ profiles (Fig. \[fig:kpyp\]). It seems that $C_{\nu}$ is independent of the Reynolds number for $Re_{\tau} \geq 395$ and is equal to $0.3$. The values of $B(Re_{\tau})$ obtained from the $k^+$ profiles are suitable for computation of $U^+(y^+)$ profiles. However, for $Re_{\tau} = 150$ and $Re_{\tau} = 109$ (Fig \[fig:Usol2\]) the required values of $C_{\nu}$ are respectively $0.25$ and $0.2$. Therefore, $C_{\nu}$ seems to be $Re_{\tau}$-dependent for $Re_{\tau}$ less than $395$. This dependency seems to be associated to low-Reynolds-number effects. Indeed, Moser *et al.* [@Moser] showed that low-Reynolds-number effects are absent for $Re_{\tau} > 390$. We notice that for $y^+ < 20$, the required $C_{\nu}$ is different from $C_{\mu}^{1/4}$ (with $C_{\mu}$ is the empirical constant in the $k$-$\epsilon$ model equal to $0.09$). 
For $Re_{\tau} \geq 395$, $C_{\nu} = C_{\mu}^{1/2}$.

Conclusion
==========

In summary, mean streamwise velocity profiles $U^+$ were obtained by solving the momentum equation written as an ordinary differential equation. The analytical eddy viscosity formulation is based on an accurate near-wall function for the turbulent kinetic energy $k^+$ and the van Driest mixing length equation. The parameters obtained from the calibration of $k^+$ were used for the computation of $U^+$. Comparisons with DNS data of fully-developed turbulent channel flows show good agreement. Our simulations show that for $Re_{\tau} \geq 395$ the coefficient of proportionality $C_{\nu}$ in the eddy viscosity equation is independent of $Re_{\tau}$ and equal to $0.3$. However, for $Re_{\tau} < 395$, the coefficient $C_{\nu}$ is $Re_{\tau}$-dependent. The values of $k^+(y^+=20)$ and $U^+(y^+=20)$ could be used as boundary conditions for a turbulence closure model applied for $y^+ \geq 20$.\
**ACKNOWLEDGEMENTS**\
The author would like to thank N. Kasagi, K. Iwamoto, Y. Suzuki from the University of Tokyo and J. Jimenez, S. Hoyas from Universidad Politécnica de Madrid, for providing the DNS data.

J. O. Hinze, Turbulence (McGraw-Hill, 1975). K. Hanjalić and B. E. Launder, “Contribution towards a Reynolds-stress closure for low-Reynolds-number turbulence," J. Fluid Mech. **74**, 593 (1976). H. Tennekes and J. L. Lumley, A First Course in Turbulence (MIT Press, 1972). R. Absi, “Analytical solutions for the modeled $k$-equation," ASME J. Appl. Mech. **75**, 044501 (2008). E. R. van Driest, “On turbulent flow near a wall," J. Aero. Sci. **23**, 1007 (1956). V. C. Patel, W. Rodi and G. Scheuerer, “Turbulence models for near-wall and low Reynolds number flows: A review," AIAA J. **23**, 1308 (1985). T. Wei and W. W. Willmarth, “Reynolds-number effects on the structure of a turbulent channel flow," J. Fluid Mech. **204**, 57 (1989). J. Kim, P. Moin and R. D.
Moser, “Turbulent statistics in fully developed channel flow at low Reynolds number," J. Fluid Mech. **177**, 133 (1987). R. D. Moser, J. Kim and N. N. Mansour, “Direct numerical simulation of turbulent channel flow up to $Re_{\tau} = 590$," Phys. Fluids **11**, 943 (1999). K. Iwamoto, Y. Suzuki and N. Kasagi, “Reynolds number effect on wall turbulence: toward effective feedback control," Int. J. Heat and Fluid Flow **23**, 678 (2002). J. C. del Álamo, J. Jiménez, P. Zandonade and R. D. Moser, “Scaling of the energy spectra of turbulent channels," J. Fluid Mech. **500**, 135 (2004). S. Hoyas and J. Jiménez, “Scaling of velocity fluctuations in turbulent channels up to $Re_{\tau} =2003$," Phys. Fluids **18**, 011702 (2006). K. Iwamoto, “Database of fully developed channel flow," THTLAB Internal Report No. ILR-0201, Dept. Mech. Eng., Univ. Tokyo (2002).
--- abstract: | We study $N=2$, $d=4$ attractor equations for the quantum corrected two-moduli prepotential $\mathcal{F}=st^2+i\lambda$, with $\lambda$ real, which is the only correction that preserves the axion shift symmetry and modifies the geometry. In the classical case the black hole effective potential is known to have a flat direction. We find that in the presence of $D0-D6$ branes the black hole potential exhibits a flat direction in the quantum case as well. It corresponds to non-BPS $Z\neq 0$ solutions to the attractor equations. Unlike the classical case, the solutions acquire non-zero values of the axion field. For the cases of $D0-D4$ and $D2-D6$ branes the classical flat direction reduces to separate critical points which turn out to have a vanishing axion field. --- \ \ \ \ \ Introduction {#Intro} ============ The attractor mechanism was first described in the seminal papers [@FKS]-[@FGK] and is now the object of intense study (for a comprehensive list of references, see *e.g.* [@bellucci2]). While originally this mechanism was discovered in the context of extremal BPS black holes, it was later found to be present even for non-BPS ones. Unlike BPS black holes, such new attractors do not saturate the BPS bound and thus, when considering a supergravity theory, they break all supersymmetries at the black hole event horizon [@BPS]. The attractor equations are given by the extremality condition [@FGK] $$\label{Mon-1} \phi _{H}\left( p,q \right):\qquad \left. \frac{\partial V_{BH}\left( \phi, p ,q \right)}{\partial\phi^a}\right| _{\phi =\phi _{H}\left( p,\,q \right)}=0$$ of the so-called black hole potential $V_{BH}$, which is a real function of the moduli $\phi^a$ and of the magnetic $p^{\Lambda}$ and electric $q_{\Lambda}$ charges.
The crucial condition for a critical point $\phi _{H}\left(p,q\right)$ to be an attractor in the strict sense is that the Hessian matrix $$\label{Mon-2} \mathcal{H}_{ab}\left( p, q \right) = \nabla_a \nabla_b V_{BH}\rule[-0.5em]{0.4pt}{1.5em}_{\;\phi=\phi_H} = \partial_a \partial_b V_{BH}\rule[-0.5em]{0.4pt}{1.5em}_{\;\phi=\phi_H}$$ of $V_{BH}$ evaluated at the critical point (\[Mon-1\]) be positive definite. In $N=2$, $d=4$ Maxwell-Einstein supergravities based on homogeneous scalar manifolds, the Hessian matrix has in general either positive or zero eigenvalues. The latter correspond to massless Hessian modes, which have been proven to be flat directions of $V_{BH}$ [@Ferrara-Marrani-1; @ferrara4]. The presence of flat directions does not contradict the essence of the attractor mechanism: although the moduli might not be stabilized, the value of the entropy does not change when the moduli move along the flat directions of $V_{BH}$. Indeed, in $N=2$ $d=4$ supergravity, the black hole entropy is related to its potential through the formula [@FGK] $$S_{BH}\left( p,q\right) =\pi \,V_{BH}\left( \phi ,p,q\right) \rule[-0.5em]{0.4pt}{1.5em}_{\;\phi =\phi _{H}}. \label{SBH}$$ Therefore, the presence or absence of flat directions does not affect the value of the entropy. Consequently, one may allow zero eigenvalues of the Hessian matrix as well. Actually, this phenomenon always occurs in $N>2$-extended, $d=4$ supergravities, also for $\frac{1}{N}$-BPS configurations, and it can be understood through an $N=2$ analysis, as being due to the $N=2$ hypermultiplets always present in these theories [@Ferrara-Marrani-1; @bellucci2]. In $N=2$, $d=4$ supergravity with more than one vector multiplet coupled to the supergravity multiplet, the black hole potential $V_{BH}$ has flat directions provided that the critical points exist [@ferrara4; @bellucci1]. They correspond to non-BPS states with non-vanishing central charge.
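The interplay between massless Hessian modes and flat directions can be illustrated with a toy example (purely illustrative, not a supergravity $V_{BH}$): for a potential whose minima form a curve, every point on the curve is critical, the Hessian there has one vanishing eigenvalue, and the minimum value, the analogue of the entropy, stays constant along the curve:

```python
import numpy as np

def V(x, y):
    # toy potential with a flat valley of minima along the curve x*y = 1
    return (x * y - 1.0) ** 2

def hessian(f, x, y, h=1e-5):
    """2x2 Hessian of f at (x, y) by central finite differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

# every point on the valley is critical; the Hessian there has one
# (numerically) zero and one positive eigenvalue, while V stays constant
for x in (0.5, 1.0, 2.0):
    w = np.linalg.eigvalsh(hessian(V, x, 1.0 / x))
    assert abs(V(x, 1.0 / x)) < 1e-12    # "entropy" unchanged along the valley
    assert abs(w[0]) < 1e-3 and w[1] > 0.1
```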
The simplest model possessing a flat direction is that with two vector multiplets, i.e. the so-called $st^2$ model. This is the model we treat in the present paper, which might be thought of as a continuation of the investigation started in an earlier paper [@BFMS1], where we found a multiplicity of attractors in the presence of quantum corrections. This effect is related to the fact that, when quantum corrections are introduced, the scalar manifold is not simply connected any more. Even in the classical case, solutions of the attractor equations are known only for a few models. For example, in the framework of special Kähler d-geometry, the supersymmetric attractor equations are solved in [@Shmakova]. Non-supersymmetric ones are solved completely both for the $t^3$ model [@Saraikin-Vafa-1] and for the $stu$ one [@stu-unveiled], taking advantage of the presence of a large duality symmetry. States with vanishing central charge are investigated in [@BMOS-1; @stu-unveiled]. As already mentioned, in [@BFMS1] we began the study of a quantum $t^{3}$ model of $N=2$ $d=4$ supergravity with the prepotential[^1] $$F(X)=\frac{(X^{1})^{3}}{X^{0}}+i\lambda (X^{0})^{2}=(X^{0})^{2}\left( t^{3}+i\lambda \right) ,\qquad \lambda \in \mathbb{R}.$$ There it was argued that this is the only possible correction preserving the axion shift symmetry and that it cannot be reabsorbed by a field redefinition [@Peccei-Quinn; @CFG]. The black hole potential of this model does not possess any flat direction; nevertheless, the quantum contribution produces a multiplicity of attractors. This effect is similar to that observed in [@G]. Due to this effect, others arise, such as “transmutations” and “separation” of attractors.
In the $st^{2}$ model they appear as well, but here we are mostly concerned with another phenomenon, not present in the $t^{3}$ model: how the flat direction of the $st^{2}$ model responds to the insertion of quantum corrections. The quantum corrected $st^{2}$ model that we consider is based on the holomorphic prepotential $$F(X)=\frac{X^{1}\left( X^{2}\right) ^{2}}{X^{0}}+i\lambda (X^{0})^{2}=(X^{0})^{2}\left( st^{2}+i\lambda \right) ,\qquad \lambda \in \mathbb{R}.$$ The complex moduli $s$ and $t$ span the rank-$2$ special Kähler manifold $\left( SU\left( 1,1\right) /U\left( 1\right) \right) ^{2}$. When $\lambda =0$ this formula gives the classical expression for the prepotential, with which we start the next section. Knowing the prepotential, one may easily calculate the corresponding black hole potential[^2] [@FGK] $$V_{BH}=e^{K}\left[ W\bar{W}+g^{a\bar{b}}\nabla _{a}W\bar{\nabla}_{\bar{b}}\bar{W}\right] \label{VBH}$$ in terms of the superpotential $W$ and the Kähler potential $K$ $$W=q_{\Lambda }X^{\Lambda }+p^{\Lambda }F_{\Lambda },\qquad K=-\ln \left[ -i\left( X^{\Lambda }\bar{F}_{\Lambda }-\bar{X}^{\Lambda }F_{\Lambda }\right) \right] . \label{WK}$$ $D0-D4$ branes {#Sect2} ============== This brane configuration corresponds to vanishing charges $q_a$ and $p^0$.
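Before analyzing the brane configurations, the Kähler potential of eq. (\[WK\]) can be cross-checked symbolically in special coordinates ($X^0=1$, $X^1=s$, $X^2=t$; $\bar s,\bar t$ treated as independent symbols). The check below, a side computation rather than part of the derivation, confirms that $e^{-K}=-i(s-\bar s)(t-\bar t)^2-4\lambda$, i.e. the quantum term enters $e^{-K}$ only through a constant shift:

```python
import sympy as sp

s, sb, t, tb = sp.symbols('s sbar t tbar')
lam = sp.symbols('lam', real=True)

def sections(s, t, eps):
    """X^Lambda and F_Lambda = dF/dX^Lambda at X^0 = 1 for the prepotential
    F = X^1 (X^2)^2 / X^0 + i*lam*(X^0)^2.
    eps = +1 gives the holomorphic sections, eps = -1 their conjugates."""
    X = [1, s, t]
    F = [-s * t**2 + 2 * eps * sp.I * lam,   # F_0
         t**2,                                # F_1
         2 * s * t]                           # F_2
    return X, F

X, F = sections(s, t, +1)
Xb, Fb = sections(sb, tb, -1)

# e^{-K} = -i (X^L Fbar_L - Xbar^L F_L)
eKinv = sp.expand(-sp.I * (sum(x * fb for x, fb in zip(X, Fb))
                           - sum(xb * f for xb, f in zip(Xb, F))))
target = sp.expand(-sp.I * (s - sb) * (t - tb)**2 - 4 * lam)
assert sp.simplify(eKinv - target) == 0
```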
The quartic invariant in this case is given by $$I_4= 4 q_0 p^1 \left( p^2\right)^2.$$ When it is negative, the classical black hole potential possesses a non-compact flat direction related to the $SO\left( 1,1\right)$ manifold  [@ferrara4] $$\label{flatDir1} \begin{array}{l} \displaystyle{{\tt Im\,}}s = \pm\,\sqrt{-\frac{p^1 q_0}{(p^2)^2}}\; \frac{\displaystyle ({{\tt Re\,}}t)^2 + \frac{q_0}{p^1}}{\displaystyle({{\tt Re\,}}t)^2 - \frac{q_0}{p^1}}\,, \qquad {{\tt Re\,}}s = \frac{p^1 q_0}{p^2}\, \frac{2{{\tt Re\,}}t}{\displaystyle({{\tt Re\,}}t)^2 - \frac{q_0}{p^1}}\,, \\ \displaystyle {{\tt Im\,}}t = \pm \sqrt{-\frac{q_0}{p^1}-({{\tt Re\,}}t)^2} \end{array}$$ parameterized, for instance, by the real part of the modulus $t$. Naturally, it solves the criticality condition of the black hole potential (\[VBH\]) evaluated when $\lambda=0$ $$\label{VBHcrit} \frac{\partial V_{BH}}{\partial s} = 0, \qquad \frac{\partial V_{BH}}{\partial t} = 0$$ and corresponds to a non-BPS state. The black hole entropy (\[SBH\]) turns out not to depend on ${{\tt Re\,}}t$ $$S_{BH} = \pi \sqrt{-I_4}=2 \pi \sqrt{-q_0p^1\left( p^2\right)^2}$$ in complete agreement with the attractor mechanism paradigm. When switching the quantum parameter $\lambda$ on, it is convenient to pass to the rescaled moduli $y^1,y^2$ and the quantum parameter $\alpha$ $$\label{Lun-1} s = p^1\sqrt{- \frac{q_0}{p^1(p^2)^2}}\,y^1,\quad t = p^2\sqrt{- \frac{q_0}{p^1(p^2)^2}}\,y^2,\quad \lambda = q_0\sqrt{-\frac{q_0}{p^1(p^2)^2}}\;\alpha\,$$ in order to factorize the dependence of $W$ and $V_{BH}$ on the charges $$\label{WVfactorized} W = q_0 \left[\rule{0pt}{1em} 1 - 2 y^1 y^2 - (y^2)^2 \right], \qquad V_{BH} = \frac12\,\sqrt{-I_4}\;v(y,\bar y) = \sqrt{\strut -q_0 p^1 (p^2)^2}\; v(y,\bar y).$$ The expression for the black hole potential is quite cumbersome and not too illustrative, so we restricted ourselves to writing down explicitly only the superpotential. 
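The factorized superpotential in eq. (\[WVfactorized\]) follows from eq. (\[WK\]) with only $q_0$, $p^1$, $p^2$ switched on (so that $W=q_0+p^1 F_1+p^2 F_2$ with $F_1=t^2$, $F_2=2st$ in special coordinates), and can be cross-checked symbolically against the rescaling (\[Lun-1\]):

```python
import sympy as sp

q0, p1, p2, y1, y2 = sp.symbols('q0 p1 p2 y1 y2')
c = sp.sqrt(-q0 / (p1 * p2**2))      # rescaling factor of eq. (Lun-1)
s, t = p1 * c * y1, p2 * c * y2      # s and t in terms of y^1, y^2

# W = q_Lambda X^Lambda + p^Lambda F_Lambda with only q0, p1, p2 nonzero
W = q0 + p1 * t**2 + 2 * p2 * s * t

# the charge dependence factorizes as in eq. (WVfactorized)
assert sp.simplify(W - q0 * (1 - 2 * y1 * y2 - y2**2)) == 0
```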
The function $v(y,\bar y)$ is a rational function whose numerator is a polynomial of ninth degree and whose denominator is of eighth degree in $y^a$ and $\bar y^a$. It is therefore impractical to solve the attractor equations (\[VBHcrit\]) analytically. Nevertheless, numerical simulations show that all solutions to eqs. (\[VBHcrit\]) have vanishing values of the axion fields $$\label{axion-free} {{\tt Re\,}}y^1={{\tt Re\,}}y^2=0.$$ This differs from the classical case (\[flatDir1\]). With this assumption, the attractor equations become $$\label{crit-axion-free} \begin{array}{l} \displaystyle 4 \alpha^4 - \alpha^3 \left[ \rule{0pt}{1em} -4t_1^2 t_2 - 2t_2 (-3+t_2^2) + 2 t_1 (3 + t_2^2) \right] + \alpha^2 t_1 t_2 \left[ \rule{0pt}{1em} 5 + 32 t_1^2 t_2^2 + 11 t_2^4+ \right. \\ \displaystyle\phantom{4 \alpha^4+} \left. + t_1 (-6 t_2 + 26 t_2^3) \rule{0pt}{1em}\right] -4\alpha t_1^2t_2^3\left[ \rule{0pt}{1em} -1 + 3 t_2^2 + 2 t_1^2 t_2^2 + 2 t_2^4 + t_1 t_2 (9 + t_2^2)\right] + \\ \displaystyle \phantom{4 \alpha^4+} - 8 t_1^3 t_2^5 \left( -1 + t_2^4 \right)=0, \\[0.5em] \displaystyle 4 \alpha^4 + 4 \alpha^3 t_2 (-3 + t_2^2) + \alpha^2 t_2^2 \left[ \rule{0pt}{1em} 5 + (-6 + 32 t_1^2) t_2^2 + 32 t_1 t_2^3 + 5 t_2^4\right] - \\ \displaystyle \phantom{4 \alpha^4+} -4\alpha t_1 t_2^4 \left[ \rule{0pt}{1em} -1 + 6t_2^2 + 4t_1^2 t_2^2 - t_2^4 + 2 t_1 t_2 ( 3 + t_2^2)\right] + 8 t_1^2 t_2^6 \left( 1 - 2 t_1^2 t_2^2 + t_2^4 \right)=0, \end{array}$$ where for the sake of brevity we denoted $t_a={{\tt Im\,}}y^a$. Depending on the value of the parameter $\alpha$, the number of solutions to eqs. (\[crit-axion-free\]) and their stability change. The stable solutions have all eigenvalues of the Hessian matrix positive, while the unstable ones have one negative eigenvalue. In what follows we consider only stable solutions. Substituting stable solutions of (\[crit-axion-free\]) into eq.
(\[SBH\]) one gets the following behaviour of the entropy with respect to the quantum parameter (Fig. ). One can easily see that for $\alpha > 2/(3\sqrt3)$ there are two solutions to the attractor equations. The one having no classical limit is a 1/2-BPS solution. Such an effect, i.e. the appearance of a BPS solution when the quartic invariant $I_4$ is negative, was also observed in the quantum $t^3$ model [@BFMS1]. ![image](V.eps){width="45.00000%"}\ Figure: Plot of $v(y,\bar y)$ for $D0-D4$ branes\ ![image](W.eps){width="50.00000%"}\ Figure: Plot of $W$ for $D0-D4$ branes (in units of $q_0$)\ An interesting property is exhibited by the non-BPS solution with a positive value of the quantum parameter $\alpha$ (Fig. ). Evaluated on this solution, the superpotential does not depend on the parameter $\alpha$ and is equal to $$W=2q_0.$$ Generally, the superpotential without axion fields (\[axion-free\]) has the form $$W=q_0\left[ 1-2{{\tt Im\,}}y^1{{\tt Im\,}}y^2-({{\tt Im\,}}y^2)^2\right] .$$ By equating it to $2q_0$ one obtains the following relation between the moduli: $${{\tt Im\,}}y^1=-\frac{1+({{\tt Im\,}}y^2)^2}{2{{\tt Im\,}}y^2}\,,$$ which is consistent with the criticality condition (\[VBHcrit\]) provided $$\alpha -{{\tt Im\,}}y^2+({{\tt Im\,}}y^2)^3=0. \label{cubicImy2}$$ Such an algebraic equation has either one or three real solutions depending on the value of $\alpha$. When $\alpha$ is positive, only one of the solutions is stable. When it is negative, there is no stable solution of (\[cubicImy2\]). For completeness, we mention that there is another solution yielding an $\alpha$-independent superpotential, namely $W=0$, so that the central charge vanishes as well. It holds provided $$D_t W = \lambda \frac{D_s W} {\left( {{\tt Im\,}}t \right)^3}$$ which is nothing but a “quantum” generalization of the zero central charge condition found in [@BMOS-1].
Since this solution turns out to be unstable, we do not consider it further. To conclude this section, we consider the case in which the quartic invariant $I_4$ is positive. Classically, it is known to correspond to $1/2$-BPS solutions and thus in this case the black hole potential has no flat direction. Although it is unlikely that a flat direction might appear when introducing quantum corrections, we consider this case as well and list the results: 1. there exists one $1/2$-BPS solution, which is, naturally, stable [@FGK]. This solution pertains to the $t^3+i\lambda$ model [@BFMS1]. 2. there exists a stable non-BPS $Z=0$ solution with ${{\tt Im\,}}y^1= {{\tt Im\,}}y^2$, which does not have a classical limit. 3. there exists a stable non-BPS $Z=0$ solution, having its classical limit as found in [@BMOS-1]. 4. there exist two unstable non-BPS solutions, having no classical limit. They correspond to $\alpha$-independent values of the superpotential: either $2q_0$ or zero. The behaviour of the function $v$ related to the black hole potential via (\[WVfactorized\]) is presented in Fig. . The properties of the three solutions depicted there might be easily traced from the list above (remember that only stable solutions are depicted). To summarize, in the presence of only $D0-D4$ branes the flat direction of the classical black hole potential gets removed. ![image](VBPS.eps){width="35.00000%"}\ Figure: Plot of $v(y,\bar y)$ for $D0-D4$ branes, $I_4>0$. Unlike the classical case, there also appear $1/2$-BPS quantum solutions with $I_4<0$ and non-BPS ones with $I_4>0$. This fact was observed in [@BMOS-1]. Another feature not observed before is the $\alpha$-independent value of the superpotential. A question yet to be clarified is the correlation between the ground states of BPS and non-BPS solutions. As one sees from Fig. , when both states, BPS and non-BPS, are present simultaneously for $I_4<0$, the ground state energy of the BPS one is lower than that of the non-BPS one.
This is no longer valid for $I_4>0$: from Fig. one sees that there exists a value of $\alpha$ for which the BPS state has a lower energy than the non-BPS state, as well as a value of $\alpha$ for which the relation between the energies is opposite. $D2-D6$ branes ============== Let us now consider the situation in which only $D2$ and $D6$ branes are present. The quartic invariant $I_4$ in this case is equal to $$I_4 = - p^0 q_1 q_2^2.$$ Let us start with $I_4>0$, for which classically there exist 1/2-BPS and non-BPS $Z=0$ attractors. The black hole potential exhibits no flat directions. In the presence of $D6$ branes, when the quantum correction $\alpha$ is switched on, the attractor eqs. (\[VBHcrit\]) are hard to solve even numerically, due to the presence of the $D6$-brane charge $p^0$ (\[WK\]). Thus, the question of whether there is a quantum critical solution with positive $I_4$ remains open. Considering the case $I_4<0$, which classically admits only non-BPS attractors, one can simplify the analysis by defining rescaled moduli $y^a$ and a quantum parameter $\alpha$ as follows: $$s = \frac{y^1}{p^0\,q_1}\sqrt{\strut -I_4}\,,\quad t = \frac{y^2}{p^0\,q_2}\sqrt{\strut -I_4},\quad \lambda = \frac{\alpha}{(p^0)^2}\sqrt{\strut -I_4}. \label{Sat-eve1}$$ In the following treatment we choose $p^0$ to be positive. Classically, the black hole potential exhibits a non-compact flat direction $$\label{flatDir-elec} {{\tt Re\,}}y^1 = -\frac{{{\tt Re\,}}y^2}{1+({{\tt Re\,}}y^2)^2},\qquad {{\tt Im\,}}y^1 = -\frac12\frac{1+({{\tt Re\,}}y^2)^2}{1-({{\tt Re\,}}y^2)^2},\qquad {{\tt Im\,}}y^2 = \pm \sqrt{\strut 1-({{\tt Re\,}}y^2)^2},$$ which spans a one-dimensional manifold $SO(1,1)$ [@ferrara4]. The BH potential evaluated on eq.
(\[flatDir-elec\]) gives as usual $$\label{Sun-aft-1} V_{BH} = \sqrt{-I_4} = \sqrt{p^0 q_1 q_2^2}.$$ Introducing quantum effects destroys the classical flat direction (\[flatDir-elec\]), and the critical solutions are reduced to a discrete set of points (usually one or two) with vanishing axion fields ${{\tt Re\,}}y^a$. The domain of positivity of the metric is defined by the condition $$\label{Sat-eve2} \left[ \alpha -{{\tt Im\,}}y^1({{\tt Im\,}}y^2)^2\right] \left[ \alpha +2{{\tt Im\,}}y^1({{\tt Im\,}}y^2)^2\right] <0.$$ In the considered case $I_4<0$ with only $D2$ and $D6$ branes present, the dependence of the minimum value of the quantum corrected black hole potential $V_{BH}$ on the quantum parameter $\alpha$ is presented in Fig. . Numerical analysis does not support any evidence for the existence of solutions with , because such solutions fall outside the domain of positivity defined by eq. (\[Sat-eve2\]). ![image](VeffElettrico.eps){width="50.00000%"}\ Figure: Plot of $V_{BH}/\sqrt{-I_4}$ for $D2-D6$ branes and negative $I_4$. The plot for $\alpha<0$ and the short curve for $\alpha>0$ in Fig.  correspond to solutions of the attractor equations with $\alpha$-independent covariant derivatives of the superpotential $$D_a W = q_a,\quad a=1,2.$$ $D0-D6$ branes {#Sect4} ============== Let us now briefly analyze the $D0-D6$ brane configuration in the $st^2$ model. The corresponding charges $p^0$ and $q_0$ are those associated with the Kaluza-Klein vector arising through dimensional reduction from $d=5$ to $d=4$ [@Ceresole].
In this framework, the quartic invariant $I_4$ is negative definite $$I_4 = - (p^0 q_0)^2.$$ In order to perform the analysis, it is once again convenient to introduce rescaled moduli $y^a$ and the quantum parameter $\alpha$ $$s = \sqrt[3]{\frac{q_0}{p^0}}\,y^1, \qquad t = \sqrt[3]{\frac{q_0}{p^0}}\,y^2,\qquad \lambda = \frac{\alpha\, q_0}{2p^0}.$$ The black hole potential has a flat direction $$\label{Sat-eve3} {{\tt Re\,}}y^1 = {{\tt Re\,}}y^2 = 0,\qquad {{\tt Im\,}}y^1 = \pm \frac1{({{\tt Im\,}}y^2)^2},$$ corresponding to non-BPS states. The plus sign is to be taken for $p^0 q_0<0$, and the minus sign otherwise. This flat direction is characterized by a minimum of the black hole potential which turns out to be equal to $$V_{BH} = \sqrt{-I_4} = \left| p^0\, q_0 \right|.$$ Notice that, consistently with the analysis of [@Ceresole], the $D0-D6$ configuration admits axion-free solutions at the classical level, and they are actually general solutions. A thorough numerical analysis reveals an unexpected result: the classical non-BPS $Z\neq 0$ flat direction of the $st^2$ model in the $D0-D6$ brane configuration seemingly survives the considered quantum correction. This fact deeply distinguishes the $D0-D6$ configuration from the others treated above, in which the quantum correction always lifts the flat direction. Another feature of this quantum flat direction is the presence of non-vanishing axion fields. Thus, one can conclude that the axion-free classical non-BPS $Z\neq 0$ flat direction survives the quantum correction, but gets distorted and acquires non-zero values of the axion fields. Naturally, the black hole potential takes its minimal value along the flat direction, and the dependence of this value on the quantum parameter is presented in Fig. \[VeffNeutral\].
![Plot of the minimum of $V_{BH}$ in the $D0-D6$ brane configuration[]{data-label="VeffNeutral"}](VeffNeutral.eps){width="45.00000%"} Conclusion and Outlook ====================== We addressed the issue of the fate of the unique non-BPS flat direction of the $st^2$ model in the presence of the most general class of quantum perturbative corrections consistent with the continuous axion-shift symmetry [@Peccei-Quinn]. We performed our analysis in the $D0-D4$, $D2-D6$ and $D0-D6$ brane configurations. For the first two cases we showed that the classical flat direction gets lifted at the quantum level. The same behavior may be expected for the unique non-BPS $Z=0$ flat direction of the third element of the cubic reducible sequence $\frac{SU(1,1)}{U(1)}\times \frac{SO(2,n)}{SO(2)\times SO(n)}$ [@ferrara4]. On the other hand, the analysis performed in the $D0-D6$ brane configuration yielded a somewhat surprising result: the classical flat direction gets modified at the quantum level, acquiring non-zero values of the axion fields. The origin of such a deep difference among the brane configurations, which is expected to hold in other models as well, is yet to be understood, and we leave the study of this issue for future work. Clearly, in the considered two-moduli quantum corrected special Kähler $d$-geometry based on a holomorphic prepotential, the phenomena of “separation” and “transmutation” of attractors, first observed in [@BFMS1], also occur, with a richer case study, due to the presence of non-BPS $Z=0$ attractors. By generalizing the results obtained in the present paper to the presence of more than one flat direction, one would thus be led to state that only a few classical attractors remain attractors in the strict sense at the quantum level. Consequently, at the quantum level the set of actual extremal black hole attractors should be strongly constrained and reduced.
As a final remark, it is worth pointing out that in $N=8$ ($d=4$) supergravity “large” $\frac{1}{8}$-BPS and non-BPS BHs exhibit $40$ and $42$ flat directions, respectively [@ADF-Duality-d=4; @Ferrara-Marrani-1]. If $N=8$ supergravity is a finite theory of quantum gravity (see *e.g.* [@Bern] and Refs. therein), it would be interesting to understand whether these flat directions may be removed at all by perturbative and/or non-perturbative quantum effects. **Acknowledgments** {#acknowledgments .unnumbered} =================== A. M. would like to thank the Department of Physics, Theory Unit Group at CERN, and the Berkeley Center for Theoretical Physics (CTP) of the University of California, Berkeley, USA, where part of this work was done, for kind hospitality and stimulating environment. The work of S.F. has also been supported in part by D.O.E. grant DE-FG03-91ER40662, Task C, and by the Miller Institute for Basic Research in Science, University of California, Berkeley, CA, USA. The work of A. M. has been supported by an INFN visiting Theoretical Fellowship at SITP, Stanford University, Stanford, CA, USA. The work of A.S. has been supported by a Junior Grant of the *“Enrico Fermi”* Center, Rome, in association with INFN Frascati National Laboratories. [99]{} S. Ferrara, R. Kallosh and A. Strominger, $N=2$ *Extremal Black Holes*, Phys. Rev. **D52**, 5412 (1995), `hep-th/9508072`. A. Strominger, *Macroscopic Entropy of* $N=2$ *Extremal Black Holes*, Phys. Lett. **B383**, 39 (1996), `hep-th/9602111`. S. Ferrara and R. Kallosh, *Supersymmetry and Attractors*, Phys. Rev. **D54**, 1514 (1996), `hep-th/9602136`. S. Ferrara and R. Kallosh, *Universality of Supersymmetric Attractors*, Phys. Rev. **D54**, 1525 (1996), `hep-th/9603090`. S. Ferrara, G. W. Gibbons and R. Kallosh, *Black Holes and Critical Points in Moduli Space*, Nucl. Phys. **B500**, 75 (1997), `hep-th/9702103`. S. Bellucci, S. Ferrara, R. Kallosh and A.
Marrani, *Extremal Black Hole and Flux Vacua Attractors*, contribution to the Proceedings of the Winter School on Attractor Mechanism 2006 (SAM2006), 20-24 March 2006, INFN-LNF, Frascati, Italy, `arXiv:0711.4547`. G. W. Gibbons and C. M. Hull, *A Bogomol’ny Bound for General Relativity and Solitons in* $N=2$ *Supergravity*, Phys. Lett. **B109**, 190 (1982). S. Ferrara and A. Marrani, $N=8$ *non-BPS Attractors, Fixed Scalars and Magic Supergravities*, Nucl. Phys. **B788**, 63 (2008), `arXiv:0705.3866`. S. Ferrara and A. Marrani, *On the Moduli Space of non-BPS Attractors for* $N=2$ *Symmetric Manifolds*, Phys. Lett. **B652**, 111 (2007), `arXiv:0706.1667`. S. Bellucci, S. Ferrara, M. Günaydin and A. Marrani, *Charge Orbits of Symmetric Special Geometries and Attractors*, Int. J. Mod. Phys. **A21**, 5043 (2006), `hep-th/0606209`. S. Bellucci, S. Ferrara, A. Marrani and A. Shcherbakov, *Splitting of Attractors in* $\mathit{1}$*-modulus Quantum Corrected Special Geometry*, JHEP **0802**, 088 (2008), `arXiv:0710.3559`. M. Shmakova, *Calabi-Yau black holes*, Phys. Rev. **D56**, 540 (1997), `hep-th/9612076`. K. Saraikin and C. Vafa, *Non-supersymmetric Black Holes and Topological Strings*, Class. Quant. Grav. **25**, 095007 (2008), `hep-th/0703214`. S. Bellucci, S. Ferrara, A. Marrani and A. Yeranyan, *stu Black Holes Unveiled*, Entropy **10(4)**, 507-555 (2008), `arXiv:0807.3503`. S. Bellucci, A. Marrani, E. Orazi and A. Shcherbakov, *Attractors with Vanishing Central Charge*, Phys. Lett. **B655**, 185 (2007), `arXiv:0707.2730`. R. D. Peccei and H. R. Quinn, *Constraints imposed by CP conservation in the presence of instantons*, Phys. Rev. **D16**, 1791 (1977); R. D. Peccei and H. R. Quinn, *CP conservation in the presence of instantons*, Phys. Rev. Lett. **38**, 1440 (1977); R. D. Peccei and H. R. Quinn, *Some aspects of instantons*, Nuovo Cim. **A41**, 309 (1977). S. Cecotti, S. Ferrara and L.
Girardello, *Geometry of Type* $\mathit{II}$ *Superstrings and the Moduli of Superconformal Field Theories*, Int. J. Mod. Phys. **A4**, 2475 (1989). A. Giryavets, *New Attractors and Area Codes*, JHEP **0603**, 020 (2006), `hep-th/0511215`. P. Candelas, X. C. De La Ossa, P. S. Green and L. Parkes, *A Pair of Calabi-Yau Manifolds as an Exactly Soluble Superconformal Theory*, Nucl. Phys. **B359**, 21 (1991); P. Candelas, X. C. De La Ossa, P. S. Green and L. Parkes, *An Exactly Soluble Superconformal Theory from a Mirror Pair of Calabi-Yau Manifolds*, Phys. Lett. **B258**, 118 (1991). S. Hosono, A. Klemm, S. Theisen and Shing-Tung Yau, *Mirror symmetry, mirror map and applications to Calabi-Yau hypersurfaces*, Commun. Math. Phys. **167**, 301 (1995), `hep-th/9308122`. K. Behrndt, G. Lopes Cardoso, B. de Wit, R. Kallosh, D. Lüst and T. Mohaupt, *Classical and quantum* $N\mathit{=2}$ *supersymmetric black holes*, Nucl. Phys. **B488**, 236 (1997), `hep-th/9610105`. L. Alvarez-Gaume, D. Z. Freedman, *Geometrical Structure and Ultraviolet Finiteness in the Supersymmetric Sigma Model*, Commun. Math. Phys. **80**, 443 (1981). M. T. Grisaru, A. van de Ven and D. Zanon, *Four Loop Beta Function for the* $N\mathit{=1}$ *and* $N\mathit{=2}$ *Supersymmetric Nonlinear Sigma Model in Two Dimensions*, Phys. Lett. **B173**, 423 (1986); M. T. Grisaru, A. van de Ven and D. Zanon, *Two Dimensional Supersymmetric Sigma Models on Ricci Flat Kähler Manifolds are not Finite*, Nucl. Phys. **B277**, 388 (1986); M. T. Grisaru, A. van de Ven and D. Zanon, *Four Loop Divergences for the* $N\mathit{=1}$ *Supersymmetric Nonlinear Sigma Model in Two Dimensions*, Nucl. Phys. **B277**, 409 (1986). A. Ceresole, S. Ferrara and A. Marrani, $\mathit{4d}$*/*$\mathit{5d}$ *Correspondence for the Black Hole Potential and its Critical Points*, Class. Quant. Grav. **24**, 5651 (2007), `arXiv:0707.0964`. L. Andrianopoli, R. D’Auria and S.
Ferrara, $\mathit{U}$ *invariants, black hole entropy and fixed scalars*, Phys. Lett. **B403**, 12 (1997), `hep-th/9703156`. Z. Bern, J. J. Carrasco, L. J. Dixon, H. Johansson, D. A. Kosower and R. Roiban, *Three-Loop Finiteness of* $N=8$ *Supergravity*, Phys. Rev. Lett. **98**, 161303 (2007), `hep-th/0702112`. [^1]: In general, $\lambda $ is related to perturbative quantum corrections at the level of the non-linear sigma model, computed by $2$-dimensional CFT techniques on the world-sheet. For instance, in Type $IIA$ $CY_{3}$-compactifications [@CDLOGP1; @HKTY; @Quantum-N=2] $$\lambda =-\frac{\chi \zeta \left( 3\right) }{16\pi ^{3}},$$ where $\chi $ is the Euler character of $CY_{3}$, and $\zeta $ is the Riemann zeta-function. Within such a framework, it has been shown that $\lambda $ has a $4$-loop origin in the non-linear sigma-model [@Alvarez-Gaume; @Grisaru; @CDLOGP1]. [^2]: Generally, the indices $a,b,c,\ldots $ run from 1 to $n$, while $\Lambda ,\Sigma ,\ldots $ run from 0 to $n$, with $n=2$ for the $st^{2}$ model
--- abstract: 'The Peres-Horodecki criterion of positivity under partial transpose is studied in the context of separability of bipartite continuous variable states. The partial transpose operation admits, in the continuous case, a geometric interpretation as mirror reflection in phase space. This recognition leads to uncertainty principles, stronger than the traditional ones, to be obeyed by all separable states. For all bipartite Gaussian states, the Peres-Horodecki criterion turns out to be a necessary and sufficient condition for separability.' address: 'The Institute of Mathematical Sciences, Tharamani, Chennai 600 113, India' author: - 'R. Simon' title: 'Peres-Horodecki separability criterion for continuous variable systems[^1]' --- Entanglement or inseparability is central to all branches of the emerging field of quantum information and quantum computation[@bennett]. A particularly elegant criterion for checking if a given state is separable or not was proposed by Peres[@peres]. This condition is necessary and sufficient for separability in the $2 \times 2$ and $2 \times 3$ dimensional cases, but ceases to be so in higher dimensions as shown by Horodecki[@horodecki]. While a major part of the effort in quantum information theory has been in the context of systems with a finite number of Hilbert space dimensions, more specifically the qubits, recently there has been much interest in the canonical continuous case[@c1; @c2; @c3; @c4; @c5; @c6]. We may mention in particular the experimental realization of quantum teleportation of coherent states[@uncond]. It is therefore important to be able to determine whether a given state of a bipartite canonical continuous system is entangled or separable. With increasing Hilbert space dimension, any test for separability may be expected to become more and more difficult to implement in practice.
In this paper we show that in the limit of infinite dimension, corresponding to continuous variable bipartite states, the Peres-Horodecki criterion leads to a test that is extremely easy to implement. Central to our work is the recognition that the partial transpose operation acquires, in the continuous case, a beautiful geometric interpretation as [*mirror reflection in the Wigner phase space*]{}. Separability forces on the second moments (uncertainties) a restriction that is stronger than the traditional uncertainty principle; even commuting variables need to obey an uncertainty relation. This restriction is used to prove that the Peres-Horodecki criterion is a necessary and sufficient separability condition for all bipartite Gaussian states. Consider a single mode described by the annihilation operator $\hat{a} = (\hat{q} + i\, \hat{p})/\sqrt{2}$, obeying the standard commutation relation $[\hat{q}, \hat{p}] = i$, which is equivalent to $[\hat{a}, \hat{a}^{\dag}]=1$. There is a one-to-one correspondence between density operators and c-number Wigner distribution functions $W(q, p)$[@wigner]. The latter are real functions over the phase space and satisfy an additional property coding the nonnegativity of the density operator. It follows from the definition of the Wigner distribution that the transpose operation $T$, which takes every $\hat{\rho}$ to its transpose $\hat{\rho}^{T}$, is [*equivalent to*]{} a mirror reflection in phase space: $$\hat{\rho}\, \longrightarrow \,\hat{\rho}^{T} \;\Longleftrightarrow\; W(q, p)\, \longrightarrow \,W(q, -p). \label{eq:eq1}$$ Mirror reflection is not a canonical transformation in phase space, and cannot be implemented unitarily in the Hilbert space. This is consistent with the fact that while $T$ is linear at the density operator level, it is antilinear at the state vector or wave function level.
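Equation (\[eq:eq1\]) can be checked numerically for a simple pure state: the transpose of $|\psi\rangle\langle\psi|$ is $|\psi^*\rangle\langle\psi^*|$, and its Wigner function is the $p$-reflected one. A minimal sketch using a coherent state, whose Wigner function is a known Gaussian (grid parameters are arbitrary choices, not from the text):

```python
import numpy as np

def wigner(psi, qp, q, p):
    """W(q, p) = (1/pi) * Int dq' psi(q - q') psi*(q + q') exp(2 i q' p),
    evaluated by a simple Riemann sum over the q' grid qp."""
    dq = qp[1] - qp[0]
    f = psi(q - qp) * np.conj(psi(q + qp)) * np.exp(2j * qp * p)
    return float(np.real(f.sum() * dq)) / np.pi

q0, p0 = 1.0, 0.5                      # displacement of the coherent state
psi = lambda q: np.pi**-0.25 * np.exp(-(q - q0)**2 / 2 + 1j * p0 * q)
psiT = lambda q: np.conj(psi(q))       # transpose of a pure state: psi -> psi*

qp = np.linspace(-10.0, 10.0, 4001)    # q' integration grid
for q, p in [(0.3, -0.7), (1.5, 1.0)]:
    exact = np.exp(-(q - q0)**2 - (p - p0)**2) / np.pi   # Gaussian Wigner function
    assert abs(wigner(psi, qp, q, p) - exact) < 1e-6
    # mirror reflection in phase space: W_{rho^T}(q, p) = W_rho(q, -p)
    assert abs(wigner(psiT, qp, q, p) - wigner(psi, qp, q, -p)) < 1e-9
```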
Now consider a bipartite system of two modes described by annihilation operators $\hat{a}_{1} = (\hat{q}_{1} + i\, \hat{p}_{1})/\sqrt{2}$ and $\hat{a}_{2} = (\hat{q}_{2} + i\, \hat{p}_{2})/\sqrt{2}$. Let Alice be in possession of mode 1 and let mode 2 be in the possession of Bob. By definition, a quantum state $\hat{\rho}$ of the bipartite system is separable if and only if $\hat{\rho}$ can be expressed in the form $$\hat{\rho} = \sum_{j} p_{j} \,\hat{\rho}_{j1} \otimes \hat{\rho}_{j2}, \label{eq:eq2}$$ with [*nonnegative*]{} $p_{j}$’s, where $\hat{\rho}_{j1}$’s and $\hat{\rho}_{j2}$’s are density operators of the modes of Alice and Bob respectively. It is evident from (2) that partial transpose operation (i.e., transpose of the density matrix with respect to only the second Hilbert space under Bob’s possession), denoted $PT$, takes a separable density operator [*necessarily*]{} into a nonnegative operator, i.e., into a bona fide density matrix. This is the Peres-Horodecki separability criterion. In order to study the partial transpose operation in the Wigner picture, it is convenient to arrange the phase space variables and the hermitian canonical operators into four-dimensional column vectors $$\begin{aligned} \xi = \left( \begin{array}{cccc} q_{1}& p_{1} & q_{2}& p_{2} \end{array} \right)^{T},\;\;\; \hat{\xi} = \left( \begin{array}{cccc} \hat{q}_{1}& \hat{p}_{1}& \hat{q}_{2}& \hat{p}_{2} \end{array} \right)^{T}. \label{eq:eq3}\end{aligned}$$ The commutation relations take the compact form[@gaussian] $$\begin{aligned} [\hat{\xi}_{\alpha}, \hat{\xi}_{\beta}] & = & i\,\Omega_{\alpha \beta}, ~~~ \alpha, \beta = 1, 2, 3, 4; \nonumber \\ \Omega & = & \left( \begin{array}{cc} J & 0\\ 0 & J \end{array} \right),\;\;\; J = \left( \begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right). 
\label{eq:eq4}\end{aligned}$$ Wigner distribution and the density operator are related through the definition[@wigner; @gaussian] $$\begin{aligned} W(q,p) = \pi^{-2}\!\!\int\!\!d^{\,2}q' \langle q - q^{\prime}|\,\hat\rho\, |q + q^{\prime}\rangle \exp(2 i\, q^{\prime}\cdot p),\end{aligned}$$ where $q=(q_1,q_2),\; p=(p_1,p_2)$. It follows from this definition that the partial transpose operation on the bipartite density operator transcribes faithfully into the following transformation on the Wigner distribution: $$PT:\;\;\; W(q_{1}, p_{1}, q_{2}, p_{2})\; \longrightarrow \; W(q_{1}, p_{1}, q_{2}, -p_{2}). \label{eq:eq5}$$ This corresponds to a mirror reflection which inverts the $p_{2}$ coordinate, leaving $q_{1}$, $p_{1}$, and $q_{2}$ unchanged: $$\begin{aligned} PT:\;\;\; \xi \longrightarrow \Lambda \xi,\;\;\; \Lambda =\mbox{diag}(1, 1, 1, -1). \label{eq:eq6}\end{aligned}$$ And the Peres-Horodecki separability criterion reads: [*if $\hat{\rho}$ is separable, then its Wigner distribution necessarily goes over into a Wigner distribution under the phase space mirror reflection $\Lambda$*]{}. $W(\Lambda\xi)$, like $W(\xi)$, should possess the “Wigner quality”, for any separable bipartite state. The Peres-Horodecki criterion has important implications for the uncertainties or second moments. Given a bipartite density operator $\hat{\rho}$, let us define $\Delta \hat{\xi} = \hat{\xi} - \langle \hat{\xi} \rangle$, where $\langle \hat{\xi}_\alpha \rangle = \mbox{tr}\hat{\xi}_\alpha \hat{\rho}$. The four components of $\Delta \hat{\xi}$ obey the same commutation relations as $\hat{\xi}$. Similarly, we define $\Delta \xi_{\alpha} = \xi_{\alpha} - \langle \xi_{\alpha} \rangle$ where $\langle \xi_{\alpha} \rangle$ is the average with respect to the Wigner distribution $W(\xi)$, and it equals $\langle \hat \xi _\alpha\rangle$. 
The uncertainties are defined as the expectations of the hermitian operators $\{ \Delta\hat{\xi}_{\alpha}, \Delta\hat{\xi}_{\beta} \} = (\Delta\hat{\xi}_{\alpha}\Delta\hat{\xi}_{\beta}+ \Delta\hat{\xi}_{\beta}\Delta\hat{\xi}_{\alpha})/2$: $$\begin{aligned} \langle \{ \Delta\hat{\xi}_{\alpha}, \Delta\hat{\xi}_{\beta} \} \rangle &=& \mbox{tr}\left(\{ \Delta\hat{\xi}_{\alpha}, \Delta\hat{\xi}_{\beta} \} \hat{\rho}\right)\nonumber\\ & =& \int\!\! d^{\,4}\xi\, \Delta \xi_{\alpha}\, \Delta \xi_{\beta}\, W(\xi). \label{eq:eq7}\end{aligned}$$ Let us now arrange the uncertainties or variances into a $4 \times 4$ real variance matrix $V$, defined through $V_{\alpha \beta} = \langle \{ \Delta\hat{\xi}_{\alpha}, \Delta\hat{\xi}_{\beta} \} \rangle$. Then we have the following compact statement of the [*uncertainty principle*]{}[@gaussian]: $$V + \frac{i}{2}\,\Omega \geq 0. \label{eq:eq8}$$ Note that (7) implies, in particular, that $V > 0$. The uncertainty principle (7) is a direct consequence of the commutation relation (3) and the nonnegativity of $\hat{\rho}$. It is equivalent to the statement that $\hat{Q}=\hat\eta \, \hat{\eta}^{\dagger}$, with $\hat\eta=c_1\Delta\hat{\xi}_1+c_2\Delta\hat{\xi}_2+c_3\Delta\hat{\xi}_3+c_4\Delta\hat{\xi}_4$, is nonnegative for every set of (complex valued) $c$-number coefficients $c_\alpha$, and hence $\langle \hat{Q} \rangle = \mbox{tr} (\hat{Q}\hat{\rho}) \geq 0$. 
Viewed somewhat differently, it is [ *equivalent*]{} to the statement that for every pair of real four-vectors $d,d^{\,\prime}$ the hermitian operators $\hat{X}(d)=d^{\,T}\hat{\xi}=d_1\hat{q_1}+ d_2\hat{p_1}+ d_3\hat{q_2}+ d_4\hat{p_2}$ and $\hat{X}(d^{\,\prime}) = d ^{\,\prime \,T}\hat{\xi}= d^{\,\prime}_1\hat{q_1}+ d^{\,\prime}_2\hat{p_1}+ d^{\,\prime}_3\hat{q_2}+ d^{\,\prime}_4\hat{p_2}$ obey $$\begin{aligned} \langle ( \Delta\hat{X}(d) )^2\rangle &+& \langle ( \Delta\hat{X}(d^{\,\prime}) )^2\rangle \geq \left| d^{\,\prime \,T}\Omega \,d \right| \nonumber\\ &&\;\;=|d_1d^{\,\prime}_2-d_2d^{\,\prime}_1+d_3d^{\,\prime}_4 -d_4d^{\,\prime}_3|.\end{aligned}$$ The right hand side equals $|\,[\hat X(d),\hat X(d^{\,\prime})]\,|$. Under the Peres-Horodecki partial transpose the Wigner distribution undergoes mirror reflection, and it follows from (8) that the variances are changed to $V \to \tilde{V}=\Lambda V \Lambda$. Since $W(\Lambda\xi)$ has to be a Wigner distribution if the state under consideration is separable, we have $$\tilde{V}+\frac{i}{2}\,\Omega \geq 0, \;\;\; \tilde{V} = \Lambda V \Lambda,$$ as a [*necessary*]{} condition for separability. We may write it also in the equivalent form $$\begin{aligned} V+\frac{i}{2}\,\tilde{\Omega} \geq 0, \;\;\; \tilde{\Omega} = \Lambda \Omega \Lambda=\left( \begin{array}{cc} J & 0 \\ 0 & -J \end{array} \right),\end{aligned}$$ so that separability of $\hat{\rho}$ implies an additional restriction that has the same form as (8), with $| d^{\,\prime \,T}\Omega\, d |$ on the right hand side replaced by $\left| d^{\,\prime \,T}\tilde{\Omega}\, d \right|$. 
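These covariance-level statements are easy to check numerically. The following NumPy sketch is an illustration added here, not part of the original argument; the two-mode squeezed vacuum variance matrix and the conventions of the text ($\hbar=1$, vacuum variances equal to $1/2$) are the only inputs assumed. It verifies that a two-mode squeezed vacuum satisfies $V+\frac{i}{2}\Omega\ge 0$, while its mirror reflection $\tilde V=\Lambda V\Lambda$ does not:

```python
import numpy as np

# Conventions of the text: hbar = 1, [q, p] = i, vacuum variances equal 1/2.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])
Lambda = np.diag([1.0, 1.0, 1.0, -1.0])   # mirror reflection p2 -> -p2

def min_eig(V):
    """Smallest eigenvalue of the Hermitian matrix V + (i/2) Omega."""
    return np.linalg.eigvalsh(V + 0.5j * Omega).min()

def tmsv(r):
    """Two-mode squeezed vacuum variance matrix with squeezing parameter r."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    return 0.5 * np.array([[c, 0, s, 0],
                           [0, c, 0, -s],
                           [s, 0, c, 0],
                           [0, -s, 0, c]])

V = tmsv(0.5)
assert min_eig(V) > -1e-9                    # physical state: V + (i/2) Omega >= 0
assert min_eig(Lambda @ V @ Lambda) < 0      # reflected state violates the bound
```

For any $r>0$ the smallest eigenvalue after reflection equals $e^{-2r}/2-1/2<0$, so the squeezed vacuum is detected as entangled.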
Combined with (8), this restriction reads $$\begin{aligned} \langle ( \Delta\hat{X}(d) )^2\rangle &+& \langle ( \Delta\hat{X}(d^{\,\prime}) )^2\rangle\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\geq |d_1d^{\,\prime}_2-d_2d^{\,\prime}_1| + |d_3d^{\,\prime}_4-d_4d^{\,\prime}_3|,\;\;\forall\,d,d^{\,\prime}.\end{aligned}$$ This restriction, to be obeyed by all separable states, is generically stronger than the usual uncertainty principle (8). For instance, let $\hat{X}(d)$ commute with $\hat{X}(d^{\,\prime})$, i.e., let $d^{\,\prime\,T}\Omega \,d =0$. If the state is separable, then $\hat{X}(d)$ and $\hat{X}(d^{\,\prime})$ cannot both have arbitrarily small uncertainties unless $d^{\,\prime \,T}\tilde{\Omega}\,d=0$ as well, i.e., unless $d_1d^{\,\prime}_2-d_2d^{\,\prime}_1= 0 =d_3d^{\,\prime}_4-d_4d^{\,\prime}_3$. As an example, $\hat{X} = \hat{q}_1+\hat{p}_1+\hat{q}_2+\hat{p}_2$ and $\hat{Y} = \hat{q}_1-\hat{p}_1- \hat{q}_2+\hat{p}_2$ commute, but the sum of their uncertainties in any separable state is $\geq 4$. The Peres-Horodecki condition (11) can be simplified. Real linear canonical transformations of a two-mode system constitute the ten-parameter real symplectic group $Sp(4,R)$. For every real $4 \times 4$ matrix $S \in Sp(4,R)$, the irreducible canonical hermitian operators $\hat\xi $ transform among themselves, leaving the fundamental commutation relation (3) invariant: $$\begin{aligned} S \in Sp(4,R): \;\;\; S\Omega S^T &=& \Omega, \nonumber \\ \hat{\xi} \rightarrow \hat{\xi^{\,\prime}} &=& S\hat{\xi}, \;\; \left[ \hat{\xi^{\,\prime}_\alpha}, \hat{\xi^{\,\prime}_\beta} \right] = i\,\Omega_{\alpha\beta}.\end{aligned}$$ The symplectic group acts unitarily and irreducibly on the two-mode Hilbert space[@gausson]. Let $U(S)$ represent the (infinite dimensional) unitary operator corresponding to $S \in Sp(4,R)$. 
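The strengthened bound can also be probed numerically on the EPR-type pair $\hat q_1-\hat q_2$ and $\hat p_1+\hat p_2$, for which the separability bound is $2$. The sketch below is an illustration added here; the two-mode squeezed vacuum variance matrix is our assumed test state, not taken from the text:

```python
import numpy as np

def tmsv(r):
    """Two-mode squeezed vacuum variance matrix (hbar = 1, vacuum = 1/2)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    return 0.5 * np.array([[c, 0, s, 0],
                           [0, c, 0, -s],
                           [s, 0, c, 0],
                           [0, -s, 0, c]])

d  = np.array([1.0, 0.0, -1.0, 0.0])   # X = q1 - q2
dp = np.array([0.0, 1.0, 0.0, 1.0])    # Y = p1 + p2 (commutes with X)

def var_sum(V):
    """<(Delta X)^2> + <(Delta Y)^2> = d^T V d + d'^T V d'."""
    return d @ V @ d + dp @ V @ dp

# Right-hand side of the separability restriction:
bound = abs(d[0] * dp[1] - d[1] * dp[0]) + abs(d[2] * dp[3] - d[3] * dp[2])

assert abs(var_sum(tmsv(0.0)) - bound) < 1e-12   # vacuum saturates the bound
assert var_sum(tmsv(0.5)) < bound                # squeezed vacuum violates it
```

For the squeezed vacuum the variance sum is $2e^{-2r}$, strictly below the separable bound $2$ for every $r>0$, even though the ordinary uncertainty principle imposes no lower bound on this commuting pair.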
It transforms the bipartite state vector $|\psi\rangle$ to $|\psi^{\,\prime} \rangle=U(S)|\psi\rangle$, and hence the density operator $\hat{\rho}$ to $\hat{\rho^{\,\prime}}= U(S) \,\hat{\rho} \,U(S)^\dagger$. This transformation takes a strikingly simple form in the Wigner description, and this is one reason for the effectiveness of the Wigner picture in handling canonical transformations: $$S\!:\;\;\hat{\rho} \longrightarrow U(S) \,\hat{\rho}\,U(S)^\dagger \Longleftrightarrow W(\xi) \longrightarrow W(S^{-1}\xi).$$ The bipartite Wigner distribution simply transforms as a scalar field under $Sp(4,R)$. It follows from (6) that the variance matrix transforms in the following manner: $$S \in Sp(4,R): \;\;\; V \rightarrow V^{\,\prime} = SVS^T.$$ The uncertainty relation (7) has an $Sp(4,R)$ invariant form (recall $S \Omega S^T = \Omega$). But separable states have to respect not just (7), but also the restriction (9), and this requirement is preserved only under the six-parameter $Sp(2,R) \otimes Sp(2,R)$ subgroup of $Sp(4,R)$ corresponding to independent [ *local linear canonical transformations*]{} on the subsystems of Alice and Bob: $$\begin{aligned} S_{\mbox{\scriptsize{local}}} &\in& Sp(2,R) \otimes Sp(2,R) \subset Sp(4,R): \nonumber \\ S_{\mbox{\scriptsize{local}}} &=& \left( \begin{array}{cc} S_1 & 0 \\ 0 & S_2 \end{array} \right), \;\;\;\; S_1JS_1^T = J = S_2JS_2^T.\end{aligned}$$ It is desirable to cast the Peres-Horodecki condition (11) in an $Sp(2,R) \otimes Sp(2,R)$ invariant form. To this end, let us write the variance matrix $V$ in the block form $$\begin{aligned} V=\left( \begin{array}{cc} A & C \\ C^T & B \end{array} \right).\end{aligned}$$ The physical condition (7) implies $\det A\ge 1/4, \; \det B\ge 1/4$. 
As can be seen from (14), the local group changes the blocks of $V$ in the following manner: $$\begin{aligned} A \rightarrow S_1AS^T_1, \;\;\;B \rightarrow S_2BS^T_2, \;\;\;C \rightarrow S_1CS^T_2.\end{aligned}$$ Thus, the $Sp(2,R) \otimes Sp(2,R)$ invariants associated with $V$ are $I_1 =\mbox{det}\, A, \; I_2 = \mbox{det}\, B, \; I_3 = \mbox{det}\, C$ and $I_4 = \mbox{tr}\, A J C J B J C^{T}J\;$ (det$\,V$ is an obvious invariant, but it is a function of the $I_k$’s, namely $\mbox{det}\, V \,=\, I_1 I_2 + I_3\,^2 - I_4$). We claim that the uncertainty principle (7) is equivalent to the $Sp(2,R) \otimes Sp(2,R)$ invariant statement $$\begin{aligned} \mbox{det}\, A \; \mbox{det}\, B + \left (\frac{1}{4} - \mbox{det}\, C\right )^2 &-&\,\, \mbox{tr} (A J C J B J C^{T} J) \nonumber \\ \ge &\frac{1}{4}& (\mbox{det}\, A \,+\, \mbox{det}\, B)\,.\end{aligned}$$ To prove this result, first note that (7) and (17) are equivalent for variance matrices of the special form $$\begin{aligned} V_0=\left( \begin{array}{cccc} a & 0 & c_1 & 0 \\ 0 & a & 0 &c_2 \\ c_1 & 0 & b & 0 \\ 0 & c_2 & 0 & b \end{array} \right).\end{aligned}$$ But any variance matrix can be brought to this special form by effecting a suitable local canonical transformation corresponding to some element of $Sp(2,R) \otimes Sp(2,R)$. In view of the manifest $Sp(2,R) \otimes Sp(2,R)$ invariant structure of (17), it follows that (7) and (17) are indeed equivalent for all variance matrices. Under the Peres-Horodecki partial transpose or mirror reflection, we have $V \rightarrow \tilde{V} = \Lambda V \Lambda$. That is, $C \rightarrow C \sigma_3$ and $B \rightarrow \sigma_3 B \sigma_3$, while $A$ remains unchanged \[$\sigma _3$ is the diagonal Pauli matrix: $\sigma_3 = \mbox{diag}(1,-1)$\]. As a consequence, $I_3= \mbox{det}\, C$ flips sign while $I_1,I_2$ and $I_4$ remain unchanged. 
Thus, condition (9) for $\tilde{V}$ takes a form identical to (17) with only the sign in front of det$\,C$ in the second term on the left hand side reversed. Hence the requirement that the variance matrix of a separable state has to obey (9), in addition to the fundamental uncertainty principle (7), takes the form $$\begin{aligned} \mbox{det}\, A \; \mbox{det}\, B + \left({\frac{1}{4}} - |\mbox{det}\, C| \right)^2 &-&\,\, \mbox{tr} (A J C J B J C^{T} J) \nonumber \\ \ge &\frac{1}{4}& (\mbox{det}\, A \,+\, \mbox{det}\, B).\end{aligned}$$ [*This is the final form of our necessary condition on the variance matrix of a separable bipartite state. This condition is invariant not only under $Sp(2,R) \otimes Sp(2,R)$, but also under mirror reflection, as it should be! It constitutes a complete description of the implication the Peres-Horodecki criterion has for the second moments.*]{} To summarise, conditions (7), (8), and (17) are equivalent statements of the fundamental uncertainty principle, and hence will be satisfied by every physical state. The mutually equivalent statements (9), (11), and (19) constitute the Peres-Horodecki criterion at the level of the second moments, and should necessarily be satisfied by every separable state. Interestingly, states with $\det C \ge 0$ definitely satisfy (19), which in this case is subsumed by the physical condition (17). For the standard form $V_0$, our condition (19) reads $$\begin{aligned} 4(ab - c_1^{\,2})(ab - c_2^{\,2}) \ge (a^2 + b^2) + 2|c_1 c_2| - 1/4.\end{aligned}$$ But the point is that the separability check (19) can be applied directly on $V$, with no need to go to the form $V_0$. We will now apply these results to Gaussian states. The mean values $\langle \hat{\xi}_\alpha\rangle$ can be changed at will using local unitary displacement operators, and so assume without loss of generality $\langle\hat{\xi}_\alpha \rangle = 0$. 
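Condition (19) lends itself to a direct numerical implementation from the blocks $A$, $B$, $C$ of $V$. The sketch below is an addition for illustration; the thermal and squeezed-vacuum variance matrices used as inputs are our own test cases:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def simon_gap(V):
    """LHS minus RHS of condition (19); a negative value certifies
    entanglement (and for Gaussian states the converse holds as well)."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    lhs = (np.linalg.det(A) * np.linalg.det(B)
           + (0.25 - abs(np.linalg.det(C))) ** 2
           - np.trace(A @ J @ C @ J @ B @ J @ C.T @ J))
    return lhs - 0.25 * (np.linalg.det(A) + np.linalg.det(B))

def tmsv(r):
    """Two-mode squeezed vacuum variance matrix (hbar = 1 conventions)."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    return 0.5 * np.array([[c, 0, s, 0],
                           [0, c, 0, -s],
                           [s, 0, c, 0],
                           [0, -s, 0, c]])

# Product of two thermal states (separable, det C = 0): gap >= 0.
assert simon_gap(np.diag([1.0, 1.0, 0.7, 0.7])) > 0
# Two-mode squeezed vacuum (det C < 0): gap = -sinh(2r)^2 / 4 < 0 for r > 0.
assert simon_gap(tmsv(0.5)) < 0
```

As stressed in the text, the check is applied directly on $V$, with no need to first reduce to the standard form $V_0$.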
A (zero-mean) Gaussian state is fully characterized by its second moments, as seen from the nature of the Wigner distribution $$\begin{aligned} W(\xi) = \left( 4 \pi ^2 \,\sqrt{\det V}\right)^{-1} \exp\left(-\frac{1}{2}\xi^{\,T}V^{-1}\xi \right).\end{aligned}$$ [**Theorem:**]{} [*The Peres-Horodecki criterion (19) is a necessary and sufficient condition for separability, for all bipartite Gaussian states.*]{}\ We begin by noting, in view of the P-representation $$\begin{aligned} \hat\rho = \int\!\!d^{\,2}z_1 d^{\,2} z_2 P(z_1,z_2) |z_1\rangle \langle z_1| \otimes |z_2 \rangle \langle z_2|,\end{aligned}$$ that a state which is classical in the quantum optics sense (nonnegative $P(z_1,z_2)$) is separable. Since the local group $Sp(2,R) \otimes Sp(2,R)$ does not affect separability, any $Sp(2,R) \otimes Sp(2,R)$ transform of a classical state is separable too. Finally, a Gaussian state is classical if and only if $V - \frac{1}{2} \ge 0$. We will first prove a pretty little result.\ [**Lemma:**]{} [*Gaussian states with $\det C \ge 0$ are separable*]{}.\ First consider the case $\det C > 0$. We can arrange $a \ge b,\;\; c_1\ge c_2 > 0$ in the special form $V_0$ in (18). Let us do a local canonical transformation $S_{\mbox{\scriptsize{local}}} = \mbox{diag} \,(x,x^{-1},x^{-1},x)$, corresponding to reciprocal local scalings (squeezings) at the Alice and Bob ends, and follow it by $S_{\mbox{\scriptsize{local}}}^{\,\prime} = \mbox{diag}\, (y,y^{-1},y,y^{-1})$, corresponding to common local scalings at these ends. We have $$\begin{aligned} V_0 \to V_0^{\,\prime}=\left( \begin{array}{cccc} y^2x^2a & 0 & y^2c_1 & 0 \\ 0 & y^{-2}x^{-2}a & 0 &y^{-2}c_2 \\ y^2c_1 & 0 & y^2x^{-2}b & 0 \\ 0 & y^{-2}c_2 & 0 & y^{-2}x^{2}b \end{array} \right).\end{aligned}$$ Choose $x$ such that $c_1/(x^2a - x^{-2}b) = c_2/(x^{-2}a - x^2b)$. That is, $x=[(c_1a + c_2b)/(c_2a + c_1b)]^{1/4}$. 
With this choice, $V_0^{\,\prime}$ acquires such a structure that it can be diagonalized by rotation through [*equal*]{} amounts in the $q_1,q_2$ and $p_1,p_2$ planes: $$\begin{aligned} V_0^{\,\prime} &\to& V_0^{\,\prime\prime} = \mbox{diag}\, (\kappa _+,\,\kappa_+^{\,\prime},\,\kappa_-,\,\kappa_-^{\,\prime})\,;\\ \kappa_\pm &=& \frac{1}{2}y^2\left\{ x^{2}a+x^{-2}b \pm [(x^{2}a - x^{-2}b)^2 + 4 c_1^{\,2}]^{1/2}\right\}, \\ \kappa_\pm^{\,\prime} &=& \frac{1}{2}y^{-2}\left\{ x^{-2}a+x^{2}b \pm [(x^{-2}a - x^{2}b)^2 + 4 c_2^{\,2}]^{1/2}\right\}.\end{aligned}$$ Such an equal rotation is a canonical transformation; it preserves the uncertainty principle, since it is canonical, and the pointwise nonnegativity of the P-distribution, since it is a rotation. For our diagonal $V_0^{\,''}$, the uncertainty principle $V_0^{\,''} + \frac{i}{2} \Omega \ge 0$ simply reads that the product $\kappa_-\kappa_-^{\,'} \ge 1/4$. It follows that we can choose $y$ such that $\kappa_-,\,\kappa_-^{\,'} \ge 1/2$ (for instance, choose $y$ such that $\kappa_-\, =\, \kappa_-^{\,'}$), i.e., $V_0^{\,''} \ge 1/2$. Since $V_0^{\,'}$ and $V_0^{\,''}$ are rotationally related, this implies $V_0^{\,'} \ge 1/2$, and hence $V_0^{\,'}$ corresponds to positive P-distribution or separable state. This in turn implies that the original $V$ corresponds to a separable state, since $V$ and $V_0^{\,'}$ are related by local transformation. This completes the proof for the case $\det C > 0$. Now suppose $\det C =0$, so that in $V_0$ we have $c_1\ge 0 = c_2$. Carry out a local scaling corresponding to $S_{\mbox{\scriptsize{local}}} = \mbox{diag}\, (\, \sqrt{2a},\, 1/\sqrt{2a},\, \sqrt{2b},\, 1/\sqrt{2b}\,)$, taking $V_0 \to V_0^{\,'}$; the diagonal entries of $V_0^{\,'}$ are $(2a^2,\, 1/2,\,2 b^2,\, 1/2)$, and the two nonzero off-diagonal entries equal $2\sqrt{ab}\,c_1$. 
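The algebra behind the choice of $x$ made above can be verified numerically. The sketch below is an addition for illustration; the values of $a,b,c_1,c_2$ are arbitrary, chosen only to respect $a\ge b$ and $c_1\ge c_2>0$:

```python
import numpy as np

# Illustrative standard-form parameters with a >= b and c1 >= c2 > 0.
a, b, c1, c2 = 2.0, 1.5, 0.8, 0.3
x = ((c1 * a + c2 * b) / (c2 * a + c1 * b)) ** 0.25

# The two ratios that must coincide so that the q- and p-plane blocks of
# V_0' are diagonalized by rotations through equal angles:
lhs = c1 / (x**2 * a - x**-2 * b)
rhs = c2 / (x**-2 * a - x**2 * b)
assert abs(lhs - rhs) < 1e-12

# Consequently the rotation angles in the (q1, q2) and (p1, p2) planes agree:
theta_q = 0.5 * np.arctan2(2 * c1, x**2 * a - x**-2 * b)
theta_p = 0.5 * np.arctan2(2 * c2, x**-2 * a - x**2 * b)
assert abs(theta_q - theta_p) < 1e-12
```

The equality of the two ratios is exactly what allows a single equal-angle rotation to diagonalize both $2\times 2$ blocks simultaneously.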
With this form for $V_0^{\,'}$, the uncertainty principle $V_0^{\,'} + \frac{i}{2} \Omega \ge 0$ implies $V_0^{\,'} \ge 1/2$, establishing separability of the Gaussian state. This completes the proof of our lemma. The proof of the main theorem is completed as follows. We consider in turn the two distinct cases $\det C <0$ and $\det C \ge 0$. Suppose $\det C <0$. Then there are two possibilities. If (19) is violated, then the Gaussian state is definitely entangled since (19) is a necessary condition for separability. If (19) is respected, then the mirror reflected state is a physical Gaussian state with $\det C > 0$ (recall that mirror reflection flips the sign of $\det C$), and is separable by the above lemma. This implies separability of the original state, since a mirror reflected separable state is separable. Finally, suppose $\det C \ge 0$. Condition (19) is definitely satisfied since it is subsumed by the uncertainty principle (17) in this case. By our lemma, the state is separable. This completes the proof of the theorem. We have worked in the Wigner picture. But the geometric interpretation of the partial transpose as mirror reflection in phase space holds for other quasi-probability distributions as well.\ [*Note Added:*]{} Since completion of this work, a preprint by Duan et al. [@duan] describing an interesting approach to separability has appeared. These authors note that “the Peres-Horodecki criterion has some difficulties” in the continuous case, and hence aim at “a strong and practical inseparability criterion”, which proves to be necessary and sufficient in the Gaussian case. We believe that their criterion is unlikely to be any stronger than the Peres-Horodecki criterion (19). Further, it appears that to apply their criterion one has to first solve a pair of nonlinear simultaneous equations to determine a parameter $a_0$ that enters their inequality (16). 
In this sense the Peres-Horodecki criterion (19) seems to be easier to implement in practice; this is over and above the merit of manifest invariance under local transformations and mirror reflection it enjoys. [**Acknowledgement**]{}: The author is grateful to S. Chaturvedi, R. Jagannathan and N. Mukunda for insightful comments. C. H. Bennett, Phys. Today [**48**]{}, 24 (1995);\ D. P. DiVincenzo, Science [**270**]{}, 255 (1995). A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996). P. Horodecki, Phys. Lett. A [**232**]{}, 333 (1997). L. Vaidman, Phys. Rev. A [**49**]{}, 1473 (1994); L. Vaidman and N. Yoran, Phys. Rev. A [**59**]{}, 116 (1999). A. S. Parkins and H. J. Kimble, quant-ph/9904062; quant-ph/9907049; quant-ph/9909021. S. L. Braunstein, Nature [**394**]{}, 47 (1998); quant-ph/9904002; S. Lloyd and S. L. Braunstein, Phys. Rev. Lett. [**82**]{}, 1784 (1999). G. J. Milburn and S. L. Braunstein, quant-ph/9812018. P. van Loock, S. L. Braunstein, and H. J. Kimble, quant-ph/9902030; P. van Loock and S. L. Braunstein, quant-ph/9906021; quant-ph/9906075. S. L. Braunstein and H. J. Kimble, Phys. Rev. Lett. [**80**]{}, 869 (1998). A. Furusawa et al., Science [**282**]{}, 706 (1998). E. P. Wigner, Phys. Rev. [**40**]{}, 749 (1932); R. G. Littlejohn, Phys. Rep. [**138**]{}, 193 (1986). R. Simon, E. C. G. Sudarshan, and N. Mukunda, Phys. Rev. A [**36**]{}, 3868 (1987); R. Simon, N. Mukunda, and B. Dutta, Phys. Rev. A [**49**]{}, 1567 (1994). R. Simon, E. C. G. Sudarshan, and N. Mukunda, Phys. Rev. A [**37**]{}, 3028 (1988). L. M. Duan, G. Giedke, J. I. Cirac, and P. Zoller, quant-ph/9908056 [^1]: This work was presented as part of an invited talk at the 6th International Conference on [*Squeezed States and Uncertainty Relations*]{}, Naples, May 24 – 29, 1999.
--- abstract: 'We summarize features and results on the problem of the existence of Ground States for the Nonlinear Schrödinger Equation on doubly-periodic metric graphs. We extend the results known for the two–dimensional square grid graph to the honeycomb, made of infinitely-many identical hexagons. Specifically, we show how the coexistence between one–dimensional and two–dimensional scales in the graph structure leads to the emergence of threshold phenomena known as dimensional crossover.' author: - 'Riccardo Adami, Simone Dovetta, Alice Ruighi' title: 'Quantum graphs and dimensional crossover: the honeycomb' --- Introduction ============ In the last decade there has been a dramatic increase in the study of the dynamics of systems on metric graphs, or [*networks*]{}. This is mainly due to two different issues: first, the extensive use of mathematics in topics traditionally confined to a more qualitative approach (e.g. biology, social sciences, economics); second, the flexibility and the simplicity of networks as a mathematical environment to model phenomena occurring in the real world. Networks enter the description of evolutionary phenomena on branched structures, namely, one-dimensional complexes made of [*edges*]{}, either finite or infinite, meeting at special points called [*vertices*]{}. Edges and vertices define the [*topology*]{} of the graph. The [*metric*]{} structure is defined by associating to every edge a [*length*]{} and then an arclength. This is easily accomplished by associating to every edge $e$ a coordinate $x \in [0, \ell_e]$, where $\ell_e$ is the length of the edge. Such a scheme applies to signals propagating in networks, circuits, and to more recent scientific and technological challenges of the new emerging field of research called [*Atomtronics*]{}. 
The first appearance of metric graphs in the mathematical modeling of natural systems dates back to 1953 and is due to Ruedenberg and Scherr [@RS53], who modeled a naphthalene array as a network of edges and vertices arranged in a hexagonal lattice, like a honeycomb. Then, a Hamiltonian operator representing the quantum energy of the system was defined on such a structure, and its spectrum was computed in order to deduce the possible values of the energy of the valence electrons. The paper is not only a milestone in physical chemistry, but it also introduces some important mathematical tools like the so-called [*Kirchhoff’s conditions*]{} at the vertices of the graph, and it opens the research field of quantum graphs. Dealing with a standard quantum-mechanical system, the model is governed by a linear equation, i.e. the Schrödinger equation of the system. Since then, the use of metric graphs has become widespread in the literature, exiting the realm of quantum mechanics and extending to electromagnetism, acoustics, and many other physically relevant contexts. However, most of the models were linear. The first systematic introduction to nonlinear dynamics on graphs was given by Ali Mehmeti [@alimehmeti] in a nowadays classical treatise published in 1994, but one had to wait nearly two decades to see the analysis of the dynamics of a specific nonlinear model, first given in [@acfn2011] and concerning the effect of the impact of a fast soliton of the Nonlinear Schrödinger Equation (NLSE) on the vertex of an infinite star-graph. 
After this result, the research on the NLSE on graphs underwent an important development, on one side because of great technical advances in the study of the mathematical aspects of the nonlinear Schrödinger Equation (especially following the seminal papers by Keel and Tao [@kt] and by Kenig and Merle [@km]), and on the other because of the rapid evolution of the technology of Bose-Einstein condensates (BEC), in particular of the new accomplishments in the construction of [*traps*]{} of various shapes, to be used in BEC experiments. In order to motivate the mathematical problem we are dealing with, let us be more specific on this point. A Bose-Einstein condensate is a system of a large (from thousands to millions) number of identical bosons, usually magnetically and/or optically confined in a spatial region, called [*trap*]{}. As predicted by Bose [@bose] and Einstein [@einstein], below a prescribed value of the temperature, called the “critical value”, the system collapses into a very peculiar and non-classical state, in which: - Every particle acquires an individual wave function (which is in general not the case for many-body systems, that are given a collective wave function only). - The wave function is the same for all particles, and is called [*wave function of the condensate*]{}. 
- The wave function of the condensate solves the following variational problem: $$\label{gp} \min_{u \in H^1 (\Omega), \int |u|^2 = N} E_{GP}(u)$$ where - $E_{GP}$ is the [*Gross-Pitaevskii energy (GP) functional*]{}, namely $$\label{gpe} E_{GP} (u) \ = \ \| \nabla u\|_{L^2(\Omega)}^2 + 8 \pi \alpha \| u \|_{L^4 (\Omega)}^4$$ ($\alpha$ is the scattering length of the two-body interaction between the particles in the condensate); - $\Omega$ is the trap where the condensate is confined; - $N$ is the number of particles in the condensate; - provided it exists, the minimum corresponds to a standing wave for the Gross-Pitaevskii [*Nonlinear Schrödinger*]{} Equation $$i \partial_t \psi (t,x) \ = \ - \Delta \psi (t,x) + 32 \pi \alpha | \psi (t,x) |^2 \psi (t,x).$$ It then becomes an important issue to solve the problem of minimizing the functional under the mass constraint $\int_{{\mathcal{G}}}|u|^2 dx = \mu$. As one might expect, the result heavily depends on $\Omega$, not only for what concerns the actual shape of the minimizer, but also for the sake of its mere existence. It is indeed this last issue that has been mostly studied during the last years, and will be the subject of the present note. Existence of Ground States: Results ----------------------------------- From now on, we consider a metric graph $\mathcal G$ and the NLS energy functional defined as $$\label{energy} E (u, \mathcal G) \ = \ \frac 1 2\int_{{\mathcal{G}}}|u'|^2 dx - \frac 1 p \int_{{\mathcal{G}}}|u|^p dx.$$ The first term is called [*kinetic term*]{}, as it represents the kinetic energy associated to the system, while the second is the [*nonlinear term*]{}. The main difference with respect to the GP energy is that a more general nonlinearity power is considered instead of the only case $p=4$, and we restrict to the so-called focusing case, where the nonlinear term has a negative sign, encoding the fact that the two-body interaction between the particles is attractive. 
Owing to the choice of the sign, it is clear that there is a competition between the two terms: the kinetic term favours widespread signals, while the nonlinear term prevents the minimizers from dispersing too much. When a minimizer exists, it always realizes a compromise between the two terms and the two corresponding tendencies: spreading or squeezing. We study the problem of minimizing the energy with the constraint of constant mass, namely $$\label{mass} {\|u\|_{L^2(\mathcal{G})}}^2= \int_{{\mathcal{G}}}|u|^2 \, dx = \mu > 0.$$ We shall use the notation $$\label{inf} {\mathcal E}(\mu):=\inf_{u\in{H_\mu^1(\mathcal{G})}}E(u,{{\mathcal{G}}}),$$ and introduce the ambient space $${H_\mu^1(\mathcal{G})}:=\{\,u\in H^1({{\mathcal{G}}})\,:\,{\|u\|_{L^2(\mathcal{G})}}^2=\mu\,\}$$ We call [*ground state at mass $\mu$*]{} or, for short, [*ground state*]{}, every minimizer of the energy among all functions sharing the same mass $\mu$. First of all, it is well known [@cazenave; @lions; @zakharov] that in the case of the real line, and provided that $2 < p < 6$, the compromise between kinetic and nonlinear term that gives rise to a ground state is realized for every $\mu$ by the [*soliton*]{} $$\label{soliton} \phi_\mu (x) \ = \ \mu^\alpha \phi_1 (\mu^\beta x), \qquad \alpha:=\frac{2}{6-p}, \,\,\, \beta:=\frac{p-2}{6-p},$$ where the prototype soliton is denoted by $\phi_1$ and equals $$\phi_1 (x) : = C{\rm{sech}}(cx)\,.$$ In the case of a real half-line ${{\mathbb R}}^+$, by elementary symmetry arguments one can immediately realize that a solution exists for every value of the mass $\mu$ and it coincides with a half-soliton with the maximum at the origin, possibly multiplied by a phase factor. In contrast with the results for the half-line and for the line (i.e. a pair of half-lines), for the graph made of three half-lines meeting one another at a single vertex (i.e. a [*star graph*]{}) it has been proven that there is no ground state, irrespective of the choice of $\mu$ ([@acfn12]). 
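The exponents in the soliton scaling can be double-checked numerically: the mass of $\mu^\alpha\phi_1(\mu^\beta x)$ is $\mu^{2\alpha-\beta}$ times the mass of $\phi_1$, so a ground state of mass $\mu$ requires $2\alpha-\beta=1$, which holds precisely for $\alpha=2/(6-p)$, $\beta=(p-2)/(6-p)$. The sketch below is an addition for illustration; the normalization $C=c=1$ is a choice made only for the test:

```python
import numpy as np

# Wide grid so that sech^2 tails are negligible for all scalings used below.
x = np.linspace(-80.0, 80.0, 800001)
dx = x[1] - x[0]
phi1 = 1.0 / np.cosh(x)          # prototype soliton with C = c = 1
m1 = np.sum(phi1**2) * dx        # mass of phi_1 (= 2 for sech)

for p in (3.0, 4.0, 5.0):
    alpha, beta = 2.0 / (6.0 - p), (p - 2.0) / (6.0 - p)
    assert abs(2 * alpha - beta - 1.0) < 1e-12   # mass scales exactly like mu
    for mu in (0.5, 2.0, 3.0):
        phi_mu = mu**alpha / np.cosh(mu**beta * x)
        m = np.sum(phi_mu**2) * dx
        assert abs(m - mu * m1) < 1e-5 * mu * m1  # numerical mass equals mu*m1
```

The same scaling also shows why the regime $2<p<6$ is special on the line: only there do both exponents stay positive and finite.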
Starting from this negative result, the problem of ensuring (or excluding) the existence of ground states for the NLS on graphs gained some popularity in the community, and some general results were found, isolating a key topological condition ([@ast15]), studying in detail particular cases ([@DT-p; @mp; @nps]), dealing with compact graphs ([@CDS; @dovetta]), introducing concentrated nonlinearities ([@DT; @serratentarellly; @st2; @T]), focusing on the more challenging $L^2$-critical case (i.e. $p = 6$ [@ast17]). More recently, some pioneering investigations of nonlinear Dirac equations have also been initiated ([@BCT-p; @BCT-19]). The analysis of NLS equations on periodic graphs has been developed for instance in ([@GPS; @pankov; @pelinovsky]), and a systematic discussion of the problem of ground states for periodic graphs has been carried out in [@dovetta-per]; however, here we shall focus on a particular phenomenon highlighted in [@adst] and called [*dimensional crossover*]{}. Investigating the problem of proving the existence or the nonexistence of ground states for the NLS on the regular two–dimensional square grid (see Figure \[grid\]), it was found that four different regimes come into play: 1. if $2 < p < 4$, then a ground state exists for every value $\mu$ of the mass; 2. if $p > 6$, then there is no ground state irrespective of the value chosen for the mass; 3. if $p=6$, then there is a particular value of the mass, called [*critical mass*]{} and denoted by $\mu^*$, such that the infimum of the energy passes from $0$ to $-\infty$ as the mass exceeds $\mu^*$, and ground states never exist for any value of the mass; 4. if $4 \leq p < 6$, then there is a particular value of the mass, $\mu_p$, such that ground states exist only beyond $\mu_p$. 
Now, points 1 and 2 are common to what one finds in the problem of the ground states in ${\mathbb R}$ and ${\mathbb R}^2$. The transition of the actual value of the infimum of the energy as in point 3 is characteristic of one-dimensional domains, in particular of quantum graphs made of a compact core and a certain number of half–lines. What really distinguishes the case of the grid graph from the previously studied cases of quantum graphs is point 4, where an unprecedented behaviour is detected for nonlinearity powers ranging from $4$ to $6$. Here the power $4$ is meaningful since it is the critical power for two-dimensional problems. Then, the fact that the power $4$ corresponds to a transition in the behaviour of the problem reveals that a two-dimensional structure is emerging. Qualitatively, the grid is two-dimensional on a large scale, and this fact must emerge when searching for low-mass ground states, since low mass means widespread functions. From a quantitative point of view, the emergence of the two-dimensional large-scale structure manifests itself in the validity of the [*two-dimensional Sobolev inequality*]{}, i.e. $$\label{2sobolev} \| u \|_{L^2 ({{\mathcal{G}}})} \ \leq \ C \| u' \|_{L^1 ({{\mathcal{G}}})} \qquad(u\in W^{1,1}({{\mathcal{G}}})).$$ As is well known in functional analysis, such an inequality is typical of two-dimensional domains, whereas in one dimension one has the [*one-dimensional Sobolev inequality*]{} $$\label{1sobolev} \| u \|_{L^\infty ({{\mathcal{G}}})} \ \leq \ C \| u' \|_{L^1 ({{\mathcal{G}}})}\qquad (u\in W^{1,1}({{\mathcal{G}}})).$$ Now, the latter inequality is easy to prove for every one-dimensional non-compact graph, just using $$|u (x)| \ \leq \ \int_\gamma |u'(t)| \, dt,$$ where $x$ is any point of the graph and the symbol $\gamma$ denotes a path isomorphic to a half-line starting at $x$.
The existence of such a path is ensured by the fact that the graph is non-compact (therefore it extends up to infinity) and connected (so that it is possible to reach infinity from $x$ through a sequence of adjacent edges). It is then clear that what marks the transition between the one- and the two-dimensional regime is the coexistence of estimates and , so that what really characterizes the grid, as well as every structure displaying a two-dimensional nature on the large scale, is the validity of . As one may expect, such a portrait can be generalized to the setting of periodic graphs exploiting higher-dimensional structures in the large scale, like regular $n$–dimensional grids. In this context, it is readily seen that the dimensional crossover takes place between the one–dimensional and the $n$–dimensional critical power (see [@ad] for the explicit discussion of the case $n=3$). ![The infinite two-dimensional hexagonal grid ${{\mathcal{G}}}$.[]{data-label="fig-grid"}](hex_grid){width="50.00000%"} In this paper we show that for the [*honeycomb graph*]{}, namely the grid made of the periodic repetition of a hexagon along a two-dimensional mesh (see Figure \[fig-grid\]), estimate holds true. Building on this fact, we deduce a complete result about the existence or nonexistence of ground states, closely following the steps introduced in [@adst]. Existence of ground states in the honeycomb: the complete result ---------------------------------------------------------------- According to the roadmap established in [@adst], the validity of a Sobolev inequality results in the validity of a corresponding family of Gagliardo-Nirenberg inequalities.
Namely, from one obtains the [*1-dimensional Gagliardo-Nirenberg inequalities*]{} that provide the following estimate of the potential term in : $$\label{1gn} \| u \|_{L^p ({{\mathcal{G}}})}^p \ \leq \ C \| u' \|_{L^2({{\mathcal{G}}})}^{\frac{p}{2} - 1} \| u \|_{L^2 ({{\mathcal{G}}})}^{\frac{p}{2} + 1},$$ that, inserted in , gives $$\label{subcritical} E (u, {{\mathcal{G}}}) \ \geq \ \frac{1}{2} \| u' \|_{L^2 ({{\mathcal{G}}})}^2 - \frac{C}{p} \| u' \|_{L^2 ({{\mathcal{G}}})}^{\frac{p}{2} - 1} \mu^{\frac{p}{4} + \frac{1}{2}},$$ from which one immediately concludes that, if $2 < p < 6$, then $$\mathcal{E}(\mu) > - \infty,$$ opening the possibility of the existence of a ground state. In order to conclude that a ground state exists, one should then consider the behaviour of minimizing sequences. By periodicity, the translation invariance of the problem immediately rules out the loss of compactness by escape to infinity, so that the only possibility for a minimizing sequence not to converge is to spread along the grid, reaching zero energy in the limit. As a consequence, if there exists a function with negative energy, then minimizing sequences must converge and therefore a ground state exists. The existence of a function with negative energy in the cases $2 < p <4 $ for every $\mu$, and $4 \leq p < 6$ for $\mu$ large enough, is the content of Theorem \[THM\] and of the positive part of point $(i)$ in Theorem \[THM 2\].
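The conclusion $\mathcal{E}(\mu)>-\infty$ for $2<p<6$ rests only on the gradient exponent $\frac{p}{2}-1$ in the estimate above being smaller than $2$. A symbolic sketch (with illustrative values $C=1$ and $\mu=1$, both assumptions of this aside) confirms the dichotomy:

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # t plays the role of ||u'||_{L^2(G)}

def lower_bound(p, C=1, mu=1):
    # Right-hand side of the energy estimate:
    # E(u,G) >= t**2/2 - (C/p) * t**(p/2 - 1) * mu**(p/4 + 1/2)
    return (t**2 / 2
            - sp.Rational(C, p) * t**(sp.Rational(p, 2) - 1)
            * sp.Integer(mu)**(sp.Rational(p, 4) + sp.Rational(1, 2)))

# Subcritical p = 4: the kinetic term dominates for large t, so the
# energy is bounded from below at fixed mass.
assert sp.limit(lower_bound(4), t, sp.oo) == sp.oo

# Supercritical p = 8: the nonlinear term dominates and the bound is lost.
assert sp.limit(lower_bound(8), t, sp.oo) == -sp.oo
```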
Conversely, to get to the core of our non-existence results, let us consider inequality and notice that for $p=6$ it specializes to $$\label{1gn6} \| u \|_{L^6 ({{\mathcal{G}}})}^6 \ \leq \ C \| u' \|_{L^2({{\mathcal{G}}})}^{2} \| u \|_{L^2 ({{\mathcal{G}}})}^{4}.$$ On the other hand, from one derives $$\label{2gn} \| u \|_{L^p ({{\mathcal{G}}})}^p \ \leq \ C \| u' \|_{L^2 ({{\mathcal{G}}})}^{p - 2} \| u \|_{L^2 ({{\mathcal{G}}})}^2,$$ that, for $p=4$, gives $$\label{2gn4} \| u \|_{L^4 ({{\mathcal{G}}})}^4 \ \leq \ C \| u' \|_{L^2 ({{\mathcal{G}}})}^{2} \| u \|_{L^2 ({{\mathcal{G}}})}^2.$$ Now, interpolating between and one has, for every $p \in [4,6]$, $$\label{interpolate} \| u \|_{L^p ({{\mathcal{G}}})}^p \ \leq \ C \| u' \| _{L^2 ({{\mathcal{G}}})}^{2} \| u \|_{L^2 ({{\mathcal{G}}})}^{p-2}.$$ Then, by $$\label{energybelow} \begin{split} E (u, {{\mathcal{G}}}) \ \geq \ & \frac{1}{2} \| u' \|_{L^2 ({{\mathcal{G}}})}^2 - \frac{C}{p} \| u' \|_{L^2 ({{\mathcal{G}}})}^2 \| u \|_{L^2 ({{\mathcal{G}}})}^{p-2} \\ \ = \ & \frac{1}{2} \| u' \|_{L^2 ({{\mathcal{G}}})}^2 \left(1 - \frac{2C}{p} \mu^{\frac{p}{2} - 1} \right). \end{split}$$ Then, for every $p \in [4,6]$ there exists a positive value $\mu_p > 0$ given by $$\mu_p:=\Big(\frac{p}{2C}\Big)^{\frac{2}{p-2}}\,,$$ with $C$ being the sharpest constant in , such that - If $\mu < \mu_p$, then $E (u, {{\mathcal{G}}}) > 0$ for every $u \in H^1_\mu ({{\mathcal{G}}})$. Since, by spreading the function $u$ along the grid, one immediately gets $\mathcal{E}(\mu) = 0$, it turns out that the infimum is not attained and ground states do not exist. - If $\mu > \mu_p$, it turns out that $\mathcal{E}(\mu) < 0$, and possibly infinitely negative. The dimensional crossover lies exactly in this continuous transition from the subcritical regime (where for every mass there is a ground state) to the supercritical one, where there are values of the mass for which the energy is not bounded from below.
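The exponents in the interpolated inequality can be double-checked symbolically: solving for the Hölder weights between the $L^4$ and $L^6$ estimates recovers the powers $2$ (on the gradient) and $p-2$ (on the mass). A sketch of this exponent arithmetic (an aside, not part of the proof):

```python
import sympy as sp

p, a, b = sp.symbols('p a b')

# Hoelder interpolation: ||u||_p^p <= (||u||_4^4)**a * (||u||_6^6)**b
# requires a + b = 1 and 4a + 6b = p.
sol = sp.solve([sp.Eq(a + b, 1), sp.Eq(4 * a + 6 * b, p)], [a, b])
a_val, b_val = sol[a], sol[b]
assert sp.simplify(a_val - (6 - p) / 2) == 0
assert sp.simplify(b_val - (p - 4) / 2) == 0

# Plug in the two Gagliardo-Nirenberg estimates:
#   ||u||_4^4 <= C ||u'||^2 ||u||^2   and   ||u||_6^6 <= C ||u'||^2 ||u||^4
grad_exp = sp.simplify(2 * a_val + 2 * b_val)  # exponent of ||u'||_{L^2}
mass_exp = sp.simplify(2 * a_val + 4 * b_val)  # exponent of ||u||_{L^2}
assert grad_exp == 2
assert sp.simplify(mass_exp - (p - 2)) == 0
```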
In standard cases, such a transition occurs only at the unique critical exponent, which equals $6$ in dimension one and $4$ in dimension two. In the case of a doubly periodic graph such as the honeycomb considered here, the transition actually takes place for all the nonlinearities $p$ between $4$ and $6$, so that a continuum of critical exponents arises between the critical power of dimension $2$ and the one of dimension $1$. Here are the complete results: \[THM\] Let $2<p<4$. Then, for every $\mu>0$, there exists a ground state of mass $\mu$. \[THM 2\] For every $p\in[4,6]$ there exists a critical mass $\mu_p>0$ such that - if $p\in(4,6)$ then ground states of mass $\mu$ exist if and only if $\mu\geq\mu_p$, and $$\label{inf p46} {\mathcal E}(\mu)\begin{cases} =0 & \text{if }\mu\leq\mu_p\\ <0 & \text{if }\mu>\mu_p\,. \end{cases}$$ - if $p=4$ then ground states of mass $\mu$ exist if $\mu>\mu_4$ and they do not exist if $\mu<\mu_4$. Furthermore, holds true also in the case $p=4$. - if $p=6$ then ground states never exist, independently of the value of $\mu$, and $$\label{inf p6} {\mathcal E}(\mu)=\begin{cases} 0 & \text{if }\mu\leq\mu_6\\ -\infty & \text{if }\mu>\mu_6\,. \end{cases}$$ Theorems \[THM\] and \[THM 2\] do not differ from their analogues in the case of the square grid, treated in [@adst]. The only genuinely new ingredients concern the proof of the Sobolev inequality as in Theorem \[thm-sob\] and the construction of a function with negative energy proving the existence of a ground state in the regime $p\in(2,4)$. The remainder of the paper is organised as follows. Section \[sec:notation\] sets some notation for the honeycomb, whereas Section \[sec:sobolev\] develops the proof of the Sobolev inequality . Finally, in Section \[sec:competitor\] we exhibit functions realizing strictly negative energy when $p\in(2,4)$, giving the proof of Theorem \[THM\]. Notation {#sec:notation} ======== Before going further, a bit of notation is necessary.
In particular, to ease several of the upcoming arguments, it is useful to decompose the hexagonal grid into two families of parallel infinite paths, so that the whole graph ${{\mathcal{G}}}$ can be described as their union. To this purpose, let us introduce the following construction. Fix any cell in ${{\mathcal{G}}}$ and denote by $o$ its lower left vertex. Note that, starting at $o$, there is one horizontal edge on the right and both one up-directed and one down-directed edge on the left. Consider then the infinite path running through $o$ built up as follows. First, moving from $o$ to the right, follow the infinite path that alternates a horizontal and an up-directed edge. Then, moving from $o$ to the left, follow the infinite path that alternates a down-directed and a horizontal edge. We denote by $L_0$ the union of these two paths (see Figure \[fig-paths\](a)). Similarly, consider both the infinite path that goes from $o$ to the left alternating an up-directed and a horizontal edge, and the one that originates at $o$ and moves to the right alternating a horizontal and a down-directed edge. We denote the union of these two by $R_0$ (see Figure \[fig-paths\](b)). Note that on both $L_0$ and $R_0$ natural coordinates $x_{L_0}:L_0\to(-\infty,+\infty),\,x_{R_0}:R_0\to(-\infty,+\infty)$ can be defined, so that they can be identified with real lines (with the origin corresponding to $o$). Now, consider for instance the vertex belonging to $L_0$ which is at distance $2$ from $o$ on its right. It is immediate to see that an infinite path running through this vertex and parallel to $R_0$ can be recovered simply by repeating the procedure used to construct $R_0$. However, this is not the case if we consider the vertex of $L_0$ at distance $1$ from $o$ on its right, as it already belongs to $R_0$. More generally, through every vertex on $L_0$ located at an even distance from $o$ on its right runs an infinite path parallel to $R_0$.
It is then straightforward to check that the same holds true also for every vertex on $L_0$ located at an odd distance from $o$ on its left (whereas vertices at even distances on the left do not provide any additional path). This leads to a family $\{R_j\}_{j\in\mathbb{Z}}$ of infinite parallel paths in ${{\mathcal{G}}}$. Analogously, one can consider the family of infinite paths $\{L_i\}_{i\in\mathbb{Z}}$, all parallel to $L_0$, which arises taking any vertex on $R_0$ either at an even distance from $o$ on its right or at an odd distance from $o$ on its left and repeating the steps in the construction of $L_0$. We stress the fact that the set $ \Big(\bigcup_{i\in\mathbb{Z}}L_i\Big)\cap\Big(\bigcup_{j\in\mathbb{Z}}R_j\Big)\, $ consists of all the horizontal edges of $\mathcal{G}$, whence it follows that $$\mathcal{G}\subset\Big(\bigcup_{i\in\mathbb{Z}}L_i\Big)\cup\Big(\bigcup_{j\in\mathbb{Z}}R_j\Big)\,.$$ In particular, $L_i\cap R_j\neq\emptyset$ for every $i,j\in\mathbb{Z}$, as they share exactly one horizontal edge. Finally, given $i,j\in\mathbb{Z}$, we denote by $I_i ^j\subset L_i$ the union of the horizontal edge that $L_i$ shares with $R_j$ and the up-directed edge on its right. Moreover, we let $v_i ^j$ be the first vertex of $I_i ^j$ that we meet walking down $R_j$ from $-\infty$ (see Figure \[fig-I\](a)). Note that, for every $i$, $L_i= \bigcup_{j\in\mathbb{Z}}I_i ^j$. Similarly, we define $J_j ^i$ as the union of the horizontal edge shared by $L_i$ and $R_j$ and the up-directed edge on its left. As before, we observe that, for every $j \in \mathbb{Z}$, $R_j=\bigcup_{i\in\mathbb{Z}}J_j ^i$, and again we denote by $w_j ^i$ the first vertex of $J_j ^i$ that we encounter walking down $L_i$ from $-\infty$ (Figure \[fig-I\](b)).
Sobolev inequality {#sec:sobolev} ================== This section is devoted to the derivation of some functional inequalities that are responsible for the grid ${{\mathcal{G}}}$ interpolating between one-dimensional and two-dimensional behaviours. In particular, the two-dimensional nature of the graph shows up explicitly in the following result, stating the validity of the Sobolev inequality in the form typical of dimension two. \[thm-sob\] For every $u \in W^{1,1}(\mathcal{G})$, $$\label{sobolev} \|u\|_{L^2(\mathcal{G})} \leq 2\sqrt{2l} \|u'\|_{L^1(\mathcal{G})}.$$ We first recall that $\mathcal{G}\subset\Big(\bigcup_{i\in\mathbb{Z}}L_i\Big)\cup\Big(\bigcup_{j\in\mathbb{Z}}R_j\Big)$, so that $$\label{norm} \|u\|^2 _{L^2(\mathcal{G})} \leq \sum_i \|u\|^2 _{L^2(L_i)} + \sum_j \|u\|^2 _{L^2(R_j)}.$$ In order to prove (\[sobolev\]), we aim to estimate the two terms on the right-hand side of (\[norm\]). Let us start with $\sum_i \|u\|^2 _{L^2(L_i)}$, where $\|u\|^2 _{L^2(L_i)} = \int_{L_i} |u(x)|^2 dx$. Consider any point $x \in \mathcal{G}$ located on $L_i$. Observe that $x$ can be reached following at least two different paths on $\mathcal{G}$. The first one walks down $L_i$ from $-\infty$ to $x$, whereas the second one runs through $R_j$ from $-\infty$ to the vertex $v_i ^j$ and then moves on $L_i$ from $v_i ^j$ to $x$ (Figure \[fig-path\]). Identifying with some abuse of notation the points $x$ and $v_i^j$ with their corresponding coordinates $x_{L_i}(x),\,x_{L_i}(v_i^j)$ and $x_{R_j}(v_i^j)$, we denote by $L_i(-\infty,x)$, $R_j(-\infty,v_i^j)$ and $L_i(v_i^j,x)$ the paths from $-\infty$ to $x$ along $L_i$, from $-\infty$ to $v_i^j$ along $R_j$ and from $v_i^j$ to $x$ along $L_i$, respectively.
Thus, we get $$\label{direct} u(x)=\int_{L_i(-\infty,\,x)} u'(\tau) d\tau$$ and $$\label{indirect} u(x)=\int_{R_j(-\infty,\,v_i ^j)} u'(\tau) d\tau + \int_{L_i(v_i ^j,\,x)} u'(\tau) d\tau.$$ Multiplying (\[direct\]) and (\[indirect\]) and using the fact that $L_i(-\infty,\,x)\subset L_i,\,R_j(-\infty,\,v_i^j)\subset R_j$ and $L_i(v_i^j,\,x)\subset I_i^j$, we estimate $$\begin{aligned} |u(x)|^2 & = \bigg| \int_{L_i(-\infty,\,x)} u'(\tau) d\tau \bigg|\cdot \bigg| \int_{R_j(-\infty,\,v_i ^j)} u'(\tau) d\tau + \int_{L_i(v_i ^j,\,x)} u'(\tau) d\tau \bigg| \\ & \leq \bigg( \int_{L_i(-\infty,\,x)} |u'(\tau)| d\tau \bigg) \cdot \bigg( \int_{R_j(-\infty,\,v_i ^j)} |u'(\tau)| d\tau + \int_{L_i(v_i ^j,\,x)} |u'(\tau)| d\tau \bigg) \\ & \leq \bigg( \int_{L_i} |u'(\tau)| d\tau \bigg) \cdot \bigg( \int_{R_j} |u'(\tau)| d\tau + \int_{I_i ^j} |u'(\tau)| d\tau \bigg).\end{aligned}$$ Then, integrating over $L_i$, $$\label{stima} \int_{L_i} |u(x)|^2 dx \leq \int_{L_i} |u'(\tau)| d\tau \bigg( \int_{L_i} \bigg( \int_{R_j} |u'(\tau)| d\tau + \int_{I_i ^j} |u'(\tau)| d\tau \bigg) dx \bigg).$$ Recall that $L_i= \bigcup_{j \in \mathbb{Z}} I_i ^j$ and note that, as functions of $x$, both $\int_{R_j} |u'(\tau)| d\tau$ and $\int_{I_i ^j} |u'(\tau)| d\tau$ are piecewise constant, namely constant on each $I_i ^j$.
Hence, we obtain $$\label{const1} \int_{L_i} \bigg( \int_{R_j} |u'(\tau)| d\tau \bigg) dx = 2l \sum_{j \in \mathbb{Z}} \int_{R_j} |u'(\tau)| d\tau,$$ and $$\label{const2} \int_{L_i} \bigg( \int_{I_i ^j} |u'(\tau)| d\tau \bigg) dx = 2l \sum_{j \in \mathbb{Z}} \int_{I_i ^j} |u'(\tau)| d\tau = 2l\int_{L_i} |u'(\tau)| d\tau.$$ By (\[stima\]), (\[const1\]) and (\[const2\]) it follows that $$\begin{aligned} \int_{L_i} |u(x)|^2 dx & \leq \int_{L_i} |u'(\tau)| d\tau \bigg( 2l \sum_{j \in \mathbb{Z}} \int_{R_j} |u'(\tau)| d\tau + 2l\int_{L_i} |u'(\tau)| d\tau \bigg) \\ & \leq 4l \| u' \| _{L^1(\mathcal{G})} \int_{L_i} |u'(\tau)| d\tau,\end{aligned}$$ as each term in the sum can be dominated by $\| u' \| _{L^1(\mathcal{G})}$. Finally, summing over $i \in \mathbb{Z}$ yields $$\sum_{i \in \mathbb{Z}} \int_{L_i} |u(x)|^2 dx \leq 4l\| u' \| _{L^1(\mathcal{G})} \sum_{i \in \mathbb{Z}} \int_{L_i} |u'(\tau)| d\tau \leq 4l \| u' \| ^2 _{L^1(\mathcal{G})}.$$ The same procedure can be adapted to estimate $\sum_{j \in \mathbb{Z}} \int_{R_j} |u(x)|^2 dx$, replacing $I_i^j$ with $J_j^i$ whenever needed, so that by (\[norm\]) we end up with $$\|u\|^2 _{L^2(\mathcal{G})} \leq 8l\| u' \| ^2 _{L^1(\mathcal{G})}.$$ Arguing as in the proof of Theorem 2.3 in [@adst], it can then be proved that Theorem \[thm-sob\] entails the following two-dimensional Gagliardo-Nirenberg inequality on ${{\mathcal{G}}}$: $$\label{2GN} {\|u\|_{L^p(\mathcal{G})}}^p\leq C {\|u\|_{L^2(\mathcal{G})}}^2{\|u'\|_{L^2(\mathcal{G})}}^{p-2}$$ for every $u\in H^1({{\mathcal{G}}})$ and $p\geq2$ (here $C$ denotes a universal constant).
On the other hand, as for every non-compact metric graph, it is known that also the one-dimensional Gagliardo-Nirenberg inequality $$\label{1GN} {\|u\|_{L^p(\mathcal{G})}}^p\leq {\|u\|_{L^2(\mathcal{G})}}^{\frac{p}{2}+1}{\|u'\|_{L^2(\mathcal{G})}}^{\frac{p}{2}-1}$$ holds true on ${{\mathcal{G}}}$, again for every $u\in H^1({{\mathcal{G}}})$ and $p\geq2$ (for a simple proof relying on the theory of rearrangements on graphs see for instance [@ast-jfa]). Hence, combining –, a new version of the Gagliardo-Nirenberg inequality can be derived, which we refer to as the [*interpolated Gagliardo-Nirenberg inequality*]{}, and which accounts for the dimensional crossover in Theorem \[THM 2\]. Indeed, for every $p\in[4,6]$ there exists a constant $K_p$, depending only on $p$, such that $${\|u\|_{L^p(\mathcal{G})}}^p\leq K_p{\|u\|_{L^2(\mathcal{G})}}^{p-2}{\|u'\|_{L^2(\mathcal{G})}}^2$$ for every $u\in H^1({{\mathcal{G}}})$ (as the argument is the same, we refer to Corollary 2.4 in [@adst] for a complete proof of this fact). Existence result: proof of Theorem \[THM\] {#sec:competitor} ========================================== Throughout this section, we provide the proof of Theorem \[THM\], showing that whenever $p$ is smaller than $4$, ground states exist for every value of the mass. To this purpose, we first recall a general compactness result, originally proved in Proposition 3.3 of [@adst], which is valid for every doubly periodic metric graph, so that it also applies to the two-dimensional hexagonal grid we are dealing with. \[prop\_comp\] Let $p<6$ and $\mu>0$. If ${\mathcal E}(\mu)<0$, then a ground state with mass $\mu$ exists. In view of Proposition \[prop\_comp\], given $\mu>0$, it is enough to prove that ${\mathcal E}(\mu)<0$ to show that ground states in ${H_\mu^1(\mathcal{G})}$ exist. We henceforth consider the following construction.
For every $i\in{{\mathbb Z}}$, recall that $L_i$ is identified with a real line $(-\infty,+\infty)$ through a coordinate $x_{L_i}$, and we are free to choose which vertex ${\textsc{v}}\in L_i$ corresponds to the origin $x_{L_i}({\textsc{v}})=0$. We thus fix the origin of each $L_i$ in the following way. First, set the origin of $L_0$ at any of its vertices being the left endpoint of a horizontal edge. Then, since the up-directed edge on the left of this vertex connects $L_0$ with $L_1$, set the origin of $L_1$ at the other endpoint of this bridging edge. Let then $\overline{L}_0$ be the straight line in the plane passing through both the origin of $L_0$ and the one of $L_1$. For each $i\in{{\mathbb Z}}$, $\overline{L}_0$ intersects $L_i$ in exactly one vertex of ${{\mathcal{G}}}$, so that we set this point to be the origin of $L_i$. Note that the intersection of $\overline{L}_0$ with the whole grid ${{\mathcal{G}}}$ is a disjoint union of edges, each joining a couple of paths $L_i,L_{i+1}$, for some $i\in{{\mathbb Z}}$. Precisely, we write $$\overline{L}_0\cap{{\mathcal{G}}}=\bigsqcup_{i\in{{\mathbb Z}}}b_{2i}^0$$ where, given $i\in{{\mathbb Z}}$, $b_{2i}^0$ denotes the bridging edge between $L_{2i}$ and $L_{2i+1}$ that belongs to $\overline{L}_0$. Similarly, for every $k\in{{\mathbb Z}}$, let $\overline{L}_k$ be the straight line in the plane parallel to $\overline{L}_0$ passing through the vertex ${\textsc{v}}\in L_0$ corresponding to $x_{L_0}({\textsc{v}})=k$, so that $$\overline{L}_k\cap{{\mathcal{G}}}=\begin{cases} \bigsqcup_{i\in{{\mathbb Z}}}b_{2i}^k & \text{if $k$ even}\\ \bigsqcup_{i\in{{\mathbb Z}}}b_{2i-1}^k & \text{if $k$ odd} \end{cases}$$ where again $b_{2i}^k$ (resp. $b_{2i-1}^k$) is the edge of ${{\mathcal{G}}}$ joining $L_{2i}$ with $L_{2i+1}$ (resp. $L_{2i-1}$ with $L_{2i}$) that belongs to $\overline{L}_k$.
Moreover, identifying each $b_j^k$ with the interval $[0,1]$ through the coordinate $x_{b_j^k}:b_j^k\to[0,1]$, we adopt the following convention: if $j\geq0$, then we set $x_{b_j^k}({\textsc{v}})=0$ for ${\textsc{v}}=b_j^k\cap L_j$, whereas if $j<0$, then we set $x_{b_j^k}({\textsc{v}})=0$ for ${\textsc{v}}=b_j^k\cap L_{j+1}$. Then, given $\varepsilon>0$, we define (see Figure \[fig-neg\]) $$u_\varepsilon(x):=\begin{cases} e^{-\varepsilon(|x|+|i|)} & \text{if $x\in L_i$, for some $i\in{{\mathbb Z}}$}\\ e^{-\varepsilon(|x|+|i|+j)} & \text{if $x\in b_j^i$, for some $i,j\in{{\mathbb Z}}$, $j\geq0$}\\ e^{-\varepsilon(|x|+|i|+|j+1|)} & \text{if $x\in b_j^i$, for some $i,j\in{{\mathbb Z}}$, $j<0$}\,. \end{cases}$$ ![The construction of the function $u_\varepsilon$ in the proof of Theorem \[THM\], with the straight lines $\overline{L}_i$ and the values of $u_\varepsilon$ at the vertices of ${{\mathcal{G}}}$.[]{data-label="fig-neg"}](exagon){width="50.00000%"} By construction, $u_\varepsilon\in H^1({{\mathcal{G}}})$ and, given $i\in{{\mathbb Z}}$, $$\begin{split} \int_{L_i}|u_\varepsilon|^p{\,dx}=&\,2\int_0^{+\infty}e^{-p\varepsilon (x+|i|)}{\,dx}=\frac{2e^{-p\varepsilon |i|}}{p\varepsilon}\\ \int_{\overline{L}_i\cap{{\mathcal{G}}}}|u_\varepsilon|^p{\,dx}=&\int_0^{+\infty}e^{-p\varepsilon (x+|i|)}{\,dx}=\frac{e^{-p\varepsilon |i|}}{p\varepsilon} \end{split}$$ for every $p\geq2$ and $$\begin{split} \int_{L_i}|u_\varepsilon'|^2{\,dx}=&\,2\varepsilon^2\int_0^{+\infty}e^{-2\varepsilon(|x|+|i|)}{\,dx}=\varepsilon e^{-2\varepsilon|i|}\\ \int_{\overline{L}_i\cap{{\mathcal{G}}}}|u_\varepsilon'|^2{\,dx}=&\,\varepsilon^2\int_0^{+\infty}e^{-2\varepsilon (x+|i|)}{\,dx}=\frac{\varepsilon e^{-2\varepsilon |i|}}{2}\,.
\end{split}$$ Since ${{\mathcal{G}}}=\Big(\bigcup_{i\in\mathbb{Z}}L_i\Big)\cup\Big(\bigcup_{i\in\mathbb{Z}}\overline{L}_i\cap{{\mathcal{G}}}\Big)$, we get $$\begin{split} \int_{{\mathcal{G}}}|u_\varepsilon|^p{\,dx}=&\sum_{i \in \mathbb{Z}}\int_{L_i}|u_\varepsilon|^p{\,dx}+\sum_{i \in \mathbb{Z}}\int_{\overline{L}_i\cap{{\mathcal{G}}}}|u_\varepsilon|^p{\,dx}=3\Big(\frac{1}{p\varepsilon}+2\sum_{i=1}^{\infty}\frac{e^{-p\varepsilon i}}{p\varepsilon}\Big)=\frac{3(e^{p\varepsilon}+1)}{p\varepsilon(e^{p\varepsilon}-1)}\\ \int_{{\mathcal{G}}}|u_\varepsilon'|^2{\,dx}=&\sum_{i \in \mathbb{Z}}\int_{L_i}|u_\varepsilon'|^2{\,dx}+\sum_{i \in \mathbb{Z}}\int_{\overline{L}_i\cap{{\mathcal{G}}}}|u_\varepsilon'|^2{\,dx}=3\Big(\frac{\varepsilon}{2}+\sum_{i=1}^{\infty}\varepsilon e^{-2\varepsilon i}\Big)=\frac{3\varepsilon(e^{2\varepsilon}+1)}{2(e^{2\varepsilon}-1)}\,. \end{split}$$ Hence, setting $$k_\varepsilon:=\Big(\,\frac{2\varepsilon(e^{2\varepsilon}-1)}{3(e^{2\varepsilon}+1)}\mu\,\Big)^{1/2}$$ and letting $$v_\varepsilon(x):=k_\varepsilon u_\varepsilon(x)\qquad \forall\, x\in{{\mathcal{G}}}$$ yields $$\|v_\varepsilon\|_{L^2({{\mathcal{G}}})}^2=k_\varepsilon^2\int_{{\mathcal{G}}}|u_\varepsilon|^2{\,dx}=\mu\,.$$ Therefore, $v_\varepsilon\in{H_\mu^1(\mathcal{G})}$ for every $\varepsilon>0$ and, taking advantage of the explicit formulas above, as $\varepsilon\to0$, $$E(v_\varepsilon,{{\mathcal{G}}})=\frac{1}{2}k_\varepsilon^2\int_{{\mathcal{G}}}|u_\varepsilon'|^2{\,dx}-\frac{1}{p}k_\varepsilon^p\int_{{\mathcal{G}}}|u_\varepsilon|^p{\,dx}\sim\frac{1}{2}\mu\varepsilon^2-\frac{1}{p}C\mu^{p/2}\varepsilon^{p-2}$$ for some $C>0$ depending only on $p$. Thus, whenever $p<4$ and $\varepsilon$ is small enough, we have $${\mathcal E}(\mu)\leq E(v_\varepsilon,{{\mathcal{G}}})<0$$ and we conclude. [99]{} Adami R., Cacciapuoti C., Finco D., Noja D. [*Fast solitons on star graphs*]{}, Rev. Math. Phys. [**23**]{} 04 (2011), 409–451. Adami R., Cacciapuoti C., Finco D., Noja D.
[*On the structure of critical energy levels for the cubic focusing NLS on star graphs*]{}. J. Phys. A [**45**]{} (2012), no. 19, 192001, 7pp. Adami R., Dovetta S., [*One-dimensional versions of three-dimensional system: Ground states for the NLS on the spatial grid*]{}, Rend. Mat. [**39**]{} (2018), 181–194. Adami R., Dovetta S., Serra E., Tilli P., [*Dimensional crossover with a continuum of critical exponents for NLS on doubly periodic metric graphs*]{}, arXiv:1805.02521, to appear on Analysis and PDEs. Adami R., Serra E., Tilli P.: [*NLS ground states on graphs*]{}. Calc. Var. and PDEs [**54**]{} (2015) no. 1, 743–761. Adami R., Serra E., Tilli P. [*Negative energy ground states for the $L^2$–critical NLSE on metric graphs*]{}. Comm. Math. Phys. [**352**]{} (2017), no. 1, 387–406. Adami R., Serra E., Tilli P., [*Threshold phenomena and existence results for NLS ground states on graphs*]{}, J. Funct. An. **271(1)**, 201–223 (2016). Ali Mehmeti, F. Nonlinear waves in Networks, Wiley VCH, 1994. Borrelli W., Carlone R., Tentarelli L., [*Nonlinear Dirac equation on graphs with localized nonlinearities: bound states and nonrelativistic limit*]{}, arXiv:1807.06937 \[math.AP\] (2018). Borrelli W., Carlone R., Tentarelli L., [*An overview on the standing waves of nonlinear Schrödinger and Dirac equations on metric graphs with localized nonlinearity*]{}, arXiv:1901.02696 \[math.AP\] (2019). Bose S.N., [*Plancks Gesetz und Lichtquantenhypothese*]{}, Zeit. für Physik, [**26**]{} (1924), 178–181. Cacciapuoti C., Dovetta S., Serra E., [*Variational and stability properties of constant solutions to the NLS equation on compact metric graphs*]{}. Milan Journal of Mathematics, **86(2)** (2018), 305–327. Dovetta S., [*NLS ground states on metric graphs with localized nonlinearities*]{}, J. Differential Equations, **264** (2018), no. 7, 4806–4821. S. Dovetta, L. 
Tentarelli, [*Ground states of the $L^2$-critical NLS equation with localized nonlinearity on a tadpole graph*]{}, Operator Theory: Advances and Applications, to appear. S. Dovetta, L. Tentarelli, [*$L^2$–critical NLS on noncompact metric graphs with localized nonlinearity: topological and metric features*]{}, arXiv:1811.02387 \[math.AP\] (2018). Gilg S., Pelinovsky D., Schneider G., *Validity of the NLS approximation for periodic quantum graphs*, Nonlinear Differ. Equ. Appl., no 6, Art. 63, 30 pp. (2016). A. Pankov, *Nonlinear Schrödinger equations on periodic metric graphs*, Discrete Contin. Dyn. Syst. [**38**]{} (2018), no. 2, 697–714. D. Pelinovsky, G. Schneider, *Bifurcations of Standing Localized Waves on Periodic Graphs*, Ann. H. Poincaré [**18 (4)**]{} (2017), 1185–1211. Ruedenberg K., Scherr C.-W., [*Free-Electron Network Model for Conjugated Systems. I. Theory*]{}, J. Chem. Phys. [**21**]{}, 1565 (1953). Cazenave T. [*Semilinear Schrödinger Equations*]{}, Courant Lecture Notes 10, American Mathematical Society, Providence, RI, 2003. S. Dovetta, [*Mass-constrained ground states of the stationary NLSE on periodic metric graphs*]{}, arXiv:1811.06798 (2018). Einstein A., [*Quantentheorie des einatomigen idealen Gases*]{}, Sitz. Preus. Akad. Wiss., [**1**]{} (1925), 3. Keel M., Tao T., [*Endpoint Strichartz Estimates*]{}, Amer. J. Math., [**120**]{}, 5 (1998), 955–980. Kenig C., Merle F., [*Global well-posedness, scattering and blow-up for the energy critical, focusing non-linear Schrödinger Equation in the radial case*]{}, Inv. Math. [**166**]{} (3) (2006), 645–675. Cazenave T., Lions P.-L. [*Orbital stability of standing waves for some nonlinear Schrödinger equations*]{}. Commun. Math. Phys. [**85**]{} (1982), no. 4, 549–561. Marzuola J. L., Pelinovsky D. E., *Ground state on the dumbbell graph*, Appl. Math. Res. Express **2016**, no. 1 (2016), 98–145. 
Noja, D., Pelinovsky, D., Shaikhova, G., [*Bifurcations and stability of standing waves in the nonlinear Schrödinger equation on the tadpole graph*]{}, Nonlinearity [**28**]{} (2015), vol. 7, 2343–2378. Serra E., Tentarelli L. [*Bound states of the NLS equation on metric graphs with localized nonlinearities*]{}. J. Diff. Eq. [**260**]{} (2016), no. 7, 5627–5644. Serra E., Tentarelli L. [*On the lack of bound states for certain NLS equations on metric graphs*]{}. Nonlinear Anal. [**145**]{} (2016), 68–82. Tentarelli L., NLS ground states on metric graphs with localized nonlinearities. *J. Math. Anal. Appl.* [**433**]{} (2016), no. 1, 291–304. Zakharov V.E., Shabat B., [*Exact Theory of Two–Dimensional Self–Focusing and One–Dimensional Self–Modulation of Waves in Nonlinear Media*]{}, Soviet Phys. JETP [**34**]{} (1) (1972), 62–71.
--- bibliography: - 'text/ms.bib' title: | Reasoning About Physical Interactions with\ Object-Oriented Prediction and Planning ---
--- abstract: 'The presence of active forces in various biological and artificial systems may change how those systems behave under forcing. We present a minimal model of a suspension of passive or active swimmers driven on the boundaries by time-dependent forcing. In particular, we consider a time-periodic drive from which we determine the linear response functions of the suspension. These response functions are interpreted in terms of the storage and dissipation of energy through the particles within the system. We find that while a slowly driven active system responds in a way similar to a passive system with a re-defined diffusion constant, a rapidly driven active system exhibits a novel behavior related to a change in the motoring activity of the particles due to the external drive.' author: - Michael Wang - 'Alexander Y. Grosberg' title: 'Dynamical response of a system of passive or active swimmers to time-periodic forcing' --- \[sec:Introduction\]Introduction ================================ Active matter has been of great interest due to its use in understanding a wide range of biological and artificial out-of-equilibrium systems [@Marchetti; @et; @al; @Bechinger; @et; @al]. Many active systems studied consist of individual particles that consume energy locally and generate independent motion. Examples of these self-propelled particles include bacteria (e.g. *E. coli* [@Berg; @et; @Brown]); their artificial counterparts, micron-sized catalytic swimmers [@Palacci; @et; @al; @Paxton; @et; @al]; and molecular motors [@Astumian; @Julicher; @et; @al]. Two quantities that characterize the motion of self-propelled particles are a propulsion force or velocity and, most importantly, a persistence time for the direction of propulsion. On short time scales, their motion is ballistic-like, while on long time scales in free space, they undergo random walks and effectively diffuse, much like passive Brownian particles undergoing thermal diffusion.
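This long-time diffusive behavior is easy to check numerically. The following sketch is not from the paper: the reversal-rate convention (each swimmer reverses direction at rate $\alpha$) and the resulting effective diffusivity $D_{\mathrm{eff}}=v^2/2\alpha$ are assumptions of this illustration. It simulates a population of 1D run-and-tumble swimmers and compares their mean-squared displacement with the diffusive prediction $2D_{\mathrm{eff}}t$.

```python
import numpy as np

rng = np.random.default_rng(0)
v, alpha = 1.0, 1.0          # propulsion speed and tumble (reversal) rate
dt, T, N = 0.01, 50.0, 5000  # time step, total time, number of swimmers

x = np.zeros(N)
s = rng.choice([-1.0, 1.0], size=N)    # propulsion direction of each swimmer
for _ in range(int(T / dt)):
    x += v * s * dt                    # ballistic motion between tumbles
    flip = rng.random(N) < alpha * dt  # each swimmer reverses at rate alpha
    s[flip] *= -1

# Long-time MSD of a 1D run-and-tumble particle: <x^2> ~ 2 * D_eff * T,
# with D_eff = v**2 / (2 * alpha) for this reversal-rate convention.
msd = np.mean(x**2)
D_eff = v**2 / (2 * alpha)
assert abs(msd - 2 * D_eff * T) < 0.15 * (2 * D_eff * T)
```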
However, despite the similarities with passive particles on long time scales, it is known that a nonzero persistence time combined with interactions with an environment, for example obstacles and other particles, can lead to emergent out-of-equilibrium behaviors: liquid-gas phase separation in the absence of attractive interactions [@Cates; @and; @Tailleur; @Redner; @et; @al; @Fily; @and; @Marchetti; @Wysocki; @et; @al; @Bialke; @et; @al], preferential motion of bacteria in one direction through funnel-shaped gates [@Galajda; @et; @al], and bacteria-powered microscopic ratchets, gears, or motors [@Di; @Leonardo; @et; @al; @Vizsnyiczai; @et; @al]. In addition, the persistent nature of active particles—and more generally, systems containing components driven by non-thermal noise—has a noticeable effect on the dynamic response of those systems to mechanical and chemical perturbations [@Sokolov; @et; @al; @Rafai; @et; @al; @Hatwalne; @et; @al; @Turlier; @et; @al; @Chu; @et; @al; @Fodor; @et; @al; @Bi; @et; @al; @Caprini; @et; @al; @Solon; @et; @al; @Marconi; @et; @al; @Sheshka; @et; @al]. What has been less studied is how the details of the diffusive and ballistic movements of passive and active particles within a larger system may affect that system’s response to external forcing. In this paper, we present a solvable minimal model of a suspension of non-interacting passive or active particles driven periodically at the boundaries. We extract the response functions of the system, which relate the external forcing to the behavior of the suspension, and consider how the particles store and dissipate energy from the drive. We find that the diffusivities of the particles play an important role in how the suspension responds to forcing on different time-scales. We observe that on short time-scales, persistent particles indeed exhibit a response different from that of diffusive particles. \[sec:Model\]Model ================== ![Model of the system.
**Left:** Large bulk of particles confined by a piston and the corresponding ramp potential (Eq. (\[ramp potential\])). **Right:** Particles trapped and transported between two co-moving pistons represented by a V-shaped potential (Eq. (\[v-shaped potential\])). The red double arrows indicate movement of the pistons and potentials due to drive.[]{data-label="Model"}](Model_Figure.png) To model an externally forced suspension of passive or active particles, we consider two scenarios (Fig. \[Model\]): a uniform bulk of particles confined by a single piston and particles trapped between two pistons. To make this model analytically tractable, we restrict ourselves to 1D and describe the pistons as linear potentials. In the first case of a single piston, the confining potential is given by a ramp potential $$\Phi_{\textrm{ramp}}(x)= \begin{cases} 0, & x<0\\ fx, & x\ge0 \end{cases}. \label{ramp potential}$$ Here, $f$ is the force experienced by the particles when they enter the ramp region. In the second case of two pistons, the confining potential is given by a V-shaped potential $$\Phi_{\textrm{V-shaped}}(x)=f\left|x\right|. \label{v-shaped potential}$$ The presence of a bulk, or lack thereof, affects the response of these particles to an external time-dependent forcing. A time-dependent forcing can be realized by moving the positions of the pistons according to some protocol $a(t)$, or mathematically, by replacing $\Phi(x)$ with $\Phi(x-a(t))$. In this paper, we consider a time-periodic drive given by $a(t)=a\sin\omega t$. Note that we assume the pistons are completely permeable to fluid such that the drive does not generate fluid flow and the only change to the local density of particles is through direct interaction with the potentials. The particles we study here are non-interacting passive particles with diffusivity $D$ and run-and-tumble (RnT) particles with propulsion velocity $\pm v$ and tumble rate $\alpha$. 
The particles are assumed to be overdamped with mobility $\mu$ in the static fluid. For passive particles, the density $\rho(x,t)$ evolves according to the usual advection-diffusion equation $$\frac{\partial\rho}{\partial t}=-\frac{\partial}{\partial x}\Big[{-\mu\Phi'(x-a(t))\rho}\Big]+D\frac{\partial^2\rho}{\partial x^2} \label{eq:diffusion pde lab}$$ while for RnT particles in 1D, the densities $\rho_{\pm}(x,t)$ of right ($+$) and left ($-$) moving particles evolve according to [@Schnitzer] $$\begin{aligned} \frac{\partial\rho_{\pm}}{\partial t}=-\frac{\partial}{\partial x}\Big[(\pm v-\mu\Phi'(x-a(t)))\rho_{\pm}\Big]-\alpha\rho_{\pm}+\alpha\rho_{\mp}. \label{eq:rnt pde lab} \end{aligned}$$ The first term on the right-hand side is responsible for drift due to self-propulsion and external forces, while the second and third terms capture the transitions or tumbles between the two propulsion directions. We are interested in how perturbations to the local density influence the response due to external forcing. \[sec:Calculation\]Calculation ============================== \[subsec:Setup\]Co-moving frame and dimensionless parameters ------------------------------------------------------------ It is more convenient to solve the model described in Sec. \[sec:Model\] in the rest frame of the pistons. Transforming to said frame using $y=x-a\sin\omega t$ introduces a fictitious drift $-\dot{a}(t)=-a\omega\cos\omega t$ (see Appendix \[app:Solution Details\] for the resulting PDEs). We pick the characteristic time scale to be $\tau_{\textrm{drive}}=1/\omega$, the time scale of the drive. Consequently, we pick the characteristic length scale to be the root mean-squared displacement of the particles in that time $\tau_{\textrm{drive}}$. $\bullet$ For passive particles, that length scale is $l_{\textrm{diffusion}}=\sqrt{D\tau_{\textrm{drive}}}=\sqrt{D/\omega}$. With these choices, Eq.
(\[eq:diffusion pde lab\]) becomes $$\frac{\partial\rho}{\partial\tilde{t}}=-\frac{\partial}{\partial\tilde{y}}\left[\left(-\epsilon\cos\tilde{t}-\gamma\tilde{\Phi}'(\tilde{y})\right)\rho\right]+\frac{\partial^2\rho}{\partial\tilde{y}^2}, \label{eq:diffusion pde rest dim}$$ where the rescaled potential is $\tilde{\Phi}=\Phi/f$. The dimensionless parameters are defined as $$\epsilon=\sqrt{\frac{a^2\omega}{D}},\ \ \ \gamma=\sqrt{\frac{\mu^2f^2}{\omega D}}. \label{eq:def passive params}$$ It is useful to note that $\epsilon$ and $\gamma$ can be written as the ratios $a/l_{\textrm{diffusion}}$ and $l_{\textrm{diffusion}}/l_{\textrm{penetration}}$, respectively. Here, $l_{\textrm{penetration}}=D/\mu f$ (alternatively $k_BT/f$ for thermal particles) is approximately the maximum distance a diffusing particle penetrates into a linear potential region. We want to study the linear response of the system to external drive, that is, the response which is linear in the drive amplitude $a$. We claim that the correct way to form the dimensionless criterion for determining the “smallness” of the amplitude $a$ is the above defined $\epsilon=a\sqrt{\omega/D}=a/l_{\textrm{diffusion}}$. This is clear physically because at small drive frequencies $\omega$, even a relatively large drive amplitude $a$ corresponds to a gentle drive, as the system remains very close to steady-state at all times. We will therefore perform expansions linear in $\epsilon\ll1$. The parameter $\gamma$ characterizes how far the particles climb up or down the confining potentials and hence how much of the potentials they can explore over each cycle. If $\gamma\gg1$ ($l_{\textrm{diffusion}}\gg l_{\textrm{penetration}}$), the particles have sufficient time to diffuse over the entirety of their confinement up to the penetration depth. We call this the “slow” drive regime.
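As a quick numerical illustration of these definitions (a sketch; the parameter values below are arbitrary, and the symbols follow Eq. (\[eq:def passive params\])), $\epsilon$ and $\gamma$ can be computed as the length-scale ratios $a/l_{\textrm{diffusion}}$ and $l_{\textrm{diffusion}}/l_{\textrm{penetration}}$:

```python
import math

def passive_dimensionless(a, omega, D, mu, f):
    """Dimensionless drive amplitude (epsilon) and confinement parameter
    (gamma) for passive particles, expressed as ratios of length scales."""
    l_diffusion = math.sqrt(D / omega)   # distance diffused during one drive time 1/omega
    l_penetration = D / (mu * f)         # depth a particle climbs into the linear potential
    epsilon = a / l_diffusion            # "smallness" of the drive
    gamma = l_diffusion / l_penetration  # gamma >> 1: slow drive; gamma << 1: fast drive
    return epsilon, gamma

# Same particles and pistons, driven slowly vs rapidly
eps_slow, gamma_slow = passive_dimensionless(a=0.01, omega=0.01, D=1.0, mu=1.0, f=1.0)
eps_fast, gamma_fast = passive_dimensionless(a=0.01, omega=100.0, D=1.0, mu=1.0, f=1.0)
```

Note that the same drive amplitude is “small” ($\epsilon\ll1$) in both cases; it is $\gamma$ alone that distinguishes the slow ($\gamma\gg1$) from the fast ($\gamma\ll1$) regime.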
On the other hand, if $\gamma\ll1$ ($l_{\textrm{diffusion}}\ll l_{\textrm{penetration}}$), the particles can only explore a small portion of the potentials. We call this the “fast” drive regime. $\bullet$ For RnT particles, the effective diffusivity over long time scales is given by $D=v^2/2\alpha$ and so we pick $l_{\textrm{diffusion}}=\sqrt{v^2/\alpha\omega}$. Eq. (\[eq:rnt pde lab\]) becomes $$\frac{\partial\rho_{\pm}}{\partial\tilde{t}}=-\frac{\partial}{\partial\tilde{y}}\left[\left(\pm\gamma_{\omega}^{-1/2}-\epsilon\cos\tilde{t}-\gamma_f\gamma_{\omega}^{-1/2}\tilde{\Phi}'(\tilde{y})\right)\rho_{\pm}\right]-\gamma_{\omega}^{-1}\rho_{\pm}+\gamma_{\omega}^{-1}\rho_{\mp}, \label{eq:rnt pde rest dim}$$ where the parameters are defined as $$\epsilon=\sqrt{\frac{a^2\omega\alpha}{v^2}},\ \ \ \gamma_{\omega}=\frac{\omega}{\alpha},\ \ \ \gamma_f=\frac{\mu f}{v}. \label{eq:def active params}$$ Here, $\epsilon$ plays the same physical role as that of the passive particles and as such, we consider corrections to linear order in $\epsilon$. The key difference between passive and active particles is the introduction of a new time scale $\alpha^{-1}$. The parameter $\gamma_{\omega}$ controls the number of times a RnT particle tumbles during one cycle. This introduces a new regime where the drive is faster than the tumbling ($\omega\gg\alpha$), which cannot happen for passive particles. Finally, the parameter $\gamma_f$ compares the force the potential exerts on the particles to their propulsion force. We take $\gamma_f<1$ so that the RnT particles are able to climb up the potentials. It should be noted that the quantity $\gamma_f\gamma_{\omega}^{-1/2}$ in Eq. (\[eq:rnt pde rest dim\]) is analogous to $\gamma$ for passive particles. This is easily seen by taking $D\sim v^2/\alpha$. As we will see, this means that the ratio $l_{\textrm{diffusion}}/l_{\textrm{penetration}}$ will be important in determining the behavior of RnT particles, in addition to $\gamma_{\omega}$. 
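As an independent sanity check of this setup (not part of the analytical treatment), the trapped RnT particles can be simulated directly with a plain Euler scheme. The sketch below uses arbitrary illustrative parameters with $\gamma_f=\mu f/v=0.5<1$, so the particles can climb the static V-shaped potential but remain confined near its vertex:

```python
import random

def simulate_rnt_v(v=1.0, alpha=1.0, mu=1.0, f=0.5,
                   dt=0.01, steps=10000, n=100, seed=1):
    """Euler simulation of overdamped run-and-tumble particles in the
    static V-shaped potential Phi = f|x| (thermal noise neglected).
    Each particle drifts at +/-v - mu*f*sgn(x) and tumbles at rate alpha."""
    rng = random.Random(seed)
    xs = [0.0] * n
    dirs = [rng.choice((-1.0, 1.0)) for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            sgn = (xs[i] > 0) - (xs[i] < 0)
            xs[i] += (dirs[i] * v - mu * f * sgn) * dt
            if rng.random() < alpha * dt:  # tumble: reverse propulsion direction
                dirs[i] = -dirs[i]
    return xs

xs = simulate_rnt_v()
mean_abs = sum(abs(x) for x in xs) / len(xs)
max_abs = max(abs(x) for x in xs)
```

Since $\gamma_f<1$, the inward speed $v+\mu f$ exceeds the outward speed $v-\mu f$, and the cloud stays bound within a few persistence lengths $v/\alpha$ of the origin.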
\[subsec:Solution\]Mechanical force and linear response function ---------------------------------------------------------------- In linear response theory, we may ask how much extra force $\Delta F$ the pistons exert on the suspension of particles due to a particular motion $a(t)$ of the pistons. Mathematically, we write $$\Delta F(t)=\int B(t-t')a(t')dt', \label{eq:B def}$$ where $B(t)$ is the linear response function linking the perturbations to the responses. In frequency space, Eq. (\[eq:B def\]) can be written as $\Delta F_{\omega}=B_{\omega}a_{\omega}$, that is, a drive with frequency $\omega$ leads to a response with the same frequency in the linear regime. As discussed, we solve Eqs. (\[eq:diffusion pde rest dim\]) and (\[eq:rnt pde rest dim\]) to linear order in $\epsilon$. The details of the calculation for all cases can be found in Appendix \[app:Solution Details\]. For all of the cases, the resulting density can be written in the form $$\rho(y,t)=\rho^{(0)}(y)+2\epsilon\textrm{Re}\left\{p(y)e^{i\omega t}\right\},$$ where $\rho=\rho_++\rho_-$ for RnT particles and $p(y)$ describes the position dependence of the perturbation to the density. The total mechanical force applied to the suspension can be computed as $$F(t)=-\int_{-\infty}^{\infty}\Phi'(y)\rho(y,t)dy=F^{(0)}+\Delta F(t), \label{force}$$ where $F^{(0)}$ is the force needed to keep the pistons stationary when there is no time-dependent drive. For the ramp potential in 1D, this is the usual ideal gas pressure/force $F^{(0)}=-\rho_0D/\mu$, where $D=v^2/2\alpha$ for RnT particles. For the V-shaped potential, $F^{(0)}=0$. The extra force generated by the motion $a(t)=a\sin\omega t$ in the pistons is $$\Delta F(t)=-2\epsilon\int_{-\infty}^{\infty}\Phi'(y){\,\textrm{Re}}\left\{p(y)e^{i\omega t}\right\}dy. 
\label{eq:Delta F}$$ Using the relation $\Delta F_{\omega}=B_{\omega}a_{\omega}$ and after some algebra (Appendix \[app:Response Function\]), we arrive at $$B_{\omega}=-\frac{2i}{l_{\textrm{diffusion}}}\int_{-\infty}^{\infty}\Phi'(y)p(y)dy, \label{response function}$$ ![**Top:** Response function for V-shaped potential. **Bottom:** Response function for ramp potential. Real (blue) and imaginary (orange) parts of the response function for passive (dots) and RnT (solid line) particles. For the rescaled frequency $\tilde{\omega}=\gamma_{\omega}/2\gamma_f^2$, we show $\gamma_f=0.1$ for the RnT particles. There are three regimes: $\tilde{\omega}\ll1$, $1\ll\tilde{\omega}\ll\gamma_f^{-2}$, and $\tilde{\omega}\gg\gamma_f^{-2}$.[]{data-label="fig:Bw passive active v ramp"}](Bw_passive_active_v_ramp.png) $\bullet$ We start with the results for the **double pistons** (**V-shaped potential**). For passive particles, the response function is $$B_{\omega}=i\rho_0f\left(-2i-\gamma^2+\gamma\sqrt{\gamma^2+4i}\right),$$ where $\gamma=\sqrt{\mu^2f^2/\omega D}$ (Eq. (\[eq:def passive params\])). For RnT particles, it is $$B_{\omega}=i\rho_0f\left(-2i-\frac{2\gamma_f^2}{\gamma_{\omega}}+\frac{\sqrt{2}\gamma_f}{\gamma_{\omega}^{1/2}}\sqrt{\frac{2\gamma_f^2}{\gamma_{\omega}}+4i-2\gamma_{\omega}}\right),$$ where $\gamma_{\omega}=\omega/\alpha$ and $\gamma_f=\mu f/v$ (Eq. (\[eq:def active params\])). Both cases can be compared in terms of a single rescaled frequency $\tilde{\omega}=\gamma_{\omega}/2\gamma_f^2=\gamma^{-2}$ (Fig. \[fig:Bw passive active v ramp\]) by equating the diffusivities $v^2/2\alpha$ and $D$. There are a total of three regimes. For passive particles, the two regimes $\tilde{\omega}\ll1$ and $\tilde{\omega}\gg1$ correspond to the aforementioned slow and fast drives. 
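The limiting behavior of this passive V-shaped result is easy to check numerically (a sketch; $\rho_0=f=\mu=D=1$ are illustrative choices). For slow drive ($\gamma\gg1$) the response is nearly purely imaginary, $B_{\omega}\approx2i\rho_0f/\gamma^2$ (dissipation-dominated), while for fast drive ($\gamma\ll1$) it tends to the real constant $2\rho_0f$ (storage-dominated):

```python
import cmath
import math

def B_passive_v(omega, rho0=1.0, f=1.0, mu=1.0, D=1.0):
    """Passive-particle response function for the V-shaped potential:
    B = i*rho0*f*(-2i - gamma^2 + gamma*sqrt(gamma^2 + 4i))."""
    gamma = mu * f / math.sqrt(omega * D)
    return 1j * rho0 * f * (-2j - gamma**2 + gamma * cmath.sqrt(gamma**2 + 4j))

B_slow = B_passive_v(1e-4)   # gamma = 100 >> 1
B_fast = B_passive_v(1e8)    # gamma = 1e-4 << 1
```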
The key is that for RnT particles there is an additional region $\tilde{\omega}\gg\gamma_f^{-2}$ or $\omega\gg\alpha$, in which the pistons oscillate many times during a single tumble, that is, the particles appear persistent on the time scale of the drive. ![Level sets of $B_{\omega}$ for RnT particles in the V-shaped potential. The contours correspond to fixed $\tilde{\omega}$. **Top:** Real part. **Bottom:** Imaginary part. There are two crossovers indicated by the black dashed lines: $\gamma_{\omega}\sim\gamma_f^2$, which divides the slow and fast regimes; and $\gamma_{\omega}\sim1$, which divides the diffusive and persistent regimes.[]{data-label="fig:Bw contour v"}](reimBw_active_v.png) The region $\omega\ll\alpha$ where the passive and active particles have similar behaviors is the passive limit of RnT particles, which can be obtained by taking $v,\alpha\rightarrow\infty$ while holding $v^2/2\alpha$ constant, or equivalently moving along the contours $\gamma_{\omega}=2\tilde{\omega}\gamma_f^2$ for fixed $\tilde{\omega}$ (Fig. \[fig:Bw contour v\]). $\bullet$ For the **single piston** (**ramp potential**), the response function for passive particles is $$B_{\omega}=\frac{i^{3/2}\rho_0f}{2}\left(\gamma-\sqrt{\gamma^2+4i}\right).$$ For RnT particles, the expression is rather cumbersome and is presented in Appendix \[app:Variable Definitions\] (Eq. (\[eq:response ramp potential\])). We see the same three regions as with the V-shaped potential. The key difference is the frequency dependence of the response functions in the slow drive regime $\tilde{\omega}\ll1$ (Fig. \[fig:Bw passive active v ramp\], bottom).
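The same kind of check works for the single-piston result (a sketch with $\rho_0=f=\mu=D=1$; here $i^{3/2}=e^{3\pi i/4}$ on the principal branch). For fast drive the response tends to the real constant $\rho_0f$, while for slow drive $B_{\omega}\approx(\rho_0f/\gamma)e^{i\pi/4}$, with equal real and imaginary parts, consistent with storage and dissipation rates of the same order:

```python
import cmath
import math

def B_passive_ramp(omega, rho0=1.0, f=1.0, mu=1.0, D=1.0):
    """Passive-particle response function for the ramp potential:
    B = (i^{3/2} rho0 f / 2) * (gamma - sqrt(gamma^2 + 4i))."""
    gamma = mu * f / math.sqrt(omega * D)
    return (1j**1.5) * rho0 * f / 2 * (gamma - cmath.sqrt(gamma**2 + 4j))

B_fast = B_passive_ramp(1e8)                          # gamma = 1e-4 << 1
B_slow = B_passive_ramp(1e-4)                         # gamma = 100 >> 1
B_slow_asymptote = 0.01 * cmath.exp(0.25j * cmath.pi)  # (rho0*f/gamma) * e^{i pi/4}
```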
\[sec:Discussion\]Discussion ============================ To briefly summarize, we obtained the linear response functions of a system of passive particles (diffusivity $D$) or run-and-tumble particles (swim speed $v$ and tumble rate $\alpha$) under external forcing by considering a suspension of these particles driven by a single piston or between two pistons, which we represented by linear potentials. In particular, we considered a periodic drive $a(t)=a\sin\omega t$, from which we extracted the frequency dependence of the response of the mechanical force to the drive. The real and imaginary parts of the response function $B_{\omega}=B_{\omega}'+iB_{\omega}''$ are associated with the storage and dissipation of energy, respectively. Note that the excess mechanical force $\Delta F$ and the rate at which excess work $\Delta\dot{W}=\Delta F\dot{a}$ is performed on the particles can be written as $$\Delta F(t)=aB_{\omega}'\sin\omega t+aB_{\omega}''\cos\omega t$$ and $$\Delta\dot{W}=\frac{1}{2}a^2\omega B_{\omega}'\sin2\omega t+\frac{1}{2}a^2\omega B_{\omega}''\left(1+\cos2\omega t\right), \label{eq:work rate}$$ where the first terms correspond to in-phase responses and the second terms, out-of-phase responses. The work performed on the system by the drive is stored and dissipated by the particles. The first term on the right-hand side of Eq. (\[eq:work rate\]), which is due to the in-phase “elastic”-like response, is related to storage. In this non-interacting ideal gas-like system, energy is stored when the particles climb up and spend time in the potentials. Even though no energy is stored on average over time, we identify the amplitude of the rate of storage as $$\Delta\dot{U}\approx\frac{1}{2}a^2\omega B_{\omega}'. \label{eq:storage rate}$$ The second term on the right-hand side of Eq. (\[eq:work rate\]), which comes from the out-of-phase “viscous”-like response, is related to dissipation.
As expected, this term has a non-zero average over each cycle since the dissipated energy cannot be returned. Thus the average rate of dissipation is $$\langle\dot{Q}\rangle=\frac{1}{2}a^2\omega B_{\omega}''. \label{eq:dissipation rate}$$ There are three distinct regimes of response, depending on the driving frequency $\omega$: 1. Slow drive—In this regime, we have $\sqrt{D/\omega}\gg D/\mu f$ (for the case of active particles, $D=v^2/2\alpha$), that is, the region over which particles diffuse in a cycle is much greater than the penetration depth of the linear potentials. As a consequence, the particles can explore the confining potentials and effectively equilibrate with them. 2. Fast drive, diffusive particles—In this regime, we instead have $\sqrt{D/\omega}\ll D/\mu f$. In other words, the particles will not have enough time to diffuse far enough in a cycle to explore the confinement and thus will not equilibrate with the potentials. 3. Fast drive, persistent particles—In this regime, the drive is sufficiently fast $\omega\gg\alpha$ such that the rate of driving is faster than the tumble rate of the particles and the particles appear persistent on the time scale of the drive. Note that in the first two regimes (Fig. \[fig:Bw passive active v ramp\]), the passive and active particles have similar behaviors. This is the passive limit for RnT particles, that is, for sufficiently slow drive, the RnT particles effectively behave as passive particles with diffusivity $v^2/2\alpha$. As such, for these two regimes, we will consider particles with diffusivity $D$, keeping in mind that we can simply replace $D$ with $v^2/2\alpha$ for active particles. We now illustrate our results with scaling arguments.
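The decomposition in Eq. (\[eq:work rate\]) can be verified numerically (a sketch; the values of $B_{\omega}'$, $B_{\omega}''$, $a$, and $\omega$ are arbitrary). Averaging the instantaneous power $\Delta F\,\dot{a}$ over one period kills the $B_{\omega}'$ and $\cos2\omega t$ terms and leaves exactly $\frac{1}{2}a^2\omega B_{\omega}''$:

```python
import math

Bp, Bpp = 0.3, 0.8    # illustrative values of B' (storage) and B'' (dissipation)
a, omega = 0.05, 2.0  # drive amplitude and frequency

# Average DeltaF * da/dt over one drive period
period = 2 * math.pi / omega
N = 10000
dt = period / N
avg_power = 0.0
for k in range(N):
    t = k * dt
    dF = a * Bp * math.sin(omega * t) + a * Bpp * math.cos(omega * t)
    adot = a * omega * math.cos(omega * t)
    avg_power += dF * adot * dt
avg_power /= period

expected = 0.5 * a**2 * omega * Bpp  # only the out-of-phase part survives the average
```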
| | $\langle\dot{Q}\rangle$ (V-shaped) | $\Delta\dot{U}$ (V-shaped) | $\langle\dot{Q}\rangle$ (ramp) | $\Delta\dot{U}$ (ramp) |
|----|----|----|----|----|
| slow drive ($\tilde{\omega}\ll1$) | $\rho_0\frac{D}{\mu f}\frac{(a\omega)^2}{\mu}$ | $\rho_0\frac{D}{\mu f}\frac{(a\omega)^2}{\mu}\frac{1}{\gamma^2}$ | $\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}$ | $\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}$ |
| fast drive, diffusive particles ($1\ll\tilde{\omega}\ll\gamma_f^{-2}$) | $\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega\gamma)^2}{\mu}$ | $\rho_0a^2f\omega$ | $\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega\gamma)^2}{\mu}$ | $\rho_0a^2f\omega$ |
| fast drive, persistent particles ($\gamma_f^{-2}\ll\tilde{\omega}$) | $\rho_0\frac{a^2\mu f^2\alpha}{v}$ | $\rho_0a^2f\omega$ | $\rho_0\frac{a^2\mu f^2\alpha}{v}$ | $\rho_0a^2f\omega$ |

  : Average rate of dissipation $\langle\dot{Q}\rangle$ and the characteristic rate of storage $\Delta\dot{U}$ for a drive $a(t)=a\sin\omega t$. As defined in the text, $\tilde{\omega}=\omega D/\mu^2f^2$. Physically, the first regime corresponds to $l_{\textrm{diffusion}}\gg l_{\textrm{penetration}}$; the second regime to $l_{\textrm{diffusion}}\ll l_{\textrm{penetration}}$ and $\omega\ll\alpha$; and the third regime to $\omega\gg\alpha$.
Note that for slow and fast drives, the case of active particles can be obtained by replacing $D$ with $v^2/2\alpha$.[]{data-label="tab:dissipation storage"} \[subsec:slow drive\]Slow drive ------------------------------- When the drive is sufficiently slow, the drive probes time scales greater than the relaxation time of diffusing particles in the linear potentials and we have $l_{\textrm{diffusion}}\gg l_{\textrm{penetration}}$ ($\gamma\gg1$). Note that in this regime, the system can be described by an adiabatic approximation. We start with the average rate at which particles dissipate work performed by the drive (Eq. (\[eq:dissipation rate\])): $$\langle\dot{Q}\rangle_{\textrm{V-shaped}}\sim\rho_0\frac{D}{\mu f}\frac{(a\omega)^2}{\mu}, \label{eq:disspation, v-shaped, slow}$$ $$\langle\dot{Q}\rangle_{\textrm{ramp}}\sim\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}. \label{eq:dissipation, ramp, slow}$$ Before we continue, it is useful to interpret the dissipation rates for both the slow and fast drive regimes in terms of the particle current generated by the drive (Appendix \[app:Currents\]). The current can be written as $J_x(y,t)=\rho(y,t)v_x(y,t)$, where $v_x$ is the average drift velocity of the particles relative to the static fluid. The dissipation rate due to drag can then be computed as $$\dot{Q}=\int_{-\infty}^{\infty}\rho\frac{v_x^2}{\mu}dy.$$ Upon time averaging, we finally note that the average dissipation rate scales as $$\langle\dot{Q}\rangle\sim\rho_0L\frac{V^2}{\mu}, \label{eq:dissipation form}$$ where $L$ is the characteristic decay length of the current, or $\rho_0L$ is the number of particles contributing to dissipation, and $V$ is the characteristic drift velocity of those particles. For the **double pistons** (**V-shaped potential**), we find in terms of Eq. (\[eq:dissipation form\]) $L\sim D/\mu f$, the penetration depth, and $V\sim a\omega$, the characteristic speed of the pistons. 
Since the particles can equilibrate with the potential, they on average drift with the same velocity as the pistons. Thus, we have the net transport of $\rho_0D/\mu f$ particles (the total number of trapped particles) each with a dissipation rate of $(a\omega)^2/\mu$ (Eq. (\[eq:disspation, v-shaped, slow\])). For the **single piston** (**ramp potential**), particles are still transported with the piston. However, the key difference from the V-shaped potential is in the number of particles that contribute to dissipation. This is due to the presence of a bulk. Again using the language of Eq. (\[eq:dissipation form\]), we have $L\sim\sqrt{D/\omega}$, the diffusion distance. In the bulk, only particles within a diffusion distance of the piston will equilibrate with the potential. Particles farther than that remain unaffected by the movement of the pistons during a cycle. Thus, unlike with the V-shaped potential, $\rho_0\sqrt{D/\omega}$ particles are instead transported each with dissipation rate $(a\omega)^2/\mu$ (Eq. (\[eq:dissipation, ramp, slow\])). The storage rates given by Eq. (\[eq:storage rate\]) are $$\Delta\dot{U}_{\textrm{V-shaped}}\sim\rho_0\frac{D}{\mu f}\frac{(a\omega)^2}{\mu}\frac{1}{\gamma^2}\ll\langle\dot{Q}\rangle_{\textrm{V-shaped}}, \label{eq:storage, v-shaped, slow}$$ $$\Delta\dot{U}_{\textrm{ramp}}\sim\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}\sim\langle\dot{Q}\rangle_{\textrm{ramp}}. \label{eq:storage, ramp, slow}$$ For the **double pistons** (**V-shaped potential**), we expect that very little energy will be stored during a cycle. Indeed, we find that the storage rate is significantly smaller than the dissipation rate since $\gamma\gg1$. This is due to the symmetry of the potential. 
When the potential shifts positions slowly, depending on the direction, the potential energy on one side of the origin increases slightly from an influx of particles while the potential energy on the other side decreases by roughly an equal amount from an efflux of particles. For the **single piston** (**ramp potential**), the situation is different since particles in the bulk do not have any potential energy; a flow in or out of the potential region will lead to a much more significant change in the potential energy of the system. In fact, we find that the storage rate is of order the dissipation rate. As the piston moves towards the bulk, the influx of particles into the potential region leads to an increase in potential energy while the efflux of particles from the bulk does nothing. Since the drive is slow, these particles have sufficient time to leave the region and dissipate any energy given to them. Note that since the slow drive regime can be described by an adiabatic approximation, these results should hold for any confining potential. ![Schematic of the density gradients that give rise to dissipation through diffusion for fast drive. If the drive were sufficiently slow, the original density (solid blue) would relax to the new shifted density (dashed blue). However, for fast drive, only particles within $l_{\textrm{diffusion}}$ of the origin will see a change in the environment and begin to relax through diffusion.[]{data-label="fig:density gradients"}](density_gradient_fast_drive.png) \[subsec:Fast drive\]Fast drive, diffusive particles ---------------------------------------------------- When the pistons are driven quickly, the particles do not have sufficient time to relax in the potential regions since $l_{\textrm{diffusion}}\ll l_{\textrm{penetration}}$ ($\gamma\ll1$). In this case, the dissipation rates (Eq. 
(\[eq:dissipation rate\])) go as $$\langle\dot{Q}\rangle_{\textrm{V-shaped}}\sim\langle\dot{Q}\rangle_{\textrm{ramp}}\sim\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega\gamma)^2}{\mu}. \label{eq:dissipation, passive fast}$$ Here for both potentials, we find $L\sim\sqrt{D/\omega}$ and $V\sim a\omega\gamma$, which is much slower than the drive. It is not too surprising that the decay length scale of the current is the diffusion distance. For fast drive, particles farther than that distance from the origin will not see a change in their environment (they experience the same constant force $\pm f$ throughout each cycle) and thus the density in those regions will roughly remain unaffected by the drive. However, particles within a diffusion distance of the origin will notice the shift in the potentials and begin to relax to a new steady-state density (Fig. \[fig:density gradients\]). This relaxation of density near the origin generates the current. To understand how the particular speed $V\sim a\omega\gamma$ arises, note that the condition $l_{\textrm{diffusion}}\ll l_{\textrm{penetration}}$ can be written as $fl_{\textrm{diffusion}}\ll D/\mu$ ($\equiv k_BT_{\textrm{eff}}$). Therefore, the effect of the potentials is weak compared to that of diffusion, that is, the diffusive flux due to density gradients dominates over the advective flux due to the potentials. The density gradients of interest are those that occur over the region of size $\sim l_{\textrm{diffusion}}$ around the origin since only the density in that region evolves. In other words, by shifting the potential a distance $a$ on time-scale $\omega^{-1}$, the density at the boundaries of the region bounded by $\pm l_{\textrm{diffusion}}$ is either increased or decreased by a fixed amount $\delta\rho$ (Fig. \[fig:density gradients\]). The unperturbed density is given by $\rho(x)=\rho_0e^{-\mu f|x|/D}$. 
Shifting and taking the difference, we find for small $a$ $$\delta\rho=\rho(\pm l_{\textrm{diffusion}})-\rho(\pm l_{\textrm{diffusion}}-a)\approx\mp\frac{\rho_0a\mu f}{D}.$$ Therefore, the diffusive current over this region can be computed as $$J\sim D\frac{\delta\rho}{l_{\textrm{diffusion}}}\sim\rho_0a\omega\gamma,$$ from which we obtain $V\sim a\omega\gamma$. Note that although $\delta\rho\approx0$ at $-l_{\textrm{diffusion}}$ for the case of the ramp potential (the bulk does not change very much), the density gradient still scales the same and we would still obtain the same characteristic drift speed. The storage rates (Eq. (\[eq:storage rate\])) go as $$\Delta\dot{U}_{\textrm{V-shaped}}\sim\Delta\dot{U}_{\textrm{ramp}}\sim(\rho_0a)(fa)\omega. \label{eq:storage, passive fast}$$ Note that since the particles on average move much more slowly than the pistons, we can treat them as effectively stationary over a period. Thus, when a piston moves a distance $a$ towards particles, $\rho_0a$ particles are forced a distance of order $a$ into the potential region. The potential energy increase per particle is then roughly $fa$. This happens at a rate $\omega$ and so we obtain the same scaling. \[subsec:active fast drive\]Fast drive, persistent particles ------------------------------------------------------------ As we saw in the previous two regimes, there is little difference between the passive and active particles aside from a re-defined diffusion constant. This is due to the diffusive nature of the active particles on long time-scales. In this regime, however, the drive rate is larger than the tumbling rate ($\omega\gg\alpha$) and the active particles will be persistent on the time-scales of the drive. The dissipation rates (Eq. (\[eq:dissipation rate\])) are given by $$\langle\dot{Q}\rangle_{\textrm{V-shaped}}\sim\langle\dot{Q}\rangle_{\textrm{ramp}}\sim\rho_0\frac{a^2\mu f^2\alpha}{v}.
\label{eq:persistent dissipation}$$ If we compute the currents as we did for the slow and fast drive regimes for diffusive particles, we find that the dissipation through particle currents is $$\langle\dot{Q}\rangle\sim\rho_0\frac{a^2\mu f^2\alpha}{v}\left(\frac{\omega}{\alpha}\right)^2,$$ which is different from the work the drive performs on the system. To illustrate the source of this discrepancy, consider a passive Brownian particle and an active swimmer, both acted on by an external force $f_{\textrm{ext}}$. For the Brownian particle, we know that in addition to diffusion, it will on average drift with a speed $v=\mu f_{\textrm{ext}}$. The average work rate of the external force is $\dot{w}_{\textrm{ext}}=f_{\textrm{ext}}v=\mu f_{\textrm{ext}}^2=\dot{q}$, which is the rate of dissipation through friction. For a swimmer, on a time-scale shorter than the correlation time $\alpha^{-1}$, the drift velocity is $v=\mu(f_p+f_{\textrm{ext}})$, where $f_p$ is the propulsion force, and the rate of dissipation is $\dot{q}=(f_p+f_{\textrm{ext}})v=\mu(f_p+f_{\textrm{ext}})^2$. The change in the rate of dissipation from when there is no external force is $\delta\dot{q}=\mu(f_p+f_{\textrm{ext}})^2-\mu f_p^2=2\mu f_pf_{\textrm{ext}}+\mu f_{\textrm{ext}}^2$. Naively, we would attribute this extra dissipation to work performed by the external force; however, a quick calculation reveals that the work rate of the external force is given by $\dot{w}_{\textrm{ext}}=f_{\textrm{ext}}v=\mu f_{\textrm{ext}}(f_p+f_{\textrm{ext}})\ne\dot{q}$. Note that the work rate performed by the swimmer propulsion is $\dot{w}_p=\mu f_p(f_p+f_{\textrm{ext}})$. Therefore, the dissipation for a swimmer not only accounts for the work performed by the external force, but also the effect of that external force on the motoring activity of the swimmer, that is, the external force changes the work rate performed by self-propulsion.
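This bookkeeping is simple enough to verify numerically (a sketch; the values of $\mu$, $f_p$, and $f_{\textrm{ext}}$ are arbitrary). The extra dissipation $\delta\dot{q}$ equals the external work rate plus the change in the motor's own work rate, and differs from $\dot{w}_{\textrm{ext}}$ alone:

```python
mu, f_p, f_ext = 1.0, 2.0, 0.5  # mobility, propulsion force, external force

v = mu * (f_p + f_ext)          # drift speed within one persistence time
q_dot = (f_p + f_ext) * v       # total frictional dissipation rate
delta_q = q_dot - mu * f_p**2   # extra dissipation caused by f_ext
w_ext = f_ext * v               # work rate of the external force
w_p = f_p * v                   # work rate of self-propulsion
delta_w_p = w_p - mu * f_p**2   # change in the motor's work rate due to f_ext
```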
We should note that different physical situations are possible with regard to the interplay between external drive and internal motoring activity of the swimmers. We assumed above—as was natural in the context of our work—that the propulsion force $f_p$ remains unaffected by the external drive. In other systems, it is possible, for instance, that the power output by a propelled motor, $f_pv$, is a constant, in which case the propulsion force itself will have to be dependent on the external drive (since velocity depends on it). Various intermediate cases, between constant propulsion force and constant power output, are also possible. We do not analyze all these possibilities in this paper. The storage rates (Eq. (\[eq:storage rate\])) are given by $$\Delta\dot{U}_{\textrm{V-shaped}}\sim\Delta\dot{U}_{\textrm{ramp}}\sim(\rho_0a)(fa)\omega. \label{eq:storage, active fast}$$ Not surprisingly, this scaling is the same as in the fast-drive regime since the same reasoning applies, that is, moving a piston a distance $a$ forces $\rho_0a$ particles into the potential region and gives them a potential energy $fa$ at a rate $\omega$. This is independent of whether these particles are passive or active. \[sec:Conclusion\]Concluding remarks ==================================== To conclude, we examined the linear response of a system of passive or active (run-and-tumble) particles to a time-periodic drive. We found that the active suspension responds in a way similar to that of passive particles when the frequency of drive is smaller than the inverse persistence time of the active particles.
At higher frequencies (larger than the inverse persistence time), however, the persistence of the active particles changes the response a great deal; in particular, we found that the dissipation rate of the active particles changes not only due to work performed by the drive but also due to the effect of the drive on the motoring activity of the particles, that is, the drive changes the work rate of the particles’ own self-propulsion. The phenomenon we studied here has an interesting analogy with a very old problem examined first by J. Fourier himself in his treatise from which Fourier transforms originate [@Fourier; @French; @Fourier; @English]. In particular, Fourier considered the diffusion of heat through a medium due to temperature variations at the boundary; for example, the heating and cooling below ground as a result of temperature variations throughout the seasons. In our notation, he found that the temperature variations as a function of the depth went as $$\delta T(y,t)=\delta T_0e^{\sqrt{i\frac{\omega}{D}}y+i\omega t}.$$ The key feature is that the wavelength of the variations is the same as the decay length. For us, for example for the ramp potential (Eq. (\[eq:ramp solution\])) in the bulk, the density variations due to a drive effectively at the boundaries (since the drive amplitude is much smaller than how far the particles diffuse in a single period, $l_{\textrm{diffusion}}$) have the same form $$\delta\rho(y,t)=\delta\rho_0e^{\sqrt{i\frac{\omega}{D}}y+i\omega t}.$$ It is interesting to note that for active particles in the persistent limit (drive frequency larger than the inverse persistence time), the density variations (Eq. (\[eq:rnt, ramp, y<0\])) become $$\delta\rho(y,t)=\delta\rho_0e^{\frac{\alpha}{v}y}e^{i\left(-\frac{\omega}{v}y+\omega t\right)},$$ where now the wavelength of the variations (swim distance in a cycle) is much shorter than the decay length (persistence length of the active particles).
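The contrast between the two limits can be made concrete in a few lines (a sketch; parameter values are illustrative). For the diffusive mode the wavelength and the decay length coincide up to a factor of $2\pi$, while for the persistent mode the wavelength $2\pi v/\omega$ is shorter than the decay length $v/\alpha$ by a factor of order $\alpha/\omega$:

```python
import math

def diffusive_mode(omega, D):
    """Decay length and wavelength of delta_rho ~ exp(sqrt(i*omega/D)*y + i*omega*t);
    the real and imaginary parts of sqrt(i*omega/D) are both sqrt(omega/2D)."""
    k = math.sqrt(omega / (2 * D))
    return 1 / k, 2 * math.pi / k  # (decay length, wavelength)

def persistent_mode(omega, v, alpha):
    """Same for RnT particles with omega >> alpha:
    delta_rho ~ exp((alpha/v)*y) * exp(i*(-(omega/v)*y + omega*t))."""
    return v / alpha, 2 * math.pi * v / omega

decay_d, wave_d = diffusive_mode(omega=1.0, D=1.0)
decay_p, wave_p = persistent_mode(omega=100.0, v=1.0, alpha=1.0)
```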
Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported primarily by the MRSEC Program of the National Science Foundation under Award Number DMR-1420073. We thank J.-F. Joanny and W. Srinin for stimulating discussions and their insightful comments. We were fortunate to have worked in the same department with Pierre Hohenberg and one of us (AYG) had the opportunity to discuss with him this work at its early stages. We therefore feel honored to submit this paper to the journal dedicated to his memory. \[app:Solution Details\]Details of solution =========================================== Here, we show the details of obtaining the solutions in Sec. \[sec:Calculation\] for passive or RnT particles in the V-shaped and ramp potentials. \[subapp:passive, v\]Passive particles, V-shaped potential ---------------------------------------------------------- In the frame of the potential, the Fokker-Planck equation is $$\frac{\partial\rho}{\partial t}=\frac{\partial}{\partial y}\Big[\left(a\omega\cos\omega t+\mu f{\,\textrm{sgn}}(y)\right)\rho\Big]+D\frac{\partial^2\rho}{\partial y^2},$$ where $y=x-a\sin\omega t$. Picking the time and length scales to be $\omega^{-1}$ (time-scale of the drive) and $\sqrt{D/\omega}$ (diffusion distance), we arrive at (Eq. (\[eq:diffusion pde rest dim\])) $$\frac{\partial\rho}{\partial\tilde{t}}=\frac{\partial}{\partial\tilde{y}}\Big[\left(\epsilon\cos\tilde{t}+\gamma{\,\textrm{sgn}}(\tilde{y})\right)\rho\Big]+\frac{\partial^2\rho}{\partial\tilde{y}^2}.$$ The transient solution can be written as $\rho(\tilde{y},\tilde{t})=\rho^{(0)}(\tilde{y})+\epsilon\rho^{(1)}(\tilde{y},\tilde{t})+O(\epsilon^2)$. 
To zeroth order, we have $$0=\frac{\partial}{\partial\tilde{y}}\left[\gamma{\,\textrm{sgn}}(\tilde{y})\rho^{(0)}+\frac{\partial\rho^{(0)}}{\partial\tilde{y}}\right],$$ which gives the usual exponential distribution $$\rho^{(0)}(\tilde{y})=\rho_0e^{-\gamma|\tilde{y}|}.$$ The first order correction can be written as $\rho^{(1)}(\tilde{y},\tilde{t})=p(\tilde{y})e^{i\tilde{t}}+p^*(\tilde{y})e^{-i\tilde{t}}$, where $p^*$ is the complex conjugate of $p$. For $\tilde{y}<0$, $p(\tilde{y})$ satisfies $$\frac{d^2p}{d\tilde{y}^2}-\gamma\frac{dp}{d\tilde{y}}-ip=-\frac{\rho_0}{2}\gamma e^{\gamma\tilde{y}}.$$ Requiring that $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=0$, we obtain $$p(\tilde{y}<0)=a_+e^{\xi\tilde{y}}+ce^{\gamma\tilde{y}},$$ where $\xi=\frac{1}{2}\left(\gamma+\sqrt{\gamma^2+4i}\right)$ and $c=-i\rho_0\gamma/2$. For $\tilde{y}>0$, taking $\gamma\rightarrow-\gamma$ and requiring $\lim\limits_{\tilde{y}\rightarrow\infty}\rho=0$ gives $$p(\tilde{y}>0)=b_-e^{-\xi\tilde{y}}+de^{-\gamma\tilde{y}},$$ where $d=i\rho_0\gamma/2$. Continuity in density and current at $\tilde{y}=0$ gives the two conditions $$\begin{aligned} a_++c&=b_-+d,\\ (\xi-\gamma)a_+&=(-\xi+\gamma)b_-, \end{aligned}$$ from which we find $$a_+=-b_-=i\frac{\rho_0\gamma}{2}.$$ Therefore, the first order correction can be written as $$\rho^{(1)}(\tilde{y},\tilde{t})=-\rho_0\gamma{\,\textrm{sgn}}(\tilde{y}){\,\textrm{Re}}\left\{i\left(e^{-\xi|\tilde{y}|}-e^{-\gamma|\tilde{y}|}\right)e^{i\tilde{t}}\right\}.$$ \[subapp:passive, ramp\]Passive particles, ramp potential --------------------------------------------------------- For the ramp potential $\Phi'(y)=f\theta(y)$, where $\theta$ is a step function. Eq. 
(\[eq:diffusion pde rest dim\]) is $$\frac{\partial\rho}{\partial\tilde{t}}=\frac{\partial}{\partial\tilde{y}}\Big[\left(\epsilon\cos\tilde{t}+\gamma\theta(\tilde{y})\right)\rho\Big]+\frac{\partial^2\rho}{\partial\tilde{y}^2}.$$ As before, we take the transient solution to be $\rho=\rho^{(0)}+\epsilon\rho^{(1)}+O(\epsilon^2)$. To zeroth order, $$0=\frac{\partial}{\partial\tilde{y}}\left[\gamma\theta(\tilde{y})\rho^{(0)}+\frac{\partial\rho^{(0)}}{\partial\tilde{y}}\right],$$ and assuming that $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=\rho_0$, we have $$\rho^{(0)}(\tilde{y})=\rho_0e^{-\gamma\theta(\tilde{y})\tilde{y}}.$$ Again, taking $\rho^{(1)}=pe^{i\tilde{t}}+p^*e^{-i\tilde{t}}$, we get for $\tilde{y}<0$ $$\frac{d^2p}{d\tilde{y}^2}-ip=0,$$ which gives $$p(\tilde{y})=a_+e^{\sqrt{i}\tilde{y}}.$$ For $\tilde{y}>0$, $$\frac{d^2p}{d\tilde{y}^2}+\gamma\frac{dp}{d\tilde{y}}-ip=\frac{\rho_0\gamma}{2}e^{-\gamma\tilde{y}},$$ the solution of which is $$p(\tilde{y})=b_-e^{-\xi\tilde{y}}+de^{-\gamma\tilde{y}},$$ where $\xi=\frac{1}{2}\left(\gamma+\sqrt{\gamma^2+4i}\right)$ and $d=i\rho_0\gamma/2$. 
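The two pieces of this solution can be checked symbolically. The following sketch (using sympy, with symbols named as in the text) verifies that $e^{-\xi\tilde{y}}$ solves the homogeneous equation and that $d\,e^{-\gamma\tilde{y}}$ reproduces the source term:

```python
import sympy as sp

gamma, rho0 = sp.symbols('gamma rho0', positive=True)
I = sp.I

xi = (gamma + sp.sqrt(gamma**2 + 4*I)) / 2   # as defined in the text
d = I * rho0 * gamma / 2

# e^{-xi*y} solves the homogeneous part p'' + gamma p' - i p = 0,
# i.e. xi is a root of the characteristic polynomial:
assert sp.expand(xi**2 - gamma*xi - I) == 0

# Substituting p = d e^{-gamma*y} into p'' + gamma p' - i p leaves
# (gamma^2 d - gamma^2 d - i d) e^{-gamma*y}, which must match the source:
assert sp.simplify((gamma**2*d - gamma**2*d - I*d) - rho0*gamma/2) == 0
```

Both checks are pure algebra, so they hold for all $\gamma > 0$.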
At $\tilde{y}=0$, continuity in density and current gives $$\begin{aligned} a_+&=b_-+d,\\ \sqrt{i}a_+&=(\gamma-\xi)b_-, \end{aligned}$$ and so $$a_+=\frac{\xi-\gamma}{\xi-\gamma+\sqrt{i}}d,$$ $$b_-=-\frac{\sqrt{i}}{\xi-\gamma+\sqrt{i}}d.$$ Thus, $$\rho^{(1)}(\tilde{y},\tilde{t})=\rho_0\gamma{\,\textrm{Re}}\begin{cases} \frac{i(\xi-\gamma)}{\xi-\gamma+\sqrt{i}}e^{\sqrt{i}\tilde{y}}e^{i\tilde{t}},& \tilde{y}<0\\ i\left(-\frac{\sqrt{i}}{\xi-\gamma+\sqrt{i}}e^{-\xi\tilde{y}}+e^{-\gamma\tilde{y}}\right)e^{i\tilde{t}},& \tilde{y}>0 \end{cases} \label{eq:ramp solution}$$ \[subapp:rnt, v\]RnT particles, V-shaped potential -------------------------------------------------- In this case, the Fokker-Planck equation is $$\frac{\partial\rho_{\pm}}{\partial t}=-\frac{\partial}{\partial y}\Big[\left(\pm v-a\omega\cos\omega t-\mu f{\,\textrm{sgn}}(y)\right)\rho_{\pm}\Big]-\alpha\rho_{\pm}+\alpha\rho_{\mp},$$ where $y=x-a\sin\omega t$. Picking the same time scales and length scales, $1/\omega$ (time scale of drive) and $\sqrt{v^2/\omega\alpha}$ (diffusion distance), we arrive at (Eq. (\[eq:rnt pde rest dim\])) $$\frac{\partial\rho_{\pm}}{\partial\tilde{t}}=-\frac{\partial}{\partial\tilde{y}}\left[\left(\pm\gamma_{\omega}^{-1/2}-\epsilon\cos\tilde{t}-\gamma_f\gamma_{\omega}^{-1/2}{\,\textrm{sgn}}(\tilde{y})\right)\rho_{\pm}\right]-\gamma_{\omega}^{-1}\rho_{\pm}+\gamma_{\omega}^{-1}\rho_{\mp}.$$ The transient solution can be written as $\rho_{\pm}(\tilde{y},\tilde{t})=\rho_{\pm}^{(0)}(\tilde{y})+\epsilon\rho_{\pm}^{(1)}(\tilde{y},\tilde{t})+O(\epsilon^2)$. The zeroth order solution satisfies $$\frac{d}{d\tilde{y}}\left[(\pm1-\gamma_f{\,\textrm{sgn}}(\tilde{y}))\rho_{\pm}^{(0)}\right]=-\gamma_{\omega}^{-1/2}\rho_{\pm}^{(0)}+\gamma_{\omega}^{-1/2}\rho_{\mp}^{(0)}. 
\label{eq:0th order eq}$$ For $\tilde{y}<0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-\delta_+&\delta_+\\-\delta_-&\delta_-\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}},$$ where $\delta_{\pm}=1/(1\pm\gamma_f)$. Requiring that $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=0$, we obtain $${\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=a{\def\arraystretch{1.2}\begin{pmatrix}\delta_+\\\delta_-\end{pmatrix}}e^{\xi_0\tilde{y}},$$ where $\xi_0=2\gamma_f\gamma_{\omega}^{-1/2}/(1-\gamma_f^2)$. For $\tilde{y}>0$, simply take $\gamma_f\rightarrow-\gamma_f$ and we get $${\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=b{\def\arraystretch{1.2}\begin{pmatrix}\delta_-\\\delta_+\end{pmatrix}}e^{-\xi_0\tilde{y}}.$$ To determine the coefficients $a$ and $b$, we integrate Eq. (\[eq:0th order eq\]) across the origin to obtain the condition $\delta_{\pm}^{-1}\rho_{\pm}^{(0)}(0^-)=\delta_{\mp}^{-1}\rho_{\pm}^{(0)}(0^+)$. Note that this is just continuity of current. We find $a=b=\mathcal{N}/2$ for some normalization $\mathcal{N}$. The first order correction can be written as $\rho_{\pm}^{(1)}=p_{\pm}(\tilde{y})e^{i\tilde{t}}+p_{\pm}^*(\tilde{y})e^{-i\tilde{t}}=2{\,\textrm{Re}}\left\{p_{\pm}(\tilde{y})e^{i\tilde{t}}\right\}$, where $p_{\pm}^*$ is the complex conjugate of $p_{\pm}$. Using this, we get $$\frac{d}{d\tilde{y}}\left[(\pm1-\gamma_f{\,\textrm{sgn}}(\tilde{y}))p_{\pm}\right]=-\gamma_{\omega}^{-1/2}(1+i\gamma_{\omega})p_{\pm}+\gamma_{\omega}^{-1/2}p_{\mp}+\frac{\gamma_{\omega}^{1/2}}{2}\frac{d\rho_{\pm}^{(0)}}{d\tilde{y}}. 
\label{eq:1st order eq}$$ For $\tilde{y}<0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-\delta_+(1+i\gamma_{\omega})&\delta_+\\-\delta_-&\delta_-(1+i\gamma_{\omega})\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}+\frac{\mathcal{N}\gamma_{\omega}^{1/2}\xi_0}{4}{\def\arraystretch{1.2}\begin{pmatrix}\delta_+^2\\-\delta_-^2\end{pmatrix}}e^{\xi_0\tilde{y}}.$$ Once again requiring $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=0$, we obtain $${\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=(a_+-c_+)\boldsymbol{v}_+e^{\lambda_+\tilde{y}}+(c_+\boldsymbol{v}_++c_-\boldsymbol{v}_-)e^{\xi_0\tilde{y}},$$ where $$\lambda_{\pm}=\frac{\gamma_f(1+i\gamma_{\omega})\pm\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}{(1-\gamma_f^2)\gamma_{\omega}^{1/2}},$$ $$\boldsymbol{v}_{\pm}={\def\arraystretch{1.2}\begin{pmatrix}\delta_-(1+i\gamma_{\omega})-\gamma_{\omega}^{1/2}\lambda_{\pm}\\\delta_-\end{pmatrix}},$$ $$c_{\pm}=\mp\frac{i\mathcal{N}\gamma_f\gamma_{\omega}^{-1/2}\left(\gamma_f+i\gamma_{\omega}\pm\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right)}{4(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}},$$ As before, take $\gamma_f\rightarrow-\gamma_f$ for $\tilde{y}>0$ and so $${\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=(b_--d_-)\boldsymbol{u}_-e^{\kappa_-\tilde{y}}+(d_+\boldsymbol{u}_++d_-\boldsymbol{u}_-)e^{-\xi_0\tilde{y}},$$ where $b_-$ is undetermined and $$\kappa_{\pm}=\frac{-\gamma_f(1+i\gamma_{\omega})\pm\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}{(1-\gamma_f^2)\gamma_{\omega}^{1/2}},$$ $$\boldsymbol{u}_{\pm}={\def\arraystretch{1.2}\begin{pmatrix}\delta_+(1+i\gamma_{\omega})-\gamma_{\omega}^{1/2}\kappa_{\pm}\\\delta_+\end{pmatrix}},$$ 
$$d_{\pm}=\mp\frac{i\mathcal{N}\gamma_f\gamma_{\omega}^{-1/2}\left(\gamma_f-i\gamma_{\omega}\mp\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right)}{4(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}.$$ Integrating Eq. (\[eq:1st order eq\]) gives the condition $$\frac{1}{\delta_{\pm}}p_{\pm}(0^-)\mp\frac{\gamma_{\omega}^{1/2}}{2}\rho_{\pm}^{(0)}(0^-)=\frac{1}{\delta_{\mp}}p_{\pm}(0^+)\mp\frac{\gamma_{\omega}^{1/2}}{2}\rho_{\pm}^{(0)}(0^+).$$ This system gives $$\begin{aligned} a_+=\frac{i\mathcal{N}\gamma_f\gamma_{\omega}^{1/2}\left[\gamma_f^2-\gamma_f+i\gamma_{\omega}-\gamma_{\omega}^2+(1-\gamma_f+i\gamma_{\omega})\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right]}{4(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}, \end{aligned}$$ $$\begin{aligned} b_-=\frac{i\mathcal{N}\gamma_f\gamma_{\omega}^{1/2}\left[\gamma_f^2+\gamma_f+i\gamma_{\omega}-\gamma_{\omega}^2-(1+\gamma_f+i\gamma_{\omega})\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right]}{4(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}. \end{aligned}$$ Finally, noting that $\mathcal{N}=\rho_0(1-\gamma_f^2)$ and $\lambda_+=-\kappa_-=\xi$, we have $$\begin{aligned} \begin{split} p&=p_++p_-=\frac{i\rho_0\gamma_f\gamma_{\omega}^{-1/2}{\,\textrm{sgn}}(\tilde{y})}{1-\gamma_f^2}\left[e^{-\xi_0\left|\tilde{y}\right|}\vphantom{\frac{1}{2}}\right.\\ &\hspace{1in}\left.-\frac{1}{2}\left(2-\gamma_f^2+i\gamma_{\omega}+\gamma_f\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right)e^{-\xi\left|\tilde{y}\right|}\right] \end{split} \end{aligned}$$ and $$p_+-p_-=\frac{i\rho_0\gamma_f^2\gamma_{\omega}^{-1/2}}{1-\gamma_f^2}\left(-\frac{\xi}{\xi_0}e^{-\xi|\tilde{y}|}+e^{-\xi_0|\tilde{y}|}\right).$$ \[subapp:rnt, ramp\]RnT particles, ramp potential ------------------------------------------------- Taking $\Phi'(y)=f\theta(y)$, Eq. 
(\[eq:rnt pde rest dim\]) becomes $$\frac{\partial\rho_{\pm}}{\partial\tilde{t}}=-\frac{\partial}{\partial\tilde{y}}\left[\left(\pm\gamma_{\omega}^{-1/2}-\epsilon\cos\tilde{t}-\gamma_f\gamma_{\omega}^{-1/2}\theta(\tilde{y})\right)\rho_{\pm}\right]-\gamma_{\omega}^{-1}\rho_{\pm}+\gamma_{\omega}^{-1}\rho_{\mp}.$$ To zeroth order, $$\frac{d}{d\tilde{y}}\left[\left(\pm1-\gamma_f\theta(\tilde{y})\right)\rho_{\pm}^{(0)}\right]=-\gamma_{\omega}^{-1/2}\rho_{\pm}^{(0)}+\gamma_{\omega}^{-1/2}\rho_{\mp}^{(0)}.$$ For $\tilde{y}<0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-1 & 1\\-1 & 1\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}.$$ The solution with $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=\rho_0$ is $${\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=\frac{\rho_0}{2}{\def\arraystretch{1.2}\begin{pmatrix}1\\1\end{pmatrix}}.$$ For $\tilde{y}>0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-\delta_- & \delta_-\\-\delta_+ & \delta_+\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}},$$ where $\delta_{\pm}=1/(1\pm\gamma_f)$. The solution satisfying $\lim_{\tilde{y}\rightarrow\infty}\rho=0$ is $${\def\arraystretch{1.2}\begin{pmatrix}\rho_+^{(0)}\\\rho_-^{(0)}\end{pmatrix}}=b{\def\arraystretch{1.2}\begin{pmatrix}\delta_-\\\delta_+\end{pmatrix}}e^{\kappa_0\tilde{y}},$$ where $\kappa_0=-2\gamma_f\gamma_{\omega}^{-1/2}/(1-\gamma_f^2)$. At $\tilde{y}=0$, the condition $\rho_{\pm}^{(0)}(0^-)=\delta_{\mp}^{-1}\rho_{\pm}^{(0)}(0^+)$ gives $b=\rho_0/2$. 
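The $\tilde{y}>0$ solution can be verified by a short symbolic check: the vector $(\delta_-,\delta_+)^T$ is an eigenvector of the coefficient matrix (taken here with the $\gamma_{\omega}^{-1/2}$ prefactor implied by the definition of $\kappa_0$) with eigenvalue $\kappa_0$. A sympy sketch:

```python
import sympy as sp

gf, gw = sp.symbols('gamma_f gamma_omega', positive=True)
dp = 1 / (1 + gf)                      # delta_+
dm = 1 / (1 - gf)                      # delta_-

# Coefficient matrix for y > 0, including the gamma_omega^{-1/2} prefactor:
M = sp.Matrix([[-dm, dm], [-dp, dp]]) / sp.sqrt(gw)
kappa0 = -2*gf / ((1 - gf**2) * sp.sqrt(gw))
vec = sp.Matrix([dm, dp])

# (delta_-, delta_+)^T e^{kappa_0 y} solves the linear system:
for entry in (M*vec - kappa0*vec):
    assert sp.simplify(entry) == 0
```

Since $\kappa_0 < 0$, this mode decays for $\tilde{y}\rightarrow\infty$, as required.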
Using $\rho_{\pm}^{(1)}=p_{\pm}e^{i\tilde{t}}+p_{\pm}^*e^{-i\tilde{t}}$, we have to first order $$\frac{d}{d\tilde{y}}\Big[\left(\pm1-\gamma_f\theta(\tilde{y})\right)p_{\pm}\Big]=-\gamma_{\omega}^{-1/2}(1+i\gamma_{\omega})p_{\pm}+\gamma_{\omega}^{-1/2}p_{\mp}+\frac{\gamma_{\omega}^{1/2}}{2}\frac{d\rho^{(0)}_{\pm}}{d\tilde{y}}.$$ For $\tilde{y}<0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-(1+i\gamma_{\omega}) & 1\\-1 & 1+i\gamma_{\omega}\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}.$$ The solution satisfying $\lim\limits_{\tilde{y}\rightarrow-\infty}\rho=0$ is $${\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=\frac{\rho_0}{4}a_+\boldsymbol{v}_+e^{\lambda_+\tilde{y}}, \label{eq:rnt, ramp, y<0}$$ where $\lambda_+=\sqrt{2i-\gamma_{\omega}}$ and $$\boldsymbol{v}_+={\def\arraystretch{1.2}\begin{pmatrix}1+i\gamma_{\omega}-\gamma_{\omega}^{1/2}\lambda_+\\1\end{pmatrix}}.$$ For $\tilde{y}>0$, $$\frac{d}{d\tilde{y}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=\gamma_{\omega}^{-1/2}{\def\arraystretch{1.2}\begin{bmatrix}-\delta_-(1+i\gamma_{\omega}) & \delta_-\\-\delta_+ & \delta_+(1+i\gamma_{\omega})\end{bmatrix}}{\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}+\frac{\rho_0\gamma_{\omega}^{1/2}\kappa_0}{4}{\def\arraystretch{1.2}\begin{pmatrix}\delta_-^2\\-\delta_+^2\end{pmatrix}}e^{\kappa_0\tilde{y}}.$$ The solution satisfying $\lim\limits_{\tilde{y}\rightarrow\infty}\rho=0$ is $${\def\arraystretch{1.2}\begin{pmatrix}p_+\\p_-\end{pmatrix}}=\frac{\rho_0}{4}(b_--d_-)\boldsymbol{u}_-e^{\kappa_-\tilde{y}}+\frac{\rho_0}{4}\Big(d_+\boldsymbol{u}_++d_-\boldsymbol{u}_-\Big)e^{\kappa_0\tilde{y}},$$ where $$\kappa_{\pm}=\frac{-\gamma_f(1+i\gamma_{\omega})\pm\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}{(1-\gamma_f^2)\gamma_{\omega}^{1/2}},$$ 
$$\boldsymbol{u}_{\pm}={\def\arraystretch{1.2}\begin{pmatrix}\delta_+(1+i\gamma_{\omega})-\gamma_{\omega}^{1/2}\kappa_{\pm}\\\delta_+\end{pmatrix}},$$ $$d_{\pm}=\mp\frac{i\gamma_f\gamma_{\omega}^{-1/2}\left(\gamma_f-i\gamma_{\omega}\mp\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right)}{(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}.$$ At $\tilde{y}=0$, we have the condition $$\pm p_{\pm}(0^-)-\frac{\gamma_{\omega}^{1/2}}{2}\rho_{\pm}^{(0)}(0^-)=\pm\frac{1}{\delta_{\mp}}p_{\pm}(0^+)-\frac{\gamma_{\omega}^{1/2}}{2}\rho_{\pm}^{(0)}(0^+)$$ which gives $$a_+=-\frac{\delta_-(1-\delta_-)+(1-\delta_+)\left(\delta_+(1+i\gamma_{\omega})-\gamma_{\omega}^{1/2}\kappa_-\right)-(\kappa_+-\kappa_-)d_+}{\delta_-\lambda_++\kappa_+},$$ $$b_-=-\frac{\delta_-(1-\delta_-)+(1-\delta_+)\delta_-\left(1+i\gamma_{\omega}-\gamma_{\omega}^{1/2}\lambda_+\right)+(\delta_-\lambda_++\kappa_-)d_+}{\delta_-\lambda_++\kappa_+}.$$ \[app:Response Function\]Expression for $B_{\omega}$ ==================================================== Using $\Delta\rho=2\epsilon{\,\textrm{Re}}\left\{p(y)e^{i\omega t}\right\}$, the additional force can be written as $$\Delta F(t)=-\int_{-\infty}^{\infty}\Phi'\Delta\rho dy=-\epsilon\left(R_{\omega}\sin\omega t+I_{\omega}\cos\omega t\right),$$ where $R_{\omega}=\int2\Phi'{\,\textrm{Re}}\left\{ip\right\}dy$ and $I_{\omega}=\int2\Phi'{\,\textrm{Im}}\left\{ip\right\}dy$. Thus, $\Delta F_{\omega'}=B_{\omega'}a_{\omega'}$ becomes $$\begin{aligned} \begin{split} &i\epsilon R_{\omega}\left[\delta(\omega-\omega')-\delta(\omega+\omega')\right]-\epsilon I_{\omega}\left[\delta(\omega-\omega')+\delta(\omega+\omega')\right]\\ &\hspace{1in}=-iB_{\omega'}a\left[\delta(\omega-\omega')-\delta(\omega+\omega')\right]. \end{split} \end{aligned}$$ Matching coefficients of the delta functions, we find $$\begin{aligned} B_{\omega}&=-\frac{\epsilon}{a}(R_{\omega}+iI_{\omega}),\\ B_{-\omega}&=-\frac{\epsilon}{a}(R_{\omega}-iI_{\omega}). 
\end{aligned}$$ To verify that $B_{-\omega}$ is indeed the complex conjugate of $B_{\omega}$, note that the transformation $\omega\rightarrow-\omega$ takes $a\sin\omega t\rightarrow-a\sin\omega t$ or $\Delta F\rightarrow-\Delta F$, and so we must have $R_{-\omega}=R_{\omega}$ and $I_{-\omega}=-I_{\omega}$, as expected in linear response theory for in-phase and out-of-phase responses. $$B_{\omega}=-\frac{\epsilon}{a}\left(R_{\omega}+iI_{\omega}\right)=-\frac{2i}{l_{\textrm{diffusion}}}\int_{-\infty}^{\infty}\Phi'(y)p(y)dy.$$ Note that we may rewrite the force as $$\Delta F=aB_{\omega}'\sin\omega t+aB_{\omega}''\cos\omega t.$$ \[app:Variable Definitions\]$B_{\omega}$ for RnT particles in a ramp potential ============================================================================== For RnT particles in a ramp potential, the response function is $$B_{\omega}=\frac{i\rho_0f}{2}\left[(b_--d_-)\frac{s_-}{\kappa_-}+(d_+s_++d_-s_-)\frac{1}{\kappa_0}\right], \label{eq:response ramp potential}$$ where $$b_-=-\frac{\delta_-(1-\delta_-)+(1-\delta_+)\delta_-\left(1+i\gamma_{\omega}-\gamma_{\omega}^{1/2}\lambda_+\right)+(\delta_-\lambda_++\kappa_-)d_+}{\delta_-\lambda_++\kappa_+}$$ $$d_{\pm}=\mp\frac{i\gamma_f\gamma_{\omega}^{-1/2}\left(\gamma_f-i\gamma_{\omega}\mp\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}\right)}{(1-\gamma_f^2)\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}},$$ $$\kappa_{\pm}=\frac{-\gamma_f(1+i\gamma_{\omega})\pm\sqrt{\gamma_f^2+2i\gamma_{\omega}-\gamma_{\omega}^2}}{(1-\gamma_f^2)\gamma_{\omega}^{1/2}},$$ $s_{\pm}=\delta_+(2+i\gamma_{\omega})-\gamma_{\omega}^{1/2}\kappa_{\pm}$, $\delta_{\pm}=1/(1\pm\gamma_f)$, $\kappa_0=-2\gamma_f\gamma_{\omega}^{-1/2}/(1-\gamma_f^2)$, and $\lambda_+=\sqrt{2i-\gamma_{\omega}}$. The level sets of $B_{\omega}$ for the ramp potential are shown in Fig. \[fig:Bw contour ramp\]. ![Level sets of $B_{\omega}$ for RnT particles in the ramp potential. **Top:** Real part. **Bottom:** Imaginary part. 
Just like in the V-shaped potential, there are two crossovers indicated by the black dashed lines: $\gamma_{\omega}\sim\gamma_f^2$, which divides the slow and passive fast regimes; and $\gamma_{\omega}\sim1$, which divides the passive fast and active fast regimes.[]{data-label="fig:Bw contour ramp"}](reimBw_active_ramp.png) \[app:Currents\]Currents and dissipation ======================================== We here calculate the particle currents, which allows us to more easily interpret the dissipation. The total current of passive particles relative to the static fluid (i.e. without the fictitious drift) is given by $$J_x(y,t)=-\mu\Phi'(y)\rho-D\frac{\partial\rho}{\partial y}=v_x(y,t)\rho(y,t), \label{eq:def current}$$ where $v_x(y,t)$ is the velocity field in the lab frame as a function of the co-moving coordinate $y$. Substituting the solutions found in Appendix \[app:Solution Details\], we find for the V-shaped potential $$v_x=-\epsilon\mu f{\,\textrm{Re}}\left\{i(\xi-\gamma)e^{-(\xi-\gamma)|\tilde{y}|}e^{i\tilde{t}}\right\},$$ and for the ramp potential $$v_x=-\epsilon\mu f{\,\textrm{Re}}\begin{cases} i^{3/2}\frac{(\xi-\gamma)}{\xi-\gamma+\sqrt{i}}e^{\sqrt{i}\tilde{y}}e^{i\tilde{t}}, & \tilde{y}<0\\ i^{3/2}\frac{(\xi-\gamma)}{\xi-\gamma+\sqrt{i}}e^{-(\xi-\gamma)\tilde{y}}e^{i\tilde{t}}, & \tilde{y}>0 \end{cases}$$ where $\xi=\frac{1}{2}\left(\gamma+\sqrt{\gamma^2+4i}\right)$ and $\gamma=\mu f/\sqrt{\omega D}$ (Eq. (\[eq:def passive params\])). With this, the dissipation rate of the entire system is $$\dot{Q}=\int_{-\infty}^{\infty}\rho(y,t)\frac{v_x(y,t)^2}{\mu}dy. \label{eq:Q dot integral}$$ Note that if the particles have a characteristic drift velocity $V$ and a characteristic decay length for the current $L$, the average rate of dissipation goes as $$\langle\dot{Q}\rangle\sim\rho_0L\frac{V^2}{\mu}.$$ \[subapp:Currents, Slow\]Slow drive ----------------------------------- For slow drive, we have $\gamma\gg1$. 
For the **V-shaped potential**, the velocity field is $$v_x\approx a\omega{\,\textrm{Re}}\left\{e^{-\frac{1}{\gamma^3}|\tilde{y}|}e^{i\left(-\frac{1}{\gamma}|\tilde{y}|+\tilde{t}\right)}\right\}.$$ The characteristic velocity is $V\sim a\omega$. Note that since $\rho=\rho_0e^{-\gamma|\tilde{y}|}$, variations in the velocity field occur on length scales much greater than the decay length of the density $L\sim D/\mu f$. To leading order, the dissipation rate is $$\dot{Q}_{\textrm{V-shaped}}=\int_{-\infty}^{\infty}\rho\frac{v_x^2}{\mu}dy\approx\rho_0\frac{2D}{\mu f}\frac{(a\omega)^2}{\mu}\cos^2\omega t.$$ Time averaging, we have $$\langle\dot{Q}\rangle_{\textrm{V-shaped}}\approx\rho_0\frac{D}{\mu f}\frac{(a\omega)^2}{\mu}.$$ For the **ramp potential**, the velocity field is $$v_x\approx a\omega{\,\textrm{Re}}\begin{cases} e^{\sqrt{i}\tilde{y}}e^{i\tilde{t}}, & \tilde{y}<0\\ e^{-\frac{1}{\gamma^3}\tilde{y}}e^{i\left(-\frac{1}{\gamma}\tilde{y}+\tilde{t}\right)}, & \tilde{y}>0 \end{cases}$$ The characteristic velocity is $V\sim a\omega$. In this case, the dominant contribution to the dissipation will be from the bulk $\tilde{y}<0$ since $l_{\textrm{diffusion}}\gg l_{\textrm{penetration}}$. The dissipation rate is $$\dot{Q}_{\textrm{ramp}}\approx\frac{1}{4\sqrt{2}}\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}(2+\cos2\omega t+\sin2\omega t).$$ The average dissipation is then $$\langle\dot{Q}\rangle_{\textrm{ramp}}\approx\frac{1}{2\sqrt{2}}\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega)^2}{\mu}.$$ The average dissipation rates differ only by a constant factor. \[subapp:Currents, Fast\]Fast drive, diffusive particles -------------------------------------------------------- For fast drive, $\gamma\ll1$. 
The velocity field for the **V-shaped potential** is $$v_x\approx a\omega\gamma e^{-|\tilde{y}|}\sin\left(-|\tilde{y}|+\omega t+\frac{\pi}{4}\right).$$ For the **ramp potential**, the velocity field is $$v_x\approx\frac{1}{2}a\omega\gamma e^{-|\tilde{y}|}\sin\left(-|\tilde{y}|+\omega t+\frac{\pi}{4}\right).$$ The factor of $1/2$ is due to the diffusive flux (Sec. \[subsec:Fast drive\]). In this fast drive case, the decay length of the velocity field for both potentials dominates over the decay length of the density; we can treat the density as constant over this distance. Thus, $$\begin{aligned} \dot{Q}\approx\rho_0\int_{-\infty}^{\infty}\frac{v_x^2}{\mu}dy\approx\left(\frac{1}{4}\right)\frac{1}{\sqrt{2}}\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega\gamma)^2}{\mu}\left[1+\frac{1}{\sqrt{2}}\sin\left(2\omega t-\frac{\pi}{4}\right)\right], \end{aligned}$$ or $$\langle\dot{Q}\rangle\approx\left(\frac{1}{4}\right)\frac{1}{\sqrt{2}}\rho_0\sqrt{\frac{D}{\omega}}\frac{(a\omega\gamma)^2}{\mu},$$ where the factor of $1/4$ corresponds to the ramp potential. \[subapp:Currents, Persistent\]Fast drive, persistent particles --------------------------------------------------------------- For RnT particles, the current is instead given by $$J_x(y,t)=J_++J_-=(\langle v\rangle-\mu f{\,\textrm{sgn}}(y))\rho,$$ where $\langle v\rangle=v(\rho_+-\rho_-)/\rho$ and $\rho=\rho_++\rho_-$. The particles responsible for the current have a velocity field $$v_x(y,t)=\frac{J_x}{\rho}=\langle v\rangle-\mu f{\,\textrm{sgn}}(y).$$ Substituting the solution for the V-shaped potential, we obtain $$\begin{aligned} v_x=\epsilon\mu f\frac{\gamma_f}{\gamma_{\omega}^{1/2}}{\,\textrm{Re}}\left\{i\left(1-\sqrt{1+2i\frac{\gamma_{\omega}}{\gamma_f^2}-\frac{\gamma_{\omega}^2}{\gamma_f^2}}\right)e^{-(\xi-\xi_0)|\tilde{y}|}e^{i\omega t}\right\}. 
\end{aligned}$$ Taking $\gamma_{\omega}\gg1$ or $\omega\gg\alpha$, we get $$v_x\approx a\omega\gamma_f e^{-\gamma_{\omega}^{-1/2}|\tilde{y}|}\cos(-\gamma_{\omega}^{1/2}|\tilde{y}|+\omega t),$$ which using Eq. (\[eq:Q dot integral\]) gives $$\langle\dot{Q}\rangle_{\textrm{V-shaped}}\approx\rho_0\frac{a^2\mu f^2\alpha}{v}\left(\frac{\omega}{\alpha}\right)^2.$$ The expression for the velocity in the ramp potential is cumbersome. However, taking $\gamma_{\omega}\gg1$, we find $$\langle\dot{Q}\rangle_{\textrm{ramp}}\approx\frac{1}{4}\rho_0\frac{a^2\mu f^2\alpha}{v}\left(\frac{\omega}{\alpha}\right)^2.$$ The dissipation rates differ only by a constant factor. [10]{} M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, R. A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. **85** 1143 (2013) C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. **88** 045006 (2016) H. C. Berg, D. A. Brown, Chemotaxis in Escherichia coli analysed by three-dimensional tracking, Nature **239** 500 (1972) J. Palacci, S. Sacanna, S. H. Kim, G. R. Yi, D. J. Pine, P. M. Chaikin, Light-activated self-propelled colloids, Phil. Trans. Royal Soc. A: Math. Phys. **372** 20130372 (2014) W. F. Paxton, K. C. Kistler, C. C. Olmeda, A. Sen, S. K. St. Angelo, T. Cao, T. E. Mallouk, P. E. Lammert, V. H. Crespi, Catalytic nanomotors: Autonomous movement of striped nanorods, J. Am. Chem. Soc. **126** 13424 (2004) R. D. Astumian, Thermodynamics and kinetics of a Brownian motor, Science **276** 917 (1997) F. Jülicher, A. Ajdari, J. Prost, Modeling molecular motors, Rev. Mod. Phys. **69** 1269 (1997) M. E. Cates, J. Tailleur, Motility-induced phase separation, Annu. Rev. Condens. Matter Phys. **6** 219 (2015) G. S. Redner, M. F. Hagan, A. Baskaran, Structure and dynamics of a phase-separating active colloidal fluid, Phys. Rev. Lett. **110** 055701 (2013) Y. Fily, M. C. 
Marchetti, Athermal phase separation of self-propelled particles with no alignment, Phys. Rev. Lett. **108** 235702 (2012) A. Wysocki, R. G. Winkler, G. Gompper, Cooperative motion of active Brownian spheres in three-dimensional dense suspensions, EPL **105** 48004 (2014) J. Bialké, H. Löwen, T. Speck, Microscopic theory for the phase separation of self-propelled repulsive disks, EPL **103** 30008 (2013) P. Galajda, J. Keymer, P. M. Chaikin, R. Austin, A wall of funnels concentrates swimming bacteria, J. Bacteriol. **189** 8704 (2007) R. Di Leonardo, L. Angelani, D. Dell’Arciprete, G. Ruocco, V. Lebba, S. Schippa, M. P. Conte, F. Mecarini, F. De Angelis, E. Di Fabrizio, Bacterial ratchet motors, Proc. Natl. Acad. Sci. USA **107** 9541 (2010) G. Vizsnyiczai, G. Frangipane, C. Maggi, F. Saglimbeni, S. Bianchi, R. Di Leonardo, Light controlled 3D micromotors powered by bacteria, Nat. Comm. **8** 15874 (2017) A. Sokolov, I. S. Aranson, Reduction of viscosity in suspension of swimming bacteria, Phys. Rev. Lett. **103** 148101 (2009) S. Rafaï, L. Jibuti, P. Peyla, Effective viscosity of microswimmer suspensions, Phys. Rev. Lett. **104** 098102 (2010) Y. Hatwalne, S. Ramaswamy, M. Rao, R. A. Simha, Rheology of active-particles suspensions, Phys. Rev. Lett. **92** 118101 (2004) H. Turlier, D. A. Fedosov, B. Audoly, T. Auth, N. S. Gov, C. Sykes, J. F. Joanny, G. Gompper, T. Betz, Equilibrium physics breakdown reveals the active nature of red blood cell flickering, Nat. Phys. **12** 513 (2016) F. Y. Chu, S. C. Haley, A. Zidovska, On the origin of shape fluctuations of the cell nucleus, Proc. Natl. Acad. Sci. USA **114** 10338 (2017) É. Fodor, M. Guo, N. S. Gov, P. Visco, D. A. Weitz, F. van Wijland, Activity-driven fluctuations in living cells, EPL **110** 48005 (2015) D. Bi, X. Yang, M. C. Marchetti, M. L. Manning, Motility-driven glass and jamming transitions in biological tissues, Phys. Rev. X **6** 021011 (2016) L. Caprini, U. M. B. Marconi, A. 
Vulpiani, Linear response and correlation of a self-propelled particle in the presence of external fields, J. Stat. Mech. **2018** 033203 (2018) A. P. Solon, Y. Fily, A. Baskaran, M. E. Cates, Y. Kafri, M. Kardar, J. Tailleur, Pressure is not a state function for generic active fluids, Nat. Phys. **11** 673 (2015) U. M. B. Marconi, A. Sarracino, C. Maggi, A. Puglisi, Self-propulsion against a moving membrane: Enhanced accumulation and drag force, Phys. Rev. E **96** 032601 (2017) R. Sheshka, P. Recho, L. Truskinovsky, Rigidity generation by nonthermal fluctuations, Phys. Rev. E **93** 052604 (2016) M. J. Schnitzer, Theory of continuum random walks and application to chemotaxis, Phys. Rev. E **48** 2553 (1993) Joseph Fourier, Théorie Analytique de la Chaleur, Chez Firmin Didot, Paris (1822) Joseph Fourier, Analytical Theory of Heat, Cambridge University Press, Cambridge (1878)
--- abstract: 'Let $g_1, \dots , g_s \in {\mathbb R}[X] = {\mathbb R}[X_1, \dots , X_n]$ be such that the semialgebraic set $K := \{ x \in {\mathbb R}^n \mid g_i(x) \geq 0$ for all $i \}$ is compact. Schmüdgen’s Theorem says that if $f \in {\mathbb R}[X]$ is such that $f > 0$ on $K$, then $f$ is in the preordering in ${\mathbb R}[X]$ generated by the $g_i$’s, i.e., $f$ can be written as a finite sum of elements ${\sigma}g_1^{e_1} \dots g_s^{e_s}$, where ${\sigma}$ is a sum of squares in ${\mathbb R}[X]$ and each $e_i \in \{0,1\}$. Putinar’s Theorem says that under a condition stronger than compactness, any $f > 0$ on $K$ can be written $f = {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s$, where each ${\sigma}_i$ is a sum of squares in ${\mathbb R}[X]$. Both of these theorems can be viewed as statements about the existence of certificates of positivity on compact semialgebraic sets. In this note we show that if the defining polynomials $g_1, \dots , g_s$ and the polynomial $f$ have coefficients in ${\mathbb Q}$, then in Schmüdgen’s Theorem we can find a representation in which the ${\sigma}$’s are sums of squares of polynomials over ${\mathbb Q}$. We prove a similar result for Putinar’s Theorem assuming that the set of generators contains $N - \sum X_i^2$ for some $N \in {\mathbb N}$.' author: - 'Victoria Powers [^1]' title: Rational Certificates of Positivity on Compact Semialgebraic Sets --- Introduction ============ We write ${\mathbb N}$, ${\mathbb R}$, and ${\mathbb Q}$ for the sets of natural, real, and rational numbers. Let $n \in {\mathbb N}$ be fixed and let ${\mathbb R}[X]$ denote the polynomial ring ${\mathbb R}[X_1, \dots , X_n]$. We denote by $\sum {\mathbb R}[X]^2$ the set of sums of squares in ${\mathbb R}[X]$. 
For $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb R}[X]$, the [*basic closed semialgebraic set*]{} generated by $S$, denoted $K_S$, is $$\{ x \in {\mathbb R}^n \mid g_1(x) \geq 0, \dots , g_s(x) \geq 0 \}.$$ Associated to $S$ are two algebraic objects: The [*quadratic module generated by $S$*]{}, denoted $M_S$, is the set of $f \in {\mathbb R}[X]$ which can be written $$f = {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s,$$ where each ${\sigma}_i \in \sum {\mathbb R}[X]^2$, and the [*preordering generated by $S$*]{}, denoted $T_S$, is the quadratic module generated by all products of elements in $S$. In other words, $T_S$ is the set of $f \in {\mathbb R}[X]$ which can be written as a finite sum of elements ${\sigma}g_1^{e_1} \dots g_s^{e_s}$, where ${\sigma}\in \sum {\mathbb R}[X]^2$ and each $e_i \in \{ 0,1 \}$. A polynomial $f \in \sum {\mathbb R}[X]^2$ is obviously globally nonnegative in ${\mathbb R}^n$ and writing $f$ explicitly as a sum of squares gives a “certificate of positivity” for the fact that $f$ takes only nonnegative values in ${\mathbb R}^n$. (Note: To avoid having to write “nonnegative or positive” we use the term “positivity” to mean either.) More generally, for a basic closed semialgebraic set $K_S$, if $f \in T_S$ or $f \in M_S$, then $f$ is nonnegative on $K_S$ and an explicit representation of $f$ in $M_S$ or $T_S$ gives a certificate of positivity for $f$ on $K_S$. In 1991, Schmüdgen [@schm] showed that if the semialgebraic set $K_S$ is compact, then any $f \in {\mathbb R}[X]$ which is strictly positive on $K_S$ is in the preordering $T_S$. In 1993, Putinar [@put] showed that under a certain condition which is stronger than compactness any $f \in {\mathbb R}[X]$ which is strictly positive on $K_S$ is in the quadratic module $M_S$. In other words, these results say that under the given conditions a certificate of positivity for $f$ on $K_S$ exists. 
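As a toy illustration of such a certificate (the polynomials here are our own example, not from the text): take $S = \{x\}$ in one variable, so $K_S = [0,\infty)$. Exhibiting $f = x^2 + 3x + 1$ in the form ${\sigma}_0 + {\sigma}_1 g_1$ with ${\sigma}_0, {\sigma}_1$ sums of squares certifies $f \geq 0$ on $K_S$, and the certificate is verified by a polynomial identity:

```python
import sympy as sp

x = sp.symbols('x')
g1 = x                      # S = {x}, so K_S = [0, oo)
sigma0 = (x + 1)**2         # a sum of squares (rational coefficients)
sigma1 = sp.Integer(1)      # likewise (a square of a constant)

f = x**2 + 3*x + 1
# Membership f = sigma0 + sigma1*g1 in M_S is checked by expansion:
assert sp.expand(sigma0 + sigma1*g1 - f) == 0
```

Checking the identity requires only expanding polynomials, which is exact arithmetic; no numerical positivity test is involved.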
Recently, techniques from semidefinite programming combined with Schmüdgen’s and Putinar’s theorems have been used to give numerical algorithms for applications such as optimization of polynomials on semialgebraic sets. However, since these algorithms are numerical, they might not produce exact certificates of positivity. With this in mind, Sturmfels asked whether any $f \in {\mathbb Q}[X]$ which is a sum of squares in ${\mathbb R}[X]$ is a sum of squares in ${\mathbb Q}[X]$. In [@hil], Hillar showed that the answer is “yes" in the case where $f$ is known to be a sum of squares over a field $K$ which is Galois over ${\mathbb Q}$. The general question remains unsolved. It is natural to ask a similar question for Schmüdgen’s Theorem and Putinar’s Theorem: If the polynomials defining the semialgebraic set and the positive polynomial $f$ have rational coefficients, is there a certificate of positivity for $f$ in which the sums of squares have rational coefficients? In this note, we show that in the case of Schmüdgen’s Theorem the answer is “yes". This follows from an algebraic proof of the theorem, originally due to T. Wörmann [@wor]. In the case of Putinar’s Theorem, we show that the answer is also “yes" as long as the generating set contains $N - \sum X_i^2$ for some $N \in {\mathbb N}$. This follows easily from an algorithmic proof of the theorem due to Schweighofer [@schw2]. In Lasserre’s method for optimization of polynomials on compact semialgebraic sets (see [@LAS]), it is usual in concrete cases to add a polynomial of the type $N - \sum X_i^2$ to the generators in order to ensure that Putinar’s Theorem holds. Thus our assumption in this case is reasonable. Rational certificates for Schmüdgen’s Theorem ================================================ Fix $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb R}[X]$ and define $K_S$ and $T_S$ as above. (Schmüdgen) Suppose that $K_S$ is compact. If $f \in {\mathbb R}[X]$ and $f > 0$ on $K_S$, then $f \in T_S$. 
In this section we show that if $f$ and the generating polynomials $g_1, \dots , g_s$ are in ${\mathbb Q}[X]$, then $f$ has a representation in $T_S$ in which all sums of squares ${\sigma}_e$ are in $\sum {\mathbb Q}[X]^2$. This follows from T. Wörmann’s algebraic proof of the theorem using the classical Abstract Positivstellensatz, and a generalization of Wörmann’s crucial lemma due to M. Schweighofer. [**The Abstract Positivstellensatz**]{}. The Abstract Positivstellensatz is usually attributed to Kadison-Dubois, but is now thought to have been proven earlier by Krivine or Stone. For details on the history of the result, see [@PD Section 5.6]. The setting is preordered commutative rings. Let $A$ be a commutative ring with ${\mathbb Q}{\subseteq}A$. A subset $T {\subseteq}A$ is a [ *preordering*]{} if $T + T {\subseteq}T$, $T \cdot T {\subseteq}T$, $a^2 \in T$ for all $a \in A$, and $-1 \not \in T$. For $S = \{ a_1, \dots , a_k \} {\subseteq}A$, we define the [*preordering generated by $S$*]{}, $T_S$, exactly as for $A={\mathbb R}[X]$. An [*ordering*]{} in $A$ is a preordering $P$ such that $P \cup -P = A$ and $P \cap -P$ is a prime ideal. Any $a \in A$ has a unique sign in $\{ -1,0,1 \}$ with respect to a fixed ordering $P$ and we use the notation $a \geq_P 0$ if $a \in P$, $a >_P 0$ if $a \in P \setminus (P \cap -P)$, etc. Fix a preordered ring $(A,T)$ and denote by ${\text{Sper } A}$ the real spectrum of $(A,T)$, i.e., the set of orderings of $A$ which contain $T$. Then define $$H(A) = \{ a \in A \mid \text{ there exists }n \in {\mathbb N}\text{ with } n \pm a \geq_P 0 \text{ for all } P \in {\text{Sper } A}\},$$ the [*ring of geometrically bounded elements in $(A,T)$*]{}, and $$H'(A) = \{ a \in A \mid \text{ there exists }n \in {\mathbb N}\text{ with }n \pm a \in T \},$$ the [*ring of arithmetically bounded elements in $(A,T)$*]{}. Clearly, $H'(A) {\subseteq}H(A)$. The preordering $T$ is [*archimedean*]{} if $H'(A) = A$. 
The following version of the Abstract Positivstellensatz is [@schw Theorem 1]: \[ks\] Let $(A,T)$ be a preordered ring as above and suppose $A = H'(A)$. For any $a \in A$, if $a >_P 0$ for all $P \in {\text{Sper } A}$, then $a \in T$. Consider the case where $A = {\mathbb R}[X]$ and $T = T_S$ for $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb R}[X]$. Let $K = K_S$, then $K$ embeds densely in ${\text{Sper } A}$ and hence $H(A) = \{ f \in {\mathbb R}[X] \mid f$ is bounded on $K \}$. If $K$ is compact, this implies $H(A) = A$ and Schmüdgen’s Theorem follows from the following lemma [@bw Lemma 1]: \[wor\] With $A$, $T$, and $S$ as above, if $H(A) = A$ then $H'(A) = A$. Our result follows from a generalization of Lemma \[wor\], which is [@schw Theorem 4.13]: \[sch\] Let $F$ be a subfield of ${\mathbb R}$ and $(A,T)$ a preordered $F$-algebra such that $F {\subseteq}H'(A)$ and $A$ has finite transcendence degree over $F$. Then $$A = H(A) \Rightarrow A = H'(A).$$ We can now prove the existence of rational certificates of positivity in Schmüdgen’s Theorem. The argument is exactly that of the proof of the general theorem above. Let $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb Q}[X]$ and suppose $K_S {\subseteq}{\mathbb R}^n$ is compact. Then for any $f \in {\mathbb Q}[X]$ such that $f > 0$ on $K_S$, there is a representation of $f$ in the preordering $T_S$, $$f = \sum_{e \in \{0,1\}^s} {\sigma}_e g_1^{e_1} \dots g_s^{e_s},$$ with all ${\sigma}_e \in \sum {\mathbb Q}[X]^2$. Let $T$ be the preordering in ${\mathbb Q}[X]$ generated by $S$. Since $K_S$ is compact, every element of ${\mathbb Q}[X]$ is bounded on $K_S$. Then, since $K_S$ is dense in ${\text{Sper } A}$ for $A = {\mathbb Q}[X]$, we have $H({\mathbb Q}[X])= {\mathbb Q}[X]$; hence by Theorem \[sch\], ${\mathbb Q}[X] = H'({\mathbb Q}[X])$. Note that the condition $F {\subseteq}H'(A)$ holds in this case since ${\mathbb Q}^+ = \sum {\mathbb Q}^2$. The result follows from Theorem \[ks\]. 
Rational certificates for Putinar’s Theorem =========================================== Given $S = \{ g_1, \dots, g_s \}$, recall that the quadratic module generated by $S$, $M_S$, is the set of elements in the preordering $T_S$ with a “linear" representation, i.e., $$M_S = \{ {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s \mid {\sigma}_i \in \sum {\mathbb R}[X]^2\}.$$ In order to guarantee representations of positive polynomials in the quadratic module, we need a condition stronger than compactness of $K_S$, namely, we need $M_S$ to be archimedean. The quadratic module $M_S$ is archimedean if all elements of ${\mathbb R}[X]$ are bounded by a positive integer with respect to $M_S$, i.e., if for every $f \in {\mathbb R}[X]$ there is some $N \in {\mathbb N}$ such that $N - f \in M_S$. It is not too hard to show that $M_S$ is archimedean if there is some $N \in {\mathbb N}$ such that $N - \sum X_i^2 \in M_S$. Clearly, if $M_S$ is archimedean, then $K_S$ is compact; the polynomial $N - \sum X_i^2$ can be thought of as a “certificate of compactness". However, the converse is not true, see [@PD Example 6.3.1]. The key to the algebraic proof of Schmüdgen’s Theorem from the previous section is showing that in the case of the preordering generated by a finite set of elements from ${\mathbb R}[X]$, the compactness of the semialgebraic set implies that the corresponding preordering is archimedean. In 1993, Putinar [@put] showed that if the quadratic module $M_S$ is archimedean, then we can replace the preordering $T_S$ by the quadratic module $M_S$. (Putinar) Suppose that the quadratic module $M_S$ is archimedean. Then for every $f \in {\mathbb R}[X]$ with $f > 0$ on $K_S$, $f \in M_S$. Lasserre’s method for minimizing a polynomial on a compact semialgebraic set, see [@LAS], involves defining a sequence of semidefinite programs corresponding to representations of bounded degree in $M_S$ whose solutions converge to the minimum. 
In this context, if $M_S$ is archimedean then Putinar’s Theorem implies the convergence of the semidefinite programs. In practice, it is not clear how to decide if $M_S$ is archimedean for a given set of generators $S$, however in concrete cases a polynomial $N - \sum X_i^2$ can be added to the generators if an appropriate $N$ is known or can be computed. Using an algorithmic proof of Putinar’s Theorem due to M. Schweighofer [@schw2] we can show that rational certificates exist for the theorem as long as we have a polynomial $N - \sum X_i^2$ as one of our generators. Suppose $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb Q}[X]$ and $N - \sum X_i^2 \in M_S$ for some $N \in {\mathbb N}$. Then given any $f \in {\mathbb Q}[X]$ such that $f > 0$ on $K_S$, there exist ${\sigma}_0, \dots , {\sigma}_s, {\sigma}\in \sum {\mathbb Q}[X]^2$ so that $$f = {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s + {\sigma}(N - \sum X_i^2).$$ The idea of Schweighofer’s proof is to reduce to Pólya’s Theorem. We follow the proof, making sure that each step preserves rationality. Let $\Delta = \{ y \in [0,\infty)^{2n} \mid y_1 + \dots + y_{2n} = 2n(N + \frac14) \} {\subseteq}{\mathbb R}^{2n}$ and let $C$ be the compact subset of ${\mathbb R}^n$ defined by $C = l(\Delta)$, where $l : {\mathbb R}^{2n} \rightarrow {\mathbb R}^n$ is defined by $$y \mapsto \left(\frac{y_1 - y_{n+1}}{2}, \dots , \frac{y_n - y_{2n}}{2}\right).$$ Scaling the $g_i$’s by positive elements in ${\mathbb Q}$, we can assume that $g_i \leq 1$ on $C$ for all $i$. The key to Schweighofer’s proof is the following observation [@schw2 Lemma 2.3]: There exist $\lambda \in {\mathbb R}^+$ and $k \in {\mathbb N}$ such that $q := f - \lambda \sum (g_i - 1)^{2k} g_i > 0$ on $C$. Since we can always replace $\lambda$ by a smaller value, we can assume $\lambda \in {\mathbb Q}$. We have $q := f - \lambda \sum (g_i - 1)^{2k} g_i > 0$ on $C$, where $q \in {\mathbb Q}[X]$. 
Write $q = \sum_{i=1}^d Q_i$, where $d = \deg q$ and $Q_i$ is the homogeneous part of $q$ of degree $i$. Let $Y = (Y_1, \dots , Y_{2n})$ and define in ${\mathbb Q}[Y]$ $$F(Y_1, \dots , Y_{2n}) := \sum_{i=1}^d Q_i\left(\frac{Y_1 - Y_{n+1}}{2}, \dots , \frac{Y_n - Y_{2n}}{2}\right) \left(\frac{Y_1 + \dots + Y_{2n}}{2n(N + 1/4)}\right)^{d-i}.$$ Then $F$ is homogeneous and $F > 0$ on $[0,\infty)^{2n} \setminus \{0\}$. By Pólya’s Theorem, there is some $m \in {\mathbb N}$ so that $G := \left(\frac{Y_1 + \dots + Y_{2n}}{2n(N+1/4)}\right)^m F$ has nonnegative coefficients as a polynomial in ${\mathbb R}[Y]$. Furthermore, since $F \in {\mathbb Q}[Y_1, \dots , Y_{2n}]$, it is easy to see that $G \in {\mathbb Q}[Y]$. Define $\phi : {\mathbb Q}[Y_1, \dots , Y_{2n}] \rightarrow {\mathbb Q}[X]$ by $$\phi(Y_i) = N+\frac14 + X_i, \quad \phi(Y_{n+i}) = (N+\frac{1}{4}) - X_i, \quad i=1, \dots , n$$ and note that $\phi(G) = q$ and $$\phi(Y_i) = (N+\frac{1}{4}) \pm X_i = \sum_{j\neq i} X_j^2 + (X_i \pm \frac12)^2 + (N - \sum X_j^2) \in \sum {\mathbb Q}[X]^2 + (N - \sum X_j^2).$$ Thus $\phi(G) = q$ implies there is a representation of $q$ of the required type and then, since $f = q + \lambda \sum (g_i - 1)^{2k} g_i$ with $\lambda \in {\mathbb Q}$, we are done. In the preordering case (Schmüdgen’s Theorem), as noted above, if the semialgebraic set $K_S$ is compact, then it follows that the preordering $T_S$ in ${\mathbb Q}[X]$ is archimedean. However, it is more subtle in the quadratic module case since it is not always clear how to decide if $M_S$ is archimedean for a given set of generators $S$. Thus an open question is the following: Suppose $S {\subseteq}{\mathbb Q}[X]$ is a finite set of polynomials and $M_S$ is archimedean as a quadratic module in ${\mathbb R}[X]$. Is it true that $M_S$ is archimedean as a quadratic module in ${\mathbb Q}[X]$? 
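The Pólya step in the proof above is easy to experiment with numerically. The following small `sympy` sketch (our illustration, using a toy homogeneous $F$ rather than the $F$ constructed in the proof) searches for the smallest power of the linear form that clears the negative coefficients:

```python
from sympy import Poly, expand, symbols

y1, y2 = symbols('y1 y2')
# F is homogeneous and strictly positive on [0, oo)^2 \ {0},
# but has a negative coefficient (-1 on the y1*y2 term).
F = y1**2 - y1*y2 + y2**2

# Polya's Theorem: some power of the linear form (y1 + y2) times F
# has only nonnegative coefficients.  Search for the smallest power.
k = 0
while True:
    G = expand((y1 + y2)**k * F)
    if all(c >= 0 for c in Poly(G, y1, y2).coeffs()):
        break
    k += 1
print(k)  # -> 1, since (y1 + y2)*F = y1**3 + y2**3
```

In general the required power depends on how close $F$ comes to zero on the simplex, which is why the construction above first arranges $q > 0$ on the compact set $C$.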
To put it more concretely, suppose $S = \{ g_1, \dots , g_s \} {\subseteq}{\mathbb Q}[X]$ and we know that there is some $N \in {\mathbb N}$ such that $$N - \sum X_i^2 = {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s,$$ with ${\sigma}_i \in \sum {\mathbb R}[X]^2$. Does there exist a representation with ${\sigma}_i \in \sum {\mathbb Q}[X]^2$? Equivalently, does there exist $N \in {\mathbb N}$ such that for each $i = 1, \dots , n$ we can write $$N \pm X_i = {\sigma}_0 + {\sigma}_1 g_1 + \dots + {\sigma}_s g_s,$$ with ${\sigma}_i \in \sum {\mathbb Q}[X]^2$? [1]{} R. Berr and T. Wörmann, *Positive polynomials on compact sets*, Manuscripta Math. **104** (2001), 135–143. C. Hillar, *Sums of polynomial squares over totally real fields are rational sums of squares*, Proc. Amer. Math. Soc. **137** (2009), 921–930. J.-B. Lasserre, *Global optimization with polynomials and the problem of moments*, SIAM J. Optimization **11** (2001), no. 3, 796–817. A. Prestel and C.N. Delzell, *Positive [P]{}olynomials – [F]{}rom [H]{}ilbert’s 17th [P]{}roblem to [R]{}eal [A]{}lgebra*, Springer Monographs Series, Berlin, 2001. M. Putinar, *Positive polynomials on compact semi-algebraic sets*, Indiana Univ. Math. J. **42** (1993), no. 3, 969–984. K. Schm[ü]{}dgen, *The [K]{}-moment problem for compact semi-algebraic sets*, Math. Ann. **289** (1991), 203–206. M. Schweighofer, *Iterated rings of bounded elements and generalizations of [S]{}chmüdgen’s theorem*, Ph.D. thesis, Universität Konstanz, Konstanz, Germany, 2002. [to3em]{}, *Optimization of polynomials on noncompact semialgebraic sets*, SIAM J. Optimization **15** (2005), no. 3, 805–825. T. W[ö]{}rmann, *Strikt positive polynome in der semialgebraischen geometrie*, Ph.D. thesis, Universit[ä]{}t Dortmund, 1998. [^1]: Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322. Email: vicki@mathcs.emory.edu.
--- abstract: 'We present the first calculation of the electron-energy loss spectrum of infinite one-dimensional undoped CuO$_3$ chains within a multi-band Hubbard model. The results show good agreement with experimental spectra of Sr$_2$CuO$_3$. The main feature in the spectra is found to be due to the formation of Zhang-Rice singlet-like excitations. The ${\bf q}$-dependence of these excitations is a consequence of the inner structure of the Zhang-Rice singlet. This makes the inclusion of the oxygen degrees of freedom essential for the description of the relevant excitations. We observe that no enhanced intersite Coulomb repulsion is necessary to explain the experimental data.' address: 'Institut für Theoretische Physik, Technische Universität Dresden, D-01062 Dresden, Germany' author: - 'J. Richter, C. Waidacher, and K. W. Becker' title: 'The role of Zhang-Rice singlet-like excitations in one-dimensional cuprates' --- [2]{} Recently, charge excitations in the quasi one-dimensional compound Sr$_{2}$CuO$_{3}$ have been investigated both experimentally$^{1-3}$ and theoretically.$^{2-6}$ Sr$_{2}$CuO$_{3}$ is composed of chains formed by CuO$_4$ plaquettes which share the corner oxygens. The magnetic properties of these chains have been successfully described using a one-dimensional spin-$\frac{1}{2}$ Heisenberg antiferromagnet.$^{7-9}$ Experimentally, the electron-energy loss spectrum (EELS) of Sr$_{2}$CuO$_{3}$ [@neudert98] shows several interesting features (see Fig. \[spectra\]): For small momentum transfer ($q=0.08$ Å$^{-1}$) parallel to the chain direction, one observes a broad peak around $2.4$ eV energy loss, and two relatively sharp, smaller maxima at $4.5$ and $5.2$ eV. With increasing momentum transfer, the lowest-energy feature shifts towards higher energy, reaching $3.2$ eV at the zone boundary ($q=0.8$ Å$^{-1}$). Thereby its spectral width decreases. 
In addition, the peaks at $4.5$ and $5.2$ eV lose spectral weight as the momentum transfer increases, while some less well-defined structures emerge around $6$ eV. So far, these results have been compared only to calculations in an extended one-band Hubbard model.$^{3,6}$ From this comparison, Neudert [*et al.*]{}$^{3}$ concluded that in Sr$_{2}$CuO$_{3}$ there is an unusually strong intersite Coulomb repulsion $V$: In the one-band model it is of the order of 1 eV. It is argued that this large value of $V$ allows for the formation of excitonic states which are observed in the experiment. One of the aims of this paper is to show that no intersite Coulomb repulsion is necessary to explain the basic features of the experiment, if the O degrees of freedom are taken into account within the framework of a multi-band Hubbard model. We investigate the EELS spectrum of a one-dimensional CuO$_{3}$ chain system, using a multi-band Hubbard Hamiltonian at half-filling. In the hole picture this Hamiltonian reads $$\begin{aligned} H &=&\Delta\sum_{j\sigma}n^p_{j\sigma} +U_{d} \sum_{i}n^d_{i\uparrow}n^d_{i\downarrow}\nonumber\\ &&+t_{pd} \sum_{<ij>\sigma}\phi^{ij}_{pd} (p^\dagger_{j\sigma}d_{i\sigma}+h.c.)\nonumber\\ &&+ t_{pp} \sum_{<jj^\prime>\sigma}\phi^{jj^\prime}_{pp} p^\dagger_{j\sigma}p_{j^\prime\sigma}~\mbox{,}\label{1}\end{aligned}$$ where $d^\dagger_{i\sigma}$ ($p^\dagger_{j\sigma}$) create a hole with spin $\sigma$ in the $i$-th Cu $3d$ orbital ($j$-th O $2p$ orbital), while $n^d_{i\sigma}$ ($n^p_{j\sigma}$) are the corresponding number operators. The first and second term in Eq. (\[1\]) represent the atomic part of the Hamiltonian, with the charge-transfer energy $\Delta$, and the on-site Coulomb repulsion $U_{d}$ between Cu $3d$ holes. The last two terms in Eq. (\[1\]) are the hybridization of Cu $3d$ and O $2p$ orbitals (hopping strength $t_{pd}$) and of O $2p$ orbitals (hopping strength $t_{pp}$). 
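As a minimal numerical illustration of this Hamiltonian (our toy sketch, not the many-body calculation performed in this work), one can diagonalize Eq. (1) for a single CuO$_4$ plaquette occupied by one hole. With a single hole the $U_d$ term is inactive, and for simplicity we set $t_{pp}=0$; the alternating sign pattern for $\phi^{ij}_{pd}$ is an illustrative choice of phase convention. The parameter values are the ones quoted in the text:

```python
import numpy as np

t_pd, Delta = 1.5, 4.3                    # eV, values quoted in the text
signs = np.array([1.0, -1.0, 1.0, -1.0])  # illustrative phase factors phi_pd

# One-hole basis: (Cu 3d, O 2p_1, ..., O 2p_4); U_d is inactive, t_pp = 0.
H = np.zeros((5, 5))
H[1:, 1:] = Delta * np.eye(4)             # charge-transfer energy on O sites
H[0, 1:] = t_pd * signs                   # Cu-O hybridization
H[1:, 0] = t_pd * signs

E = np.linalg.eigvalsh(H)                 # ascending eigenvalues
# The Cu 3d orbital couples only to the symmetric-sign O combination,
# with effective strength 2*t_pd, giving the bonding level
E_bond = Delta / 2 - np.sqrt((Delta / 2)**2 + 4 * t_pd**2)
assert np.isclose(E[0], E_bond)
print(E[0])   # about -1.54 eV below the bare Cu level
```

The three remaining O combinations stay at $\Delta$, as expected for a nonbonding manifold; the full calculation of course retains $U_d$, $t_{pp}$, and the chain geometry.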
The factors $\phi^{ij}_{pd}$ and $\phi^{jj^\prime}_{pp}$ give the correct sign for the hopping processes. Finally, $\langle ij \rangle$ denotes the summation over nearest neighbor pairs. The loss function in EELS experiments is directly proportional to the dynamical density-density correlation function $\chi_{\rho}(\omega,{\bf q})$, [@schnatterly77] which depends on the energy loss $\omega$ and momentum transfer ${\bf q}$. $\chi_{\rho}(\omega,{\bf q})$ is calculated from $$\chi_{\rho}(\omega,{\bf q}) = \frac{1}{i} \int_0^{\infty} dt~e^{-i\omega t} \langle\Psi| [ \rho_{-{\bf q}}(0),\rho_{\bf q}(t)]|\Psi\rangle~\mbox{,}\label{2}$$ with $$\rho_{{\bf q}} = \sum_{i\sigma} n_{i\sigma}^d e^{i{\bf q}{\bf r}_i} +\sum_{j\sigma} n_{j\sigma}^p e^{i{\bf q}{\bf r}_j}~\mbox{,}$$ where $|\Psi\rangle$ is the ground state of $H$, and $\rho_{{\bf q}}$ is the Fourier transformed hole density. The ground state $|\Psi\rangle$ is approximated as follows:[@waid2] We start from a Néel-ordered state $|\Psi_N\rangle$ with singly occupied Cu $3d$ orbitals (with alternating spin direction) and empty O $2p$ orbitals. Fluctuations are added to $|\Psi_N\rangle$ using an exponential form $$\label{3} |\Psi\rangle = \exp \left(\sum_{i\alpha}\lambda_{\alpha}F_{i,\alpha}\right) |\Psi_N\rangle~\mbox{.}$$ The fluctuation operators $F_{i,\alpha}$ describe various delocalization processes of a hole initially located in the Cu $3d$ orbital at site $i$, where a summation over equivalent final sites takes place.[@waid2] The parameters $\lambda_{\alpha}$ in Eq. (\[3\]) describe the strength of the delocalization processes and are determined self-consistently by solving the system of equations $\langle\Psi|{\cal L} F^\dagger_{0,\alpha}|\Psi\rangle= 0$, where ${\cal L}$ is the Liouville operator, defined as ${\cal L}A= [H,A]$ for any operator $A$. These equations have to hold if $|\Psi\rangle$ is the ground state of $H$. Using Eqs. 
(\[2\]) and (\[3\]), we calculate the EELS spectrum by means of the Mori-Zwanzig projection technique.[@Mori65] For a set of operators $D_\mu$, the so-called dynamical variables, the following matrix equation approximately holds $$\begin{aligned} \label{4} \sum_{\gamma}\left[ z\delta_{\mu\gamma} - \sum_{\eta} \langle\Psi|D^\dagger_{\mu} {\cal L} D_{\eta}|\Psi\rangle \left(\langle\Psi|D^\dagger_{\eta} D_{\gamma}|\Psi\rangle\right)^{-1} \right]\times\nonumber\\ \quad \times~\langle\Psi|D^\dagger_{\gamma} \frac{1}{z-{\cal L}} D_{\nu}|\Psi\rangle = \langle\Psi|D^\dagger_{\mu} D_{\nu}|\Psi\rangle ~\mbox{,}\end{aligned}$$ where $z=\omega+i0$. In Eq. (\[4\]) the set of dynamical variables was assumed to be sufficiently large so that self-energy contributions can be neglected. The set $\{D_\mu\}$ contains the dynamical variable $D_{0}=\rho_{{\bf q}}$. Therefore, by solving Eq. (\[4\]), an approximation for Eq. (\[2\]) can be obtained. Besides $D_{0}$, the set includes $D_{\alpha}= \rho_{{\bf q}} F_{0,\alpha}$ for all $\alpha$. The $F_{0,\alpha}$ are the fluctuation operators used in the ground state Eq. (\[3\]), without the summation over equivalent final sites. We use altogether $12$ dynamical variables and observe good convergence of the spectral function. In Fig. \[spectra\] the obtained results are compared to the experimental spectra from Ref. [@neudert98]. The parameters in the Hamiltonian are chosen as follows: $U_{d}= 8.8~\mbox{eV}$ and $t_{pp}= 0.65~\mbox{eV}$ are kept constant at typical values.[@3] The values of $\Delta=4.3~\mbox{eV}$ and $t_{pd}=1.5~\mbox{eV}$ have been adjusted to obtain the correct position of the lowest energy feature at $2.5~\mbox{eV}$ for $q=0.01$ Å$^{-1}$, and at $3.1~\mbox{eV}$ for $q=0.7$ Å$^{-1}$. Thus, we effectively use only two free parameters. It is found that the value of $\Delta$ dominates the excitation energy, which increases with increasing $\Delta$. 
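In matrix form, Eq. (\[4\]) reads $(z\mathbf{1} - \Omega S^{-1})\,G(z) = S$, with $\Omega_{\mu\eta} = \langle\Psi|D^\dagger_{\mu}{\cal L}D_{\eta}|\Psi\rangle$, $S_{\eta\gamma} = \langle\Psi|D^\dagger_{\eta}D_{\gamma}|\Psi\rangle$, and $G_{\gamma\nu}(z) = \langle\Psi|D^\dagger_{\gamma}(z-{\cal L})^{-1}D_{\nu}|\Psi\rangle$, so the loss function follows from $-\mathrm{Im}\,G_{00}(\omega+i\eta)$. A toy numerical sketch of this solution step (the matrices below are placeholders, not the expectation values computed for the CuO$_3$ chain) could look as follows:

```python
import numpy as np

# Toy frequency matrix Omega and overlap matrix S for 3 dynamical variables
# (placeholder values, not fitted to the CuO3 chain).
Omega = np.array([[2.5, 0.4, 0.0],
                  [0.4, 3.1, 0.2],
                  [0.0, 0.2, 6.4]])
S = np.eye(3)   # orthonormal dynamical variables for simplicity

def loss(w, eta=0.05):
    """-Im G_00(w + i*eta), the spectral weight of D_0 = rho_q."""
    z = w + 1j * eta
    G = np.linalg.solve(z * np.eye(3) - Omega @ np.linalg.inv(S), S)
    return -G[0, 0].imag

ws = np.linspace(0.0, 8.0, 2000)
spec = np.array([loss(w) for w in ws])
peak = ws[np.argmax(spec)]
# The peaks are Lorentzians centered at eigenvalues of Omega S^{-1};
# the dominant one sits near the lowest eigenvalue (about 2.3 here).
print(round(peak, 2))
```

With the $12$ dynamical variables of the actual calculation the same linear solve is performed, only with matrix elements evaluated in the correlated ground state $|\Psi\rangle$.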
The dispersion of the peak depends mainly on $t_{pd}$ with increasing dispersion for increasing hopping parameter. As compared to the standard value $1.3~\mbox{eV}$,[@3] $t_{pd}=1.5~\mbox{eV}$ is slightly enhanced, in agreement with recent results of band structure calculations.[@rosner99] The theoretical spectra consist of two excitations. The dominant excitation is at $2.45~\mbox{eV}$ for $q=0.1$ Å$^{-1}$, and shifts to $3.05~\mbox{eV}$ for $q=0.7$ Å$^{-1}$. Besides, a second excitation appears at $6.4~\mbox{eV}$ which has no dispersion. The low energy peak structure is shown in more detail in Fig. 2, where a smaller peak broadening has been used. As will be explained below, mainly two different Zhang-Rice singlet-like excitations [@zhang88] lead to this peak structure. The ${\bf q}$-dependence of the spectrum is due to two effects. Firstly, one observes a shift of spectral weight with increasing ${\bf q}$ between two excitations labelled with (a) and (b) in Fig. 2. Secondly, with increasing ${\bf q}$ the energies of the two peaks shift to higher values. The shift of spectral weight can be attributed to different delocalization properties of the two final states. The excited state (a) in Fig. 2 which dominates the spectrum for small momentum transfer is rather extended, see Fig. 3(a). This state has a rather small probability for the hole at its original plaquette. With increasing [**q**]{} the spectral weight shifts to another excited state, shown in Fig. 3(b), with a higher probability for the hole on its original Cu-site. This means that the character of the excitation changes from an extended to a more localized one, while still forming a Zhang-Rice singlet. This behavior can be understood by analyzing the relevant expectation values in Eq. (\[4\]). For small values of [**q**]{} the frequency term $\langle\Psi|D^\dagger_{\mu}{\cal L} D_{\nu}|\Psi\rangle$ can be approximated by expanding $e^{i{\bf qr}} \approx 1+ i{\bf qr}$ in Eq. (\[4\]). 
This gives $\langle\Psi|F^\dagger_{0,\mu}{\cal L} F_{0,\nu}|\Psi\rangle\times {\bf q (r_{\mu}-r_{\nu})}$ which is proportional to the fluctuation distance, thus favoring far-reaching excitations. This picture changes for large values of [**q**]{}, where stronger oscillations of the phase factor lead to a cancelation of extended excitations. The result is a transfer of spectral weight from delocalized towards more localized excitations with increasing [**q**]{}. The ${\bf q}$-dependence of the energies, on the other hand, is a consequence of the inner structure of the Zhang-Rice singlet-like excitations. In both excitations (a) and (b) a hole hops onto the Cu site of its nearest neighbor plaquette, see Fig. 3. Due to the Coulomb repulsion $U_{d}$, the hole which had originally occupied this Cu site is pushed away onto the surrounding O sites. Depending on the direction of this delocalization, this process leads to a ${\bf q}$-dependence of the excitation energy. Next, we want to stress that the claim in [@neudert98] that, in the one-band Hubbard model, only the inclusion of the next-neighbor repulsion allows the formation of an excitonic state is not consistent with our results. In the one-band model such a repulsion leads to a binding energy of empty and doubly occupied sites due to the reduction of neighboring interactions. This binding energy is proportional to $V$. However, as can be seen from exact diagonalization calculations in the one-band model,[@richter] the intersite repulsion mainly leads to an energy shift of the EELS spectra. Thus, the parameter $V$ in the one-band model serves only to adjust the energetical position of the spectra, and is not necessary in more realistic models. In the multi-band model, the formation of an exciton is only driven by the energetically favored formation of a Zhang-Rice singlet, and no further inclusion of next-neighbor repulsion is necessary. 
The important role of the Zhang-Rice singlet formation has been studied previously also in an effective model for excitons in the CuO$_2$ plane.[@zhang98] Like the one-band model, this effective model neglects inner degrees of freedom of the Zhang-Rice singlet. If this model is reduced to the CuO$_3$ chain, ${\bf q}$-dependent energies are only possible for a non-vanishing O on-site Coulomb repulsion $U_p\neq 0$. In contrast to these results, we find ${\bf q}$-dependent energies for $U_p = 0$. As described above this effect cannot be explained in a model which neglects the inner structure of the singlet. Thus, our results show that both an inclusion of the O-sites and a complete description of the excitation is necessary to obtain the full dispersion. The O-sites are essential for the correct description of the different characters of the singlet excitations, which leads to the shift of spectral weight from one excitation to another. On the other hand, taking account of the inner degrees of freedom of the Zhang-Rice singlet leads to the ${\bf q}$-dependence of the energies. The results of the projection technique do not correctly describe the experimentally observed width of the peak for small momentum transfer. A possible explanation is that not all excitations are included in the projection space. The above discussion suggests that the width should be due to the presence of additional delocalized excitations. Processes which are neglected in the present calculation involve less important multiple excitations of holes beyond their original plaquette. Finally, although they are not the focus of this paper, we discuss some high-energy features. The excitation at 6.4 eV in the theoretical spectra is due to a local process on the plaquette itself. Here, the hole is excited to the surrounding O sites, without leaving its original plaquette, see Fig. 3(c). The energy of this structure does not shift as a function of momentum transfer. 
Once again, a transfer of spectral weight towards this localized excitation with increasing values of [**q**]{} is observed. The plaquette excitation has a highly local character. Therefore, its spectral weight increases as a function of [**q**]{} compared to the more delocalized Zhang-Rice singlet excitations. For small [**q**]{} the spectral weight of the plaquette peak is about 6 times smaller than that of the Zhang-Rice peak. As [**q**]{} increases, this ratio increases to about one half. One should note that the experimental spectra show no obvious features above 6 eV. However, since many different orbitals may contribute in this energy range, we cannot expect a realistic description using a model that contains only Cu $3d$ and O $2p$ orbitals. This applies also to the experimental structure around 4.5 eV for small momentum transfer, which is not described by the present model. We assume that this feature is due to excitations which involve Sr orbitals, as has been argued before.[@neudert98] In comparison with earlier works on Cu $2p_{3/2}$ X-ray photoemission spectroscopy using the same theoretical approach,[@waid1] we find that the character of the excitations in both experiments is very similar. Zhang-Rice singlet and local excitations play an important role. In both experiments the dominant excitation at low energies is associated with a Zhang-Rice singlet formation. In conclusion, we have carried out the first calculation of the EELS-spectrum for the one-dimensional CuO$_3$ chain by using a multi-band Hubbard model. Our results are in good agreement with experimental results for Sr$_2$CuO$_3$. We find that the main feature in the spectra is due to the formation of Zhang-Rice singlet-like excitations. The momentum dependence of the spectrum is due to two effects. First, there is a shift of spectral weight from less localized to more localized final states. Second, the excitation energies are ${\bf q}$-dependent. 
This ${\bf q}$-dependence is found to be a consequence of the inner structure of the Zhang-Rice singlet. Therefore, the inclusion of the O degrees of freedom is essential for the description of the relevant excitations. This has two important consequences. Firstly, only a multi-band model allows the correct description of charge excitations. And, secondly, if a multi-band model is used, no intersite Coulomb repulsion is necessary. Furthermore, we observe the existence of a local excitation at large ${\bf q}$-values. We would like to acknowledge fruitful discussions with S. Atzkern, S.-L. Drechsler, J. Fink, M. S. Golden, R. E. Hetzel, A. Hübsch, R. Neudert, and H. Rosner. This work was performed within the SFB 463. T. Böske, K. Maiti, O. Knauff, K. Ruck, M. S. Golden, G. Krabbes, J. Fink, T. Osafune, N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. B [**57**]{}, 138 (1998). K. Maiti, D. D. Sarma, T. Mizokawa, and A. Fujimori, Europhys. Lett. [**37**]{}, 359 (1997). R. Neudert, M. Knupfer, M. S. Golden, J. Fink, W. Stephan, K. Penc, N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. Lett. [**81**]{}, 657 (1998). K. Okada, A. Kotani, K. Maiti, and D.D. Sarma, J. Phys. Soc. Jpn. [**65**]{}, 1844 (1996); K. Okada and A. Kotani, J. Electron Spectrosc. Relat. Phenom. [**86**]{}, 119 (1997). K. Karlsson, O. Gunnarsson and O. Jepsen, Phys. Rev. Lett. [**82**]{}, 3528 (1999). W. Stephan and K. Penc, Phys. Rev. B [**54**]{}, 17269 (1996). T. Ami, M. K. Crawford, R. L. Harlow, Z. R. Wang, D. C. Johnston, Q. Huang, and R. W. Erwin, Phys. Rev. B [**51**]{}, 5994 (1995). N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. Lett. [**76**]{}, 3212 (1996). K. M. Kojima, Y. Fudamoto, M. Larkin, G. M. Luke, J. Merrin, B. Nachumi, Y. J. Uemura, N. Motoyama, H. Eisaki, S. Uchida, K. Yamada, Y. Endoh, S. Hosoya, B. J. Sternlieb, and G. Shirane, Phys. Rev. Lett. [**78**]{}, 1787 (1997). S. E. Schnatterly, Solid State Phys. [**34**]{}, 275 (1977). C. Waidacher, J. Richter, and K. W. 
Becker, Phys. Rev. B [**60**]{}, 2255 (1999). H. Mori, Prog. Theor. Phys. [**33**]{}, 423 (1965); R. Zwanzig, in [*Lectures in Theoretical Physics*]{} (Interscience, New York, 1961), Vol. 3. A. K. McMahan, R. M. Martin, and S. Satpathy, Phys. Rev. B [**38**]{}, 6650 (1988); M. S. Hybertsen, M. Schlüter, and N. E. Christensen, [*ibid.*]{} [**39**]{}, 9028 (1989); J. B. Grant and A. K. McMahan, [*ibid.*]{} [**46**]{}, 8440 (1992). H. Rosner, Ph.D. thesis, Technical University Dresden, 1999. F. C. Zhang and T. M. Rice, Phys. Rev. B [**37**]{}, 3759 (1988). J. Richter, C. Waidacher, K. W. Becker, in preparation. F. C. Zhang and K. K. Ng, Phys. Rev. B [**58**]{}, 13520 (1998); Y.Y. Wang, F. C. Zhang, V. P. Dravid, K. K. Ng, M. V. Klein, S. E. Schnatterly, and L. L. Miller, Phys. Rev. Lett. [**77**]{}, 1807 (1996). C. Waidacher, J. Richter, and K. W. Becker, Europhys. Lett. [**47**]{}, 77 (1999).
--- abstract: 'Asynchronous stochastic approximations (SAs) are an important class of model-free algorithms, tools and techniques that are popular in multi-agent and distributed control scenarios. To counter Bellman’s curse of dimensionality, such algorithms are coupled with function approximations. Although the learning/control problem becomes more tractable, function approximations affect stability and convergence. In this paper, we present verifiable sufficient conditions for stability and convergence of asynchronous SAs with biased approximation errors. The theory developed herein is used to analyze Policy Gradient methods and noisy Value Iteration schemes. Specifically, we analyze the asynchronous approximate counterparts of the policy gradient (A2PG) and value iteration (A2VI) schemes. It is shown that the stability of these algorithms is unaffected by biased approximation errors, provided they are asymptotically bounded. With respect to convergence (of A2VI and A2PG), a relationship between the limiting set and the approximation errors is established. Finally, experimental results are presented that support the theory.' author: - 'Arunselvan Ramaswamy[^1] `arunr@mail.uni-paderborn.de` [^2]' - 'Shalabh Bhatnagar `shalabh@iisc.ac.in`[^3]' - 'Daniel E. Quevedo `dquevedo@ieee.org`[^4]' bibliography: - 'AAVI.bib' title: 'Asynchronous stochastic approximations with asymptotically biased errors and deep multi-agent learning' --- Introduction {#sec_introduction} ============ In recent years reinforcement learning (RL) algorithms such as Value Iteration, Q-learning and Policy Gradient methods have witnessed a colossal resurgence. Many of these algorithms are coupled with function approximators to solve many important problems including, but not limited to, autonomous driving in transportation, process optimization in industrial scenarios and efficient dispersal of health-care services. 
A neural network with several hidden layers is called a deep neural network (DNN). RL that uses a DNN for function approximation is called DeepRL. The literature around DeepRL is growing rapidly, for example see [@mnih], [@mniha], [@tamar] and [@li17]. Most modern learning and control problems have continuous state and/ or action spaces. This leads to [**Bellman’s curse of dimensionality**]{}. To overcome this curse, learning and control algorithms are coupled with function approximation. It is worth noting that the previously mentioned resurgence is partly owing to the effectiveness of DNN in function approximation. While the problem becomes tractable, the use of function approximation affects stability (almost sure boundedness) and convergence properties of the algorithms. Further, the optimality of the policies found depends on the [**approximation errors**]{}. Such issues are not well studied. [**The main contribution of this paper is a complete analysis in terms of the influence of function approximation on stability and the limiting set, in a multi-agent setting**]{}. While the theory behind traditional RL is mature, there have not been many attempts to analyze DeepRL. Munos analyzed the approximate value and policy iteration algorithms, see [@munos] and [@munos03]. However the assumptions in [@munos] and [@munos03] are rather restrictive. Ramaswamy and Bhatnagar [@ramaswamy2017] studied approximate value iteration methods under significantly weaker assumptions as compared to [@munos]. However, [@ramaswamy2017] does not consider the multi-agent scenario. In this paper, we present the framework to develop and analyze large-scale multi-agent RL algorithms. Such algorithms are applicable to industrial process-control, distributed control of microgrids and decentralized resource allocation systems, among others. *It may be noted that in the setting of distributed control and learning, the aforementioned curse of dimensionality problem is particularly pronounced*. 
Motivation, relevant literature and our contributions {#sec_contributions} ----------------------------------------------------- The main motivation for this paper is the development of a general framework for learning and control in large-scale multi-agent settings. In a typical multi-agent architecture, the agents involved need to work towards a common goal by cooperating with each other. Each agent may be asynchronously performing updates, i.e., according to its own local clock, but not that of other agents. While the agents may act independently, their decisions are based on information from other agents. This information is often shared via wireless communication networks. Bottlenecks in communication resources may lead to (possibly unbounded) delays as well as errors in communication. Here, we focus on developing a framework which accounts for all of the above constraints. Such a framework would then guide the development of algorithms with behavioural guarantees. For this, we build on tools and techniques developed in [@Borkar_asynchronous], [@abounadi], [@perkins], [@Benaim96], [@Benaim05], [@ramaswamy2017] and [@aubin2012differential]. Traditional analyses in [@Borkar_asynchronous] and [@abounadi] do not account for the use of function approximation. While the more modern analysis in [@perkins] can be modified to the multi-agent setting considered herein, it does not consider the important question of [**algorithmic stability**]{}. Further, the results of [@perkins] do not characterize the limiting set as a function of the approximation errors. DNNs are a popular choice for function approximation among practitioners of RL. They are used to approximate objective functions such as Q-factors, value functions, policies, gradients, etc. Such a DNN is trained in an online manner to minimize its prediction errors. The network architecture is typically chosen without explicit knowledge of the objective function.
Hence, one cannot expect the approximate objective function obtained using a DNN to equal the true objective function everywhere, even after sufficient training. In other words, the approximation errors are likely to be [**biased**]{} (have non-zero mean). Further, these biases may affect the stability and quality of convergence, see Remark \[asmp\_remark1\] for details. The main contributions of this paper can be summarized as follows: 1. We show that the stability of the algorithms remains unaffected by error biases, provided the biases do not grow over time. 2. We provide an explicit relation between the biases and the limiting set. 3. Although we consider approximation error biases, there are no additional restrictions on the quality of communication. In other words, we present our results under standard (yet general) assumptions on communication as used, e.g., in [@Borkar_asynchronous]. 4. Our theory is used to analyze the asynchronous approximate counterparts of value (A2VI) and policy gradient (A2PG) iterations. Tool set: Asynchronous Stochastic Approximations ------------------------------------------------ Stochastic approximation algorithms (SAs) encompass a class of iterative algorithms that are model-free and sample-based. SAs find the minimum/maximum of a given objective function through a series of approximations. Traditionally, the approximation errors are expected to vanish over time. The first SA was developed in 1951 by Robbins and Monro [@robbins] for finding a root of a given regression function. The theory of modern SAs was developed by Benaïm [@Benaim96], Benaïm and Hirsch [@BenaimHirsch] and Borkar [@Borkar99]. This theory was extended to SAs with set-valued mean-fields by Benaïm, Hofbauer and Sorin [@Benaim05], [@benaim2006stochastic], Ramaswamy and Bhatnagar [@Ramaswamy], Perkins and Leslie [@perkins], Bianchi et al. [@bianchi2019constant], and others.
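The original Robbins–Monro scheme is simple to state in code. The sketch below finds a root from noisy samples alone; the regression function $g$ (with root $x^* = 2$), the noise level, and the step sizes $a(n) = 1/n$ are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(x):
    """A toy regression function whose root x* = 2 we wish to find."""
    return np.tanh(x - 2.0)

# Robbins-Monro: x_{n+1} = x_n - a(n) * (g(x_n) + noise), with a(n) = 1/n.
x = 0.0
for n in range(1, 50_001):
    noisy_observation = g(x) + 0.1 * rng.standard_normal()
    x -= (1.0 / n) * noisy_observation

print(x)  # close to the root x* = 2
```

Note that only noisy evaluations of $g$ are used; no derivative or model of $g$ is needed, which is the defining feature of the model-free SA setting discussed above.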
The reader is referred to books by Borkar [@BorkarBook] and Kushner and Yin [@KushnerYin] for a more detailed exposition on this topic. Although the traditional SA framework can be used to develop and analyze algorithms in RL and stochastic control, it does not encompass multi-agent and distributed scenarios. The latter setting was studied by Borkar [@Borkar_asynchronous]. He considered multi-agent algorithms wherein the agents are asynchronous and communications are delayed or erroneous. Such algorithms are called asynchronous SAs. Many RL algorithms such as Q-learning, value iteration and policy gradient methods have asynchronous counterparts. These algorithms are designed and analyzed using the theory developed in [@Borkar_asynchronous] and [@abounadi]. The stability issue of asynchronous SAs was studied by Bhatnagar [@Bhatnagar]. Value Iteration and Policy Gradient for multi-agent settings {#intro_a2} ------------------------------------------------------------ We are interested in the following adaptation of value iteration for the multi-agent setting: $$\label{intro_a2vi} \begin{split} &J_{n+1}(i) = J_n(i) + a( \nu (n, i)) I(i \in Y_n) \\ &\left[ (\mathcal{A} T)_i (J_{n - \tau_{1 i}(n)}(1), \ldots, J_{n - \tau_{d i}(n)}(d)) + M_{n+1}(i) \right], n \ge 0. \end{split}$$ In the above equation,\ (i) $1 \le i \le d$ is the agent index, and there are $d$ agents in the system.\ (ii) $J_n := (J_n(1), \ldots, J_n(d))$ is an estimate of the optimal cost-to-go vector at time-step $n$.\ (iii) $Y_n$ is a subset of $\{1, 2, \ldots, d\}$ for each $n \ge 0$. It represents the set of agents that are **active** at time $n$.\ (iv) $0 \le \tau_{ji}(n) \le n$ is the (stochastic) delay experienced by agent $i$ in receiving information from agent $j$ at time $n$.
In other words, at time $n$, the information obtained by agent $i$ from agent $j$ is $\tau_{ji}(n)$ time-steps old.\ (v) $\nu(n,i)$ is the number of times that agent $i$ was active (i.e., updated its component parameter) up until time $n$. [*It may be noted that the time index $n$ in equation (\[intro\_a2vi\]) represents the global clock, and thus, grows unbounded. We analyze the algorithm with respect to this clock.*]{} Let us say that agent-2 has been updated $34$ times when the global clock has been updated $50$ times. Then we have $\nu(50,2) = 34$.\ (vi) $\mathcal{A}$ is the approximation operator (deep neural network), $\{a(n)\}_{n \ge 0}$ is the given step-size sequence and $\{M_{n+1}\}_{n \ge 0}$ is a Martingale difference noise sequence. Let us call recursion (\[intro\_a2vi\]) *asynchronous approximate value iteration* (A2VI). If the optimal cost-to-go vector associated with agent-$i$ is $J^*(i)$, then $J^* = (J^*(1), \dots, J^*(d))$ is the optimal cost-to-go vector associated with the $d$-agent system. The objective is to find $J^*$ in an “asynchronous” manner. At any step $n$, the information at agent-$i$ from agent-$j$ is $\tau_{ji}(n)$ steps old. The stochastic delay process $\tau$ could be unbounded. However, we make certain standard assumptions on their moments, see $(A2)(v)$ in Section \[sec\_delay\]. Further, we assume that the agent-updates are all in the same order of magnitude, asymptotically, see $(S2)$ in Section \[sec\_stability\_assumptions\]. Under these assumptions, we shall show that (\[intro\_a2vi\]) is stable and converges to a fixed point of the perturbed Bellman operator. The reader is referred to Section \[sec\_a2vi\] for details. *It is important to note that we do not distinguish between stochastic shortest path (with no discounting) and infinite horizon discounted cost problems. 
Only the definition of the Bellman operator changes accordingly.* The reader is referred to [@BertsekasBook] for details on how the Bellman operator changes. The [**policy gradient**]{} algorithm is another important reinforcement learning approach, see [@sutton2000]. This method assumes a parameterization $\theta$ of the policy space $\pi$. Finding an optimal policy amounts to finding a $\hat{\theta}$ that locally minimizes the parameterized policy function $\pi(\cdotp)$. We are interested in adapting the policy gradient algorithm to the aforementioned multi-agent setting: $$\label{intro_a2pi} \begin{split} &\theta_{n+1}(i) = \theta_n (i) - a(\nu(n,i)) I\{ i \in Y_n\} \\ &\left( (\mathcal{A}\nabla_\theta \pi)_i(\theta_{n - \tau_{1i}(n)}(1), \ldots, \theta_{n - \tau_{di}(n)}(d)) + M_{n+1}(i) \right), n \ge 0. \end{split}$$ In the above equation, $\theta$ is the parameter associated with policy $\pi$ and $\mathcal{A}$ is the approximation of the policy function gradient. There may be a multitude of reasons for using $\mathcal{A}$. Most important among these is the non-availability of gradients, $\nabla_\theta \pi(\cdotp)$, due to the use of gradient estimators or the non-differentiability of $\pi$. In the latter case, one may work with the sub-gradient and its approximations instead of the gradient itself. Note that even a quick visual inspection reveals the similarity in the forms of recursions (\[intro\_a2vi\]) and (\[intro\_a2pi\]). We call (\[intro\_a2pi\]) the *asynchronous approximate policy gradient* (A2PG) algorithm, see Section \[sec\_a2pi\] for details. Organization ------------ The organization of the remainder of this paper is as follows: - In the following section, we list the definitions and notations used throughout. - In Section \[sec\_assumptions\] we present the assumptions involved in the analysis of asynchronous stochastic approximations with asymptotically bounded, biased errors, *i.e.,* recursion (\[asmp\_aasaa\]).
- In Sections \[sec\_nodelay\], \[sec\_delay\] and \[sec\_balance\], we present a convergence analysis of (\[asmp\_aasaa\]) under the assumptions presented in Section \[sec\_assumptions\]. The main technical result of this paper, Theorem \[delay\_main\], is enunciated in Section \[sec\_delay\]. This result is then moulded, through the use of Borkar’s balanced step-sizes [@Borkar_asynchronous], into the desired statement in Section \[sec\_balance\]. - In Section \[sec\_stability\], we show that *the stability of the algorithms remains unaffected when the approximation errors are guaranteed to be asymptotically bounded, albeit non-diminishing and possibly biased*. - In Section \[sec\_a2vi\] our theory is used to understand the long-term behavior of A2VI. *We show that A2VI converges to a fixed point of the perturbed Bellman operator, when Borkar’s balanced step-sizes are utilized. We also establish a relationship between these fixed points and the approximation errors.* - In Section \[sec\_a2pi\] we briefly outline a similar analysis for A2PG. *We show that A2PG converges to a small neighborhood of local minima of the parameterized policy function $\pi(\cdotp)$. This neighborhood is shown to be related to the approximation errors.* - In Section \[sec\_exp\] we present experimental results to support our theory. - In Section \[sec\_verify\] we discuss the verifiability of assumption $(S5)$. Finally, we summarize our contributions in Section \[sec\_conclusion\]. Definitions and Notations {#sec_definitions} ========================= General ------- - ****\[Set closure\]**** Given $A \subset \mathbb{R}^d$, $\overline{A}$ is used to represent the closure of $A$. - ****\[Limiting set\]**** Given $\{ x_n \}_{n \ge 0} \subset \mathbb{R}^d$, its limiting set is given by $\underset{N \ge 0}{\bigcap} \overline{\{x_n \mid n \ge N\}}$.
- ****\[Distance between point and set\]**** Given $x \in \mathbb{R}^d$ and $A \subseteq \mathbb{R}^d$, the distance between $x$ and $A$ is given by: $d(x, A) := \inf \{\lVert x - y \rVert \ | \ y \in A\}$. - ****\[$\delta$-open neighborhood ($N^\delta(\cdotp)$)\]**** We define the $\delta$-*open neighborhood* of $A \subset \mathbb{R}^d$ by $N^\delta (A) := \{x \ |\ d(x,A) < \delta \}$. - ****\[Balls of radius $r$ $(B_r(0)$ and $\overline{B}_r(0))$\]**** The open ball of radius $r$ around the origin is represented by $B_r(0)$, while the closed ball is represented by $\overline{B}_r(0)$. - ****\[Projection map\]**** Given $\mathcal{B}$ and $\mathcal{C}$ subsets of $\mathbb{R}^d$, the projection map ${\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}}: \mathbb{R}^d \to \{\text{subsets of }\mathbb{R}^d\}$ is given by: $${\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}}(x) := \begin{cases} \{x\} \text{, if $x \in \mathcal{C}$} \\ \{y \mid d(y, x) = d(x, \overline{\mathcal{B}}), \ y \in \overline{\mathcal{B}} \} \text{, otherwise}. \end{cases}$$ Related to norms and function spaces ------------------------------------ - ****\[Euclidean norm ($\lVert \cdotp \rVert$)\]**** Given $x \in \mathbb{R}^d$, $\lVert x \rVert$ is used to represent the Euclidean norm of $x$, i.e., $\lVert x \rVert = \sqrt{x_1 ^2 + \ldots + x_d ^2}$.
- **\[Weighted max-norm ($\lVert \cdotp \rVert_{\nu}$)\]** Given $\nu = (\nu_1, \ldots, \nu_d)$ such that $\nu_i > 0$ for $1 \le i \le d$, the weighted max-norm of any $x = (x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$ is given by: $\lVert x \rVert_\nu := \max \left\{ \frac{|x_i|}{\nu_i} \mid 1 \le i \le d \right\}$.\ - **\[Weighted p-norm ($\lVert \cdotp \rVert_{\omega, p}$)\]** Given $\omega = (\omega_1, \ldots, \omega_d)$ such that $\omega_i > 0$ for $1 \le i \le d$, and $p \ge 1$, the weighted p-norm of any $x \in \mathbb{R}^d$ is defined by: $\lVert x \rVert_{\omega, p} := \left( \sum \limits_{i=1}^d \lvert \omega_i x_i \rvert^p \right)^{1/p}.$ - ****\[Square integrable functions\]**** $\mathbb{L}^2([0,T], \mathbb{R}^d)$ is used to represent the set of all square integrable functions with domain $[0,T]$ and range $\mathbb{R}^d$. In other words, $$\mathbb{L}^2([0,T], \mathbb{R}^d) = \left\{ f:[0,T] \to \mathbb{R}^d \ \mathlarger{\mathlarger{\mid}} \int_0 ^T \lVert f(t) \rVert ^2 dt < \infty \right\}.$$ - ****\[Càdlàg functions\]**** $D([0,T], \mathbb{R}^d)$ is used to represent the set of all Càdlàg functions with domain \[0,T\] and range $\mathbb{R}^d$. This is the set of all functions that are right continuous with left limits. - ****\[Lipschitz continuity\]**** A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is Lipschitz continuous [*iff*]{} $\exists L > 0$ such that $\forall x, y \in \mathbb{R}^n$ $\lVert f(x) - f(y) \rVert \le L \lVert x - y \rVert$. - ****\[Upper-semicontinuous map\]**** We say that a set-valued map $H$ is upper-semicontinuous, if for given sequences $\{ x_{n} \}_{n \ge 1}$ (in $\mathbb{R}^{n}$) and $\{ y_{n} \}_{n \ge 1}$ (in $\mathbb{R}^{m}$) such that $x_{n} \to x$, $y_{n} \to y$ and $y_{n} \in H(x_{n})$, $n \ge 1$, we have $y \in H(x)$. 
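A direct computation may help fix the two weighted norms; the vectors and weights below are arbitrary examples:

```python
import numpy as np

def weighted_max_norm(x, nu):
    """||x||_nu = max_i |x_i| / nu_i, with nu_i > 0."""
    return np.max(np.abs(x) / nu)

def weighted_p_norm(x, omega, p):
    """||x||_{omega,p} = (sum_i |omega_i x_i|^p)^(1/p), with omega_i > 0."""
    return np.sum(np.abs(omega * x) ** p) ** (1.0 / p)

x = np.array([3.0, -4.0, 1.0])
nu = np.array([1.0, 2.0, 0.5])
omega = np.array([1.0, 0.5, 2.0])

print(weighted_max_norm(x, nu))      # max(3/1, 4/2, 1/0.5) = 3.0
print(weighted_p_norm(x, omega, 2))  # sqrt(9 + 4 + 4) = sqrt(17)
```

With $\nu = (1, \ldots, 1)$ the weighted max-norm reduces to the usual max-norm, and with $\omega = (1, \ldots, 1)$, $p = 2$, the weighted p-norm reduces to the Euclidean norm.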
Related to differential inclusions ---------------------------------- - ****\[Marchaud Map\]**** A set-valued map $H: \mathbb{R}^n \to \{\text{subsets of }\mathbb{R}^m\}$ is called *Marchaud* if it satisfies the following properties: **(i)** for each $x \in \mathbb{R}^{n}$, $H(x)$ is a convex and compact set; **(ii)** *(point-wise boundedness)* for each $x \in \mathbb{R}^{n}$, $\underset{w \in H(x)}{\sup} \lVert w \rVert < K \left( 1 + \lVert x \rVert \right)$ for some $K > 0$; **(iii)** $H$ is *upper-semicontinuous*. Let $H$ be a Marchaud map on $\mathbb{R}^d$. The differential inclusion (DI) given by $$\label{di} \dot{x} \ \in \ H(x)$$ is guaranteed to have at least one solution that is absolutely continuous. The reader is referred to [@aubin2012differential] for more details. We say that $\textbf{x} \in \Sigma$ if $\textbf{x}$ is an absolutely continuous map that satisfies (\[di\]). The *set-valued semiflow* $\Phi$ associated with (\[di\]) is defined on $[0, + \infty) \times \mathbb{R}^d$ as:\ $\Phi_t(x) = \{\textbf{x}(t) \ | \ \textbf{x} \in \Sigma, \ \textbf{x}(0) = x \}$.\ For $B \times M \subset [0, + \infty) \times \mathbb{R}^d$, we define $$\nonumber \Phi_B(M) = \underset{t\in B,\ x \in M}{\bigcup} \Phi_t (x).$$ - ****\[Invariant set\]**** $M \subseteq \mathbb{R}^d$ is *invariant* for the DI if for every $x \in M$ there exists a trajectory $\textbf{x} \in \Sigma$ such that $\textbf{x}(0) = x$ and $\textbf{x}(t) \in M$ for all $t > 0$. Note that the definition of invariant set used in this paper is the same as that of positive invariant set in [@Benaim05] and [@BorkarBook].
- ****\[Internally chain transitive set\]**** An invariant set $M \subset \mathbb{R}^{d}$ is said to be internally chain transitive if $M$ is compact and, for every $x, y \in M$, $\epsilon >0$ and $T > 0$, we have the following: There exist $n$ and $\Phi^{1}, \ldots, \Phi^{n}$ that are $n$ solutions to the differential inclusion $\dot{x}(t) \in H(x(t))$, points $x_1(=x), \ldots, x_{n+1} (=y) \in M$ and $n$ real numbers $t_{1}, t_{2}, \ldots, t_{n}$ greater than $T$ such that: $\Phi^i_{t_{i}}(x_i) \in N^\epsilon(x_{i+1})$ and $\Phi^{i}_{[0, t_{i}]}(x_i) \subset M$ for $1 \le i \le n$. The sequence $(x_{1}(=x), \ldots, x_{n+1}(=y))$ is called an $(\epsilon, T)$ chain in $M$ from $x$ to $y$. - ****\[Attracting set & fundamental neighborhood\]**** $A \subseteq \mathbb{R}^d$ is *attracting*, if it is compact and there exists a neighborhood $U$ such that for any $\epsilon > 0$, $\exists \ T(\epsilon) \ge 0$ with $\Phi_{[T(\epsilon), +\infty)}(U) \subset N^{\epsilon}(A)$. Such a $U$ is called the *fundamental neighborhood* of $A$. The *basin of attraction* of $A$ is given by $B(A) = \{x \ | \ \underset{t \ge 0}{ \cap} \overline{\Phi_{[t, \infty)}(x)} \subset A\}$. - ****\[Attractor set\]**** In addition to being compact, if the *attracting set* is also invariant, then it is called an *attractor*. - ****\[Inward directing sets, [@ramaswamy2017]\]**** Given a differential inclusion $\dot{x}(t) \in H(x(t))$, an open set $\mathcal{O}$ is said to be an inward directing set with respect to the aforementioned differential inclusion, if $\Phi_t(x) \subseteq \mathcal{O}$, $t >0$, whenever $x \in \overline{\mathcal{O}}$. Specifically, if $\mathcal{O}$ is inward directing, then any solution to the DI with starting point at the boundary of $\mathcal{O}$ is “directed inwards”, into $\mathcal{O}$. 
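These notions can be illustrated numerically. The map $H(x) = \{-x + w \mid |w| \le \epsilon\}$ is Marchaud (convex compact values, linear growth, upper-semicontinuity), and an Euler scheme for the associated DI, picking an arbitrary element of $H(x)$ at every step, is driven into a small neighborhood of the attracting set $\{0\}$. All numerical parameters below are illustrative:

```python
import numpy as np

eps = 0.1

def H(x):
    """Interval endpoints of the Marchaud map H(x) = {-x + w : |w| <= eps}."""
    return (-x - eps, -x + eps)

# Euler scheme for the DI xdot in H(x): at each step, select any element.
rng = np.random.default_rng(2)
dt, x = 0.01, 5.0
for _ in range(5000):                 # total time horizon T = 50
    lo, hi = H(x)
    v = rng.uniform(lo, hi)           # arbitrary measurable selection
    x += dt * v

print(abs(x))  # the trajectory ends inside a small neighborhood of 0
```

Whatever selection is made at each step, every trajectory of this DI eventually enters $N^\delta(\{0\})$ for $\delta$ slightly larger than $\epsilon$, which is the sense in which $\{0\}$ admits an attracting neighborhood here.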
Assumptions for convergence analysis {#sec_assumptions} ==================================== In this paper we are interested in the complete analysis of [**asynchronous SAs with non-diminishing biased additive errors**]{}. The general iterative structure of such algorithms is given by: $$\label{asmp_aasaa} \begin{split} &x_{n+1}(i) = x_n(i) + a( \nu (n, i)) I(i \in Y_n) \\ &\left[ (\mathcal{A} f)_i (x_{n - \tau_{1 i}(n)}(1), \ldots, x_{n - \tau_{d i}(n)}(d)) + M_{n+1}(i) \right], \text{ where} \end{split}$$ 1. $x_n = (x_n(1), \ldots, x_n(d)) \in \mathbb{R}^d$, $n \ge 0$. 2. $f: \mathbb{R}^d \to \mathbb{R}^d$ is a Lipschitz continuous objective function. The terms $\tau_{ji}(n)$, $Y_n$, $\mathcal{A}$, $\{a(n)\}_{n \ge 0}$ and $M_{n} = (M_n(1), \ldots, M_n (d))$ are as defined for equation (\[intro\_a2vi\]) in Section \[intro\_a2\]. It is worth noting that A2VI, (\[intro\_a2vi\]), and A2PG, (\[intro\_a2pi\]), are structurally identical to (\[asmp\_aasaa\]). We first present an analysis of (\[asmp\_aasaa\]). Later, this analysis is transcribed to obtain the desired theory for A2VI and A2PG. Additionally, stronger conclusions are drawn that are specific to A2VI and A2PG. Before proceeding with the analysis, the assumptions involved in the convergence analysis of (\[asmp\_aasaa\]) are listed. - [*$(A1)$ $f: \mathbb{R}^d \to \mathbb{R}^d$ is a Lipschitz continuous function with Lipschitz constant $L$. Further, $\mathcal{A}$ is such that $\mathcal{A}f(x_n) \in f(x_n) + \overline{B}_\epsilon (0)$ for all $n \ge N$, where $N$ may be sample path dependent. Note that $\overline{B}_\epsilon(0)$ is a closed ball of radius $\epsilon$ centered at the origin.
Here $\epsilon > 0$ is a fixed upper bound on the norm of the asymptotic approximation errors.*]{} - [*$(A2)$ The step-size sequence $\{a(n)\}_{n \ge 0}$ satisfies the following conditions:*]{} - [*$\sum \limits_{n \ge 0} a(n) = \infty$ and $\sum \limits_{n \ge 0} a(n) ^2 < \infty$.*]{} - [*$\limsup \limits_{n \to \infty} \sup \limits_{y \in [x, 1]} \frac{a( \lfloor yn \rfloor)}{a(n)} < \infty$ for $0 < x \le 1$.*]{} - [*$(A3)$ $\frac{n - \tau_{ij}(n)}{n} \to 1$ a.s. for every $1 \le i < j \le d$.*]{} - [*$(A4)$ $\sup \limits_{n \ge 0} \lVert x_n \rVert < \infty$ a.s.*]{} - [*$(A5)$ $\{M_{n+1}\}_{n \ge 0}$ is a square integrable martingale difference sequence such that*]{} - [*$E \left[ M_{n+1}(i) \mid \mathcal{F}_n\right] = 0$.*]{} - [*$E \left[ \lVert M_{n+1}(i) \rVert ^2 \mid \mathcal{F}_n\right] \le K(1 + \sup \limits_{m \le n} \lVert x_m \rVert^2 ),$ where*]{} [*$\mathcal{F}_n := \sigma \left\langle x_m, M_m, Y_m, \tau_{ij}(m); 1 \le i, j \le d, m \le n \right\rangle$, $1 \le i \le d$, $n \ge 0$ and $K >0$ is some fixed constant.*]{} We assume that all the agents are asynchronous. However, if we want the algorithm to learn effectively, then certain causal assumptions are necessary. $(A3)$ is one such assumption. Colloquially, $(A3)$ requires that the information delay between agents at time $n$ is in $o(n)$, where $o(\cdotp)$ is the standard *Little-O* notation. Without loss of generality, we assume that $\tau_{ii}(n) = 0$ for all $i$ and $n$. In other words, we assume that an agent does not experience delays in accessing its own local information. Brief overview of the steps involved in our analysis {#brief-overview-of-the-steps-involved-in-our-analysis .unnumbered} ---------------------------------------------------- - In Section \[sec\_convergence\], convergence properties of (\[asmp\_aasaa\]) are analyzed under the almost sure boundedness assumption, i.e., $(A4)$. This analysis is presented in two stages.
In the first stage, presented in Section \[sec\_nodelay\], it is assumed that $\tau_{ij}(n) = 0$ for all $i,\ j$ and $n$, *i.e.,* [**there are no communication delays**]{}. In the second stage, presented in Section \[sec\_delay\], the effect of communication delays is considered. Specifically, it is shown that the errors due to delayed communications do not affect the analysis in Section \[sec\_nodelay\]. - In Section \[sec\_stability\], we swap out $(A4)$ in favor of verifiable conditions which guarantee stability of (\[asmp\_aasaa\]). [**We do this by presenting assumptions that imply $(A4)$**]{}. These assumptions are compatible with the conditions listed earlier in this section. Put together, they constitute an analytic framework for studying stability and convergence of (\[asmp\_aasaa\]). \[asmp\_remark1\] In typical DeepRL applications, the approximation operator $\mathcal{A}$ is a DNN. The objective function $f$ is typically one among the following: value function, Q-value function, policy function and Bellman operator. The operator $\mathcal{A}$ is trained in an online manner using loss functions that reduce the “prediction errors”. The neural network architecture is fixed by the experimenter without complete knowledge of $f$. This certainly limits how well the chosen neural network can approximate $f$. In other words, there may not exist a set of network weights such that the approximation errors are arbitrarily small. Hence, it is reasonable to merely hope that the errors do not grow over time. This is codified in $(A1)$ as $\limsup \limits_{n \to \infty} \ \lVert \mathcal{A}f(x_n) - f(x_n) \rVert \le \epsilon$ a.s. for some $\epsilon > 0$. Convergence analysis {#sec_convergence} ==================== We are now ready to analyze the convergence of (\[asmp\_aasaa\]) under $(A1)$-$(A5)$. We begin our analysis by making the additional assumption that there are no communication delays, i.e., $\tau_{ji}(n) = 0 \ \forall i, j, n$.
This allows us to focus on the effect of asynchronicity between agents. Then, in Section \[sec\_delay\] we show that the analysis in Section \[sec\_nodelay\] is unaffected by the errors due to delayed communications. Analysis with no delays {#sec_nodelay} ----------------------- Assuming $\tau_{ji}(n) = 0$ $\forall i, j, n$, equation (\[asmp\_aasaa\]) becomes: $$\label{nodelay_sa} \begin{split} x_{n+1}(i) &= x_n (i) + a( \nu(n, i)) I(i \in Y_n) \\ &\left[(\mathcal{A}f)_i(x_n(1), \ldots, x_n(d)) + M_{n+1}(i) \right]. \end{split}$$ For $n \ge 0$, define $\overline{a}(n) := \max \limits_{i \in Y_n} a(\nu(n, i))$ and $q(n, i) := \frac{a(\nu(n, i))}{\overline{a}(n)} I(i \in Y_n)$. It can be shown that $\sum \limits_{n \ge 0} \overline{a}(n) = \infty$ and $\sum \limits_{n \ge 0} \overline{a}(n)^2 < \infty$. Equation (\[nodelay\_sa\]) can be further rewritten as follows: $$\begin{split} x_{n+1}(i) &= x_n(i) + \overline{a}(n) q(n, i) \\ &\left[ f_i(x_n(1), \ldots, x_n(d)) + \epsilon_n(i) + M_{n+1}(i)\right], \end{split}$$ where $\epsilon_n = (\epsilon_n(1), \ldots, \epsilon_n(d))$ is the approximation error at stage $n$, *i.e.,* $\epsilon_n = \mathcal{A}f(x_n) - f(x_n)$. It follows from $(A1)$ that $\limsup \limits_{n \to \infty} \lVert \epsilon_n \rVert \le \epsilon$ for a fixed $\epsilon > 0$. Without loss of generality, we may say that $\lVert \epsilon_n \rVert \le \epsilon$ for all $n \ge 0$, even though we only have $\lVert \epsilon_n \rVert \le \epsilon$ for all $n \ge N$ (for a sample path dependent $N$). This is because we are only interested in the asymptotic behaviour of (\[nodelay\_sa\]). In other words, $\{x_n\}_{n \ge 0}$ and $\{x_n\}_{n \ge N}$ (the subsequence starting at a sample path dependent $N$) have identical asymptotic properties. For $n \ge 1$, define $t(0) := 0$, $t(n) := \sum \limits_{m = 0}^{n -1} \overline{a}(m)$.
For $t \in [t(n), t(n+1))$, define $\overline{x}(t) := x_n$, $\lambda(t) := diag(q(n, 1), \ldots, q(n, d))$ and $\overline{\epsilon}(t) := \epsilon_n$. The notation $diag(a_1, \ldots, a_d)$ is used to denote the diagonal $d \times d$ matrix given by $$\begin{bmatrix} a_1 & 0 & \dots \\ \vdots & \ddots & \\ 0 & & a_d \end{bmatrix}.$$ In the above, $\{\overline{a}(n)\}_{n \ge 0}$ is used to divide the time-axis. The quantity $\sum \limits_{m=0}^n q(m, i)$ captures the fraction of time $\left(\sum \limits_{m=0}^n \overline{a}(m) \right)$ that agent $i$ is active. Thus, $q(m, \cdotp)$ captures the relative frequency of the agent updates. For more details the reader is referred to Borkar [@Borkar_asynchronous]. Let us recall (\[nodelay\_sa\]) in the following useful form: $$\label{nodelay_useful} \begin{split} x_{n+1} =\ &x_n + \overline{a}(n)\\ &\lambda(t(n)) \left[ f(x_n) + \epsilon_n + M_{n+1}\right]. \end{split}$$ It follows from $(A4), \ (A5)$ and $\sum \limits_{n=0}^{\infty} \overline{a}(n)^2 < \infty$, that $\sum \limits_{n \ge 0} \lVert \overline{a}(n) M_{n+1} \rVert ^2 < \infty$. In other words, the quadratic variation process associated with $\xi_n := \sum \limits_{m = 0}^n \overline{a}(m) \lambda(t(m)) M_{m+1}$, $n \ge 0$, is bounded almost surely. From this we may conclude that the martingale noise sequence, $\{\xi_n\}_{n \ge 0}$, is convergent almost surely. For a proof of the aforementioned, the reader is referred to *Chapter 2* of Borkar [@BorkarBook]. Given the above, the following lemma is immediate. \[nodelay\_noise\] $\lim \limits_{n \to \infty} \xi_{n} < \infty$ a.s., where\ $\xi_n = \sum \limits_{m = 0}^n \overline{a}(m) \lambda(t(m)) M_{m+1}$. In other words, the martingale difference noise sequence is convergent.
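This convergence is easy to observe empirically. The sketch below uses i.i.d. Gaussian noise as a stand-in for the martingale differences and $\overline{a}(n) = 1/(n+1)$ (both illustrative choices); the fluctuations of the partial sums $\xi_n$ die out as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
a = 1.0 / np.arange(1, N + 1)     # square-summable step sizes
M = rng.standard_normal(N)        # i.i.d. stand-in for martingale differences
xi = np.cumsum(a * M)             # xi_n = sum_{m<=n} a(m) M_{m+1}

# Tail oscillations shrink: compare late-stage range to the early one.
early = xi[:1000].max() - xi[:1000].min()
late = xi[-1000:].max() - xi[-1000:].min()
print(early, late)
```

The late-stage range is orders of magnitude smaller than the early one, consistent with the almost sure convergence of $\{\xi_n\}_{n \ge 0}$ guaranteed by the square summability of the step sizes.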
For $s \ge 0$, define $$x^s(t) := \overline{x}(s) + \int \limits_s ^{s+t} \lambda(\tau) \left( f(\overline{x}(\tau)) + \overline{\epsilon}(\tau) \right) \ d \tau.$$ Then $x^s (\cdotp)$ is a solution to the non-autonomous DI $\dot{x}(t) \in \lambda(t + s) f(x(t)) + \overline{B}_\epsilon (0)$, with $\overline{x}(s)$ as its starting point. It follows from the definitions of $\overline{x}(\cdotp)$, $x^s(\cdotp)$, and from Lemma \[nodelay\_noise\] that $$\label{nodelay_aa_eq} \lim \limits_{s \to \infty} \sup \limits_{t \in [s, s+T]} \lVert \overline{x}(t) - x^s(t) \rVert = 0\ a.s.$$ Therefore, the asymptotic behavior of (\[nodelay\_sa\]) and (\[nodelay\_useful\]) can be determined by studying the family of functions given by $\left\{ x^s([0, T]) {\Large \mid} s \ge 0, \ T > 0 \right\}$. For any fixed $T > 0$, the set $\{x^s([0,T]) \mid s \ge 0\}$ can be viewed as a subset of $D([0,T], \mathbb{R}^d)$, equipped with the Skorohod topology. It follows from the Arzela-Ascoli theorem for $D([0,T], \mathbb{R}^d)$ that the aforementioned subset is relatively compact. For details on Càdlàg spaces, the Skorohod topology and the Arzela-Ascoli theorem, the reader is referred to Billingsley [@Billingsley]. It now follows from (\[nodelay\_aa\_eq\]) that $\{x^s([0,T]) \mid s \ge 0\}$ and $\{\overline{x}([s, s+T]) \mid s \ge 0\}$ have the same limit points in $D([0,T], \mathbb{R}^d)$. Hence, to find any subsequential limit of $\{\overline{x}(s + \cdotp) \mid s \ge 0\}$, we merely need to consider the corresponding subsequence in $\{x^s([0,T]) \mid s \ge 0\}$. Finally, since $T$ is arbitrary, $\{\overline{x}(s + \cdotp) \mid s \ge 0\}$ is relatively compact in $D([0, \infty), \mathbb{R}^d)$.
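The tracking phenomenon behind (\[nodelay\_aa\_eq\]) can be observed in a toy simulation of recursion (\[nodelay\_sa\]): with a Lipschitz mean field $f$ having a globally attracting point, asynchronous agent updates, and a biased approximation error of norm at most $\epsilon$, the iterates settle into an $O(\epsilon)$-neighborhood of the equilibrium. The choice of $f$, the bias, and the activation probability below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps_bias = 4, 0.05
x_star = np.array([1.0, -2.0, 0.5, 3.0])  # equilibrium of the mean field

def f(x):
    """Toy Lipschitz mean field whose ODE has the attracting point x_star."""
    return x_star - x

x = np.zeros(d)
nu = np.zeros(d, dtype=int)               # per-agent local clocks nu(n, i)
for n in range(20_000):
    active = rng.random(d) < 0.5          # Y_n: random active set
    for i in np.where(active)[0]:
        a = 1.0 / (nu[i] + 1)             # step size driven by local clock
        noise = 0.1 * rng.standard_normal()
        # (A f)_i = f_i + bias: an asymptotically bounded, biased error
        x[i] += a * (f(x)[i] + eps_bias + noise)
        nu[i] += 1

print(np.abs(x - x_star).max())  # within O(eps_bias) of x_star
```

The iterates do not converge to $x^*$ itself; they converge to a point within roughly `eps_bias` of it, which is exactly the perturbed limiting behavior that the surrounding analysis makes precise.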
\[nodelay\_limitset\] Almost surely any limit point of $\{\overline{x}(s + \cdotp) \mid s \ge 0\}$ in $D([0, \infty), \mathbb{R}^d)$ is a solution to the non-autonomous $DI$ $\dot{x}(t) \in \Lambda(t) f(x(t)) + \overline{B}_\epsilon(0)$, where $\Lambda(\cdotp)$ is a $d \times d$-dimensional diagonal matrix-valued measurable function with diagonal entries in $[0,1]$. As in the proof of *Theorem 2, Chapter 7* of Borkar [@BorkarBook], we view $\lambda(\cdotp)$ as an element of $\mathcal{V}$, where $\mathcal{V}$ is the space of measurable maps $y(\cdotp): [0, \infty) \to [0, 1]^d$ with the coarsest topology that renders continuous, the maps $$y(\cdotp) \to \int \limits_0 ^t \langle g(s), y(s) \rangle ds,$$ for all $t > 0$, $g(\cdotp) \in L_2 ([0,T], \mathbb{R}^d)$. Define $\hat{\epsilon}_s (t) := \lambda(t) \overline{\epsilon}(t)$ for all $t \ge 0$. Since $\hat{\epsilon}_s (\cdotp)$ is measurable for every $s \ge 0$ and $\sup \limits_{s \ge 0} \lVert \hat{\epsilon}_s \rVert < \infty$, we obtain that $\{\hat{\epsilon}_s([0, T]) \mid s \ge 0\}$ is relatively compact in $L_2([0, T], \mathbb{R}^d)$. If necessary, by choosing a common subsequence of $\{\hat{\epsilon}_s([0,T]) \mid s \ge 0\}$ and $\{\lambda([s, s+T]) \mid s \ge 0\}$, we can show that any limit of $\{\overline{x}(s + \cdotp) \mid s \ge 0\}$, in $D([0, T], \mathbb{R}^d)$, is of the form: $$x(t) = x(0) + \int \limits_0 ^t \Lambda(\tau) f(x(\tau)) d\tau + \int \limits_0 ^t \epsilon(\tau) d \tau$$ $$\text{\textbf{or}}$$ $$x(t) = x(0) + \int \limits_0 ^t \left[ \Lambda(\tau) f(x(\tau)) + \epsilon(\tau) \right] d \tau,$$ where $\epsilon(\cdotp)$ and $\Lambda(\cdotp)$ are the subsequential limits of $\{\hat{\epsilon}_s([0,T]) \mid s \ge 0\}$ and $\{\lambda([s, s+T]) \mid s \ge 0\}$ respectively. Note that $\lVert \epsilon(t) \rVert \le \epsilon$, for $t \ge 0$, and that $\epsilon(\cdotp)$ is the weak limit in $L_2([0, T], \mathbb{R}^d)$, as $s \to \infty$. 
Also note that $\Lambda(\cdotp)$ is the limit in $\mathcal{V}$, equipped with the coarsest topology described above. The above lemma states that the algorithm tracks a solution to the non-autonomous DI given by $\dot{x}(t) \in \Lambda(t)f(x(t)) + \overline{B}_\epsilon(0)$. We needed to associate a DI, and not an o.d.e., since the algorithm allows for asymptotically biased approximation errors. The non-autonomous $\Lambda(\cdotp)$ is a consequence of asynchronicity. It is clear from the proof of Lemma \[nodelay\_limitset\] that $\Lambda(\cdotp)$ captures, in a limiting sense, the relative update frequencies of the various agents involved. Extension to account for delays {#sec_delay} ------------------------------- In this section, we show that the statement of Lemma \[nodelay\_limitset\] is true even when $\tau_{ij}(n) \neq 0$. We present additional assumptions on the step-size sequence and the delay process $\tau$. Under these additional assumptions we show that the analysis in Section \[sec\_nodelay\] remains unaffected by the errors that arise from delayed communications. Before proceeding, we note that a methodology to deal with the effect of delays separately was developed by Borkar in 1998, see [@Borkar_asynchronous]. We use similar techniques here. In order to avoid redundancies, we only provide additional details and a brief outline of the proof. The reader is referred to [@Borkar_asynchronous] or [@BorkarBook] for details. Recall the algorithm under consideration: $$\label{delay_aa_eq} \begin{split} &x_{n+1}(i) = x_n(i) + a( \nu (n, i)) I(i \in Y_n) \\ &\left[ (\mathcal{A} f)_i (x_{n - \tau_{1 i}(n)}(1), \ldots, x_{n - \tau_{d i}(n)}(d)) + M_{n+1}(i) \right]. 
\end{split}$$ The previously mentioned assumptions are listed below as additional refinements in $(A2)$:\ [**(A2)(iii)**]{} $\sup \limits_{n \ge 0} a(n) \le 1$.\ [**(A2)(iv)**]{} For $m \le n$, we have $a(n) \le \kappa a(m)$, where $\kappa > 0$.\ [**(A2)(v)**]{} There exists $\eta > 0$ and a non-negative integer-valued random variable $\overline{\tau}$ such that: - $a(n) = o(n ^{- \eta})$. - $\overline{\tau}$ stochastically dominates all $\tau_{kl}(n)$ and satisfies $$E\left[ \overline{\tau}^{1/\eta} \right] < \infty.$$ In Lemma \[nodelay\_limitset\], we showed that (\[delay\_aa\_eq\]) tracks a solution to the non-autonomous DI: $$\label{delay_NADI} \dot{x}(t) \in \Lambda(t) f(x(t)) + \overline{B}_\epsilon(0).$$ In what follows we outline the proof of why (\[delay\_aa\_eq\]) still tracks a solution to (\[delay\_NADI\]) even in the presence of delayed communications. Specifically, it is shown that the “effect” due to delays vanishes in the order of the step-sizes. Let us consider the following quantity: $$\begin{split} &a(\nu(n, i)) I(i \in Y_n) \\ &\left| f_i(x_{n - \tau_{1i}(n)}(1), \ldots, x_{n - \tau_{di}(n)}(d)) - f_i(x_n(1), \ldots, x_n(d)) \right|. \end{split}$$ There are no error terms due to the approximation operator $\mathcal{A}$, since they are already considered in the analysis presented in Section \[sec\_nodelay\]. Since $f$ is Lipschitz continuous, it is enough to find bounds for the terms $$a(\nu(n, i)) \left| x_n(j) - x_{n - \tau_{ji}(n)}(j) \right| \text{ for every $i$ and $j$.}$$ Clearly, the above term is bounded by $$a(\nu(n,i)) \sum \limits_{m = n - \tau_{ji}(n)}^{n - 1} \left| x_{m+1}(j) - x_m(j) \right|.$$ Using (\[delay\_aa\_eq\]) and the Lipschitz property of $f$, we get the following bound: $$a(\nu(n,i)) \sum \limits_{m = n - \tau_{ji}(n)}^{n - 1} C a(m) \le C a(\nu(n,i)) \tau_{ji}(n),$$ for some constant $C > 0$. 
Our task is now reduced to showing that $a(\nu(n,i)) \tau_{ji}(n) = o(1)$, which in turn follows from $$P(\tau_{ji}(n) > n ^\eta \ i.o.) = 0.$$ The above equation follows from $(A2)(v)$ and the Borel-Cantelli lemma. The following theorem is an immediate consequence of the analysis done hitherto. \[delay\_main\] Under assumptions $(A1)$-$(A5)$, the asynchronous approximation algorithm given by (\[asmp\_aasaa\]) has the same limiting set as the non-autonomous DI given by $\dot{x}(t) \in \Lambda(t) f(x(t)) + \overline{B}_\epsilon(0)$, where $\Lambda(t)$ is some matrix-valued measurable process. Further, for every $t \ge 0$, $\Lambda(t)$ is a diagonal matrix with entries in $[0,1]$. Balanced step-size sequences {#sec_balance} ---------------------------- A drawback in applying the above theorem in practice is the fact that the $DI$ (\[delay\_NADI\]) is non-autonomous. Further, $\Lambda(\cdotp)$ is not exactly known. Borkar [@Borkar_asynchronous] solved this problem through the use of “balanced step-size sequences”. A step-size sequence $\{a(\nu(n,i))\}_{n \ge 0, 1 \le i \le d}$ is said to be balanced if there exist $a_{ij} > 0$ for every pair $i$, $j$ such that $$\lim \limits_{n \to \infty} \frac{\sum \limits_{m=0}^n a(\nu(m,i))}{\sum \limits_{m=0}^n a(\nu(m,j))} = a_{ij}.$$ Typical diminishing step-size sequences are balanced provided all agents update their parameters with the same frequency; since we assume this in the present paper, diminishing step-size sequences are balanced here. When such balanced step-sizes are used, one obtains $\Lambda(t) = diag(1/d, \ldots, 1/d)$ for all $t \ge 0$, see Theorem 3.2 of [@Borkar_asynchronous] for details. 
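The balance condition is easy to probe numerically. The following sketch is purely illustrative (its specifics — two agents chosen uniformly at random each step, $a(n) = 1/(n+1)$ — are assumptions for the demonstration, not part of the analysis); it estimates the ratio $a_{12}$ of accumulated step-sizes, which should tend to $1$:

```python
import random

random.seed(0)

# Illustrative check of the balance condition: two agents, one agent chosen
# uniformly at random per step, with step-sizes a(n) = 1/(n+1).  The ratio
# of accumulated step-sizes should converge to a_12 = 1.

nu = [0, 0]      # nu[i] = number of updates of agent i so far
S = [0.0, 0.0]   # S[i]  = sum of step-sizes a(nu(m, i)) used by agent i

for n in range(200_000):
    i = random.randrange(2)        # agent updated at time n
    S[i] += 1.0 / (nu[i] + 1)      # a(nu(n, i)) with a(n) = 1/(n+1)
    nu[i] += 1

ratio = S[0] / S[1]
print(ratio)   # close to 1
```

Both agents accumulate a harmonic-like sum over roughly the same number of updates, so the ratio converges to $1$, consistent with $\Lambda(t) = diag(1/2, 1/2)$ in this two-agent case.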
The tracking DI, (\[delay\_NADI\]), of Theorem \[delay\_main\] then becomes $$\label{balance_DI} \dot{x}(t) \in diag(1/d, \ldots, 1/d) f(x(t)) + \overline{B}_\epsilon (0) .$$ As noted in [@abounadi], the qualitative behaviors of $\dot{x}(t) = f(x(t))$ and $\dot{x}(t)$ $= diag(1/d, \ldots, 1/d) f(x(t))$ are similar since they only differ in scale. Further, it follows from the upper semi-continuity of chain recurrent sets that the long-term behavior of (\[balance\_DI\]) is similar to that of $\dot{x}(t) = diag(1/d, \ldots, 1/d) f(x(t))$ for small enough $\epsilon$. In other words, the long-term behavior of (\[balance\_DI\]) approximates that of $\dot{x}(t) = f(x(t))$. [*In this section, we have shown that asynchronous SAs with asymptotically bounded biased errors track a solution to (\[balance\_DI\]) when balanced step-sizes are used.*]{} [^5] Stability analysis {#sec_stability} ================== The foregoing analysis required that the iterates be bounded in an almost sure sense. This requirement is hard to ensure when function approximation is used. It is well known that unbounded approximation errors can affect the stability of the algorithm, see [@BertsekasBook]. [*In this section, we present a set of sufficient conditions which ensure the following: if the errors are asymptotically bounded and possibly biased, then the algorithm is stable.*]{} Stability assumptions {#sec_stability_assumptions} --------------------- Given $n \ge 0$ and $T > 0$, define $m_T(n) := \max \{m \mid m \ge n,\ t(m) - t(n) \le T \}$. The goal of this section is to replace the stability assumption, $(A4)$, from Section \[sec\_assumptions\] with verifiable conditions. These will be combined with the other assumptions to provide a complete analysis of stability and convergence. 
[**(S1)**]{}

-   [*The step-size sequence is eventually decreasing, i.e., $\exists \ N$ such that $a(n)\ge a(m)$ for all $N \le n \le m$.*]{}
-   [*$\lim \limits_{n \to \infty} \frac{\sum \limits_{m = 0}^{\lfloor x n \rfloor} a(m) }{\sum \limits_{m = 0}^{n} a(m)} = 1$ uniformly in $x \in [y, 1]$, where $0 < y \le 1$.*]{}

[**(S2)**]{}

-   [*$\liminf \limits_{n \to \infty} \frac{\nu(n, i)}{n+1} \ge \tau$, for some $\tau > 0$.*]{}
-   [*$\lim \limits_{n \to \infty} \frac{\sum \limits_{m = \nu(n,i)}^{\nu(m_T(n), i)} a(m) }{\sum \limits_{m = \nu(n, j)}^{\nu(m_T(n), j)} a(m)}$ exists for all $i, j$.*]{}

[**(S3)**]{}

-   [*For all $n \ge 0$, we have $\lVert M_{n+1} \rVert \le D$ a.s.*]{}
-   [*$\lim \limits_{n \to \infty} \sum \limits_{m = n}^{m_T(n)} a(m) M_{l(m) + 1} = 0$, where $\{l(m)\}_{m \ge 0}$ is an increasing sequence of non-negative integers satisfying $l(m) \ge m$.*]{}

[*Assumption $(S3)$ is a stricter version of $(A5)$. For the analysis in this section, we use $(S3)$ instead of $(A5)$. This is only done for the sake of clarity. Later, the analysis is shown to be true merely assuming $(A5)$.*]{}

[**(S4)**]{}

-   [*Associated with $\dot{x}(t) = f(x(t))$ is a compact set $\Lambda$, a bounded open neighborhood $\mathcal{U}$ $\left( \Lambda \subseteq \mathcal{U}\subseteq \mathbb{R}^d \right)$ and a function $V: \overline{\mathcal{U}} \to \mathbb{R}^+$ such that*]{}
    -   [*$\forall t \ge 0$, $\Phi_t (\mathcal{U}) \subseteq \mathcal{U}$ *i.e.,* $\mathcal{U}$ is strongly positively invariant.*]{}
    -   [*$V^{-1} (0) = \Lambda$.*]{}
    -   [*$V$ is a continuous function such that for all $x \in \mathcal{U} \setminus \Lambda$ and $y \in \Phi_t (x)$ we have $V(x) > V(y)$, for any $t > 0$.*]{}

[**(S4a)**]{}

-   [*$\hat{\mathbb{A}}$ is the global attractor of $\dot{x}(t) = f(x(t))$.*]{}

For our subsequent analysis we need only one of $(S4)$ and $(S4a)$ to be satisfied, not both. $(S4)$ and its variant $(S4a)$ are the key to our stability analysis. The two variants are overlapping yet qualitatively different, thereby covering a multitude of scenarios. 
Note that the above Lyapunov-based stability conditions are adapted from those in [@ramaswamy2017]. If $(S4)$ is satisfied, then *Proposition 3.25* of Benaïm, Hofbauer and Sorin [@Benaim05] implies that $\dot{x}(t) = f(x(t))$ has an attractor set $\hat{\mathbb{A}} \subseteq \Lambda$. It also implies that $V^{-1}([0, r])$ is a fundamental neighborhood of $\hat{\mathbb{A}}$, for small values of $r$. On the other hand, if $(S4a)$ is satisfied, then any compact neighborhood of $\hat{\mathbb{A}}$ is a fundamental neighborhood of it. ***In both cases we can associate an attractor, $\hat{\mathbb{A}}$, and a fundamental neighborhood, $\overline{\mathcal{N}}$, to $\dot{x}(t) = f(x(t))$***. Given $\delta > 0$, $\exists \ \epsilon(\delta) > 0$ such that $\dot{x}(t) \in f(x(t)) + \overline{B}_{\epsilon(\delta)}(0)$ has an attractor $\mathbb{A} \subseteq N^\delta (\hat{\mathbb{A}})$ *with fundamental neighborhood $\overline{\mathcal{N}}$*. This is a consequence of the upper semicontinuity of attractor sets, see [@aubin2012differential] or [@Benaim05] for details. It will be shown that the algorithm converges to a neighborhood of a local/global attractor of $\dot{x}(t) = f(x(t))$, such as $\hat{\mathbb{A}}$. Further, this neighborhood depends on the approximation errors. Typically, the experimenter decides on the expected accuracy of the algorithm. This accuracy is quantified by $\delta$. Once this accuracy is fixed, the function approximator (DNN) is trained to control the asymptotic errors to $\epsilon(\delta)$. Then one can show that the iterates converge to $N^{\delta}(\hat{\mathbb{A}})$. Before we proceed, we associate the following Lyapunov function to $\dot{x}(t) \in f(x(t)) + \overline{B}_\epsilon(0)$: $\tilde{V}: \overline{\mathcal{N}} \to \mathbb{R}_+$ such that $ \tilde{V}(x) := \max \left\{ d(y, \mathbb{A}) g(t) \mid y \in \Phi_t(x), t \ge 0 \right\} $ and $c \le g(t) \le d$ is a strictly increasing function with $c > 0$. 
Since $\overline{\mathcal{N}}$ is a fundamental neighborhood of $\mathbb{A}$, it follows that $\sup \limits_{x \in \overline{\mathcal{N}}} \tilde{V}(x) < \infty$. The stability analysis requires choosing two bounded open sets, say $\mathcal{B}$ and $\mathcal{C}$, such that $\mathcal{C}$ is inward directing and $\mathbb{A} \subset \mathcal{B} \subset \overline{\mathcal{B}} \subset \mathcal{C}$. Recall that $\mathbb{A}$ is an attractor of $\dot{x}(t) \in f(x(t)) + \overline{B}_\epsilon(0)$ obtained from the definition of $\hat{\mathbb{A}}$ $\left(\text{see }(S4a)\right)$. First, we choose $\mathcal{V}_r$ as $\mathcal{C}$ such that $\overline{\mathcal{V}_r} \subset \mathcal{U}$. This is possible for small values of $r$. Next, we choose an open $\mathcal{B}$ such that $\mathbb{A} \subset \mathcal{B} \subset \overline{\mathcal{B}} \subset \mathcal{C}$. This is possible since $\Lambda$ is compact and $\mathcal{C}$ is open. The following two propositions are necessary for our stability analysis. The reader is referred to [@ramaswamy2017] for their proofs. \[stability\_prop1\] For any $r < \sup \limits_{u \in \mathcal{N}} \tilde{V}(u)$, the set $\mathcal{V}_r := \{x \mid \tilde{V}(x) < r\}$ is open relative to $\overline{\mathcal{N}}$. Further, $\overline{\mathcal{V}}_r = \{x \mid \tilde{V}(x) \le r \}$. \[stability\_prop2\] $\mathcal{C}$ is an inward directing set associated with $\dot{x}(t) \in f(x(t)) + \overline{B}_\epsilon(0)$. Traditionally, the stability of algorithms such as the one under consideration is ensured by projecting the iterates onto a compact set at every stage. If this set is not carefully chosen, then the algorithm may not converge, or may converge to an undesirable set. 
Using the previously constructed $\mathcal{B}$ and $\mathcal{C}$, we obtain the following [**projective counterpart**]{} of the algorithm: $$\label{sa_proj_new} \hat{x}_{n+1} = z_n \text{ such that } z_n \in {\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}}(\tilde{x}_n).\\$$ In the above equation, $\hat{x}_{0} = z_0$ such that $z_0 \in {\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}}(x_0)$ and $\tilde{x}_{n}(i) = x_n(i) + a( \nu (n, i)) I(i \in Y_n) \left[ (\mathcal{A} f)_i (x_{n - \tau_{1 i}(n)}(1), \ldots, x_{n - \tau_{d i}(n)}(d)) + M_{n+1}(i) \right]$. From the above set of equations, it is clear that the projective iterates $\{\hat{x}_n\}_{n \ge 0} \subseteq \mathcal{C}$. Since $\mathcal{C}$ is bounded by construction (see above), $\sup \limits_{n \ge 0} \lVert \hat{x}_n \rVert < \infty$ a.s. The realization of the projective scheme given by (\[sa\_proj\_new\]) depends on finding sets $\mathcal{B}$ and $\mathcal{C}$. However, from the previous discussion it is clear that $\mathcal{B}$ and $\mathcal{C}$ surely exist but may be unknown. In other words, (\[sa\_proj\_new\]) is only used in the stability analysis of the original algorithm and need not be realizable. Below we state our final stability assumption. [**(S5)**]{} - [*$\sup \limits_{n \ge N} \lVert x_n - \tilde{x}_n \rVert < \infty$ a.s. for a sample path dependent $N$.*]{} The projective counterpart of the algorithm {#sec_projective} ------------------------------ In this section we begin the study of the algorithm by analysing its projective counterpart (\[sa\_proj\_new\]). In the previous section we used $ \hat{x}_n$ to represent the projected iterates, to distinguish them from the original iterates. 
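To make the operator ${\sqcap}_{\mathcal{B,C}}$ concrete, here is a minimal single-valued sketch under strong simplifying assumptions: $\mathcal{B}$ and $\mathcal{C}$ are taken to be concentric Euclidean balls, points inside $\mathcal{C}$ are left unchanged, and points outside are mapped radially onto $\overline{\mathcal{B}}$. The operator used in the analysis is set-valued and $\mathcal{B}$, $\mathcal{C}$ need not be balls; this is an illustration only.

```python
import numpy as np

# Hypothetical single-valued instance of the projection map of the
# projective scheme: B and C are concentric Euclidean balls, R_B < R_C.
R_B, R_C = 1.0, 2.0

def project_BC(x):
    """Return x if x lies in C; otherwise map x radially onto closure(B)."""
    x = np.asarray(x, dtype=float)
    if np.linalg.norm(x) <= R_C:
        return x                          # already inside C: no projection
    return R_B * x / np.linalg.norm(x)    # radial projection onto closure(B)

inside = project_BC([0.5, 1.0])    # unchanged: already inside C
outside = project_BC([3.0, 4.0])   # mapped to the sphere of radius R_B
print(inside, np.linalg.norm(outside))
```

With any such operator the projective iterates remain inside $\mathcal{C}$ by construction, which is precisely what yields $\sup_{n \ge 0} \lVert \hat{x}_n \rVert < \infty$ a.s.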
[**In this and the following couple of sections, we simply use $x_n$ for the projected iterates, instead of $\hat{x}_n$s, to reduce clutter.**]{} We consider the following useful equivalent of (\[sa\_proj\_new\]): $$\label{projective_proj1} \begin{split} \tilde{x}(n+1) &= x_n + D_n \left[\mathcal{A}f(x_n) + M_{n+1} \right], \\ x_{n+1} &= z_n, \text{ where } z_n \in {\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}}(\tilde{x}_{n+1}), \end{split}$$ where $D_n = diag\left(a(\nu(n,1))I(1 \in Y_n), \ldots, a(\nu(n, d))I(d \in Y_n) \right)$. Note that (\[projective\_proj1\]) does not account for delayed communications. However, the modifications necessary to account for delays were presented in Section \[sec\_delay\] and can be applied here as well. Without loss of generality, assume that $Y_n$ has cardinality one for all $n \ge 0$. This is a useful trick from Abounadi et al. [@abounadi]. There is no loss of generality because the agents being updated at time $n$ can be viewed as being updated serially. In other words, $Y_n = \{ \phi_n\}$ with $\phi_n \in \{1,\ldots, d\}$ for all $n \ge 0$. We may rewrite (\[projective\_proj1\]) as: $$\label{projective_proj} x_{n+1} = x_n + D_n \left[ f(x_n) + \epsilon_n + M_{n+1}\right] + g_n,$$ where $ g_n = {\text{{\LARGE $\sqcap$}}}_{\mathcal{B,C}} \left( D_n \left[ f(x_n) + \epsilon_n + M_{n+1}\right] \right) - \left( D_n \left[ f(x_n) + \epsilon_n + M_{n+1}\right] \right).$ Define $\mu_n := \left[I(\phi_n = 1), \ldots, I(\phi_n = d) \right]$, $\overline{a}(n, i) := a(\nu(n,i))$, $\hat{a}(n) := \overline{a}(n, \phi_n)$, $t(0) := 0$ and $t(n) := \sum \limits_{m=0}^{n-1} \hat{a}(m)$ for $n \ge 1$. Below we define the trajectories necessary for our analysis. It is suggested that the reader skip these definitions and refer back when required. 1. $\mu(t) := \mu_n$ for $t \in [t(n), t(n+1))$, 2. $D_c(t) := D_n$ for $t \in [t(n), t(n+1))$, 3. $X_c(t) := x_n$ for $t \in [t(n), t(n+1))$, 4. $Y_c(t) := \mathcal{A}f(x_n)$ for $t \in [t(n), t(n+1))$, 5. 
$G_c(t) := \sum \limits_{m=0}^{n-1} g_m$ for $t \in [t(n), t(n+1))$, 6. $\epsilon_c(t) := \mu_n \epsilon_n$ for $t \in [t(n), t(n+1))$, 7. $X_l(t) := \begin{cases} x_n \text{ for $t = t(n)$} \\ \left(1 - \frac{t - t_n}{\hat{a}(n)} \right) X_l(t(n)) + \\ \left(\frac{t - t_n}{\hat{a}(n)} \right) X_l(t(n+1)) \text{ for $t \in [t(n), t(n+1)),$} \end{cases}$ 8. $W_l(t) := \begin{cases} \sum \limits_{m=0}^{n-1} D_m M_{m+1} \text{ for $t = t(n)$} \\ \left(1 - \frac{t - t_n}{\hat{a}(n)} \right) W_l(t(n)) +\\ \left(\frac{t - t_n}{\hat{a}(n)} \right) W_l(t(n+1)) \text{ for $t \in [t(n), t(n+1)).$} \end{cases}$ We also define the following left-shifted trajectories: 1. $X_l^n(t) := X_l(t + t(n))$, 2. $X_c^n(t) := X_c(t + t(n))$, 3. $Y_c^n(t) := Y_c(t + t(n))$, 4. $W_l^n(t) := W_l(t + t(n))$, 5. $G_c^n(t) := G_c(t + t(n)) - G_c(t(n))$, 6. $\epsilon_c^n(t) := \epsilon_c (t + t(n))$, 7. $\mu^n(t) := \mu(t + t(n))$, 8. $D_c ^n(t) := D_c(t + t(n))$. Note that $D^n_c(t) \le 1$ and $\lVert \epsilon_c^n(t) \rVert \le \epsilon$ for all $t \ge 0$ and $n \ge 0$. Hence $\{D_c^n([0,T]) \mid n \ge 0\}$ and $\{\epsilon_c^n([0,T]) \mid n \ge 0\}$ are relatively compact in $\mathbb{L}_2([0,T], \mathbb{R}^d)$. One may view $\{X_l^n([0,T]) \mid n \ge 0\}$ and $\{G_c^n([0,T]) \mid n \ge 0\}$ as subsets of $D([0,T], \mathbb{R}^d)$ equipped with the Skorohod topology. In Lemma \[projective\_rc\], we show that the aforementioned families of trajectories are relatively compact. As in *Lemma 2* of [@ramaswamy2017] we only need to show that these families are point-wise bounded and that any two discontinuities are separated by at least $\Delta > 0$. \[projective\_rc\] $\{X_l^n([0,T]) \mid n \ge 0\}$ and $\{G_c^n([0,T]) \mid n \ge 0\}$ are relatively compact in $D([0,T], \mathbb{R}^d)$, equipped with the Skorohod topology. 
As stated earlier, we only need to show that the aforementioned families of trajectories are point-wise bounded and that any two discontinuities are separated by at least $\Delta > 0$. From $(S3)(i)$ we have that $\lVert M_{n+1} \rVert \le D$ a.s. for all $n \ge 0$. Since $f$ is Lipschitz continuous, $F(x) := f(x) + \overline{B}_\epsilon (0)$ is Marchaud. Clearly, $\mathcal{A}f(x_n) \in F(x_n)$ for all $n \ge 0$. We have the following: $$\sup \limits_{x \in \overline{C}, y \in F(x)} \lVert y \rVert \le C_1 \text{ for some }C_1 > 0$$ $$\implies \sup \limits_{n \ge 0}\ \lVert \tilde{x}_{n + 1} - x_n \rVert \le \left( \sup \limits_{n \ge 0} a(n) \right) (C_1 + D)$$ $$\implies \sup \limits_{n \ge 0}\ \lVert g_n \rVert \le \sup \limits_{n \ge 0} \left(\lVert \tilde{x}_{n + 1} - x_n \rVert + d(x_n, \mathcal{B}) \right) \le C_2$$ for some $0 < C_2 < \infty$ that is independent of $n$. Now that the point-wise boundedness property has been proven, it is left to show that any two discontinuities are separated by some $\Delta >0$. Using arguments similar to the ones found in the proof of *Lemma 2* in [@ramaswamy2017], we can show that such a constant is given by $$\Delta = \frac{d }{2\left(D + \sup \limits_{x \in \overline{C}, y \in F(x)} \lVert y \rVert \right)},$$ where $d$ is the number of agents in the multi-agent system at hand. Overview of the strategy involved in stability analysis ------------------------------------------------------- Note that $T$ in the above lemma is arbitrary. Hence, the sets $\{X_l^n([0,\infty)) \mid n \ge 0\}$ and $\{G_c^n([0,\infty)) \mid n \ge 0\}$ are also relatively compact in $D([0,\infty), \mathbb{R}^d)$. It follows from $(S3)$ that $\{W_l ^n([0,\infty)) \mid n \ge 0 \}$ is relatively compact in $D([0,\infty), \mathbb{R}^d)$, and that all limits equal the constant-$0$-function. 
If we consider a subsequence $\{m(n)\} \subset \{n\}$ such that $M_{m(n)}$ is convergent, then $X_l^{m(n)}([0,T])$ and $ X_l ^{m(n)}(0) + \int \limits_{0}^T \left( \mu ^{m(n)} (s) f(X_c ^{m(n)} (s)) + \epsilon_c ^{m(n)}(s) \right) ds + G_c ^{m(n)}(T)$ have identical limits. Consider a subsequence $\{m(n)\}_{n \ge 0} \subseteq \mathbb{N}$ such that $\{\epsilon_c^{m(n)}([0,T]) \mid n \ge 0\}$ is weakly convergent in $\mathbb{L}_2([0,T])$, and such that $\{X_l^{m(n)}([0,T]) \mid n \ge 0\}$ and $\{G_c ^{m(n)}([0,T]) \mid n \ge 0\}$ are convergent in $D([0,T], \mathbb{R}^d)$. In addition, this subsequence satisfies the condition that $g_{m(n) - 1} = 0$ for all $n \ge 0$. Now, let us suppose that the limit of $\{G_c^{m(n)}([0,T])\}_{n \ge 0}$ is the constant-$0$-function. Using arguments from Section \[sec\_nodelay\], we show that the limit of $\{X_l^{m(n)}([0,T]) \mid n \ge 0\}$ is given by: $$X(0) + \int \limits_0^t \left( \lambda(s) f(X(s)) + \epsilon(s) \right) ds,$$ such that $X(0) \in \overline{\mathcal{C}}$. Hence, the projective scheme (\[projective\_proj1\]) tracks a solution to $\dot{x}(t) \in \lambda(t) f(x(t)) + \overline{B}_\epsilon (0)$, where $\lambda(\cdotp)$ is some measurable matrix-valued process with only diagonal entries. If balanced step-sizes (see *Theorem 3.2* of Borkar [@Borkar_asynchronous]) are used, then (\[projective\_proj1\]) tracks a solution to $\dot{x}(t) \in 1/d\ f(x(t)) + \overline{B}_\epsilon(0)$. The asymptotic behaviors of $\dot{x}(t) = f(x(t))$ and $\dot{x}(t) = (1/d)\ f(x(t))$ are similar, *i.e.,* any solution trajectory of both o.d.e.’s, with starting points in $\overline{\mathcal{C}}$, will converge to the attractor $\hat{\mathbb{A}}$. Consequently, any solution trajectory of $\dot{x}(t) \in (1/d)\ f(x(t)) + \overline{B}_\epsilon (0)$ converges to $\mathbb{A}$, provided the starting point is inside $\mathcal{C}$. 
Recall that $\mathbb{A}$ is an attractor of $\dot{x}(t) \in f(x(t)) + \overline{B}_{\epsilon}(0)$ with fundamental neighborhood $\overline{\mathcal{N}}$ such that $\mathcal{C} \subset \overline{\mathcal{N}}$. In other words, the projective scheme (\[projective\_proj1\]) converges to $\mathbb{A}$ almost surely. Stability of the algorithm under consideration, (\[asmp\_aasaa\]), follows from $(S5)$. To summarize, there are two important steps in proving stability:\ (Step-1) Any limit of $\{X_l^n([0,T])\}_{n \ge 0}$ is of the form $$X(t) = X(0) + \int \limits_0 ^t \left(\mu^* f(X(s)) + \epsilon(s) \right) ds + G(t) \text{ for } t \in [0,T],$$ where $\mu ^* = diag(1/d, \ldots, 1/d)$ and $X(0) \in \overline{C}$.\ (Step-2) Show that any limit of $\{G_c ^{m(n)}([0,T]) \mid n \ge 0\}$ is the constant $0$ function, provided $g_{m(n) - 1} = 0$ for all $n \ge 0$. Stability theorem ----------------- Define $K := \{n \mid g_{n - 1 } = 0\}$. The premise of the following two lemmas is that balanced step-sizes (of *Theorem 3.2*, [@Borkar_asynchronous]) are used. \[projective\_gis0\_1\] Without loss of generality, let $\{\epsilon_c ^n ([0,T])\}_{n \in K}$ be (weakly) convergent in $\mathbb{L}_2 ([0,T], \mathbb{R}^d)$, with weak limit $\epsilon(\cdotp)$. Also let $\{X_l^n([0,T])\}_{n \in K}$ and $\{G_c^n([0,T])\}_{n \in K}$ be convergent in $D([0,T], \mathbb{R}^d)$ as $n \to \infty$, with limits $X(\cdotp)$ and $G(\cdotp)$ respectively. Then, for $t \in [0,T]$ $$\label{gis0_1_eq} \begin{split} X_l^n(t) \to X(0) + \int \limits_0 ^t \left(\mu^* f(X(s)) + \epsilon(s) \right) ds + G(t). \end{split}$$ Since $X_c^n(t) \to X(t)$ for $t \in [0,T]$, we get $$\int \limits_0^t \mu^* f(X_c ^n(s)) ds \to \int \limits_0 ^t \mu^* f(X(s)) ds.$$ Note that we have $$\begin{split} X_l^n(t) = &X_l^n(0) + \int \limits_0^t diag(\mu^n_c(s)) f(X_c^n(s)) ds\ +\\ &W_l^n(t) + G_c^n(t) + \int_0 ^t \epsilon_c ^n(s) ds. 
\end{split}$$ Adding and subtracting $\int \limits_0^t \mu^* f(X_c^n(s)) ds$ in the above equation, we obtain: $$\label{projective_gis0_3} \begin{split} X_l^n(t) = &X_l^n(0) + \int \limits_0^t \mu^* f(X_c^n(s)) ds\ + \\ &W_l^n(t) + G_c^n(t) + \int_0 ^t \epsilon_c ^n(s) ds + \eta_n(t), \end{split}$$ where $\eta_n(t) = \int \limits_0^t diag(\mu^n_c(s)) f(X_c^n(s)) ds - \int \limits_0^t \mu^* f(X_c^n(s)) ds$. From Assumption $(S3)$ it follows that $\lim \limits_{n \to \infty} \sup \limits_{t \in [0,T]} \lVert W_l^n(t) \rVert = 0$. Suppose we show that $\lim \limits_{n \to \infty} \sup \limits_{t \in [0,T]} \lVert \eta_n(t) \rVert = 0$, then we may use the previously mentioned observations to conclude that (\[projective\_gis0\_3\]) converges to $$\begin{split} X(t) = X(0) &+ \int \limits_0^t \mu^* f(X(s)) ds + G(t) + \\ &\int_0 ^t \epsilon(s) ds \text{ as $n \to \infty$}. \end{split}$$ Recall that $\epsilon(\cdotp)$ is the weak limit of $\{\epsilon_c ^n ([0,T])\}_{n \in K}$. Thus, it is left to show that $\lim \limits_{n \to \infty} \sup \limits_{t \in [0,T]} \lVert \eta_n(t) \rVert = 0$. The proof of this is along the lines of the proof of *Lemma 3.5* in Abounadi et al., [@abounadi]. \[projective\_gis0\_2\] The $G(\cdotp)$ of Lemma \[projective\_gis0\_1\] is the constant $0$ function. As a consequence the projective scheme (\[projective\_proj\]) converges to $\mathbb{A}$. For a proof of this lemma the reader is referred to the proof of *Lemma 3* of [@ramaswamy2017]. We are now ready to state the other main result of this paper. Again, we assume that balanced step-sizes are used. \[projective\_main\] Under $(A1)$-$(A3)$ and $(S1)$-$(S5)$, the iteration given by (\[asmp\_aasaa\]) is stable ($\sup \limits_{n \ge 0}\ \lVert x_n \rVert < \infty$ a.s.) and converges to a closed connected internally chain transitive invariant set associated with $\dot{x}(t) \in \mu^* f(x(t)) + \overline{B}_\epsilon(0)$. The reader may recall that $\mu^* = diag(1/d, \ldots, 1/d)$. 
It follows from Lemma \[projective\_gis0\_2\] that the associated projective iterates, say $\{\hat{x}_n\}_{n \ge 0}$, corresponding to $\{x_n\}_{n \ge 0}$ converge to $\mathbb{A}$. In other words, there exists $N$, possibly sample path dependent, such that $\hat{x}_n \in \mathcal{C}$ for $n \ge N$. It follows from $(S5)$ that $\sup \limits_{n \ge N} \lVert x_n \rVert < \infty$ a.s. The second part of the statement directly follows from Theorem \[delay\_main\]. Stability assuming $(A5)$ instead of $(S3)$ ------------------------------------------- The statement of Theorem \[projective\_main\] is true when the weaker $(A5)$ is assumed instead of the stricter $(S3)$. We merely outline the steps involved, without proofs, and refer the reader to *Section 6* of [@ramaswamy2017], where the details are worked out in a related setup. The purpose of $(S3)$ is to show that any two discontinuities of $\{X_l^n([0,T]) \mid n \ge 0\}$ and $\{G_c^n([0,T]) \mid n \ge 0\}$ are at least $\Delta$ apart. An important step in proving the aforementioned claim with $(A5)$ replacing $(S3)$ is the following auxiliary lemma. \[projective\_gn\_1\] Let $\{t_{m(n)}, t_{l(n)}\}_{n \ge 0}$ be such that $t_{l(n)} > t_{m(n)}$, $t_{m(n+1)} > t_{l(n)}$ and $\underset{n \to \infty}{\lim} \left(t_{l(n)} - t_{m(n)} \right) = 0$. Fix an arbitrary $c > 0$ and consider the following: $$\psi_n := \left \lVert \sum \limits_{i = m(n)}^{l(n)-1} a(i) M_{i+1} \right \rVert.$$ Then $P \left( \{\psi_n > c\}\ i.o. \right) = 0$ within the context of the projective scheme given by (\[projective\_proj\]). Colloquially, Lemma \[projective\_gn\_1\] states the following: after a lapse of considerable time, there are no significant contributions to jumps in $X_l^n(\cdotp)$ or $G_c^n(\cdotp)$ from the martingale difference noise sequence within shrinking time intervals. 
If we are unable to find a separating $\Delta$, then it can be shown that Lemma \[projective\_gn\_1\] is contradicted. Therefore, Theorem \[projective\_main\] is true under the standard, weak assumption on noise imposed by $(A5)$. As a consequence, the following modification of Theorem \[projective\_main\] is immediate. \[projective\_main\_1\] Under $(A1)$-$(A3)$, $(A5)$ and $(S1), (S2), (S4)$ and $(S5)$, the iteration given by (\[asmp\_aasaa\]) is bounded almost surely (stable) and converges to a closed connected internally chain transitive invariant set associated with $\dot{x}(t) \in \mu^* f(x(t)) + \overline{B}_\epsilon(0)$. Applications {#sec_applications} ============ Value and policy iterations are popular reinforcement learning algorithms that are at once effective and easy to implement. As explained in Section \[intro\_a2\], value and policy iterations are coupled with function approximation to counter Bellman’s curse of dimensionality, arising in large-scale (continuous state and action spaces) learning and control problems. In related work, [@ramaswamy2017] analyzes value iteration with function approximation; however, it does not consider the multi-agent setting. Abounadi et al. [@abounadi] analyzed the asynchronous version of Q-learning, but without function approximation. In this section, the theory hitherto developed is used to present a complete analysis of A2VI and A2PG. A2VI and A2PG are the multi-agent counterparts of value iteration and the policy gradient scheme, respectively, that also account for function approximation. Asynchronous approximate value iteration (A2VI) {#sec_a2vi} ----------------------------------------------- Recall that we are interested in the recursion: $$\label{a2vi_a2vi} \begin{split} &J_{n+1}(i) = J_n(i) + a( \nu (n, i)) I(i \in Y_n) \\ &\left[ (\mathcal{A} T)_i (J_{n - \tau_{1 i}(n)}, \ldots, J_{n - \tau_{d i}(n)}) + M_{n+1}(i) \right], \text{ where} \end{split}$$ 1. $T$ is the Bellman operator, 2. 
Let $\epsilon_n = (\mathcal{A}T)J_n - TJ_n$ be the approximation error at stage $n$. The approximation operator $\mathcal{A}$ could be a deep neural network, or some other function approximator. We do not distinguish between stochastic shortest path and infinite horizon discounted cost problems. The definition of the Bellman operator $T$ changes appropriately based on the choice of problem [@BertsekasBook]. The following assumptions are natural: - The Bellman operator $T$ is contractive with respect to some weighted max-norm, $\lVert \cdotp \rVert_\nu$, *i.e.,* $\lVert Tx - Ty \rVert_\nu \le \alpha \lVert x - y \rVert_\nu$ for some $0 < \alpha < 1$. - $T$ has a unique fixed point $J^*$ and $J^*$ is the unique globally asymptotically stable equilibrium point of $\dot{J}(t) = TJ(t) - J(t)$. - $\limsup \limits_{n \to \infty}\ \lVert \epsilon_n \rVert_\nu \le \epsilon$ for some fixed $\epsilon > 0$. Given $x \in \mathbb{R}^d$ we make the following simple observations:\ (i) $\lVert x \rVert_\nu \le \frac{1}{\min \limits_{i} \nu_i} \lVert x \rVert$.\ (ii) $\lVert x \rVert \le \frac{d}{\min \limits_{i} \nu_i} \lVert x \rVert_\nu$.\ The following claim is an immediate consequence of these observations. \[a2vi\_lipschitz\] $T$ is Lipschitz continuous with some Lipschitz constant $0 < L < \infty$. The only difference between (\[a2vi\_a2vi\]) and (\[asmp\_aasaa\]) is that in (\[a2vi\_a2vi\]) the approximation errors are bounded in the weighted max-norm sense. It is worth noting that the errors could be more generally bounded in the weighted p-norm ($\lVert \cdotp \rVert_{\omega, p}$) sense. However it can be easily shown that $C_l \lVert x \rVert_\nu \le \lVert x \rVert_{\omega, p} \le C_u \lVert x \rVert_\nu$, for some $C_l, C_u > 0$, $x \in \mathbb{R}^d$. Hence it is sufficient to work with errors that are bounded in the weighted max-norm sense. 
In $(AV3)$ we assume $\limsup \limits_{n \to \infty}\ \lVert \epsilon_n \rVert_\nu \le \epsilon$ a.s., while in $(A1)$ we assume $\limsup \limits_{n \to \infty}\ \lVert \epsilon_n \rVert \le \epsilon$ a.s. Since $B^\epsilon := \{ y \mid \lVert y \rVert_\nu \le \epsilon\}$ is a convex compact subset of $\mathbb{R}^d$ (see *Lemma 7.2* of [@ramaswamy2017]), the analyses presented in Sections \[sec\_convergence\] and \[sec\_stability\] carry forward verbatim, with $B^\epsilon$ replacing $B_\epsilon(0)$. It follows directly from $(AV2)$ that $(S4a)$ is satisfied. If we show that (\[a2vi\_a2vi\]) also satisfies $(S5)$, then we may conclude that the iterates are stable and convergent. For this purpose, we compare the iterates $\{J_n\}_{n \ge 0}$, from (\[a2vi\_a2vi\]), to their projective counterparts $\{\hat{J}_n \}_{n \ge 0}$. We can show that $\hat{J}_n \to \mathbb{A}$, where $\mathbb{A}$ is an attractor of $\dot{x}(t) \in \mu^* (TJ(t) - J(t)) + B^\epsilon$, contained within a small neighborhood of $J^*$ and $\mu^* = diag(1/d, \ldots, 1/d)$. This neighborhood is dependent on the approximation errors. Since $\hat{J}_n \to \mathbb{A}$, $\exists N$, possibly sample path dependent, such that $\hat{J}_n \in \mathcal{C}$ for all $n \ge N$. Following the arguments presented in the proof of *Theorem 3* in [@ramaswamy2017] we can show that $$\lVert J_n - \hat{J}_n \rVert_\nu \le \lVert J_N - \hat{J}_N \rVert_\nu \vee \left( \frac{2 \epsilon}{1 - \alpha} \right),$$ where $\alpha$ is the “contraction constant” associated with the Bellman operator $T$. In other words, we get that (\[a2vi\_a2vi\]) satisfies $(S5)$. Supposing balanced step-sizes are used, the following result is immediate. \[a2vi\_main\] Under $(AV1)$-$(AV3)$, $(A5)$, $(S1)$ and $(S2)$, (\[a2vi\_a2vi\]) is stable and converges to some point in $\left \{J \mid \lVert TJ - J \rVert_\nu \le d \epsilon \right\}$, where $\epsilon$ is the norm-bound on the approximation errors. 
From the above discussion, it is clear that A2VI is bounded a.s. (stable). Since balanced step-sizes are used, to study the long-term behavior of A2VI one needs to study $\dot{J}(t) \in \mu^* ((TJ)(t) - J(t)) + B^\epsilon$, where $\mu^* = diag(1/d, \ldots, 1/d)$. It follows from *Theorem 2* of *Chapter 6* in [@aubin2012differential] that any solution to the aforementioned DI will converge to an equilibrium point of $T(\cdotp) + B^{d \epsilon}$, where $B^{d \epsilon} := \{d x \mid x \in B^\epsilon \}$. This is because $\dot{J}(t) \in \mu^* ((TJ)(t) - J(t) + B^{d \epsilon})$ and $\dot{J}(t) \in TJ(t) - J(t) + B^{d \epsilon}$ are qualitatively similar and only differ in scale. The equilibrium points of $T + B^{d \epsilon}$ are given by $\left \{J \mid \lVert TJ - J \rVert_\nu \le d \epsilon \right\}$. For more details the reader is referred to *Section 7* of [@ramaswamy2017]. We have shown that A2VI is stable as long as the approximation errors are asymptotically bounded. We do not distinguish between biased and unbiased errors. Further, we show that A2VI converges to a fixed point of a scaling of the perturbed Bellman operator, $\mu^* T(\cdotp) + B^\epsilon$.

Asynchronous approximate policy gradient iteration (A2PG) {#sec_a2pi}
---------------------------------------------------------

The policy gradient method is an important reinforcement learning algorithm developed by Sutton *et al.* in 2000 [@sutton2000]. This method relies on a parameterization of the policy space, say $\pi(\theta)$, typically through the use of a deep neural network. Once a parameterization is determined, one seeks a local minimizer $\hat{\theta}$ in the parameter space in order to find the optimal policy. However, there are several situations wherein one either cannot calculate or does not wish to calculate the exact gradient $\nabla_\theta \pi(\theta_n)$ at every stage.
This could be due to the use of a non-differentiable activation function, or it could be a consequence of using gradient estimators such as $SPSA$-$C$ [@ramaswamy2017analysis] (simultaneous perturbation stochastic approximation with a constant sensitivity parameter) or other finite difference methods. In these cases, one has to deal with a policy gradient scheme with non-diminishing approximation errors. In the present work, we are interested in policy gradient methods within the setting of large-scale distributed systems. A general form of approximate policy gradient methods which satisfy all these conditions is given below: $$\label{a2pi_a2pi} \begin{split} &\theta_{n+1}(i) = \theta_n (i) - a(\nu(i,n)) I\{ i \in Y_n\} \\ &\left( (\mathcal{A}\nabla_\theta \pi)_i(\theta_{n - \tau_{1i}(n)}(1), \ldots, \theta_{n - \tau_{di}(n)}(d)) + M_{n+1}(i) \right). \end{split}$$ We call the above scheme asynchronous approximate policy gradient iteration, or A2PG. As in Section \[sec\_a2vi\], we can impose natural conditions on the gradient ($\nabla \pi(\cdotp)$), the noise and other parameters of (\[a2pi\_a2pi\]). If the approximation errors are asymptotically bounded, then one can show that the iterates converge to a neighborhood of some local minimizer $\hat{\theta}$. Further, this neighborhood is a function of the approximation errors. For details on the relationship between the neighborhood and the approximation errors, the reader is referred to [@ramaswamy2017analysis].

Experimental results {#sec_exp}
--------------------

In this section, we consider an asynchronous algorithm (given by eq. ) to find the minimum of $F: \mathbb{R}^d \to \mathbb{R}^d$, where $d \ge 2$. The function $F$ is defined as $F(x_1, \ldots, x_d) :=$ $\left(F_1(x_1, \ldots, x_d), \ldots, F_d(x_1, \ldots, x_d)\right)$, where $F_1, \ldots, F_d: \mathbb{R}^d \to \mathbb{R}$.

[**\[Experimental set-up\]**]{} For better exposition, we consider an iteration in dimension $2$, i.e., $d = 2$.
The function $F$ is defined as follows: $F_1(x) := \frac{1}{2} (x^{ T}Ax) (1)$, $F_2(x) := \frac{1}{2} (x^{ T}Bx) (2)$ and $F(x) := (F_1(x), F_2(x))$. The matrices $A$ and $B$ are randomly constructed positive definite matrices of dimension $2 \times 2$. A random error vector of norm less than $\epsilon \neq 0$ is added to the gradient at every step. Each component of this error vector is independent and uniformly distributed in $[0, \epsilon/ 2]$. It may be noted that $\nabla _x F_1(x) = Ax$ and $\nabla _x F_2(x) = Bx$ for $x \in \mathbb{R}^d$. Agent-1 runs the following: $$x_{n+1}(1) = x_n (1) - a(n) \left[ A \begin{bmatrix} x_n(1) \\ x_{n - \tau_{2,1}}(2) \end{bmatrix}(1) + \epsilon_1 \right],$$ while agent-2 runs the following: $$x_{n+1}(2) = x_n (2) - a(n) \left[ B \begin{bmatrix} x_{n - \tau_{2,1}}(1) \\ x_n(2) \end{bmatrix}(2) + \epsilon_2 \right].$$ The above distributed algorithm was run for $1000$ iterations using the step-size sequence $\{\nicefrac{1}{(n+10)}\} _{n=1} ^{1000}$. Since the matrices $A$ and $B$ are positive definite, we expect the limit to be the origin. At every step the two agents exchange (state) information with probability $p_c$, and with probability $1 - p_c$ the agents use old (state) information. In other words, $p_c$ represents the communication probability in our experiments. Note that we have used symmetric delays for simplicity; the experiments can easily be repeated with asymmetric delays. Results from the experiments are summarized in the two figures below. In both figures, $\lVert (\epsilon_1, \epsilon_2) \rVert$ is plotted along the $x$-axis and $\log( \lVert x_{1000} \rVert)$ along the $y$-axis. In other words, any point in the plot represents $(\lVert (\epsilon_1, \epsilon_2) \rVert,\ \log( \lVert x_{1000} \rVert))$. Each figure has five differently colored line graphs to represent the five sample runs of the algorithm.
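Before turning to the figures, the two-agent iteration above can be sketched in a few lines. The fixed example matrices (the paper draws them at random), the RNG seed, and the modelling of symmetric delays through a shared "last exchanged state" are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed example positive definite matrices (stand-ins for the random A and B).
A = np.array([[2.0, 0.5], [0.5, 1.5]])
B = np.array([[1.8, -0.4], [-0.4, 2.2]])

eps, p_c = 1.0, 0.8                 # error-norm bound and communication probability

x = rng.normal(size=2)              # initial point (x_1(1), x_1(2))
x_seen = x.copy()                   # last state the agents exchanged
for n in range(1, 1001):
    a_n = 1.0 / (n + 10)            # step-size sequence 1/(n+10)
    if rng.random() < p_c:          # exchange fresh state with probability p_c
        x_seen = x.copy()
    e1, e2 = rng.uniform(0, eps / 2, size=2)         # bounded, biased additive errors
    g1 = (A @ np.array([x[0], x_seen[1]]))[0] + e1   # agent-1: first component of Ax
    g2 = (B @ np.array([x_seen[0], x[1]]))[1] + e2   # agent-2: second component of Bx
    x = x - a_n * np.array([g1, g2])
```

Since the errors are nonnegative, they are biased; the iterate therefore settles near, but not exactly at, the origin.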
For each sample run, the parameters $(x_1(1), x_1(2))$ (initial point) and the matrices $A$ and $B$ are randomly chosen, and the norm bound on the additive errors $(\epsilon_1, \epsilon_2)$ is varied from $0.2$ to $3$ in steps of $0.1$. Fig. \[fig05\] illustrates all the experiments with $p_c = 0.4$, and Fig. \[fig08\] illustrates all the experiments with $p_c = 0.8$.

![Five random sample runs with $p_c = 0.4.$ $\lVert (\epsilon_1, \epsilon_2) \rVert$ is plotted along the $x$-axis and $\log( \lVert x_{1000} \rVert)$ along the $y$-axis.[]{data-label="fig05"}](Figure_04nl){width="2.5in"}

![Five random sample runs with $p_c = 0.8.$ $\lVert (\epsilon_1, \epsilon_2) \rVert$ is plotted along the $x$-axis and $\log( \lVert x_{1000} \rVert)$ along the $y$-axis.[]{data-label="fig08"}](Figure_08nl){width="2.5in"}

In Figures \[fig05\] and \[fig08\], the agents exchange data $40\%$ and $80\%$ of the time, respectively. It can be seen that the algorithm converges farther from the origin when the additive errors are larger. When $p_c = 0.8$, the algorithm converges to a point closer to the origin, even for large additive errors, than when $p_c = 0.4$. [**The experiments seem to suggest that frequent communications should be used to counter the effect of biased additive errors.**]{}

Verifiability of assumption $(S5)$ {#sec_verify}
==================================

In this section, we address the verifiability of assumption $(S5)$. We do not discuss other assumptions, since they deal with the objective function, step-sizes or noise, in a manner that is standard in the literature. However, to ensure $(S5)$, one needs to compare the algorithm iterates with a projective scheme. Further, the experimenter is typically uninterested in the projective scheme itself. In this section, we show that $(S5)$ is satisfied for fixed point finding algorithms such as A2VI, provided the objective function is non-expansive.
Recall A2VI and its projective counterpart: $$\label{verify_iterate} \begin{split} J_{n+1} &= J_n + a(n) D_n \left[ TJ_n - J_n + \epsilon_n \right] ,\\ \hat{J}_{n+1} &\in {\text{{\LARGE $\sqcap$}}}_{\mathcal{B}, \mathcal{C}} \left(\hat{J}_n + a(n) D_n \left[ T\hat{J}_n - \hat{J}_n + \hat{\epsilon}_n \right] \right). \end{split}$$ Unlike in Section $\ref{sec_a2vi}$, we assume here that $T$ is non-expansive with respect to some norm $p$, i.e., $p(Tx - Ty) \le p(x - y)$ for all $x, y$. It follows from Lemma \[projective\_gis0\_2\] that the projective scheme converges to $\mathbb{A}$ almost surely. In other words, there exists a sample path dependent $N$ such that $\{\hat{J}_n\}_{n \ge N} \subseteq \mathcal{C}$ a.s. Further, $\hat{J}_{n+1} = \hat{J}_n + a(n) D_n \left[ T\hat{J}_n - \hat{J}_n + \hat{\epsilon}_n \right]$ for all $n \ge N$. For $n \ge N$, first, we take the difference between the two iterations in . Then, we take the norms on both sides, to get the following: $$\begin{split} p(J_{n +1} - \hat{J}_{n +1}) &\le (1 - a(n)) p(J_n - \hat{J}_n) \\ &+ a(n) p (TJ_n - T\hat{J}_n) + a(n) p(\epsilon_n - \hat{\epsilon}_n). \end{split}$$ Since $T$ is non-expansive we get: $$p(J_{n +1} - \hat{J}_{n +1}) \le p(J_{n} - \hat{J}_{n}) + a(n) p(\epsilon_n - \hat{\epsilon}_n).$$ For $k \ge 1$, we have: $$p(J_{N +k} - \hat{J}_{N +k}) \le p(J_{N} - \hat{J}_{N}) + \sum \limits_{n=N}^{N + k-1}a(n) p(\epsilon_{n} - \hat{\epsilon}_n).$$ As long as $p(\epsilon_{n} - \hat{\epsilon}_n) \in o(a(n))$ (where $o(\cdotp)$ denotes the standard little-o notation), we get: $$p(J_{N +k} - \hat{J}_{N +k}) \le p(J_{N} - \hat{J}_{N}) + \sum \limits_{n=0}^{\infty}a(n)^2 < \infty.$$ It may be noted that many important RL and MDP algorithms such as Q-learning and Value Iteration are fixed point finding algorithms. In [@abounadi], the objective function of Q-learning is shown to be non-expansive. To summarize, the above set of arguments can be used to verify $(S5)$ for approximate asynchronous fixed point finding algorithms with non-expansive objective functions.
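The comparison bound above can be checked numerically. In the sketch below, $T$ is an orthogonal affine map (hence exactly non-expansive in the 2-norm), $D_n$ is taken to be the identity for simplicity, and the error difference is deliberately chosen of norm $a(n)^2 \in o(a(n))$; all concrete choices are our own:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # orthogonal, hence non-expansive
b = rng.normal(size=d)
T = lambda x: Q @ x + b                        # p(Tx - Ty) = p(x - y) in the 2-norm

J, Jhat = rng.normal(size=d), rng.normal(size=d)
gap0 = np.linalg.norm(J - Jhat)
budget = 0.0
for n in range(1, 2001):
    a_n = 1.0 / n
    e = rng.normal(size=d)
    e *= a_n**2 / np.linalg.norm(e)            # p(eps_n - eps_hat_n) = a(n)^2, i.e. o(a(n))
    J = J + a_n * (T(J) - J + e)               # noisy iterate
    Jhat = Jhat + a_n * (T(Jhat) - Jhat)       # error-free counterpart
    budget += a_n * a_n**2                     # accumulated a(n) * p(eps-difference)
    # per-step bound: p(J_{n+1} - Jhat_{n+1}) <= p(J_n - Jhat_n) + a(n) p(eps-diff)
    assert np.linalg.norm(J - Jhat) <= gap0 + budget + 1e-9
```

The accumulated budget converges (here to roughly $\sum_n n^{-3}$), so the gap between the two schemes stays finite, mirroring the argument above.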
Summary of our contributions and conclusions {#sec_conclusion}
============================================

In this paper, we considered a natural extension of asynchronous stochastic approximation algorithms that accommodates the use of function approximations. For this purpose, we considered asynchronous stochastic approximations with asymptotically bounded, and possibly biased, approximation errors. The assumptions and the analyses presented are motivated by the need to understand the current crop of deep reinforcement learning algorithms. We are particularly interested in these algorithms when used within the setting of multi-agent learning and control. Our framework allows for complete asynchronicity, in that each agent is guided by its own local clock. Although the agents are fully asynchronous, we require that, in the long run, they are updated roughly the same number of times. Our framework can be used to analyze asynchronous approximate value iteration (A2VI). A2VI is an adaptation of regular value iteration with noise to the setting of large-scale multi-agent learning and control. Here, we showed that A2VI converges to a fixed point of the perturbed Bellman operator when balanced step-sizes are used. We also established a relationship between these fixed points and the approximation errors. Note that the use of function approximators required us to consider the perturbed Bellman operator. We further analyzed a similar adaptation, A2PG, of the classical policy gradient iteration to the multi-agent setting. We briefly discussed how A2PG converges to a small neighborhood of local minima of the parameterized policy function. Again, this neighborhood is directly related to the approximation errors. *An important consequence of our theory is the following: stability of the aforementioned algorithms remains unaffected when the approximation errors are asymptotically bounded, although possibly biased. Since a function approximator (e.g.
DNN) is continuously trained, it is reasonable to expect the errors to diminish asymptotically, even though they may not vanish completely.* *It is worth noting that ours is one of the first theoretical results that can be used to understand the long-term behavior of deep reinforcement learning algorithms within the setting of multi-agent learning and control.* In the future, we would like to make a two-fold extension to our analysis: (i) Allow for multiple timescales and (ii) allow for objective functions that are driven by controlled Markov processes. This will help us analyze other popular algorithms such as Deep Q-Network, deep temporal difference learning and deep deterministic policy gradient (a popular actor-critic algorithm). When implementing DeepRL algorithms in an online setting, the learning rate is generally fixed. To this end, we would also wish to explore one and two timescale algorithms with constant step-sizes and function approximations. [^1]: *supported by the German Research Foundation (DFG) - 315248657.* [^2]: Department of Electrical Engineering and Information Technology, Universität Paderborn, Paderborn - 33098, Germany. [^3]: Department of Computer Science and Automation, Indian Institute of Science, Bangalore - 560012, India. [^4]: Department of Electrical Engineering and Information Technology, Universität Paderborn, Paderborn - 33098, Germany. [^5]: Recall that $\epsilon$ of (\[balance\_DI\]) is the norm-bound on the approximation errors.
--- abstract: 'The application and analysis of the Cut-and-Choose technique in protocols secure against quantum adversaries is not a straightforward transposition of the classical case, among other reasons due to the difficulty of using “rewinding” in the quantum realm. We introduce a Quantum Computation Cut-and-Choose (QC-CC) technique which is a generalisation of the classical Cut-and-Choose in order to build quantum protocols secure against *quantum covert adversaries*. Such adversaries can essentially deviate arbitrarily provided that their deviation is not detected with high probability. As an application of the QC-CC technique we give a protocol for securely performing a two-party quantum computation with classical input and output. As a basis we use the concept of secure delegated quantum computing [@bfk], and in particular the protocol for quantum garbled circuit computation of [@KW16] that has been proven secure only against weak specious adversaries (defined in [@DNS10]). A unique property of these protocols is the separation between classical and quantum communications and the asymmetry between client and server, which enables us to sidestep the issues linked to quantum rewinding. This opens the possibility of applying the QC-CC technique to other quantum protocols that have this separation. In our proof of security we adapt and use (at different parts of the proof) two quantum rewinding techniques, namely Watrous’ oblivious quantum rewinding [@Wat09] and Unruh’s special quantum rewinding [@Unr12].
Our protocol achieves the same functionality as the previous work on secure two-party quantum computing, such as [@DNS12]; however, using the Cut-and-Choose technique on the protocol from [@KW16] leads to the following key improvements: (i) only one-way offline quantum communication is necessary, (ii) only one party (server) needs to have involved quantum technological abilities, (iii) only minimal extra cryptographic primitives are required, namely one oblivious transfer for each input bit and quantum-safe commitments.'
author:
- 'Elham Kashefi$^{1,2}$, Luka Music$^2$ and Petros Wallden$^1$'
bibliography:
- 'biblio.bib'
title: 'The Quantum Cut-and-Choose Technique and Quantum Two-Party Computation'
---

Introduction
============

A key task in modern cryptography is to compute a function of many inputs given by different parties that do not trust each other and wish to maintain the privacy of their input. This is called secure multi-party computation (to name some examples: millionaire’s problem, coin tossing, voting schemes, etc). The field started with the seminal paper of Yao [@Yao86], where two parties that do not trust each other (they are “honest-but-curious”) compute a function of their joint inputs. This protocol was later made secure against malicious adversaries by employing standard (classical) techniques for boosting the security of honest-but-curious protocols to the malicious adversarial setting (e.g. using the GMW compiler as in [@GMW87]). Another such technique is the Cut-and-Choose technique, first used in this context in [@LinPin07]. The quantum analogue (secure two-party quantum computation, or 2PQC) involves the computation of a function using a quantum computer and was first examined in [@DNS10] for quantum honest-but-curious adversaries (called specious) and later made secure against malicious adversaries in [@DNS12].
The latter protocol did not use any of the standard classical boosting techniques, but instead used a stepwise quantum authentication protocol, where two-way online quantum communication was required. Moreover, both protocols use extra classical cryptographic primitives, which in the malicious case [@DNS12] is a full actively secure *classical* two-party computation primitive. The use of classical boosting techniques (such as Cut-and-Choose) for quantum protocols is complicated not only because specific care is needed when defining quantum analogues (for example, of garbled circuits), but also for technical reasons, since the rewinding method for proving security cannot be directly used in quantum protocols (as demonstrated for zero knowledge proofs [@Wat09] and zero knowledge proofs of knowledge [@Unr12]).

#### Our Contribution

1. We introduce a Quantum Computation Cut-and-Choose technique. Application of this technique is made possible because of the unique decomposition of quantum computation into a classical control and a quantum resource in measurement-based quantum computing models such as gate teleportation. This separation furthermore provides a platform for a client-server setting for secure delegated computing [@bfk].

2. We give a protocol for 2PQC with classical input and output[^1] which is secure against “quantum covert” adversaries, a notion of strong adversaries similar to the classical covert adversaries [@AumLin07] (see below for formal definition and motivation). We use the aforementioned QC-CC technique and address the subtleties in the security proof due to rewinding. Our protocol, which builds on the work of [@KW16] that gave a protocol for 2PQC secure against weak specious adversaries, resembles the original protocol by Yao [@Yao86] (e.g. asymmetry between the two parties) and in particular the one in [@LinPin07], where Yao’s protocol is boosted to the malicious case using the classical Cut-and-Choose technique.

3. 
A key obstruction when using classical techniques for boosting the security of quantum protocols is that in general rewinding the *quantum* adversary during the simulation is *not* possible. There are two known cases where rewinding can be used for quantum adversaries, namely Watrous’ oblivious rewinding [@Wat09] and Unruh’s special rewinding [@Unr12]. We adapt and use both methods in different places in order to construct the simulators and prove the security of our protocol. This is one of the few protocols in which quantum rewinding is explicitly used and the only one, to our knowledge, that uses two types of quantum rewinding.

4. [@DNS12; @DNS10] describe 2PQC protocols in a setting where the two parties are symmetric. Our protocol crucially differs in a number of points (other than using the Cut-and-Choose technique): (i) There is only one-way offline quantum communication between the parties, (ii) only one party (“server”) needs involved quantum technological abilities, while the other (“client”) only needs to prepare offline single qubits, (iii) minimal classical cryptographic primitives are required, namely oblivious transfer for input bits and quantum-safe commitments.

In Section \[prelims\] we present background material and introduce the notion of *quantum covert adversaries* (in analogy with [@AumLin07]). In Section \[quantum cc\] we introduce the quantum-computation Cut-and-Choose technique. In Section \[prot\] we give the protocol for 2PQC and prove its security in Appendix \[sec proofs\].

#### Related works.

The field of secure two- (and multi-) party (classical) computation started with Yao’s paper [@Yao86], whose protocol was proven secure against malicious adversaries in [@GMW87] using generic Zero-Knowledge proofs and in [@LinPin07] with the Cut-and-Choose technique. Covert adversaries were introduced in [@AumLin07], where again the Cut-and-Choose technique was used to achieve an even more efficient protocol.
Yao’s protocol has been used for a number of other functionalities, such as constructing non-interactive verifiable computing [@GGB10]. In the early days of quantum computation, researchers believed that quantum properties could lead to a breakthrough and achieve, with unconditional security, several (classical) multi-party cryptographic primitives. However, a series of no-go theorems ruled this out: it was first proven that bit commitment is impossible [@commit1; @commit2], then oblivious transfer [@OT_nogo], and finally [@SSS09] showed that any non-trivial functionality leaks some information to adversaries. Since then, it has been established that any such protocol is either only computationally secure or requires the existence of certain (quantum secure) simple cryptographic primitives. Closely related is the question of what assumptions are required if one wants to perform a secure *quantum* computation involving multiple parties. The case of 2PQC was addressed in [@DNS10] for quantum honest-but-curious adversaries and in [@DNS12] for malicious adversaries. The case of multiple parties was addressed in [@Ben-Or2006; @multipartyqc], where an honest majority was required. In this work we use as basis the universal blind quantum computation protocol [@bfk] and its verifiable version [@fk]. For the case of weak specious adversaries, the 2PQC was addressed in [@KW16] while the multiparty quantum computation was also addressed in [@KP16], again in a restricted setting. While we use measurement-based quantum computation (MBQC) [@onewaycomputer], similar blind and verification protocols exist in the teleportation model [@B2015], and 2PQC or MPQC protocols could be explored for that case as well as for other blind verification protocols such as [@hm2015].

Preliminaries and Security Definitions {#prelims}
======================================

Verifiable Blind Quantum Computation
------------------------------------

The model for quantum computations used in our contribution is MBQC [@onewaycomputer].
In this section we introduce MBQC and review protocols for blind quantum computation (the server performs the computation without learning the input/output or the computation) [@bfk] and verifiable blind quantum computation (the client can also verify that the computation was performed correctly) [@fk], which are based on it. The MBQC model of computation is equivalent to the circuit model, as it is based on the gate teleportation principle. One starts with a large, generic entangled state (represented by a graph) and, by choosing suitable single qubit measurements, can perform any quantum computation (circuit). The computation is fully characterised by the graph and the default measurement angles (see below), and is called a measurement pattern. See an example of a universal set of gates expressed as MBQC measurement patterns in Appendix \[example mbqc\]. For our purpose it will be simpler to consider a client-server setting. The client can prepare single qubits while the server can perform any general quantum computation. The client prepares and sends qubits in the $\ket{+}$ state and the server entangles them according to a certain computation graph by performing $\mathrm{controlled-}Z$ gates between all qubits corresponding to adjacent vertices on the graph, resulting in a *graph state* (details can be found in [@hein2004multiparty]). The computation is defined by a default measurement angle $\phi_i$ for each qubit (which depends only on the desired computation). It is carried out by having the server measure single qubits in an order defined by the flow. The actual angle of each measurement depends on $\phi_i$ and on the outcomes of previous measurements. In our setting, the client is responsible for these classical calculations to adjust the angle, and therefore the server returns to the client the result of each measurement; for more details see [@bfk]. Let $I$ and $O$ be respectively the sets of input and output qubits.
A flow is defined by a function ($f : O^c \rightarrow I^c$) from measured qubits to non-input qubits and a partial order $(\preceq)$ over the vertices of the graph such that $\forall i, i \preceq f(i)$ and $\forall j \in N_G(i), f(i) \preceq j$, where $N_G(i)$ denotes the neighbours of $i$ in graph $G$. Each qubit $i$ is $X$-dependent on $X_i = f^{-1}(i)$ and $Z$-dependent on all qubits $j$ such that $i \in N_G(j)$ (this set is called $Z_i$). The existence of such a flow in all the graphs used for computations in MBQC patterns guarantees that the number of dependencies does not blow up. Given the sets $X_i$ and $Z_i$, the computation angle for qubit $i$ needs to be adjusted as follows: let $s_i^X = \oplus_{j \in D_i^X} s_j$ and $s_i^Z = \oplus_{j \in D_i^Z} s_j$, where $s_j$ corresponds to the outcome of the measurement on qubit $j$ and $D_i^X$ and $D_i^Z$ are subsets of $X_i$ and $Z_i$ respectively (the qubits in $D_i^X$ and $D_i^Z$ have all already been measured, as they belong to the past neighbours and past neighbours of past neighbours; see the flow construction in [@DK2006]). Then the corrected angle (the one that is actually measured) is $\phi'_i = (-1)^{s_i^X}\phi_i + s_i^Z\pi$. The computation can be totally hidden from the server due to the following observation: if instead of sending $\ket{+}$ states the client chooses at random and sends $\ket{+_\theta}=1/\sqrt{2}(\ket{0}+e^{i\theta}\ket{1})$ with $\theta \in \{0, \pi/4, 2\cdot\pi/4,\ldots, 7\cdot\pi/4\}$, then measuring the qubits in a similarly rotated basis has the same result as the initial non-rotated computation. If the client keeps the angle $\theta$ hidden from the server, the server is completely blind as to which computation is being performed. To ensure that no information is leaked from the measurement outcome, we add another parameter $r_i$ for each qubit, which serves as a One-Time-Pad for the measurement outcome.
The resulting measurement angle with all parameters taken into account is then $\delta_i = C(\phi_i, s_i^X, s_i^Z, \theta_i, r_i) = \phi'_i(\phi_i, s_i^X, s_i^Z) + \theta_i + r_i\pi$. In short, the client sends rotated qubits (to become the resource state once entangled) and then guides the computation with a set of classical instructions. It is the combination of these two parts (quantum state preparation and classical instructions) that leads to the desired blind computation. This idea was formalised in the *universal blind quantum computation* (UBQC) protocol in [@bfk]. We denote collectively by $ran_i$ all the parameters that $\delta_i$ depends on other than $\phi_i$. We use bold and suppress the subscript that refers to a particular qubit in the graph to denote a full string, e.g. $\boldsymbol{\phi}:=\{\phi_1,\cdots,\phi_N\},\mathbf{ran}=\{ran_1,\cdots,ran_N\}$. We then define the past of qubits, which allows us to calculate upper bounds on the number of dependencies for various computation graphs. We define $P_i = Z_i \cup X_i$ to be the set of qubits $j$ on which qubit $i$ has an $X$ or $Z$ dependency. We define the influence-past $c_i$ of qubit $i$ to be an assignment of an outcome $b_j \in \{0, 1\}$ for all qubits $j \in P_i$. For example, the brickwork state [@bfk], which can be used for universal quantum computation, has for each qubit a single $X$-dependent qubit and at most two $Z$-dependent qubits, and so the cardinality $\abs{P_i}$ is at most $3$ for all $i$. To each influence-past $c_i$ corresponds a unique value of $\delta_i$ (the corrected measurement angle for this influence-past). Note that we will denote by $\boldsymbol{\delta}$ the set of instructions that include the measurement angles of each qubit, for all alternative influence-pasts, i.e. this is not a string of $N$ measurement angles $\delta_i$ but of $\sum_{i=1}^N|P_i|$ measurement angles of the form $\delta_i(c_i)$, where we have $|P_i|$ angles per qubit.
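The function $C$ above is purely classical and easy to sketch. The snippet below is our own illustrative code (with angles restricted to the set $A = \{0, \pi/4, \ldots, 7\pi/4\}$); it also illustrates why a uniformly random $\theta_i$ hides $\phi_i$ from the server:

```python
import numpy as np

ANGLES = [k * np.pi / 4 for k in range(8)]       # the set A = {0, pi/4, ..., 7pi/4}

def delta(phi, s_x, s_z, theta, r):
    """delta = phi' + theta + r*pi (mod 2*pi), with phi' = (-1)^{s_x} phi + s_z*pi."""
    phi_corrected = (-1) ** s_x * phi + s_z * np.pi
    return (phi_corrected + theta + r * np.pi) % (2 * np.pi)

# Blindness: as theta ranges uniformly over A, the reported angle delta is
# uniform over A, whatever the secret computation angle phi may be.
phi = ANGLES[3]
reported = {round(delta(phi, 1, 0, th, 0) / (np.pi / 4)) % 8 for th in ANGLES}
assert reported == set(range(8))
```

The one-time-pad bit $r$ simply shifts the measurement basis by $\pi$, which flips the reported outcome and thus hides the true measurement result.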
In UBQC, the server is not forced to follow the instructions and the client cannot verify if the computation is done correctly. One can modify the protocol to allow for such verification (see Theorem \[vbqc verif\]), as was first done in [@fk]. The central idea is to include trap qubits at positions unknown to the server. The client can send states from $\{\ket{0},\ket{1}\}$ (called dummies), which have the effect of breaking the graph at the corresponding vertex, removing it along with any attached edges. This can be used to generate isolated qubits in the graph in a way that is undetectable by the server. These isolated qubits do not affect the computation, while they have a deterministic outcome if measured in the correct basis. They can therefore be used as traps: the client can easily detect if one of them has been measured incorrectly, but the server is ignorant of their position in the graph. This idea was introduced in [@fk] and later optimised by different protocols, such as [@KW15], which we use here. The reason to use [@KW15], other than efficiency, is that the construction is “local” and the server can obtain some information about the true graph (needed for 2PQC) without compromising the security. The construction of the resource given a base-graph $G$ (graph that the UBQC computation without traps requires), that has vertices $v\in V(G)$ and edges $e\in E(G)$, is the following:

1. For each vertex $v_i$, we define a set of three new vertices $P_{v_i}=\{p^{v_i}_1,p^{v_i}_2,p^{v_i}_3\}$. These are called *primary* vertices.

2. Corresponding to each edge $e(v_i,v_j)\in E(G)$ of the base-graph that connects the base vertices $v_i$ and $v_j$, we introduce a set of nine edges $E_{e(v_i,v_j)}$ that connect each of the vertices in the set $P_{v_i}$ with each of the vertices in the set $P_{v_j}$.

3. We replace every edge in the resulting graph with a new vertex connected to the two vertices originally joined by that edge.
The new vertices added in this step are called *added* vertices. This is the *dotted triple-graph* $DT(G)$. The edge or vertex of the initial graph that each vertex $v\in DT(G)$ belongs to is called its *base-location*. We can see that by inserting dummy qubits among the added qubits we can break the $DT(G)$ into three copies of the same base-graph: one will be used for the computation while the other two can be used as traps. Furthermore, for each vertex base-location the choice of where to break the graph is independent of the other vertex base-locations and can be made in advance by the client. The server remains totally ignorant of this choice. This choice is called *trap-colouring*.

\[trap colouring\] We define trap-colouring to be an assignment of one colour to each of the vertices of the dotted triple-graph that is consistent with the following conditions:

1. Primary vertices are coloured in one of the three following colours: white or black (for traps), or green (for computation).

2. Added vertices are coloured in one of the four following colours: white, black, green or red.

3. In each primary set $P_v$ there is exactly one vertex of each colour.

4. Colouring the primary vertices fixes the colours of the added vertices: added vertices that connect primary vertices of different colour are red, added vertices that connect primary vertices of the same colour get that colour.

![Dotted-triple-graph for one-dimensional base graph of four qubits. Circles: primary vertices (base-location: vertex of the base-graph); Squares: added vertices (base-location: edge of the base-graph). (a) Trap-colouring. Green: computation qubits; White/black: trap qubits; Red: dummy qubits. Client chooses the trap-colouring, prepares each qubit individually and sends them one by one for the server to entangle according to the generic construction.
(b) After entangling, the breaking operation defined by the dummy qubits will reduce the graph in (a) to the computation graph and, for each vertex, a corresponding trap/tag qubit.[]{data-label="figure3"}](figure3.jpg){width="1\columnwidth"} The flow, Past of qubit $i$ and Influence-past of qubit $i$ can all be extended to the Dotted-Triple-Graph construction, with the result that each qubit still depends on a constant number of previous measurements (see [@KW15]). For completeness we give the verification protocol from [@KW15] that we use: We assume that a standard labelling of the vertices of the dotted triple-graph $DT(G)$ is known to both the client and the server. The number of qubits is at most $3N(3c+1)$ where $c$ is the maximum degree of the base graph $G$.\ $\bullet$ **Client’s resources**\ – Client is given a base graph $G$. The corresponding dotted graph state $\ket{D(G)}$ is generated by the graph $D(G)$ that is obtained from $G$ by replacing every edge with a new vertex connected to the two vertices originally joined by that edge.\ – Client is given an MBQC measurement pattern $\bbbm_{\textrm{Comp}}$ which, when applied to the dotted graph state $\ket{D(G)}$, performs the desired computation, in a fault-tolerant way that can detect or correct fewer than $\delta/2$ errors.\ – Client generates the dotted triple-graph $DT(G)$ and selects a trap-colouring according to definition \[trap colouring\], which is done by choosing independently the colours for each set $P_v$.\ – For all red vertices the Client will send dummy qubits and thus perform the break operation.\ – Client chooses the green graph to perform the computation.\ – For the white graph the Client will send dummy qubits for all added qubits $a^{e}_w$ and thus generate white isolated qubits at each primary vertex set $P_{v}$.
Similarly, for the black graph, the Client will send dummy qubits for the primary qubits $p^v_b$ and thus generate black isolated qubits at each added vertex set $A_{e}$.\ – The set $D$ of the positions of dummy qubits is chosen as defined above (fixed by the trap-colouring).\ – A binary string $\mathbf{s}$ of length at most $3N(3c+1)$ represents the measurement outcomes. It is initially set to all zeros.\ – A sequence of measurement angles, $\phi=(\phi_i)_{1 \leq i \leq 3N(3c+1)}$ with $\phi_i \in A = \{0, \pi/4, \cdots, 7\pi/4\}$, consistent with $\bbbm_{\textrm{Comp}}$. We define $\phi_i'(\phi_i,\mathbf{s})$ to be the measurement angle in MBQC, when corrections due to previous measurement outcomes $\mathbf{s}$ are taken into account (the function depends on the specific base-graph and its flow, see e.g. [@bfk]). We also set $\phi'_i = 0$ for all the trap and dummy qubits. – The Client chooses a measurement order on the dotted base-graph $D(G)$ that is consistent with the flow of the computation (this is known to the Server). The measurements within each set $P_v$ and $A_e$ of $DT(G)$ are ordered randomly.\ – $3N (3c+1)$ random variables $\theta_i$ with values taken uniformly at random from $A$.\ – $3N (3c+1)$ random variables $r_i$ and $|D|$ random variables $d_i$ with values taken uniformly at random from $\{0,1\}$.\ – A fixed function $C(i, \phi_i, \theta_i, r_i, \mathbf{s}) = \phi_i'(\phi_i,\mathbf{s}) +\theta_i + r_i \pi$ that for each non-output qubit $i$ computes the angle of the measurement of qubit $i$ to be sent to the Server.
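As an illustrative sketch (Python; the flow-corrected angle $\phi'_i(\phi_i,\mathbf{s})$ is assumed to be computed elsewhere by the flow correction function), the function $C$ reduces to:

```python
from math import pi

# The set A = {0, pi/4, ..., 7*pi/4} of allowed angles.
ANGLES = [k * pi / 4 for k in range(8)]

def measurement_angle(phi_prime, theta, r):
    """C(i, phi_i, theta_i, r_i, s) = phi'_i + theta_i + r_i*pi (mod 2*pi).

    phi_prime -- the flow-corrected computation angle phi'_i(phi_i, s),
                 assumed computed elsewhere (set to 0 for traps and dummies)
    theta     -- the Client's secret one-time-pad angle theta_i
    r         -- the Client's secret bit r_i that flips the reported outcome
    """
    return (phi_prime + theta + r * pi) % (2 * pi)
```

For a trap or dummy qubit ($\phi'_i = 0$) the reported angle is $\theta_i + r_i\pi$, a uniformly random element of $A$, so it reveals nothing about the trap's position.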
$\bullet$ **Initial Step**\ – **Client’s move:** Client sets all the values in $\mathbf{s}$ to $0$ and prepares the input qubits: [$$\begin{array}[c]{lllllllllllllll} \ket e = X^{x_1} Z(\theta_1) \otimes \ldots \otimes X^{x_l} Z(\theta_l) \ket I \end{array}$$]{} and the remaining qubits in the following form [$$\begin{array}[c]{lllllllllllllll} \forall i\in D &\;\;\;& \ket {d_i} \\ \forall i \not \in D &\;\;\;& \prod_{j\in N_G(i) \cap D} Z^{d_j}\ket {+_{\theta_i}} \end{array}$$]{} and sends the Server all the $3N (3c+1)$ qubits in the order of the labelling of the graph. – **Server’s move:** Server receives $3N(3c+1)$ single qubits and entangles them according to $DT(G)$. $\bullet$ **Step $i : \; 1 \leq i \leq 3N (3c+1)$** – **Client’s move:** Client computes the angle $\delta_i = C(i, \phi_i, \theta_i, r_i, \mathbf{s})$ and sends it to the Server.\ – **Server’s move:** Server measures qubit $i$ with angle $\delta_i$ and sends the result $b_i$ to the Client.\ – **Client’s move:** Client sets the value of $s_i$ in $\mathbf{s}$ to be $b_i + r_i$.\ $\bullet$ **Final Step:** – **Server’s move:** Server returns the last layer of qubits (output layer) to the Client.\ $\bullet$ \[step:Alice-prep\] **Verification**\ – After obtaining the output qubits from the Server, the Client measures the output trap qubits with angle $\delta_t = \theta_t + r_t \pi$ to obtain $b_t$. – Client accepts if $b_i = r_i$ for all the white (primary) and black (added) trap qubits $i$. – Client applies corrections according to the measurement outcomes $b_i$ and secret parameters $\theta_i, r_i$ at the output layer green qubits and obtains the final output. \[vbqc verif\] Protocol \[prot:KW15\] is $\epsilon_{2}$-verifiable for the Client, where $\epsilon_{2} = \qty(\frac{8}{9})^{d}$ for $d = \ceil*{\frac{\delta}{2(2c+1)}}$.
$\epsilon_2$-verifiability means that for any strategy of the server the real state is $\epsilon_2$-close (in trace distance) to $\rho_{ideal}(\rho_{in},p_{ok})$ for some $0\leq p_{ok}\leq 1$, where $\rho_{in}$ is the initial state and $\rho_f:=U(\rho_{in})$ is the desired final state: [$$\begin{array}[c]{lllllllllllllll}\rho_{ideal}(\rho_{in},p_{ok})&:=&p_{ok}\rho_f +(1-p_{ok})([\mathsf{abort}_1])\end{array}$$]{} Standard Definitions of Security for Quantum Two-Party Computations ------------------------------------------------------------------- Following [@DNS10], we have two parties $A$ and $B$ with quantum registers $\mathcal{A}$ and $\mathcal{B}$ and an extra register $\mathcal{R}$, where $\dim\mathcal{R} = \dim\mathcal{A} + \dim\mathcal{B}$. The input is denoted $\rho_{in} \in D(\mathcal{A}\otimes\mathcal{B}\otimes\mathcal{R})$, where $D(\mathcal{A})$ is the set of all possible quantum states in register $\mathcal{A}$. Let then $L(\mathcal{A})$ be the set of linear mappings from $\mathcal{A}$ to itself and let $\phi : L(\mathcal{A}) \rightarrow L(\mathcal{B})$ be a completely positive and trace-preserving superoperator, also called a *quantum operation*. Finally, let $\bbbone_{\mathcal{A}}$ be the totally mixed state and $\mathbf{1}_\mathcal{A}$ be the identity operator on register $\mathcal{A}$. We will write $U \cdot \rho$ instead of $U \rho U^{\dagger}$ and we will sometimes denote $[b] := \dyad{b}$. Let $U_f$ be the unitary which, when applied to classical inputs (computational basis) $(x, y)$, returns the classical output $f(x, y) = (f_1(x, y), f_2(x, y))$. The ideal output is $\rho_{out} = (U_f \otimes \bbbone) \cdot \rho_{in}$. Given two states $\rho_{0}$ and $\rho_{1}$, the trace norm distance is $\Delta(\rho_{0}, \rho_{1}):= \frac{1}{2}\norm{\rho_{0} - \rho_{1}}$. When $\Delta(\rho_{0}, \rho_{1}) \leq \epsilon$, any process applied to $\rho_{0}$ behaves the same as it would on $\rho_{1}$ except with probability at most $\epsilon$.
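The trace norm distance can be computed directly for density matrices; a minimal sketch (Python/NumPy, illustrative):

```python
import numpy as np

def trace_distance(rho0, rho1):
    """Delta(rho0, rho1) = (1/2) * ||rho0 - rho1||_1.

    For Hermitian matrices the trace norm is the sum of the
    absolute values of the eigenvalues of the difference.
    """
    eigs = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 * np.sum(np.abs(eigs))
```

Orthogonal pure states are at distance $1$ (perfectly distinguishable), identical states at distance $0$, matching the operational reading above.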
A function $\mu$ is *negligible in $n$* if, for every polynomial $p$, for sufficiently large $n$ it holds that $\mu(n) < \frac{1}{p(n)}$. All the proofs of security in this paper will be in the *real/ideal simulation paradigm*: when considering a party as corrupted, we will construct a simulator interacting with the ideal functionality such that the adversary cannot tell that they are not in fact interacting directly with a real-world honest party. The standard definition of security requires that the simulated and real states are exponentially close and hence indistinguishable to the adversary. We say that the $n$-step two-party strategy is $\epsilon$-private for $B$ if there exists $\epsilon(n)$ negligible in $n$ such that for all adversaries $\tilde{\mathcal{A}}$ and for all steps $i$ we have: $$\Delta\qty(v_{i}(\tilde{\mathcal{A}}, \rho_{in}), \Tr_{\mathcal{B}_{i}}\qty(\tilde{\rho}_{i}(\tilde{A}, \rho_{in}))) \leq \epsilon(n)$$ where $v_{i}(\tilde{\mathcal{A}}, \rho_{in})$ is the view of the adversary when interacting with the simulator and $\Tr_{\mathcal{B}_{i}}\qty(\tilde{\rho}_{i}(\tilde{A}, \rho_{in}))$ is the view of the adversary in the real protocol. For further details on these definitions, see [@DNS10]. Another property which a quantum protocol may satisfy is *verifiability*. Intuitively, this means that the probability of receiving a corrupted output without aborting is negligible. \[verification\] A protocol is said to be *$\epsilon$-verifiable for party $P_{i}$* if for any (potentially malicious) behaviour of party $P_{j}$ with $j \neq i$, the probability of obtaining a wrong output and not aborting is bounded by $\epsilon$.
If the output of the real protocol with malicious party $\tilde{P}_{j}$ is $\tilde{\rho}(\tilde{P}_{j}, \rho_{in})$ then we have that: $$\Delta\qty(\tilde{\rho}(\tilde{P}_{j}, \rho_{in}), \rho_{ideal}(\rho'_{in})) \leq \epsilon$$ where [$$\begin{array}[c]{lllllllllllllll}\rho_{ideal}(\rho_{in})&:=&p_{ok}(\bbbone_{\H_{P_i}}\otimes \mathcal{C}_{\H_{P_j}})\cdot U_f \cdot(\rho_{in})+(1-p_{ok})([\mathsf{abort}_1])\end{array}$$]{} where $\mathcal{C}_{\mathcal{H}_{P_{j}}}$ is the deviation that acts on $P_{j}$’s systems after they receive their outcome (a CP-map; it can be purified by including an ancilla), and $\rho'_{in} = (\bbbone_{\mathcal{H}_{P_i}} \otimes \mathcal{D}_{\mathcal{H}_{P_{j}}})\rho_{in}$ is an initial state compatible with $P_{i}$’s input, where $\mathcal{D}_{\mathcal{H}_{P_{j}}}$ is a deviation on the input by $P_{j}$. Note that since $\mathcal{C}_{\H_{P_j}}$ is performed at the final step of the protocol, we also have that the global state (before the final-step deviation, i.e. at step $n-1$) satisfies: $$\Delta\qty(\tilde{\rho}_{n-1}(\tilde{P}_{j}, \rho_{in}), \rho^{n-1}_{ideal}(\rho'_{in})) \leq \epsilon$$ where $\rho^{n-1}_{ideal}(\rho_{in}) := p_{ok} U_f \cdot (\rho_{in})+(1-p_{ok})([\mathsf{abort}_1])$. During our protocol we will use bit commitment and 1-out-of-2 oblivious transfer (or OT). Bit commitment consists of two phases, Commit and Reveal, such that after the Commit the receiver has no information about the value that has been committed (hiding), while during the Reveal the sender cannot reveal a value different from the one committed previously (binding). Each of these properties can hold either computationally or unconditionally depending on the scheme (but not both unconditionally). We suppose that all the commitments used satisfy the following *strict binding* property. \[strict bind\] Let $COM$ be a commitment scheme, a deterministic polynomial-time function taking two arguments: the opening information $a$ and the message $y$. We say $COM$ is strictly binding if for all $a, y, a', y'$ with $(a, y) \neq (a', y')$, we have that $COM(a, y) \neq COM(a', y')$.
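A hash-based sketch of such a commitment (illustrative only; SHA-256 gives computational, not perfect, strict binding):

```python
import hashlib

def commit(opening, message):
    """A hash-based sketch of COM(a, y).

    Collision resistance of SHA-256 gives computational (not perfect)
    strict binding: distinct pairs (a, y) != (a', y') with equal
    commitments would be a hash collision.  The length prefix makes
    the encoding of the pair injective, so the sender is bound to the
    pair (a, y), not merely to the concatenation a || y.
    """
    data = len(opening).to_bytes(4, "big") + opening + message
    return hashlib.sha256(data).hexdigest()

def reveal_ok(commitment, opening, message):
    """Receiver's check during the Reveal phase."""
    return commit(opening, message) == commitment
```

The length prefix matters: without it, `commit(b"a", b"bc")` and `commit(b"ab", b"c")` would collide, violating uniqueness of the committed pair.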
It means that the sender is committed to the (unique) opening information *and* the message that is being committed. A 1-out-of-2 oblivious transfer is a two-party functionality in which one party ($P_1$ in our case) has two strings $(x_0, x_1)$ and the other ($P_2$) has a bit $b \in \{0, 1\}$. At the end of the protocol $P_2$ recovers $x_b$. $P_1$ should not know which of the strings $P_2$ has chosen, while $P_2$ learns nothing about the string they did not choose, $x_{1-b}$. The following coin-tossing protocol will be needed later: 1. $P_1$ chooses $\alpha_{1} \overset{R}{\in} \{0, 1\}^{\log(s)}$ uniformly at random, commits to it and sends the commitment to $P_2$ (this commitment has to be perfectly hiding). 2. $P_2$ chooses $\alpha_{2} \overset{R}{\in} \{0, 1\}^{\log(s)}$ uniformly at random and sends it to $P_1$. 3. $P_1$ opens their commitment and reveals $\alpha_1$. 4. They both set $\alpha = \alpha_1 \oplus \alpha_2$, where $\oplus$ corresponds to the bitwise XOR ($\alpha$ is the index of the evaluation graph). Quantum Covert Adversaries -------------------------- We now introduce a new adversarial model for quantum protocols, based on the covert adversaries in [@AumLin07]. Quantum covert adversaries are also able to deviate arbitrarily from the protocol. The main difference from malicious adversaries is that when they cheat they are caught with high probability, but not necessarily with probability exponentially close to $1$. This models real-world situations where getting caught might have dire consequences for the parties, e.g. financial repercussions. By associating the correct cost to being caught, even if the probability of getting caught is not exponentially close to $1$, the deterrence might still be high enough to make cheating unappealing.
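The commit-then-reveal coin toss above can be sketched as follows (Python; the salted-hash commitment is only a stand-in — the protocol requires a perfectly hiding commitment):

```python
import hashlib
import secrets

def _commit(nonce, value):
    # Stand-in commitment: the protocol requires a *perfectly hiding*
    # commitment here, which a plain salted hash does not provide.
    return hashlib.sha256(nonce + value.to_bytes(4, "big")).hexdigest()

def coin_toss(log_s):
    """Jointly sample alpha in {0, ..., 2**log_s - 1} (the evaluation index)."""
    # 1. P1 picks alpha_1, commits, and sends the commitment to P2.
    alpha1 = secrets.randbelow(2 ** log_s)
    nonce = secrets.token_bytes(16)
    commitment = _commit(nonce, alpha1)
    # 2. P2 picks alpha_2 having seen only the commitment.
    alpha2 = secrets.randbelow(2 ** log_s)
    # 3. P1 opens; P2 verifies the opening against the commitment.
    assert _commit(nonce, alpha1) == commitment
    # 4. Both parties set alpha = alpha_1 XOR alpha_2.
    return alpha1 ^ alpha2
```

Neither party can bias the result alone: $P_2$ chooses $\alpha_2$ without knowing $\alpha_1$, and the binding commitment prevents $P_1$ from changing $\alpha_1$ after seeing $\alpha_2$.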
We say that the protocol is $\epsilon$-private for $B$ if for all adversaries $\tilde{\mathcal{A}}$ and for all steps $i$ we have: $$\Delta\qty(v_{i}(\tilde{\mathcal{A}}, \rho_{in}), \Tr_{\mathcal{B}_{i}}\qty(\tilde{\rho}_{i}(\tilde{A}, \rho_{in}))) \leq \epsilon$$ Note that we do not have any requirement on $\epsilon$: while we would like it to be close to $0$, it does not necessarily need to be negligible. Stronger and more elegant definitions of covert adversaries that might be adaptable to the quantum case can be found in [@AumLin07], but for reasons specific to our construction (namely the fact that measurement is irreversible and disturbs quantum states) they are not directly applicable here. #### Relation with specious and malicious adversaries In the classical case the covert adversary lies between the honest-but-curious and malicious ones for some choices of $\epsilon$, as shown in [@AumLin07]. Their analysis partially carries over to the quantum case: the quantum covert adversary is strictly less powerful than the fully malicious adversary, unless $\epsilon$ is negligible, in which case the two notions are trivially equivalent. The situation is not as clear with quantum specious adversaries. We only get that if the protocol is secure against covert adversaries with a certain $\epsilon$ then it is also secure against specious adversaries with the same $\epsilon$, which is not required to be negligible. The Quantum Cut-and-Choose Technique {#quantum cc} ==================================== The Cut-and-Choose method is a standard technique to boost a protocol secure against honest-but-curious adversaries to one secure against malicious adversaries, by enforcing the honest-but-curious behaviour. The classical Yao protocol, which also relies on a client/server (or garbler/evaluator) setup, was first proven secure against honest-but-curious adversaries (e.g. in [@LP04proof]) and then against malicious adversaries using this technique in [@LinPin07].
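The counting behind the classical technique can be made concrete: with one corrupted copy among $s$ and a single randomly chosen evaluation copy, cheating escapes detection exactly when the evaluation index lands on the corrupted copy. An illustrative Monte-Carlo check of this $1/s$ bound (Python; a toy model, not part of any protocol):

```python
import random
from fractions import Fraction

def undetected_prob(s, corrupted=1):
    """With s-1 check copies and one evaluation copy, cheating on
    `corrupted` copies survives only if the random evaluation index
    lands on a corrupted copy: probability corrupted/s classically."""
    return Fraction(corrupted, s)

def simulate(s, trials=20000, seed=1):
    """Monte-Carlo sanity check of the 1/s bound (one corrupted copy)."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(trials):
        corrupted_index = rng.randrange(s)   # garbler corrupts one copy
        evaluation_index = rng.randrange(s)  # evaluator's random choice
        if corrupted_index == evaluation_index:
            undetected += 1                  # never checked, used to evaluate
    return undetected / trials
```

As discussed later, against quantum adversaries the proof technique degrades this $1/s$ to $1/\sqrt{s}$.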
The garbler creates $s$ copies of the graph and the evaluator chooses which ones (the *check graphs*) they will check for consistency. If the checks pass and additional precautions are taken, the evaluator is confident that with high probability the remaining graphs (the *evaluation graphs*) were also constructed correctly and can be used for the computation. Here we will have $s$ graphs in total: $s-1$ *check graphs* and $1$ *evaluation graph*. In the classical Yao protocol, the probability of cheating and not getting caught is made negligible by using $\frac{s}{2}$ check graphs and $\frac{s}{2}$ evaluation graphs and revealing only the majority output of the evaluation graphs (not possible in the quantum case, see below). There are several caveats in this setting even classically, cf. [@LinPin07; @Kir08]. We extend this technique to quantum computations in three steps. First, we show how to verify quantum states using Quantum-State-Preparation Cut-and-Choose (QSP-CC). This ensures that the resource state for the quantum computation in VBQC is constructed correctly. Secondly, we define *Classical Instructions Cut-and-Choose* (CI-CC), using the classical Cut-and-Choose to verify that the (classical) instructions for the computation are correct. Finally, we combine the two to get *Quantum Computation Cut-and-Choose* (QC-CC). Quantum State Preparation Cut-and-Choose (QSP-CC) {#qscc} ------------------------------------------------- The intuition for this functionality is that it allows the receiver to essentially test that a state $\ket{\psi_\alpha}$ was prepared and sent correctly (as promised), up to a certain probability, without the sender revealing the classical description of that state. This procedure boosts the classical commitment scheme towards a quantum-state commitment.
Note, however, that a proper quantum-state commitment scheme would require that the receiver also obtains no information about the state, which is not true here: in the above scheme the receiver can always obtain some (partial) information by measuring the state. See Protocol \[qsp-cc\] (below) for details and Appendix \[proof\_cc\] for the proof that the state is $\frac{1}{\sqrt{s}}$-close to the correctly prepared one. Note that if we used more than one evaluation graph (as needed for boosting the success probability), the probability of successful cheating would not scale linearly with parallel repetitions of QSP-CC, due to coherent (entangled) attacks. **Set up:** Two parties Alice and Bob.\ **Input:** Alice inputs a set of $s$ pure states $\{\ket{\psi_1},\cdots,\ket{\psi_s}\}$ along with their classical descriptions $\psi_i$. Bob chooses one index $\alpha$ at random.\ **Output:** For any strategy of an adversarial Alice, there exists a $0 \leq p \leq 1$ such that Bob obtains a state $\frac{1}{\sqrt{s}}$-close to $\rho_{id}(p) := p\dyad{\psi_\alpha} + (1 - p)[\textrm{Abort}]$.\ **Protocol:**\ – Alice commits to the classical values $\psi_i$ with a quantum-safe classical commitment scheme.\ – Alice sends all the $s$ labelled quantum states and then the $s$ commitments to Bob.\ – Bob randomly chooses the index $\alpha$ and requests the opening of all commitments for $i \neq \alpha$.\ – Alice reveals all the classical values $\psi_i$ for $i \neq \alpha$.\ – Bob measures all states with index $i \neq \alpha$ in the basis $\{P_i := \dyad{\psi_i}, \bar{P}_i := I - \dyad{\psi_i}\}$ and aborts if he obtains the second outcome for any measurement. If the states $\ket{\psi_i}$ are tensor products of qubits, as is the case for the states sent by the verifier in VBQC, this measurement can be performed with local single-qubit measurements.\ – The state $\ket{\psi_\alpha}$ (not measured) is guaranteed to be $\frac{1}{\sqrt{s}}$-close to $\rho_{id}(p)$ for some $p$.
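Bob's two-outcome test on a single opened qubit can be sketched as follows (Python/NumPy; an illustrative simulation for states of the form $\ket{+_\theta}$):

```python
import numpy as np

def plus_theta(theta):
    """|+_theta> = (|0> + e^{i*theta}|1>) / sqrt(2)."""
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

def check_passes(claimed_theta, actual_state, rng):
    """Bob's two-outcome test {P_i, I - P_i} on one opened state:
    project onto the claimed |+_theta>; the check passes with
    probability |<+_theta|actual>|^2 (Born rule)."""
    p_pass = abs(np.vdot(plus_theta(claimed_theta), actual_state)) ** 2
    return rng.random() < p_pass
```

An honest Alice passes every check (overlap $1$); a state orthogonal to the claimed one is always caught, and intermediate deviations are caught with the corresponding Born probability.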
Classical Instructions Cut-and-Choose (CI-CC) --------------------------------------------- To perform the VBQC protocol, even if the resource state is correct, one needs to ensure that the classical instructions, i.e. the measurement angles for each qubit $\boldsymbol{\delta}(\boldsymbol{\phi},\mathbf{ran})$, are also correct. The subscripts to bold symbols denote different graphs. The angles $\boldsymbol{\phi}$ are public; the receiver wants to ensure that $(\boldsymbol{\delta}_\alpha,\mathbf{ran}_\alpha)$, for the evaluation graph $\alpha$, is correct (and committed) without learning $\mathbf{ran}_\alpha$. To achieve this, the sender commits to the classical instructions for all graphs $\boldsymbol{\delta}_i$ after sending the (correct) qubits $\ket{\psi_i(\mathbf{ran}_i)}$ to the receiver. When $\mathbf{ran}_i$ is opened, the receiver can deterministically decide if $\boldsymbol{\delta}_i$ is correct (w.r.t. $\boldsymbol{\phi}$). Intuitively, we would expect such a classical Cut-and-Choose to have a $\frac1s$ probability of failure, but it turns out to be $\frac{1}{\sqrt{s}}$, due to the specific proof techniques used to prove security against a quantum adversary and in particular the special rewinding from Appendix \[unruh rew\]. This will become clear in the proof in the following section. **Set up:** Two parties Alice and Bob.\ **Input:** Alice inputs a set of $s$ pure states $\{\ket{\psi_1(\mathbf{ran}_1)},\cdots,\ket{\psi_s(\mathbf{ran}_s)}\}$ and for each state a set of classical instructions $\{\boldsymbol{\delta}_1,\cdots, \boldsymbol{\delta}_s\}$ and the underlying randomness $\{\mathbf{ran}_1,\cdots,\mathbf{ran}_s\}$.
Bob chooses one index $\alpha$ at random.\ **Output:** For any strategy of an adversarial Alice involving only the classical instructions, the probability that Bob receives the wrong instructions $\boldsymbol{\delta}_\alpha$ for $\ket{\psi_\alpha(\mathbf{ran}_\alpha)}$ and does not abort is at most $\frac{1}{\sqrt{s}}$.\ **Protocol:**\ – Alice commits to the values $\boldsymbol{\delta}_i, \mathbf{ran}_i$ with quantum-safe classical commitments.\ – Alice sends all the $s$ labelled quantum states and then the $s$ commitments to Bob.\ – Bob randomly chooses the index $\alpha$ and requests the opening of all commitments for $i \neq \alpha$.\ – Alice reveals all the classical values $\boldsymbol{\delta}_i, \mathbf{ran}_i$ for $i \neq \alpha$.\ – Bob verifies that all the instructions are computed correctly and aborts otherwise.\ – The remaining set of instructions $\boldsymbol{\delta}_\alpha$ is correct up to probability $\frac{1}{\sqrt{s}}$. The receiver can use the remaining committed instructions $\boldsymbol{\delta}_\alpha$ to drive the computation by asking the sender to open the instructions corresponding to the measurement outcomes (influence-past). This differs from the classical case, where the circuit evaluation is non-interactive. Quantum Computation Cut-and-Choose (QC-CC) {#qc-cc} ------------------------------------------ We can now introduce a Cut-and-Choose technique for quantum computation. The sender can deviate in any way. By combining CI-CC with QSP-CC and using the commitments during the (quantum) computation, the evaluator knows (with high probability) that they have been asked to perform the correct quantum computation.
**Set up:** Two parties Alice and Bob, a (quantum) computation $f$ hidden in the pairs $(\mathbf{ran}_i,\boldsymbol{\delta}_i)$.\ **Input:** Alice inputs a set of $s$ pure states $\{\ket{\psi_1(\mathbf{ran}_1)},\cdots,\ket{\psi_s(\mathbf{ran}_s)}\}$ and for each state a set of classical instructions $\{\boldsymbol{\delta}_1,\cdots, \boldsymbol{\delta}_s\}$, the underlying randomness $\{\mathbf{ran}_1,\cdots,\mathbf{ran}_s\}$ and the classical description of each state. Bob chooses one index $\alpha$ at random.\ **Output:** For any adversarial Alice, the probability of performing the wrong computation and not aborting is $\order{\frac{1}{\sqrt{s}}}$. **Protocol:**\ – Alice commits to the classical values $\psi_i(\boldsymbol{ran}_i)$ and $\boldsymbol{\delta}_i, \mathbf{ran}_i$ as in protocols \[qsp-cc\] and \[i-cc\] respectively.\ – Alice sends all the $s$ labelled quantum states and then the $s$ commitments to Bob.\ – Bob randomly chooses the index $\alpha$ and requests the opening of all commitments for $i \neq \alpha$.\ – Alice reveals all the classical values $\psi_i(\boldsymbol{ran}_i), \boldsymbol{\delta}_i, \mathbf{ran}_i$ for $i \neq \alpha$.\ – Bob performs the same verifications as in protocols \[qsp-cc\] and \[i-cc\] and aborts similarly.\ – Alice reveals a subset of instructions $\boldsymbol{\delta}_\alpha$, as required by the protocol they wish to perform (but keeps $\mathbf{ran}_\alpha$ secret).\ – Bob uses these along with the state $\ket{\psi_\alpha}$ to perform the desired computation. From Protocol \[qsp-cc\], the state $\ket{\psi_\alpha(\mathbf{ran}_\alpha)}$ is $\frac1{\sqrt{s}}$-close to the ideal state. From Protocol \[i-cc\], the pair $(\boldsymbol{\delta}_\alpha,\mathbf{ran}_\alpha)$ is constructed correctly up to probability $\frac{1}{\sqrt{s}}$. It follows that the computation is performed correctly up to probability $O(\frac1{\sqrt{s}})$.
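A numeric illustration of how the two layers combine (the constant $2$ below is a simple union bound assumed for illustration, not the exact constant of the formal proof):

```python
from math import sqrt

def qc_cc_failure_bound(s):
    """Union bound over the two Cut-and-Choose layers: the evaluation
    state is 1/sqrt(s)-close to ideal (QSP-CC) and the instructions
    are wrong with probability at most 1/sqrt(s) (CI-CC)."""
    return 2 / sqrt(s)

def graphs_needed(target):
    """Smallest power-of-2 s with 2/sqrt(s) <= target
    (the protocol takes s to be a power of 2)."""
    s = 1
    while qc_cc_failure_bound(s) > target:
        s *= 2
    return s
```

The quadratic gap discussed later is visible here: a classical $1/s$ bound would reach a given target with quadratically fewer graphs than the quantum $1/\sqrt{s}$ bound.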
We suppose that we have access to an oracle $O^f$ which, upon being given the secret parameters $\mathbf{ran}_i, \psi_i(\boldsymbol{ran}_i)$ and a subset of the instructions $\boldsymbol{\delta}_i$ sufficient to drive a single specific computation on the resource state $\ket{\psi_i(\mathbf{ran}_i)}$, returns the output $f_i$ produced by using these instructions on the (correct) state $\ket{\psi_i(\mathbf{ran}_i)}$. Let us construct a simulator for an adversarial Alice: 1. The simulator runs the protocol normally until Alice reveals her commitments: it receives the qubits and commitments, chooses a random index $\alpha$ and receives the openings of the commitments for $i \neq \alpha$. 2. Using the special rewinding (Appendix \[unruh rew\]), it rewinds the simulation and chooses at random a second index $\alpha'$ (for known $\mathbf{ran}_{\alpha'}, \psi_{\alpha'}(\boldsymbol{ran}_{\alpha'})$, $\boldsymbol{\delta}_{\alpha'}$). This (classical) part of the protocol can be viewed as three steps: commitment (sending the commitments), challenge (choosing the random $\alpha$) and response (revealing the commitments). Furthermore, it has two properties: *special soundness* and *strict soundness*. Intuitively, special soundness means that given two correct communication transcripts with different challenges, an extractor is able to compute a witness (the simulator recovers the secret values). Strict soundness means that given a commitment and challenge, there is a unique acceptable response: here the only accepted response is to decommit the correct committed values (due to perfect and strict binding). Following the analysis in [@Unr12], protocols possessing such properties are secure against malicious quantum adversaries, and their security proofs can use rewinding against quantum adversaries to extract a witness, evading the issues naive rewinding faces due to no-cloning. This rewinding can only be done once, as per the A-style definition of $\Sigma$-protocols in [@Unr12].
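Special soundness in this setting can be made concrete: two accepting transcripts with distinct challenges $\alpha \neq \alpha'$ open the complementary subsets $\{i \neq \alpha\}$ and $\{i \neq \alpha'\}$, whose union covers all $s$ graphs. A minimal sketch (Python; illustrative, with transcripts modelled as challenge/openings pairs):

```python
def extract_all_secrets(transcript1, transcript2):
    """Special-soundness extractor: two accepting transcripts with
    different challenges alpha != alpha' open the sets {i != alpha}
    and {i != alpha'}, which together cover every graph's secrets."""
    (alpha1, openings1), (alpha2, openings2) = transcript1, transcript2
    assert alpha1 != alpha2, "need two *different* challenges"
    recovered = dict(openings1)
    recovered.update(openings2)  # union covers all indices
    return recovered
```

With $s = 4$ and challenges $1$ and $2$, the first run opens $\{0, 2, 3\}$ and the second $\{0, 1, 3\}$, so the simulator recovers the secrets of all four graphs.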
Lemma \[q rew state\] shows that after this step the distance between the real execution and the simulation is bounded by $\frac{1}{\sqrt{s}}$. Importantly, the clear separation between classical information and quantum states in protocols based on [@fk] is what makes the rewinding possible on the classical part of the protocol. 3. The simulator performs all the checks for $i \neq \alpha'$ and aborts in the same cases as an honest party would. 4. Alice reveals a subset of $\boldsymbol{\delta}_{\alpha'}$ needed to perform the computation. 5. The simulator can send this subset, along with the corresponding $\mathbf{ran}_{\alpha'}, \psi_{\alpha'}(\boldsymbol{ran}_{\alpha'})$ which were acquired previously, to the oracle $O^f$ and recover the correct output $f_{\alpha'}$, which is returned to Alice at the end of the computation. The distance between the ideal and real execution is bounded by $O(\frac1{\sqrt{s}})$. In addition to the arguments given in QSP-CC against using $s/2$ evaluation graphs with quantum states, we give an attack against our protocol showing concretely why this is not applicable: a malicious $P_1$ encodes a teleportation of $P_2$’s input in the trap qubits of one graph. Since the results of all measurements are given back, $P_1$ can recover $P_2$’s input if the corrupted graph is one of those evaluated. This attack succeeds with probability $\frac{\textrm{number of evaluation graphs}}{s}$, so we cannot hope for a better security bound than an inverse polynomial with this version of the protocol. Classically, evaluating an incorrect circuit has minimal influence when using $s/2$ evaluation circuits where only the majority output is returned. Here, guaranteeing that the majority of evaluation circuits is correct would not be enough (the evaluation of a circuit gives extra information for the sake of verifiability).
A final crucial observation is that this proof provides an example where proving security against a quantum adversary is hard, even for a classical functionality (CC). The part of the protocol that needs rewinding is entirely classical, and the same proof (and extra cost) is necessary even for a fully classical CC protocol (single evaluation graph). The failure probability goes from $\frac{1}{s}$ to $\frac{1}{\sqrt{s}}$ for quantum adversaries. This kind of quadratic gap is unavoidable against quantum adversaries, unless (possibly) one uses totally different proof techniques: it is not sufficient to use cryptographic primitives resistant against quantum computers (e.g. based on LWE); the proof techniques (and security parameters) should also be modified. The 2PQC Protocol {#prot} ================= #### Ideal functionality. - **Inputs:** Each party has a classical input, $x, y \in \{0, 1\}^{n}$ for $P_1$ and $P_2$ respectively. The adversary has auxiliary input $z \in \{0, 1\}^{*}$. Honest parties send their input to the trusted party computing $f = (f_1, f_2)$; a corrupted party $i$ either sends $\mathsf{abort}_i$ or any input of length $n$ (computed in poly-time from their input and auxiliary input). - **Computation by the trusted party:** If the trusted party receives an inconsistent input or $\mathsf{abort}_i$ from any party $i$, it returns $\mathsf{abort}_i$ to both parties. Otherwise the trusted party first sends $f_1(x', y')$ to $P_1$ ($x' = x$ if $P_1$ is honest, similarly for $P_2$), who can then choose to abort if corrupted (by sending $\mathsf{abort}_1$, which the trusted party forwards to $P_2$). If not, then $P_2$ receives $f_{2}(x', y')$ from the trusted party. - **Outputs:** Honest parties output what they received from the trusted party, corrupted parties have no output, and the adversary outputs an arbitrary BQP function of their inputs and outputs.
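The trusted-party behaviour above can be sketched as follows (Python; an illustrative model of the ideal functionality, with `None` standing in for $\mathsf{abort}_i$ or an inconsistent input):

```python
def trusted_party(f1, f2, x, y, p1_aborts_after_output=False):
    """Sketch of the ideal functionality for classical inputs x, y.

    Mirrors the asymmetry in the text: P1 receives f1(x, y) first,
    and a corrupted P1 may then send abort_1, in which case P2 gets
    only the abort instead of f2(x, y).
    """
    if x is None or y is None:            # abort_i or inconsistent input
        return ("abort", "abort")
    out1 = f1(x, y)
    if p1_aborts_after_output:            # corrupted P1 withholds P2's output
        return (out1, "abort_1")
    return (out1, f2(x, y))
```

The ordering encodes the standard unfairness of two-party computation: $P_1$ learns its output before deciding whether $P_2$ gets anything.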
We will use the security parameters $s$ (the number of graphs for CC) and $n$ (the size of the inputs) throughout the paper. $P_1$ and $P_2$ will denote the garbler/client and the evaluator/server respectively. #### High-level overview $P_1$ and $P_2$ have already chosen a VBQC graph computing $f = (f_1, f_2)$ fault-tolerantly. $P_1$ chooses and commits to the randomness for the $s$ versions of the graph (the angles $\theta$ of the states and the flips $r$ in the measurements) and also to all the corresponding measurement angles according to the flow, the possible input measurement angles for both parties, the decryption keys for $P_2$’s output (also according to the flow) and the positions of the traps among $P_2$’s output qubits. For every input bit of $P_2$, they perform a 1-out-of-2 OT at the end of which $P_2$ learns the measurement angles for their input qubits for all graphs (doing the OTs before sending the qubits is essential for the security proof; it allows the simulator to recover the adversary’s input before constructing the graphs). They then perform a QC-CC protocol: the qubits of each graph are the states $\ket{\psi_i(\mathbf{ran}_i)}$, the commitments are $\mathbf{ran}_i, \psi_i(\boldsymbol{ran}_i)$ and $\boldsymbol{\delta}_i$; they choose the evaluation graph with a coin-tossing protocol, $P_1$ reveals the commitments of the check graphs and $P_2$ verifies them as well as the states. Then they perform the evaluation with the VBQC protocol, with $P_1$ decommitting to the instructions (measurement angles). At the end they perform a simple key-exchange protocol so that $P_2$ may decrypt their output. If both parties are honest and follow the steps of the protocol then the protocol is correct. If the parties are honest, all the graphs and commitments are correct. The protocol (restricted to the evaluation graph) is equivalent to the normal VBQC, with the evaluator ($P_2$) keeping part of the output.
The last step of the protocol allows $P_2$ to decrypt this output in just the same way as $P_1$ would in a regular execution. Moreover, all the checks pass and there is no abort. The correctness directly follows from the correctness of the VBQC protocol. \[thm:qyao\_privacy\] Assume that the oblivious transfer protocol is $\epsilon_{2}$-private against malicious adversaries and that the commitments are perfectly hiding and binding. Let $c$ be the maximum degree of the graph, $\delta$ the number of errors tolerated by the fault-tolerant encoding and $s$ the number of graphs constructed as part of the CC. If the protocol is $\epsilon_{2}$-verifiable for $P_1$, then it is $\epsilon_{2}$-private against a malicious $P_2$ and $\epsilon_{1}$-private against a covert $P_1$, where $\epsilon_{2} = \qty(\frac{8}{9})^{d}$ for $d = \ceil*{\frac{\delta}{2(2c+1)}}$ and $\epsilon_{1} = \frac{1}{\sqrt{s}}$. *Proof Sketch.* The proof follows from Lemma \[sec P1\] ($\epsilon_1$-privacy against a covert $P_1$) and Lemma \[sec P2\] ($\epsilon_2$-privacy against a malicious $P_2$) (see the detailed proof in Appendix \[sec proofs\]). The simulator for an adversarial $P_1$ (Lemma \[sec P1\]) is very similar to the one in the proof for Protocol \[qc-cc\]: it obtains one set of values from a first run (running as usual until $P_1$ reveals the commitments), then rewinds the adversary to get a second set and recovers the secret parameters of the adversary, which it then sends to the ideal functionality, thus getting the ideal output. The simulator runs the evaluation graph with a random input, encrypts the ideal output and returns it to the adversary. The simulator for an adversarial $P_2$ (Lemma \[sec P2\]) relies on the construction of a graph which has deterministic output (see Lemma \[fake graph\]). The simulator recovers the adversary’s input with the OTs and sends it to the ideal functionality, from which $P_2$’s ideal output is obtained.
It then constructs a graph which always produces this output and hides it among the remaining $s-1$ graphs, which are constructed correctly. The simulator biases the choice of the evaluation graph by rewinding the coin-toss so that this special graph is chosen. The checks pass and $P_2$ evaluates the fake graph and gets the correct output. **Input:** $P_1$ has input $x \in \{0, 1\}^{n}$ and $P_2$ has input $y \in \{0, 1\}^{n}$.\ **Auxiliary input:** Functions $f_1$ and $f_2$, a security parameter $s$ (which is a power of $2$) and the description of a fault-tolerant MBQC pattern $C$ such that $C(x, y) = (f_1(x, y), f_2(x, y))$ for classical inputs $(x, y)$.\ **Output:** Party $P_1$ should receive $f_1(x, y)$ and party $P_2$ should receive $f_2(x, y)$.\ **The protocol:** 1. $P_1$ constructs $s$ copies of the dotted-triple-graph computing $C$ using independent randomness (at this point $P_1$ just chooses everything for the different graphs; there is no need to prepare the qubits yet). 2. $P_1$ constructs commitments to: 1. All $\theta_{i,q}^{j}$, $r_{i,q}^{j}$ (these correspond to the $\mathbf{ran}_i$, while the $\theta_{i,q}^{j}$ alone correspond to $\ket{\psi_i(\mathbf{ran}_i)}$ in the QC-CC protocol; all the other commitments correspond to $\boldsymbol{\delta}_i$) and $\prescript{\textsc{k}}{}\delta_{i,q}^{j}$, where $i$ runs over all $s$ graphs, $q$ is the index of the base location in the graph, $j$ corresponds to the index of computation, trap and dummy qubits for this particular base location $q$ (in the order they appear in the dotted-triple-graph $i$, so if the first qubit for this base location is a trap, $\theta_{i,q}^{1}$ will be the angle associated with that trap) and $\textsc{k}$ runs over all possible correction values (previous measurements) affecting this position, according to the flow (the positions of the traps and dummies in all graphs are given implicitly through these values). They also commit to the values of the dummy qubits. 2.
For their own and $P_2$’s input, they commit to both possible versions of $\prescript{}{b}\delta_{i,q}^{j}$ where $b \in \{0,1\}$, in permuted order for their own input and in the correct order for $P_2$’s input. 3. They commit to all potential keys (a One-Time-Pad) for each of $P_2$’s output qubits (according to the flow). 4. They commit to the positions of the computation qubits, dummies and traps in the last layer of computation for $P_2$’s output qubits. 3. $P_1$ and $P_2$ participate in $n$ instances of a 1-out-of-2 OT protocol, where in each one $P_1$’s inputs are the sets $\qty{decommit(\prescript{}{0}\delta_{i,q}^{j})}_{0 \leq i \leq s-1}$ and $\qty{decommit(\prescript{}{1}\delta_{i,q}^{j})}_{0 \leq i \leq s-1}$ and $P_2$’s input is the bit $b_{q}$ corresponding to position $q$. At the end of step 3, $P_2$ receives the decommitments to the measurement angles corresponding to all their inputs (for the same binary value across all graphs). 4. $P_1$ and $P_2$ perform the QC-CC protocol (\[qc-cc-prot\]): 1. $P_1$ sends the qubits in the states $\ket*{+_{\theta_{i,q}^{j}}}$ 2. Then $P_1$ sends the commitments 3. They both pick the random graph index $\alpha$ using the coin-tossing Protocol \[coin-toss\] 4. $P_1$ opens the commitments from 2.(a), 2.(b), 2.(c), 2.(d) for every graph whose index is not $\alpha$ 5. $P_2$ performs checks and outputs $\mathsf{abort}_2$ and halts if any of the checks fail (the checks are the following: the $\delta$s are correctly constructed and are compatible with the choice of $\phi$, $r$ and $\theta$; the traps and dummies are in the correct place (from commitments 2.(a) and 2.(d)); the decryption keys are correct; the values they received for their input via the OTs are consistent (they are in the correct place with regard to their input bit); they verify that all $\ket*{+_{\theta_{i,q}^{j}}}$ are correct by measuring them in the $\theta_{i,q}^{j}$ basis). 6.
$P_1$ then opens the values from the commitments to the $\delta$s in 2.(b) corresponding to their actual binary input for graph $\alpha$. $P_2$ entangles the qubits according to the dotted-triple-graph and evaluates this graph by asking $P_1$ to open the values of $\prescript{\textsc{k}}{}\delta_{i,q}^{j}$ in 2.(a) for <span style="font-variant:small-caps;">k</span> corresponding to the measurement values they obtained on each qubit. If any of the traps is measured incorrectly, $P_1$ privately raises a flag *corrupted*. This corresponds to the evaluation phase of Protocol \[prot:KW15\]. 5. At the end of the computation they perform the following key-release step: 1. $P_2$ measures all the qubits in the final layer (output qubits) according to the corresponding $\prescript{\textsc{k}}{}\delta_{i,q}^{j}$, as decommitted by $P_1$. 2. $P_2$ sends back all the measurement outcomes corresponding to $P_1$’s output qubits and commits to the ones corresponding to their own output qubits. 3. $P_1$ checks all the traps on their qubits and outputs $\mathsf{abort}_1$ if any fail or if the flag *corrupted* was raised during the computation; otherwise they decrypt their output using their decryption keys and set the decryption as their output. 4. $P_1$ reveals the positions of traps and dummies in the final layer of the computation by decommitting 2.(d) 5. For these positions $P_2$ reveals the commitments of 5.(b) 6. $P_1$ checks that these traps were measured correctly and outputs $\mathsf{abort}_1$ if any fail; otherwise they decommit the decryption keys in 2.(c) corresponding to $P_2$’s output. 7. $P_2$ decrypts their output, sets the decryption as their final output and ends the protocol.

Acknowledgments {#acknowledgments .unnumbered}
===============

Funding from EPSRC grants EP/N003829/1 and EP/M013243/1 is acknowledged.
Mapping of a universal set of gates in the MBQC framework {#example mbqc}
========================================================

We present here diagrams taken from [@bfk] showing how to translate a universal set of gates to the MBQC model using the brickwork graphs.

![Translation of H, $\pi/8$, CNOT and identity gates[]{data-label="mbqc"}](mbqc_paterns.jpg){width="0.95\columnwidth"}

Proof of QSP-CC {#proof_cc}
===============

We have two CP-maps, one corresponding to the real protocol and one to the simulation, and need to show that these maps are $\epsilon$-close. We define $P_0 := I \otimes_{i=1}^s [\psi_i]$ and $P_j := I \underset{i\neq j}{\bigotimes} [\psi_i] \otimes I_{j}$ for all $1 \leq j \leq s$. It is clear that $P_j P_k = P_0$ if $j \neq k$ and $\comm{P_j}{P_k} = 0$. Moreover, we write $P_j = P_0 + P_j^c$ and can easily see that $P_j^c P_k^c = \delta_{j, k}P_j^c$. We also define $p_2 = \Tr(P_0\rho)$ and $p_1 = \frac{1}{s}\underset{i}{\sum}\Tr(P_i\rho) \leq p_2 + \frac{1}{s}(1 - p_2)$. We have the following two CPTP maps: $$\Phi_1(\rho)=\frac{1}{s}\overset{s}{\underset{i = 1}{\sum}} P_i\rho P_i + (1 - p_1)[\textrm{Abort}]$$ $$\Phi_2(\rho)=P_0\rho P_0 + (1 - p_2)[\textrm{Abort}]$$ The real protocol corresponds to Bob acting on some state $\rho$ sent by Alice by applying the CPTP map $\Phi_1(\cdot)$, i.e. checking whether the states sent are correct after randomly choosing one state to be left unmeasured. We will show that it is $\epsilon$-close to the map $\Phi_2(\cdot)$, resulting in $\rho_{id}(p)$. Let $\ket{\psi}$ be a purification of $\rho$; we need to show: $$\Delta(P_0\dyad{\psi}P_0, \frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i) \leq \epsilon$$ We use the sub-normalised fidelity and its relation with the trace distance.
We will use the following properties and definitions from [@DFPR13]: $\tilde{\Delta}\qty(\rho, \sigma) \leq \sqrt{1-\tilde{F}^2(\rho, \sigma)}$, $\tilde{F}(\rho, \sigma) = F(\rho, \sigma) + \sqrt{(1 - \Tr\rho)(1 - \Tr\sigma)}$ and $F^2(\ket{\phi}, \sigma) = \expval{\sigma}{\phi}$. Let $\sigma_1 = \frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i$, $\sigma_2 = P_0\dyad{\psi}P_0$, $p_2 = \Tr\sigma_2$ and $p_1 = \Tr\sigma_1 \leq p_2 + \frac{1}{s}(1 - p_2)$. It is straightforward to see that $F(\sigma_1, \sigma_2) = p_2$. We obtain: $$\tilde{F}(\sigma_1, \sigma_2) \geq p_2 + \sqrt{(1 - p_2)(1 - p_2 - \frac{1}{s}(1 - p_2))} \approx p_2 + (1-p_2)(1 - \frac{1}{2s}) \geq 1 - \frac{1}{2s}$$ assuming $s \gg 1$. It follows that: $$\tilde{\Delta}(\sigma_1, \sigma_2) \leq \sqrt{1 - (1 - \frac{1}{2s})^2} \approx \sqrt{\frac{1}{s}}$$ We have $\Phi_1(\rho) = \sigma_1 + (1 - p_1)[\textrm{Abort}]$ and $\Phi_2(\rho) = \sigma_2 + (1 - p_2)[\textrm{Abort}]$ and using the above expressions we get: $$\Delta(\Phi_1(\rho), \Phi_2(\rho)) \leq \sqrt{\frac{1}{s}}$$ By tracing out all but the unmeasured system $\alpha$, Bob has (in the simulated case) a state $\frac{1}{\sqrt{s}}$-close to $\rho_{id}(p_2) = p_2[\psi_c] + (1 - p_2)[\textrm{Abort}]$.

Quantum Rewinding
=================

Classically, the simulator runs the adversary (choosing its input $x, z, r$) internally and rewinds it by having black-box access to the *next message* function $m_{i+1} = V(x,z, r, m_1, \ldots, m_i)$, where $m_1, \ldots, m_i$ are the previous messages. The simulator has to save all messages so that it can send them again later, which is impossible in the quantum setting (due to no-cloning). We present two techniques, given in [@Wat09] and [@Unr12], which achieve a similar result with different constraints, show that they are applicable for the simulators of our protocol and calculate the success probability for both cases.
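For reference, the classical rewinding just described can be sketched in a few lines (the next-message function below is a hypothetical toy verifier of our own, not part of any protocol in this paper):

```python
# Toy sketch of classical black-box rewinding: the verifier's next-message
# function is deterministic in (x, z, r) and the transcript so far, so the
# simulator can simply replay a stored transcript prefix with a different
# message -- exactly the copying that no-cloning forbids in the quantum case.

def next_message(x, z, r, messages):
    # Stand-in deterministic verifier: its reply depends on its randomness r
    # and on the last message it received.
    last = messages[-1] if messages else 0
    return (r + last) % 2

def two_transcripts(x, z, r):
    resp0 = next_message(x, z, r, [0])   # first run, challenge 0
    # "Rewind": re-invoke the function on the stored prefix with a
    # different challenge.
    resp1 = next_message(x, z, r, [1])
    return resp0, resp1

assert two_transcripts(x=0, z=0, r=1) == (1, 0)
```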
Watrous’ Oblivious Quantum Rewinding {#wat rew}
------------------------------------

Let $Q$ be a unitary acting on the pure state $\ket{\psi}\ket{0^{k}}$. We first apply $Q$ to $\ket{\psi}\ket{0^{k}}$ and then measure the first qubit in the computational basis. Let $p(\psi) \in (0, 1)$ be the probability that this measurement outcome is $0$. Then there are unique unit vectors $\ket{\phi_{0}(\psi)}$ and $\ket{\phi_{1}(\psi)}$ such that: $$\label{Eq:Watrous} Q\ket{\psi}\ket{0^{k}} = \sqrt{p(\psi)}\ket{0}\ket{\phi_{0}(\psi)} + \sqrt{1 - p(\psi)}\ket{1}\ket{\phi_{1}(\psi)}$$ Lemma 8 from [@Wat09] gives a procedure, constructed from $Q$, that outputs a state close to $\ket{\phi_{0}(\psi)}$ for any $\ket{\psi}$. Here $Q$ represents an attempt at simulating for some cheating adversary, and $\ket{\psi}$ is the internal state of this adversary. Getting $0$ means the simulation was successful, while getting $1$ corresponds to failure and the need to rewind. Lemma 8 from [@Wat09] states that this is possible if $p(\psi)$ is non-negligible and independent of $\psi$. Rewinding gives a state $\epsilon$-close to that of a successful simulation for any exponentially small $\epsilon$ with polynomially many rewinds. In our case the rewinding takes place during the coin-tossing part of the simulation for $P_2$. Let $\hat{\alpha}$ be the random evaluation graph index chosen by the simulator at the beginning of the proof. After the coin-tossing phase of the protocol, before verifying if the simulation has succeeded or not, the state of the system is in product form (i.e.
the random choices of $\alpha$ and $\hat{\alpha}$ are depicted as an equal superposition, but are totally uncorrelated): $$\ket{\Phi_f}:=\qty(\overset{s}{\underset{\alpha = 1}{\sum}}c_{\alpha}\ket{\phi(\psi, \alpha)}\ket{\alpha})\qty(\overset{s}{\underset{\hat{\alpha} = 1}{\sum}}\frac1{\sqrt{s}}\ket{\hat{\alpha}})$$ where the coefficients satisfy $\underset{\alpha}{\sum} \abs{c_\alpha}^2 = 1$, $\ket{\phi(\psi, \alpha)}$ is the (normalised) state at the end of the protocol given initial state $\psi$ and choice of graph $\alpha$, while the last part ($\ket{\hat{\alpha}}$) corresponds to the random choice of graph made by the simulator. The projection onto the subspace that does not need rewinding (where $\alpha = \hat{\alpha}$) is given by: $P_0 := \overset{s}{\underset{\alpha' = 1}{\sum}} I \otimes \dyad{\alpha'} \otimes \dyad{\alpha'}$. We have: $$\ket{\Phi_f} = P_0\ket{\Phi_f} + (I - P_0) \ket{\Phi_f}$$ To bring it into the form of Eq. (\[Eq:Watrous\]) we rewrite it. We define the (normalised) states: $\ket{\phi_0(\psi)}=\overset{s}{\underset{\alpha = 1}{\sum}} c_{\alpha}\ket{\phi(\psi, \alpha)}\ket{\alpha}\ket{\alpha};\quad \ket{\phi_1(\psi)}=\overset{s}{\underset{\alpha = 1}{\sum}}\, \underset{\hat{\alpha} \neq \alpha}{\sum} \frac{c_{\alpha}}{\sqrt{s - 1}}\ket{\phi(\psi, \alpha)} \ket{\alpha} \ket{\hat{\alpha}}$ Then we have: $$\ket{\Phi_f} = \sqrt{\frac{1}{s}}\ket{\phi_0(\psi)} + \sqrt{1 - \frac{1}{s}}\ket{\phi_1(\psi)}$$ Now, following the unitary action that led to $\ket{\Phi_f}$, we perform the measurement $\{P_0, I - P_0\}$ and store the outcome in the value of an extra qubit (the first one): $$\ket{\Phi_f} = \sqrt{\frac{1}{s}}\ket{0}\ket{\phi_0(\psi)} + \sqrt{1 - \frac{1}{s}}\ket{1}\ket{\phi_1(\psi)}$$ Finally, we note that this is exactly in the form of Eq. (\[Eq:Watrous\]), where $p(\psi) = \frac{1}{s}$ is constant (independent of $\psi$). Further details can be found in [@Wat09], proof of Lemma 8.
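The fact that $p(\psi) = \frac{1}{s}$ holds independently of the adversary’s amplitudes $c_\alpha$ can be checked numerically on a toy instance (our own sketch; the $\ket{\phi(\psi,\alpha)}$ register is dropped, since $P_0$ acts as the identity on it and only terms diagonal in $\alpha$ survive):

```python
import numpy as np

# Check that <Phi_f| P0 |Phi_f> = 1/s for P0 = sum_a I (x) |a><a| (x) |a><a|,
# with random adversary amplitudes c_a.
s = 4
rng = np.random.default_rng(0)

c = rng.normal(size=s) + 1j * rng.normal(size=s)
c /= np.linalg.norm(c)                 # amplitudes c_alpha, sum |c_a|^2 = 1

# |Phi_f> = (sum_a c_a |a>) (x) (sum_ahat |ahat> / sqrt(s))
phi_f = np.kron(c, np.ones(s) / np.sqrt(s))

# P0 projects onto the span of |a>|a> (the "no rewind needed" subspace)
P0 = np.zeros((s * s, s * s))
for a in range(s):
    e = np.zeros(s)
    e[a] = 1.0
    v = np.kron(e, e)
    P0 += np.outer(v, v)

p = np.vdot(phi_f, P0 @ phi_f).real
assert np.isclose(p, 1 / s)            # p(psi) = 1/s, whatever c is
```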
Unruh’s Special Quantum Rewinding {#unruh rew}
---------------------------------

Watrous’ lemma only ensures that the simulation is successful, but *no information* is kept between two rewinds (hence *oblivious rewinding*). In the simulator for a covert $P_1$ we need two transcripts in order to recover their input (which is otherwise secret), so another type of rewinding is necessary. We present the conditions under which Unruh’s rewinding method [@Unr12] applies, then show that these conditions are met in our protocol and calculate an upper bound on the distance between the actual and simulated (rewinding to extract input) runs. Let $\Pi$ be a protocol between $P_1$, with input $(x, w)$, and $P_2$, with input $x$ and output in $\{0, 1\}$, with three messages: a commitment $com$ by $P_1$, a challenge $ch$ sampled (efficiently) uniformly at random by $P_2$ from the set $C_x$ (membership in $C_x$ has to be easy to decide), and a response $resp$ by $P_1$. $P_2$ accepts (outputs $0$) by a deterministic poly-time computation on $(x, com, ch, resp)$ (such a transcript is called an *accepting conversation* for $x$). Such a protocol has *special soundness* if there is a deterministic poly-time algorithm $\mathsf{K}_{0}$ (the *special extractor*) such that for any two accepting conversations $(com, ch, resp)$ and $(com, ch', resp')$ for $x$ with $ch \neq ch'$, it outputs a witness $w := \mathsf{K}_{0}(x, com, ch, resp, ch', resp')$. Such a protocol has *strict soundness* if for any two accepting conversations $(com, ch, resp)$ and $(com, ch, resp')$ for $x$, we have $resp = resp'$. *Canonical extractor ([@Unr12]).* The extractor runs the first step of the adversary to recover $com$ (each step is a separate unitary operation), chooses two values $ch, ch' \in C_{x}$, runs the second step with $ch$ to get $resp$, applies the inverse of the second step (this is the rewinding) and reruns the second step with $ch'$ to get $resp'$ before applying $\mathsf{K}_{0}$.
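As a classical point of comparison (a toy of our own with deliberately insecure parameters, playing no role in the quantum protocol), the Schnorr $\Sigma$-protocol has special soundness, and the final $\mathsf{K}_{0}$ step of the canonical extractor looks as follows:

```python
# Toy classical Sigma-protocol (Schnorr-style, insecure toy parameters)
# illustrating special soundness: two accepting conversations with the
# same commitment and different challenges determine the witness w.
p, q, g = 467, 233, 4        # g has order q modulo p (toy values)
w = 57                       # prover's witness
h = pow(g, w, p)             # public statement: h = g^w mod p

k = 101                      # prover's commitment randomness
com = pow(g, k, p)

def respond(ch):             # prover's response to a challenge
    return (k + ch * w) % q

def accepts(com, ch, resp):  # verifier's deterministic check
    return pow(g, resp, p) == (com * pow(h, ch, p)) % p

def K0(com, ch, resp, ch2, resp2):
    # special extractor: w = (resp - resp2) / (ch - ch2) mod q
    return ((resp - resp2) * pow(ch - ch2, -1, q)) % q

ch1, ch2 = 5, 12
r1, r2 = respond(ch1), respond(ch2)
assert accepts(com, ch1, r1) and accepts(com, ch2, r2)
assert K0(com, ch1, r1, ch2, r2) == w
```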
Since each response is uniquely determined by the commitment and the challenge, if the measured response of the adversary is correct then they must have sent a state close to the real response; therefore this does not disturb the internal state of the adversary too much. In our case, $com$ corresponds to step 5 (sending commitments), $ch$ is step 6 (the result of the coin-toss) and $resp$ is step 7 (revealing commitments). We calculate the distance between the internal state of the adversary in the real protocol, where only the commitments for the check graphs are opened, and the one after the rewinding in the simulation, where all commitments have been opened. We consider the opening of the commitments as a measurement performed on a state shared with the adversary. Let $P_i$ denote the projector corresponding to one run of this part of the protocol where the challenge is $i$ (analogous to the $P_{ch}^{*}$ in [@Unr12]). This corresponds to revealing all commitments but one, corresponding to the graph of index $i$. We define $\bar{P}_i = \bbbone - P_i$. There is a “correct” projector $P$ that projects onto the subspace answering all $s$ tests, i.e. $PP_i = P$ for all $i$. The strict soundness property means that there is a unique classical response to each challenge. This intuitively means that two projections $P_i$ and $P_j$, acting on the same subsystem, are either the identity or identical, which essentially means that they commute, i.e. $P_i P_j = P_j P_i$. On the other hand, the property of special soundness means that any two of these tests, when both successful, allow the simulator to recover a “witness”. This means that $P_i P_j = P$ for all $i \neq j$. \[q rew state\] Let $\ket{\psi}$ be any state and let $\qty{P_i}_{1 \leq i \leq s}$ and $P$ be as defined above. For $\epsilon = \frac{1}{s}$ we have that: $$\max_{\ket{\psi}}\Delta\qty(P\dyad{\psi}P,\frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i) \leq \sqrt{\epsilon}$$ The proof follows the one above for QSP-CC.
We again use the sub-normalised fidelity. Let $\rho = P\dyad{\psi}P$ and $\sigma = \frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i$. We further define $P_i = P + P_i^c$ and $p = \expval{P}{\psi}$. It follows directly from special soundness that $P_i^c P_j^c = \delta_{i, j} P_i^c$. Since $P_i^c, P_j^c, P$ are orthogonal and the sum of the traces (probabilities) of the $P_i^c$-terms cannot exceed $(1 - p)$, we then have that: $$\Tr\sigma := \Tr (\frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i) = \frac{1}{s}\underset{i}{\sum}\expval{(P + P_i^c)}{\psi} = p + \frac{1}{s}\underset{i}{\sum}\expval{P_i^c}{\psi} \leq p + \frac{1}{s}(1 - p)$$ $F^2\qty(P\dyad{\psi}P, \frac{1}{s}\underset{i}{\sum}P_i\dyad{\psi}P_i) = p^2$ Assuming $\epsilon = \frac{1}{s}$ where $s \gg 1$, a simple calculation gives: $$\tilde{F}(\rho, \sigma) \geq 1 - \frac{\epsilon}{2}\quad\Rightarrow\quad \tilde{\Delta}\qty(\rho, \sigma) \leq \sqrt{1 - (1 - \frac{\epsilon}{2})^2} \approx \sqrt{\epsilon}$$ This result is independent of $\ket{\psi}$ (and $p$) and thus completes the proof. We have $\Phi_1(\cdot)\stackrel{\sqrt{\epsilon}}{\approx} \Phi_2(\cdot)$, where $\Phi_1(\cdot)$ is the CP-map corresponding to the projection onto the $P$ subspace, while $\Phi_2(\cdot)$ is the CP-map corresponding to the real protocol operation (the projection onto one of the $P_i$ subspaces, randomly chosen from the $s$ possible challenges). The simulated view is $\sqrt{\epsilon}$-close to the real protocol. Given this measurement: (a) if the simulator accepts (measurement result $P$), the state is close to that of the real protocol; (b) if it rejects, it is identical to the real protocol (since, upon abort, the exact state of the parties is irrelevant).

Proof of Security {#sec proofs}
=================

All the following proofs will be carried out in the OT-hybrid model.
This means that parties can communicate with one another directly but can also rely, in certain steps of the protocol, on an ideal call to a trusted party (also called an oracle) performing an oblivious transfer. During the simulation, the simulator replaces this trusted party and receives all inputs that the adversary sends to it. We make this assumption only in order to make the proof clearer and simpler; in a real protocol this ideal functionality can then be replaced with any OT protocol which is secure against malicious quantum adversaries. We start by proving the $\epsilon$-verifiability for the client. Protocol \[qyao cc protool\] is $\epsilon_{2}$-verifiable for $P_1$, where $\epsilon_{2} = \qty(\frac{8}{9})^{d}$ for $d = \ceil*{\frac{\delta}{2(2c+1)}}$. In this proof we consider that $P_1$ is honest while $P_2$ is malicious. During the first steps of the protocol, $P_2$ only participates in the perfectly secure OT, at the end of which they receive their input measurement angles, based on their choice of deviated input $\hat{y}$. $P_2$ then receives the qubits of all the graphs and the commitments. These are perfectly hiding, so no information can be recovered from them until $P_1$ decides to send the decommitment values. Furthermore, deviating at this point on the qubits is equivalent to deviating later. Because the coin-tossing protocol used is proven secure (the same rewinding techniques as the ones used in the proofs for $P_1$ and $P_2$ can be used), a malicious $P_2$ cannot bias the result. After the result of the coin-toss is known, $P_2$ receives the decommitments for all graphs but the evaluation graph, about which they therefore remain blind. The evaluation part of the protocol follows exactly the same pattern as the VUBQC protocol in [@KW15], with the difference that instead of announcing the angles of the measurements, $P_1$ sends the corresponding decommitments, which in the case of an honest $P_1$ is perfectly equivalent.
This protocol has been proven to be $\epsilon_{2}$-verifiable for $P_1$ in [@KW15], see Theorem \[vbqc verif\]. After the computation $P_2$ measures the output qubits and has to commit to the results of the measurements, of which they then have to reveal the traps. $P_1$ can therefore verify that all the traps were measured correctly. The last remaining deviation the server is allowed to perform is a further computation on their binary output, which is by definition allowed even in the ideal case. From this analysis it follows that the exact same verification properties from the protocol in [@KW15] hold for this protocol, namely that our protocol is $\epsilon_{2}$-verifiable for $P_1$, where $\epsilon_{2} = \qty(\frac{8}{9})^{d}$ for $d = \ceil*{\frac{\delta}{2(2c+1)}}$, which completes the proof. The honest behaviour of $P_2$ is essentially enforced by the traps (verifiability), while the Cut-and-Choose technique forces $P_1$ to behave honestly as well.

Security Against Covert $P_1$ {#secP1}
-----------------------------

#### Intuition

The proof is an adapted version of Lindell and Pinkas’ proof for a malicious $P_1$ [@LinPin07] and of the proof for covert adversaries of [@AumLin07]. The intuition is that if they choose to cheat by constructing the graphs or the OTs incorrectly, then they will get caught with probability at least $\epsilon_{1}$ during the opening phase of the protocol, whereas if they do construct the graphs and OTs correctly then they act the same as an honest party would. The simulator runs the protocol normally until the opening of the commitments by $P_1$. Then, since the coin toss is random even with a malicious $P_1$, the simulator performs quantum rewinding (see [@Unr12]) and learns the rest of the commitments. When the covert $P_1$ reveals their input measurement angles, the simulator, having obtained the related randomness, can deduce $P_1$’s bit-input and send it to the ideal functionality.
It then runs the evaluation graph with a random input and returns the output received from the ideal functionality (note that the correct decoding key is known to the simulator from the rewinding). \[sec P1\] Protocol \[qyao cc protool\] is $\epsilon_{1}$-private against a covert $P_1$, where $\epsilon_{1} = \frac{1}{\sqrt{s}}$. Let us construct the simulator in the following way: 1. The simulator runs the protocol normally until step 3: it participates in the OTs with a random input $y'$ instead of $P_2$’s actual input. 2. It then acts as the simulator in Protocol \[qc-cc-prot\]: it receives the qubits and commitments, participates in the coin-tossing protocol, receives the openings, rewinds, performs the coin-tossing again and receives the other set of openings, performs all the verifications and aborts as an honest party would. 3. The adversary decommits the values of their input angles for the new evaluation graph; the simulator deduces $P_1$’s deviated binary input $\hat{x}$ from the knowledge of the secret parameters and sends it to the ideal functionality. It gets in return the output value $f_{1}(\hat{x}, y)$ for this deviated input. 4. It then performs the computation on graph $\alpha'$ using the random input $y'$. At the end of the computation it replaces the bits of the computation positions for $P_1$’s output with the bits received from the ideal functionality (after correcting them using the decryption keys known thanks to the rewinding) and returns this to the adversary, while committing to what it computed as $P_2$’s output. When $P_1$ reveals the positions of the traps and dummies in the last layer, the simulator verifies that they were traps, as an honest $P_2$ would, and opens the corresponding positions. It receives the keys from the adversary, outputs whatever the adversary would, and halts.
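Step 3 of the simulator, deducing $P_1$’s input bit from a decommitted angle, can be sketched under the UBQC-style convention $\delta = \phi + \theta + r\pi + x\pi \pmod{2\pi}$ for input qubits (the helper name and the omission of flow corrections are our simplification):

```python
# Sketch (our simplification): angles stored as integer multiples of pi/4.
# For an input qubit the decommitted angle is assumed to be
#   delta = phi + theta + r*pi + x*pi  (mod 2*pi),
# so knowing the committed randomness (theta, r) and the computation
# angle phi, the simulator recovers the input bit x.

def extract_input_bit(delta, phi, theta, r):
    # All arguments are integers in units of pi/4, taken modulo 8.
    rest = (delta - phi - theta - 4 * r) % 8   # what remains is x*pi = 4x
    assert rest in (0, 4), "decommitment inconsistent with a single input bit"
    return rest // 4

# Example: theta = 3*pi/4, r = 1, phi = pi/2 and hidden input bit x = 1.
delta = (2 + 3 + 4 * 1 + 4 * 1) % 8
assert extract_input_bit(delta, phi=2, theta=3, r=1) == 1
```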
The fact that the simulation is the same up to a negligible factor whether we use a random index chosen by $P_2$ or a coin-tossing protocol can be formalised in terms of a series of games, involving an ideal functionality for coin-tossing (such an ideal functionality takes as input a dummy input $\lambda$ from $P_2$ and returns to both players the same random string $\alpha$): **Game 1.** Here the simulator runs the protocol as usual, with no rewinding. **Game 2.** We replace the protocol for coin-tossing with the ideal functionality. Because the protocol for coin-tossing is secure against malicious quantum adversaries, the distance between the first game and the second is negligible. **Game 3.** Here the simulator sends directly to the covert $P_1$ a challenge chosen at random instead of calling the ideal functionality for coin-tossing. This is equivalent to the setup $(com, ch, resp)$ of [@Unr12]. This is indistinguishable from the previous game from the point of view of the adversary. **Game 4.** Now we use quantum rewinding, so the simulator sends two challenges to $P_1$; this is equivalent to the rewinding performed in [@Unr12]. According to the analysis performed in Lemma \[q rew state\], here the distance is $\frac{1}{\sqrt{s}}$. **Game 5.** We perform the switch the other way: we replace the simulator sending the random challenges by two calls to the ideal functionality (there is no problem with calling this ideal functionality twice, as the input is a dummy input). This is once more indistinguishable for the adversary. **Game 6.** Again we switch back: the ideal functionality calls are replaced by the real coin-tossing protocol. The distance is also negligible and this game represents exactly what happens during our simulation. The result of this game-based analysis is that the distance between the real execution of the protocol and the simulation as we perform it for this step is bounded by $\frac{1}{\sqrt{s}}$, up to a negligible quantity.
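Schematically, the game hops compose via the triangle inequality (this display is ours; $\mathrm{negl}$ denotes the negligible losses from swapping the real coin-toss and its ideal functionality):

$$\Delta(\text{Game 1}, \text{Game 6}) \le \underbrace{\mathrm{negl}}_{1\to 2} + \underbrace{0}_{2\to 3} + \underbrace{\tfrac{1}{\sqrt{s}}}_{3\to 4} + \underbrace{0}_{4\to 5} + \underbrace{\mathrm{negl}}_{5\to 6} \le \tfrac{1}{\sqrt{s}} + \mathrm{negl}$$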
Following the analysis in Appendix \[unruh rew\] and Lemma \[q rew state\], we get that after the rewinding we have (with $(\hat{x}, y)$ being the input after the adversary’s deviation, equivalent to the $\rho_{in}'$ from the definition of verifiability): $$\Delta\qty(v_{i}(\tilde{\mathcal{P}}_1, \hat{x}), \Tr_{\mathcal{P}_{2, i}}\qty(\tilde{\rho}(\tilde{A}, \hat{x}))) \leq \frac{1}{\sqrt{s}}$$ The state at the end of the simulation is therefore $\epsilon_1=\frac{1}{\sqrt{s}}$-close to the real execution.

Security Against Malicious $P_2$ {#secP2}
--------------------------------

We first prove a lemma showing how to construct a graph evaluating to a given fixed output for any input, in a way indistinguishable from a regular computation producing that output for a given (possibly different) input. \[fake graph\] Given the value $f_2(x, \hat{y})$, there exists a dotted-triple-graph, along with commitments identical to those of the evaluation graph, which, when evaluated for any classical inputs $x'$ and $y'$, returns the fixed output $f_2(x, \hat{y})$ and is indistinguishable for the evaluator (even when given the decommitments for the evaluation graph in Protocol \[qyao cc protool\]) from a DT(G) computing $(f_1(x, y), f_2(x, y))$ for any $x$ and $y$. The graph is constructed as such: - $P_1$ chooses all the parameters ($\theta_{q}^{j}$, $r_{q}$, $\prescript{\textsc{k}}{}\delta_{q}^{j}$, and $\prescript{}{b}\delta_{i,q}^{j}$ for inputs) at random for all the base-location qubits apart from those corresponding to $P_2$’s output. - For those base locations $q$, for the computation qubits $j$, $P_1$ chooses at random $\theta_{q}^{j}$ and $b_{q} \in \{0, 1\}$ and sets $\prescript{\textsc{k}}{}\delta_{q}^{j} = \theta_{q}^{j} + b_{q}\pi$ for all choices of <span style="font-variant:small-caps;">k</span>.
$P_1$ then sets the corresponding decryption keys to $\prescript{\textsc{k}}{}k_{q} = f_{2}(x, \hat{y})_{q} \oplus b_{q}$ for all choices of <span style="font-variant:small-caps;">k</span>. - Then $P_1$ prepares all qubits in the correct state for the base locations, but instead of sending the correct ones for the “added” qubits on the edges linking $P_2$’s output qubits to the rest of the graph, $P_1$ sends dummy qubits (in states chosen from $\{\ket{0}, \ket{1}\}$). This has the effect of disconnecting these qubits from the rest of the graph. Since the dummies isolate the output of $P_2$ from the rest of the graph, $P_2$ obtains the result $b_{q}$ when measuring the last layer, whatever the previous measurement outcomes are. Thus, after applying the decryption keys, $P_2$ obtains $f_{2}(x, \hat{y})$. The indistinguishability follows directly from the server’s blindness in the VBQC protocol and the hiding property of the bit commitment scheme: since the random $\theta_{q}^{j}$s are not revealed at any point, the server cannot get any information from the decommitments if they are done as for the evaluation graph in Protocol \[qyao cc protool\].

#### Intuition.

The following proof combines the proofs for a malicious $P_2$ from [@LinPin07] and [@AumLin07]. The simulator first extracts $P_2$’s deviated input $\hat{y}$ (once again $(x, \hat{y})$ plays the same role as the $\rho_{in}'$ in the verifiability definition) from the OT-protocols, which it then uses to call the ideal functionality and receive $f_2(x, \hat{y})$. Then it chooses at random one graph index for which it constructs the associated graph such that it always computes this value as $P_2$’s output (as in Lemma \[fake graph\]). It then uses the rewinding technique from [@Wat09] to bias the coin-toss so that this index is picked as the evaluation graph. The other graphs are constructed correctly and all checks pass.
For the evaluation it then follows the same steps as in the original protocol, guaranteeing that $P_2$ will receive $f_2(x, \hat{y})$ at the end, because the computation over a single graph is verifiable and secure against a malicious $P_2$. While the property of $\epsilon_2$-verifiability for $P_1$ guarantees that the protocol is close to the ideal one, the previous lemma gives the construction of a simulator of this ideal execution. Here we also justify the coin-tossing protocol: if $P_2$ chose the evaluation graph, we would be unable to obtain the simulator for $P_2$. \[sec P2\] If Protocol \[qyao cc protool\] is $\epsilon_{2}$-verifiable for the client, then it is $\epsilon_{2}$-private against a malicious server. Let us construct the simulator in the following way: 1. The simulator chooses at random the values of $\prescript{}{b}\delta_{i,q}^{j}$ for $P_2$’s input and organizes these values in $2n$ sets of length $s$: $\qty{\prescript{}{0}\delta_{i,q}^{j}}_{0 \leq i \leq s-1}$ and $\qty{\prescript{}{1}\delta_{i,q}^{j}}_{0 \leq i \leq s-1}$ for all qubits $q$ corresponding to $P_2$’s input. 2. It then invokes the adversary and receives from them what they would have sent to the trusted party computing the OT: $\hat{y} = (\hat{y}_{1}, \ldots, \hat{y}_{m})$, which is the actual input that the adversary intended to use. The simulator returns the corresponding sets of inputs. Here the simulated and real states are identical. 3. The simulator calls the ideal functionality and receives the value $f_{2}(x, \hat{y})$. It chooses the index of the evaluation graph $\alpha$, constructs all the check graphs normally (only now the values of $\theta_{i,q}^{j}$ and $r_{i,q}^{j}$ are determined by $\prescript{}{b}\delta_{i,q}^{j}$ and $\phi_{b}$) and constructs the evaluation graph as in the proof of Lemma \[fake graph\]. It commits to all those values as it would in the real protocol, in the same order, as well as to the decryption keys from Lemma \[fake graph\].
The simulator sends the qubits and gives all these commitments to $P_2$. The ideal and real situations are again indistinguishable, as shown in Lemma \[fake graph\]. 4. The simulator needs to “bias” the coin toss so that it outputs the chosen $\alpha$: - It first generates a perfectly binding commitment $\hat{c}$ to a random value $\alpha_{1}$ and sends it to the adversary. - Then it receives $\alpha_{2}$ from the adversary. - If $\alpha_{1} \oplus \alpha_{2} = \alpha$, the simulator continues by decommitting $\alpha_{1}$. Otherwise it rewinds back to the beginning of this step, with fresh randomness and a new value for $\alpha_{1}$. It should be noted that *no information is kept between rewinds*, as the rewinding is only used to ensure that the adversary is forced to pick our fake graph as the evaluation graph. The probability of success of the rewinding is $\frac{1}{s}$, which is independent of the initial state of the adversary. We can therefore use the oblivious quantum rewind technique from [@Wat09] for this step of the simulation. At this point $\mel{\phi_{0}(\psi)}{\rho(\psi)}{\phi_{0}(\psi)} \geq 1 - \epsilon$, where $\ket{\psi}$ is the state of the adversary before this step, $\ket{\phi_{0}(\psi)}$ corresponds to the state after a success happening in one try, while $\rho(\psi)$ is the state at the end of the rewinding process. The expected number of rewinds is $\order{\frac{s^2}{s - 1} \log{\frac{1}{\epsilon}}}$. Here we want an $\epsilon$ negligible in $n$, so $\log{\frac{1}{\epsilon}} = \order{n}$ is sufficient and we get $\order{ns}$ rewinds. More details can be found in Appendix \[wat rew\]. 5. The simulator opens all commitments as in the protocol for the checks. These correspond to the correct graphs and thus all pass. Nothing is leaked about the evaluation graph, and so the ideal and real states are indistinguishable. 6. The simulator then runs the computation on the modified evaluation graph, acting as $P_1$, until the end of the protocol.
In the end, it outputs whatever the adversary outputs and halts. Let $k^{P_2}$ be the key committed as the decryption key for $P_2$’s output as part of the fake graph construction in Lemma \[fake graph\]. At the end of the computation, $P_2$ is in possession of the state $\Tr_{\H_{P_1}}\qty(E_{k^{P_2}}(f(x, \hat{y}))) = E_{k^{P_2}}(f_{2}(x, \hat{y}))$, where $E_{k^{P_2}}(z)$ denotes the encryption as a quantum state of the classical value $z$ under the key $k^{P_2}$. The simulator then reveals the key by decommitting it, and $P_2$ can decrypt their outcome. The server’s view of the real protocol after receiving the key is: $$\Tr_{\H_{P_1}}\qty(E_{k^{P_2}}\qty(\tilde\rho^{n-1}(\tilde{P}_{2}, \hat{y})))$$ From Eq. (\[eq:verification2\]) (noting that the partial trace is a distance non-increasing operation): $$\Delta\qty(\Tr_{\H_{P_1}}\qty(E_{k^{P_2}}\qty(\tilde\rho^{n-1}(\tilde{P}_{2}, \hat{y}))), \Tr_{\H_{P_1}}\qty(E_{k^{P_2}}(\rho^{n-1}_{ideal}(\hat{y})))) \leq \epsilon_{2}$$ We have shown that the simulated view is $\epsilon_{2}$-close to the real view of $P_2$. [^1]: Note that even though we evaluate a classical function, we still need quantum computation if this function cannot be efficiently computed (in the honest case) with a classical computer (e.g. functions that involve factoring), and thus classical techniques are not applicable.
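The coin-toss bias in step 4 of the proof admits a purely classical caricature. In the sketch below, addition mod $s$ stands in for the protocol's XOR combiner, the adversary function is a placeholder sampling $\alpha_2$ independently of the fresh $\alpha_1$ (which is hidden by the perfectly binding commitment), and all quantum aspects — exactly what the technique of [@Wat09] is needed to control — are ignored. Each attempt succeeds with probability $1/s$ whatever $\alpha_2$ the adversary chooses, so the expected number of rewinds is $s$:

```python
import random

random.seed(1)
s = 8          # number of garbled graphs (toy value)
target = 5     # index of the simulator's fake graph

def adversary_alpha2(commitment):
    # stand-in adversary: any strategy that cannot depend on the fresh alpha_1,
    # since alpha_1 is hidden inside a perfectly binding commitment
    return random.randrange(s)

def tries_until_success():
    tries = 0
    while True:
        tries += 1
        alpha1 = random.randrange(s)              # fresh randomness per rewind
        alpha2 = adversary_alpha2("commit(alpha1)")
        if (alpha1 + alpha2) % s == target:       # toy combiner for the XOR
            return tries                          # success: decommit alpha_1
        # otherwise: "rewind", keeping no information between attempts

mean_tries = sum(tries_until_success() for _ in range(4000)) / 4000
assert abs(mean_tries - s) < 1.0   # geometric with success probability 1/s
```

Since $(\alpha_1 + \alpha_2) \bmod s$ is uniform for any fixed $\alpha_2$, the per-attempt success probability is $1/s$ regardless of the adversary's strategy, mirroring the claim above that the rewinding success probability is independent of the adversary's initial state.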
--- author: - 'Shin <span style="font-variant:small-caps;">Nakamura</span>[^1]' title: 'Comments on Chemical Potentials in AdS/CFT' --- Introduction ============ The application of gauge/gravity correspondence to quark-hadron physics has recently attracted much attention. In particular, AdS/CFT at finite baryon density is important since there is a technical difficulty called the “sign problem” (see for example, Ref. ) in lattice QCD when introducing the finite baryon chemical potential. Holographic descriptions of systems at finite baryon density have been studied in a number of papers and many interesting results have been obtained.[@KSZ; @HT; @ParSah; @NSSY; @KMMMT; @Bergman; @DGKS; @UBC; @KSZ-2; @NSSY-2; @Par; @Kyusyu; @KB; @MMMT; @EKR; @Mats]$^{,}$[^2] However, there are issues that are still under debate. One of them is related to the holographic definition of the chemical potential. It is known that the global flavor symmetry is promoted to the local (gauge) symmetry on the flavor brane on the gravity-dual side. The $U(1)_{\mbox{\scriptsize B}}$ symmetry, which is the diagonal part of the global flavor symmetry, corresponds to the $U(1)$ gauge symmetry on the flavor brane. Thus, we can naturally identify the “electric charge” on the flavor brane as a bulk counterpart of the $U(1)_{\mbox{\scriptsize B}}$ charge. However, we have several ways of identifying the baryon chemical potential with the bulk field. It is natural to relate the non-normalizable mode of $A_{0}$, the zeroth component of the $U(1)$ gauge field on the flavor brane, to the chemical potential since it is the conjugate field to the electric charge. However, we need to establish the dictionary in a gauge-invariant way.[^3] There are at least two methods of defining a gauge-invariant quantity related to the boundary value of $A_{0}$: 1. $\mu=\frac{1}{\beta}\int^{\beta}_{0}dt A_{0}|_{\rho=\infty}$    (Definition 1), 2.
$\mu=\int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho F_{\rho 0}$        (Definition 2), where $F_{\rho 0}\equiv \partial_{\rho}A_{0}-\partial_{0}A_{\rho}$ and $\rho$ is the radial coordinate of the bulk geometry whose boundary is located at $\rho=\infty$. $\rho_{\mbox{\scriptsize min}}$ is the point where the flavor brane terminates inside the bulk. Depending on the setup and the dynamics, the brane may terminate at the horizon of the bulk geometry (black hole embeddings) or elsewhere (Minkowski embeddings). The first definition is also gauge invariant in finite-temperature systems since the Euclidean time direction is compactified. If we assume a static configuration of the $U(1)$ gauge field, the above quantities are reduced to $A_{0}(\infty)$ and $A_{0}(\infty)-A_{0}(\rho_{\mbox{\scriptsize min}})$, respectively. The latter can also be equivalent to $A_{0}(\infty)$ if we choose $A_{0}(\rho_{\mbox{\scriptsize min}})=0$; however, a crucial difference between them is whether or not we allow the constant shift of $A_{0}$ as a physically meaningful degree of freedom. Definition 2, or Definition 1 with $A_{0}(\rho_{\mbox{\scriptsize min}})=0$, has been used in Refs.  and while Definition 1 has recently been employed successfully in Refs. and (and also in the pioneering Ref. ). The difference between the two definitions disappears on black hole embeddings because we must set $A_{0}=0$ at the horizon where the Euclidean time circle shrinks to zero, while the difference can survive in principle on Minkowski embeddings. However, the degree of freedom of the constant shift of $A_{0}$ has also been fixed at finite baryon-charge density even on Minkowski embeddings (see for example, Ref. ). Thus, a natural question arises: when and why is the degree of freedom of the constant shift forbidden? In this paper, we will answer this question in a model-independent way. 
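In the static case the difference between the two definitions is exactly the constant shift $A_{0}(\rho_{\mbox{\scriptsize min}})$, which Definition 2 is blind to. A quick numerical illustration with an arbitrary toy profile (the profile, the shift, and the finite cutoff standing in for $\rho=\infty$ are illustrative choices, not solutions of any brane equations of motion):

```python
import numpy as np

rho = np.linspace(1.0, 50.0, 20001)   # rho = 50 stands in for rho = infinity

A0_shift = 0.3                        # constant shift of A_0
A0 = A0_shift + 1.0 - 1.0 / rho       # arbitrary toy static profile A_0(rho)

mu_def1 = A0[-1]                      # Definition 1: A_0 at the boundary

F_rho0 = np.gradient(A0, rho)         # static case: F_rho0 = A_0'
h = rho[1] - rho[0]
mu_def2 = float(np.sum(0.5 * (F_rho0[1:] + F_rho0[:-1]) * h))  # trapezoid rule

assert abs(mu_def2 - (A0[-1] - A0[0])) < 1e-8    # Def. 2 = A_0(inf) - A_0(rho_min)
assert abs(mu_def1 - mu_def2 - A0_shift) < 1e-8  # the definitions differ by the shift
```

Changing `A0_shift` moves Definition 1 but leaves Definition 2 untouched; this is precisely the constant-shift ambiguity discussed in the text.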
The key point is the consistency of the definition of the chemical potential with the Legendre transformation and the thermodynamic relations. In §\[Legendre\], we demonstrate how the identification of the chemical potential is related to the thermodynamic potentials and the Legendre transformation by using a toy model to visualize the problem. In §\[proposal\], we reinterpret this demonstration. We will see that Definition 2 (and its generalized version) is naturally selected, at least at finite charge density where the Legendre transformation is well-defined. We find that the absence of the constant-shift degree of freedom of $A_{0}$ results from the fact that the grand potential in the gravity dual contains two terms: one of them is the source term, which corresponds to (the expectation value of) the charge density in YM theory, and the other is the charge projection operator, which is explained in §\[proposal\]. Therefore, the absence of the constant-shift degree of freedom of $A_{0}$ holds in general when the model has these two terms. In §\[proposal\], we also propose a general method for defining the baryon (and other) chemical potentials in general setups such as those containing the mass of baryons. In the discussion section, we consider a case where we have a nontrivial charge distribution along the $\rho$ direction. We point out that the definition of the chemical potential may be more complicated in the presence of a nontrivial charge distribution in the bulk. Unfortunately, the argument presented in this paper does not apply to the case where the free energy is independent of the chemical potential (if such a sector exists). Indeed, all the known sectors where Definition 1 plays an important role are charge-less Minkowski embeddings on which this property is realized. [@Bergman; @DGKS; @UBC; @Kyusyu; @KB; @MMMT; @Mats] In this sense, we are not going to dispute the validity of Definition 1 in this special sector in the present work. 
Consistency of Legendre transformation {#Legendre} ====================================== We consider a system where the $U(1)$ charges are present on the flavor branes. Here, we ignore the dynamics of the charges and we assume they are massless and localized at $\rho=\rho_{\mbox{\scriptsize min}}$. The model may not be very close to phenomenologically realistic setups; however, it is sufficiently close to allow us to observe an important feature related to the chemical potential. The total Lagrangian[^4] of the system is given by $$\begin{aligned} \int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho{\cal L} =\int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho {\cal L}_{\mbox{\scriptsize DBI}}-QA_{0}(\rho_{\mbox{\scriptsize min}}),\end{aligned}$$ where ${\cal L}_{\mbox{\scriptsize DBI}}$ is the DBI Lagrangian of the flavor branes. We have assumed translational (and rotational) symmetry of the system; all the bulk fields depend only on $\rho$, and integrals over the other directions have already been evaluated. The amount of $U(1)_{\mbox{\scriptsize B}}$ charge is $Q$, which is understood to be the thermal expectation value if we are in the grand canonical ensemble, whereas it is a control parameter in the canonical ensemble. The minus sign in front of the source term originates from the fact that the charge induced on the flavor brane is always opposite to the quark charge inserted in the D3-branes (where the YM theory is applicable) in the picture before replacing the D3-branes with the near-horizon geometry. We need to define the on-shell Lagrangian to specify the thermodynamic potentials.
Here the meaning of on-shell is that the total Lagrangian satisfies the equations of motion, including $$\begin{aligned} \partial_{\rho}\frac{\partial{\cal L}}{\partial A'_{0}} =\frac{\partial{\cal L}}{\partial A_{0}}=-Q\delta(\rho-\rho_{\mbox{\scriptsize min}}), \label{eom}\end{aligned}$$ as well as the boundary conditions in such a way that all the charges are on the brane: $$\begin{aligned} \left. \frac{\partial{\cal L}}{\partial A'_{0}} \right|_{\infty}=-Q, \ \ \ \ \left. \frac{\partial{\cal L}}{\partial A'_{0}} \right|_{\rho_{\mbox{\scriptsize min}}}=0. \label{bound-cond}\end{aligned}$$ We ignore the scalar fields on the brane since they do not contribute within the context of the present section (see Appendix \[scalar\]). Chemical potential as $A_{0}(\infty)$ {#wrong-mu} ------------------------------------- Let us start with Definition 1 where we regard $A_{0}(\infty)$ as the chemical potential. The grand potential $\Omega$, which is consistent with the thermodynamic relation $Q=-\partial \Omega/\partial \mu$ in this case, is given (up to $\mu$-independent terms) by[^5] $$\begin{aligned} \Omega=\left. \int d\rho {\cal L}\right|_{\mbox{\scriptsize on-shell}}. \label{omega-1}\end{aligned}$$ Let us verify its consistency explicitly: $$\begin{aligned} \delta\Omega &=&\int d\rho \left\{\partial_{\rho}\left[\frac{\partial{\cal L}}{\partial A'_{0}}\delta A_{0}\right] -\left[\partial_{\rho}\frac{\partial{\cal L}}{\partial A'_{0}} -\frac{\partial{\cal L}}{\partial A_{0}} \right]\delta A_{0} \right\}. \nonumber \\ &=& -Q\delta A_{0}(\infty), \label{delta-omega-1}\end{aligned}$$ where we have used the equations of motion and the boundary conditions (\[bound-cond\]). We have derived the thermodynamic relation $Q=-\partial \Omega/\partial \mu$ without imposing any further constraint on $A_{0}(\rho_{\mbox{\scriptsize min}})$. 
Let us perform a Legendre transformation on $\Omega$ to define the Helmholtz free energy $F$: $$\begin{aligned} F=\Omega+\mu Q =\int d\rho {\cal L}_{\mbox{\scriptsize DBI}}-QA_{0}(\rho_{\mbox{\scriptsize min}}) +Q A_{0}(\infty). \label{F-0}\end{aligned}$$ If the above construction is consistent, we need to derive the correct thermodynamic relation $\partial F/\partial Q=\mu$. Let us examine explicitly whether or not this is the case: $$\begin{aligned} \frac{\partial F}{\partial Q} &=&A_{0}(\infty)-A_{0}(\rho_{\mbox{\scriptsize min}}) \nonumber \\ &&+\int d\rho \frac{\partial{\cal L}}{\partial A'_{0}} \frac{\partial A'_{0}}{\partial Q} +Q\frac{\partial}{\partial Q} \left\{A_{0}(\infty)-A_{0}(\rho_{\mbox{\scriptsize min}}) \right\}.\end{aligned}$$ Here, the second line simplifies to zero since: $$\begin{aligned} \int d\rho \frac{\partial{\cal L}}{\partial A'_{0}} \frac{\partial A'_{0}}{\partial Q} &=&\int d\rho \left\{ \partial_{\rho} \left[ \frac{\partial {\cal L}_{\mbox{\scriptsize DBI}}}{\partial A'_{0}} \frac{\partial A_{0}}{\partial Q} \right] - \left[ \partial_{\rho} \frac{\partial {\cal L}_{\mbox{\scriptsize DBI}}}{\partial A'_{0}} \right] \frac{\partial A_{0}}{\partial Q} \right\} \nonumber \\ &=& -Q\frac{\partial A_{0}(\infty)}{\partial Q} +Q\frac{\partial A_{0}(\rho_{\mbox{\scriptsize min}})}{\partial Q},\end{aligned}$$ where we have used the equations of motion, boundary conditions (\[bound-cond\]) and the fact that $\frac{\partial {\cal L}_{\mbox{\scriptsize DBI}}}{\partial A'_{0}}=\frac{\partial {\cal L}}{\partial A'_{0}}$. Therefore, we obtain $$\begin{aligned} \frac{\partial F}{\partial Q} =A_{0}(\infty)- A_{0}(\rho_{\mbox{\scriptsize min}}), \label{delFdelQ}\end{aligned}$$ where the second term is absent at the starting point.
If we follow the above procedure, the thermodynamic relations and the Legendre transformation do not close under the chemical potential given by Definition 1. Chemical potential as $A_{0}(\infty)- A_{0}(\rho_{\mbox{\scriptsize min}})$ {#correct-mu} --------------------------------------------------------------------------- Now, let us start with Definition 2 of the chemical potential: $$\begin{aligned} \mu=A_{0}(\infty)- A_{0}(\rho_{\mbox{\scriptsize min}}). \label{true-chem-0}\end{aligned}$$ A grand potential that is consistent under this definition is $$\begin{aligned} \Omega=\int d\rho {\cal L}_{\mbox{\scriptsize DBI}}. \label{omega-2}\end{aligned}$$ Notice that we have removed the source term from the new $\Omega$. Let us verify the consistency: $$\begin{aligned} \delta\Omega &=&\int d\rho \left\{\partial_{\rho}\left[\frac{\partial{\cal L}_{\mbox{\scriptsize DBI}}}{\partial A'_{0}}\delta A_{0}\right] - \left[ \partial_{\rho}\frac{\partial{\cal L}_{\mbox{\scriptsize DBI}}}{\partial A'_{0}} \right] \delta A_{0} \right\}. \nonumber \\ &=& -Q\left\{ \delta A_{0}(\infty)-\delta A_{0}(\rho_{\mbox{\scriptsize min}}) \right\}, \label{delta-omega-2}\end{aligned}$$ where we have used the same on-shell conditions (\[eom\]) and (\[bound-cond\]). Equation (\[delta-omega-2\]) gives the correct thermodynamic relation $\partial \Omega/\partial \mu=-Q$ under the present definition. Let us perform a Legendre transformation on the above $\Omega$ to obtain the Helmholtz free energy: $$\begin{aligned} F= \int d\rho {\cal L}_{\mbox{\scriptsize DBI}} +Q\left\{A_{0}(\infty)-A_{0}(\rho_{\mbox{\scriptsize min}})\right\}. \label{FF-0}\end{aligned}$$ Interestingly, the free energy (\[FF-0\]) is exactly the same as Eq. (\[F-0\]). We have already seen that Eq. (\[F-0\]) has a consistent thermodynamic relation (\[delFdelQ\]) under the dictionary (\[true-chem-0\]). 
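The bookkeeping of the last two subsections can be checked in an even simpler toy: replace ${\cal L}_{\mbox{\scriptsize DBI}}$ by a Maxwell-like density $-\frac{1}{2}A_0'^2$ on $\rho_{\mbox{\scriptsize min}} \leq \rho \leq \rho_{\mbox{\scriptsize max}}$ (a finite stand-in for $\rho=\infty$; the quadratic Lagrangian is our simplification, not the DBI action of the text). The boundary conditions (\[bound-cond\]) then force $A_0'=Q$ above $\rho_{\mbox{\scriptsize min}}$, so on shell $\Omega=-\frac{1}{2}Q^2\Delta\rho$, $\mu=A_0(\infty)-A_0(\rho_{\mbox{\scriptsize min}})=Q\Delta\rho$ and $F=\Omega+\mu Q=\frac{1}{2}Q^2\Delta\rho$. Finite differences confirm both thermodynamic relations:

```python
drho = 4.0                        # Delta rho = rho_max - rho_min

def Omega(mu):
    # Definition-2 grand potential: on-shell integral of -A0'^2/2,
    # with A0' = Q = mu/drho fixed by the boundary conditions
    Q = mu / drho
    return -0.5 * Q**2 * drho

def F(Q):
    # Helmholtz free energy F = Omega + mu*Q, with mu = Q*drho
    return -0.5 * Q**2 * drho + (Q * drho) * Q

mu = 1.3
Q = mu / drho
h = 1e-6
dOmega_dmu = (Omega(mu + h) - Omega(mu - h)) / (2 * h)
dF_dQ = (F(Q + h) - F(Q - h)) / (2 * h)

assert abs(dOmega_dmu + Q) < 1e-6   # dOmega/dmu = -Q, as in Eq. (delta-omega-2)
assert abs(dF_dQ - mu) < 1e-6       # dF/dQ = mu, as in Eq. (delFdelQ)
```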
A method for defining the chemical potential {#proposal} ============================================ We have seen in the previous section that the consistency with the thermodynamic relations and the Legendre transformation may indicate how to uniquely identify the chemical potential. Let us reorganize the results of the previous section to clarify matters. We have obtained the same Helmholtz free energy starting with the different definitions of the chemical potential, one of which was selected on the basis of the consistency with the thermodynamic relation $\partial F/\partial Q=\mu$. This means that the Helmholtz free energy plays a fundamental role in the definition of the chemical potential in our formalism. Indeed, the canonical ensemble is a better starting point for us than the grand canonical ensemble, since the correspondence between the $U(1)_{\mbox{\scriptsize B}}$ charge and the $U(1)$ charge on the flavor brane is clearer than that between the chemical potential and $A_{0}$. These observations suggest that we should start with the free energy (\[F-0\]) or (\[FF-0\]): $$\begin{aligned} F=\int d\rho {\cal L}+QA_{0}(\infty). \label{FF-1}\end{aligned}$$ The first term is the total Lagrangian of the system. The second term is simply the [*charge projection operator*]{} originally introduced into black hole thermodynamics to define the Helmholtz free energy [@ChargeProjection; @CP2]. Let us remind ourselves of what the charge projection operator is. If we start with the total Lagrangian, its variation is given by $$\begin{aligned} \delta L=(\mbox{\rm term giving the equations of motion}) -Q\delta A_{0}(\infty),\end{aligned}$$ where the last contribution originates from the boundary term. However, we need to control the charge $Q$ rather than $A_{0}$ since we are in the canonical ensemble. 
If we add the charge projection operator to the total Lagrangian, the variation becomes $$\begin{aligned} \delta (L+QA_{0})=(\mbox{\rm term giving the equations of motion}) +(\delta Q) A_{0}(\infty);\end{aligned}$$ thus, we can employ the same equations of motion while holding the charge fixed. The point is that we need to choose an appropriate expression of the free energy depending on how we control the parameter. We can reinterpret the results of the previous section in this context. For example, Eq. (\[delta-omega-1\]) shows that we obtain the equations of motion by extremizing[^6] the grand potential (\[omega-1\]) with $A_{0}(\infty)$ kept fixed but without fixing $A_{0}(\rho_{\mbox{\scriptsize min}})$. Alternatively, we obtain the same equations of motion by extremizing another grand potential (\[omega-2\]) while fixing both $A_{0}(\rho_{\mbox{\scriptsize min}})$ and $A_{0}(\infty)$. We have chosen the appropriate grand potential depending on how we control the boundary conditions. We have (at least) two possible choices at this stage. However, we have found that only one of them, given in §\[correct-mu\], is consistently connected to the unique expression of the Helmholtz free energy (\[FF-1\]) by the Legendre transformation. The method for defining the chemical potential is now clear: 1. Find the charge projection operator with respect to the conserved charge under consideration. 2. Add the charge projection operator to the total (on-shell) Lagrangian of the system to define the Helmholtz free energy. 3. Differentiate the Helmholtz free energy with respect to the charge to find the conjugate chemical potential. 4. Perform the Legendre transformation, if necessary, to switch to the grand canonical ensemble. Let us examine how this works in more general setups. We consider, as an example, the Sakai-Sugimoto model with massive charged sources, which is studied in Ref. .
The total Lagrangian with the charge projection operator added is simply the Helmholtz free energy employed in Ref. : $$\begin{aligned} F=\int d\rho {\cal L}_{\mbox{\scriptsize DBI}}+ L_{\mbox{\scriptsize source}}+QA_{0}(\infty),\end{aligned}$$ where $L_{\mbox{\scriptsize source}}$ is the Lagrangian of the baryon-charged objects, which consists of their mass contribution ($L_{\mbox{\scriptsize mass}}$) and the source ($-QA_{0}(\rho_{\mbox{\scriptsize min}})$). The chemical potential obtained by differentiating the free energy with respect to the charge is $$\begin{aligned} \mu =A_{0}(\infty)-A_{0}(\rho_{\mbox{\scriptsize min}}) + \frac{\partial L_{\mbox{\scriptsize mass}}}{\partial Q},\end{aligned}$$ after taking account of the force-balance condition [@Bergman]. The last term is the mass of the baryon-charged object, which is now naturally incorporated into the definition of the baryon chemical potential. The above definition, which is a variant of Definition 2, does [*not*]{} contain the degree of freedom of the constant shift of $A_{0}$. Indeed, we can show that the degree of freedom of the constant shift is always absent when $L_{\mbox{\scriptsize source}}$ contains the charged source term balanced with the charge projection operator. This explains the absence of the constant-shift degree of freedom from the model-independent definition of the chemical potential. However, there is a caveat. The method proposed above does not work if $\partial F/\partial Q$ is singular.[^7] For example, a sector where the amount of charge remains zero regardless of the chemical potential has been considered in Refs. and . This is a Minkowski embedding without the charge, and we call it the “trivial sector” in this paper. Obviously, $\partial F/\partial Q$ is singular in such a sector and our method does not apply.
Therefore, we do not claim that our results apply to the trivial sector in the present work; all the statements in this paper apply only to the case where $\partial F/\partial Q$ is well-defined. Discussion {#discussions} ========== We have seen that the natural definition of the chemical potential is Definition 2 (or its generalization) rather than Definition 1 when $\partial F/\partial Q$ and the Legendre transformation are well-defined. A crucial point is that the degree of freedom of the constant shift of $A_{0}$ does not exist except for the very special case where $\partial F/\partial Q$ is singular. We have also proposed a general method for defining the chemical potential in terms of the bulk quantities. We now add a few comments on systems with nontrivial charge distribution along the $\rho$ direction.[^8] If the charge is not localized at a particular value of $\rho$, the definition of the chemical potential becomes more complicated. For example, the toy model we have considered in §\[Legendre\] can be generalized in the following way. Suppose that the total Lagrangian is given by $$\begin{aligned} \int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho{\cal L} =\int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho \left\{ {\cal L}_{\mbox{\scriptsize DBI}}-q(\rho)A_{0}(\rho) \right\},\end{aligned}$$ where $q(\rho)$ is the charge density along the $\rho$ direction, which satisfies $\int d\rho\: q(\rho)=Q$. The charge projection operator we need to add is still $QA_{0}(\infty)$ since the boundary term still results in the same total charge inside the system by virtue of the Gauss law. 
Then the Helmholtz free energy is given by the on-shell value of $$\begin{aligned} F =\int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho \left\{ {\cal L}_{\mbox{\scriptsize DBI}}-q(\rho)A_{0}(\rho) \right\}+QA_{0}(\infty),\end{aligned}$$ and the chemical potential is given by $$\begin{aligned} \mu &=&A_{0}(\infty) -\int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho \frac{\partial q(\rho)}{\partial Q} A_{0}(\rho) \nonumber \\ &=& \int^{\infty}_{\rho_{\mbox{\scriptsize min}}}dr \frac{\partial q(r)}{\partial Q} \int^{\infty}_{r}d\rho F_{\rho 0}, \label{general-chem}\end{aligned}$$ which is again written in terms of the field strength.[^9] The response of the distribution to the variation of the total charge, $\partial q(r)/\partial Q$, must be determined by the dynamics. It is certainly worthwhile investigating how this identification works in various general setups. This discussion is rather general and it applies to any chemical potential in principle. Thus, it is also interesting to consider the isospin chemical potential [@Par; @EKR; @isospin] in holographic setups using the method outlined in this paper. Since mesons can carry the isospin charge, we can discuss them within the framework of the (nonabelian) DBI theory of flavor branes without introducing any extra objects such as baryon vertices or fundamental strings; the finite isospin system may be a suitable test ground[^10] for the proposed method. Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank Sang-Jin Sin, Tetsuo Hatsuda, Yunseok Seo, Youngman Kim and Sangmin Lee for discussions and comments. The author thanks the hospitality of the Elementary Particle Theory Group at Kyushu University where part of the present work was carried out. This work was supported by KOSEF Grant R01-2004-000-10520-0 and the SRC Program of the KOSEF through the Center for Quantum Spacetime of Sogang University (grant number R11-2005-021). 
Scalar-Field Dependence {#scalar} ======================= In the main text, we have ignored the scalar fields on the flavor brane which may contribute to the variation of the thermodynamic potentials when we vary $Q$ or $\mu$. We show that the contribution indeed vanishes [@Bergman]. Suppose that the DBI Lagrangian contains a scalar field $y$. Then, the additional contribution to the variation of the free energies that may originate from the $y$ field is $$\begin{aligned} \int^{\infty}_{\rho_{\mbox{\scriptsize min}}}d\rho \frac{\partial {\cal L}_{\mbox{\scriptsize DBI}}(y')}{\partial y'}\frac{\partial y'}{\partial \mu} = \left. {\rm (const)}\frac{\partial y}{\partial \mu} \right|^{\infty}_{\rho_{\mbox{\scriptsize min}}}, \label{y-contri}\end{aligned}$$ where we have used the equation of motion for $y$: $\partial {\cal L}_{\mbox{\scriptsize DBI}}(y')/\partial y'={\rm const.}$ Here, $\partial y/\partial \mu |_{\infty}$ is zero because the boundary value of $y$ determines another parameter of the theory such as the current quark mass, which is kept fixed under the variation of the chemical potential. Then, (\[y-contri\]) indicates that the variation originates only from the $y(\rho_{\mbox{\scriptsize min}})$ dependence of the action. However, this is zero because of the force-balance condition of the flavor brane along the $y$ direction at $\rho=\rho_{\mbox{\scriptsize min}}$. The same logic applies to differentiation with respect to the charge. [99]{} M. P. Lombardo, ; hep-lat/0612017. K.-Y. Kim, S.-J. Sin and I. Zahed, hep-th/0608046. N. Horigome and Y. Tanii, ; hep-th/0608198. A. Parnachev and D. A. Sahakyan, ; hep-th/0610247. S. Nakamura, Y. Seo, S.-J. Sin and K. P. Yogendran, hep-th/0611021. S. Kobayashi, D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, ; hep-th/0611099. O. Bergman, G. Lifschytz and M. Lippert, ; arXiv:0708.0326. J. L. Davis, M. Gutperle, P. Kraus and I. Sachs, ; arXiv:0708.0589. M. Rozali, H-H Shieh, M. V. Raamsdonk and J.
Wu, ; arXiv:0708.1322. K.-Y. Kim, S.-J. Sin and I. Zahed, “The Chiral Model of Sakai-Sugimoto at Finite Baryon Density,” ; arXiv:0708.1469. S. Nakamura, Y. Seo, S.-J. Sin and K. P. Yogendran, arXiv:0708.2818. A. Parnachev, arXiv:0708.3170. K. Ghoroku, M. Ishihara and A. Nakamura, ; arXiv:0708.3706. A. Karch and A. O’Bannon, ; arXiv:0709.0570. D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, ; arXiv:0709.1225. J. Erdmenger, M. Kaminski and F. Rust, arXiv:0710.0334. S. Matsuura, ; arXiv:0711.0407. S. K. Domokos and J. A. Harvey, ; arXiv:0704.1604\ A. Karch and A. O’Bannon, ; arXiv:0705.3870\ Y. Kim, B.-H. Lee, S. Nam, C. Park and S.-J. Sin, ; arXiv:0706.2525\ S.-J. Sin, “Gravity Back-reaction to the Baryon Density for Bulk Filling Branes,” ; arXiv:0707.2719\ Y. Kim, C.-H. Lee and H.-U. Yee, arXiv:0707.2637. H. R. Braden, J. D. Brown, B. F. Whiting and J. W. York, Jr,\ S. W. Hawking and S. F. Ross, ; hep-th/9504019. S. Coleman, J. Preskill and F. Wilczek, ; hep-th/9201059. R. Apreda, J. Erdmenger, N. Evans and Z. Guralnik, ; hep-th/0504151\ J. Erdmenger, M. Kaminski and F. Rust, ; arXiv:0704.1290\ K. Kim, Y. Kim and S. H. Lee, arXiv:0709.1772\ O. Aharony, K. Peeters, J. Sonnenschein and M. Zamaklar, arXiv:0709.3948. [^1]: E-mail: nakamura@hanyang.ac.kr [^2]: Other related references include Ref. . [^3]: We consider finite-temperature systems in this paper where the Euclidean time direction is compactified using the periodicity of the inverse temperature $\beta=1/T$. The gauge transformation we are considering is one that respects the periodicity. [^4]: We employ the probe approximation where the back reaction to the bulk geometry from the flavor brane is ignored, and the bulk Lagrangian is omitted since it does not affect the discussion in this paper under the approximation. The Lagrangian should be understood as being renormalized, although we do not write the counterterms explicitly. [^5]: We omit $|_{\mbox{\scriptsize on-shell}}$ from the next equation. 
[^6]: The on-shell constraint is removed from Eqs. (\[omega-1\]) and (\[omega-2\]) when we discuss the extremization. [^7]: We are not referring to the singularity at phase transition points. A phase transition is defined as a jump between different branches of the solutions of the equations of motion. Our concern is whether or not $\partial F/\partial Q$ is well-defined within a single branch of the solutions. [^8]: Such a case has been studied in Ref. . [^9]: Equation (\[general-chem\]) can be formally interpreted as the hypothetical work against the electric field that is necessary to bring a unit charge in from the boundary to build up the new charge distribution on top of the old one. [^10]: Of course, we need to consider both baryon and isospin chemical potentials in phenomenologically realistic setups.
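A quick numerical check of Eq. (\[general-chem\]): for a fixed normalized profile $q(\rho)=Qf(\rho)$, so that $\partial q/\partial Q=f$, the equation is an integration-by-parts identity valid for any $A_0$, since $\int d\rho\, f = 1$ and $A_{0}(\infty)-A_{0}(r)=\int_r^\infty d\rho\, F_{\rho 0}$. The toy profile and potential below are arbitrary illustrative choices, not solutions of actual equations of motion:

```python
import numpy as np

rho = np.linspace(1.0, 40.0, 20001)      # rho = 40 stands in for infinity
h = rho[1] - rho[0]

f = np.exp(-0.5 * (rho - 5.0) ** 2)      # toy dq/dQ profile ...
f /= np.sum(f) * h                       # ... normalized so that integral f = 1
A0 = 2.0 - 1.0 / rho                     # arbitrary toy gauge potential

# first line of (general-chem): mu = A_0(inf) - int dr (dq/dQ) A_0(r)
mu_line1 = A0[-1] - np.sum(f * A0) * h

# second line: mu = int dr (dq/dQ)(r) int_r^inf F_rho0, with F_rho0 = A_0'
F_rho0 = np.gradient(A0, rho)
inner = np.concatenate(([0.0], np.cumsum(0.5 * (F_rho0[1:] + F_rho0[:-1]) * h)))
int_r_to_inf = inner[-1] - inner         # int_r^infty F_rho0 d rho
mu_line2 = np.sum(f * int_r_to_inf) * h

assert abs(mu_line1 - mu_line2) < 1e-3   # the two lines agree
```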
--- abstract: 'We present resolved *Herschel* images of a circumbinary debris disk in the 99 Herculis system. The primary is a late F-type star. The binary orbit is well characterised and we conclude that the disk is misaligned with the binary plane. Two different models can explain the observed structure. The first model is a ring of polar orbits that move in a plane perpendicular to the binary pericenter direction. We favour this interpretation because it includes the effect of secular perturbations and the disk can survive for Gyr timescales. The second model is a misaligned ring. Because there is an ambiguity in the orientation of the ring, which could be reflected in the sky plane, this ring either has near-polar orbits similar to the first model, or has a 30 degree misalignment. The misaligned ring, interpreted as the result of a recent collision, is shown to be implausible from constraints on the collisional and dynamical evolution. Because disk+star systems with separations similar to 99 Herculis should form coplanar, possible formation scenarios involve either a close stellar encounter or binary exchange in the presence of circumstellar and/or circumbinary disks. Discovery and characterisation of systems like 99 Herculis will help understand processes that result in planetary system misalignment around both single and multiple stars.' author: - | G. M. Kennedy[^1]$^1$, M. C. Wyatt$^1$, B. Sibthorpe$^2$, G. Duchêne$^{3,4}$, P. Kalas$^4$, B. C. Matthews$^{5,6}$, J. S. Greaves$^7$, K. Y. L. Su$^8$, M. P. Fitzgerald$^{9,10}$\ $^1$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^2$ UK Astronomy Technology Center, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK\ $^3$ Department of Astronomy, University of California, B-20 Hearst Field Annex, Berkeley, CA 94720-3411, USA\ $^4$ Laboratoire d’Astrophysique, Observatoire de Grenoble, Université J. 
Fourier, CNRS, France\
$^5$ Herzberg Institute of Astrophysics, National Research Council Canada, 5071 West Saanich Road, Victoria, BC, V9E 2E7, Canada\
$^6$ University of Victoria, Finnerty Road, Victoria, BC, V8W 3P6, Canada\
$^7$ School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, UK\
$^8$ Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA\
$^9$ Institute of Geophysics and Planetary Physics, Lawrence Livermore National Laboratory, L-413, 7000 East Avenue, Livermore, CA 94550, USA\
$^{10}$ Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095-1547, USA
title: '99 Herculis: Host to a Circumbinary Polar-ring Debris Disk'
---

circumstellar matter — stars: individual: 99 Herculis, HD 165908, HIP 88745, GJ704AB

Introduction {#s:intro}
============

The *Herschel* Key Program DEBRIS (Dust Emission via a Bias-free Reconnaissance in the Infrared/Submillimeter) has observed a large sample of nearby stars to discover and characterise extrasolar analogues to the Solar System’s asteroid and Kuiper belts, collectively known as “debris disks.” The 3.5m *Herschel* mirror provides 6-7” resolution at 70-100$\mu$m , and as a consequence our survey has resolved many disks around stars in the Solar neighbourhood for the first time .[^2] Here we present resolved images of the 99 Herculis circumbinary disk. This system is particularly interesting because unlike most debris disk+binary systems, the binary orbit is well characterised. The combination of a known orbit and resolved disk means we can compare their (different) inclinations and consider circumbinary particle dynamics and formation scenarios. This system is a first step toward building on the binary debris disk study of @2007ApJ...658.1289T. Their *Spitzer* study found that debris disks are as common in binary systems as in single systems, but tend not to have separations in the 3-30AU range.
However, only some of their systems had detections at multiple wavelengths to constrain the disk location and none were resolved, making the true dust location uncertain. Resolved systems such as 99 Her remove this ambiguity, and provide crucial information on the disk location, stability and dynamics. This paper is laid out as follows. We first consider the stellar and orbital properties of the 99 Her system, including the possibility of a third component. Then we consider the *Herschel* image data and several different models that can explain it. Finally, we discuss the implications of these models for the formation of the system.

99 Herculis {#s:stprop}
===========

The binary 99 Herculis (HD 165908, HIP 88745, GJ 704AB, ADS 11077) contains the 37$^{\rm th}$ closest F star primary within the volume-limited Unbiased Nearby Stars sample [@2010MNRAS.403.1089P]. The Catalogue of Components of Double and Multiple Systems [CCDM J18071+3034, @2002yCat.1274....0D] lists three components, but using Hipparcos proper motions @2010MNRAS.403.1089P find that the 93” distant C component is not bound to the system. The binary pair has been known since 1859, and consists of an F7V primary orbited by a K4V secondary. The primary is known to be metal poor with \[Fe/H\] $\approx -0.4$ [e.g. @1996yCat..33140191G; @2000MNRAS.316..514A; @2007PASJ...59..335T] and has an age consistent with the main-sequence .

Binary Configuration {#ss:binary}
--------------------

  Parameter                    Symbol (unit)               Value     Uncertainty
  ---------------------------- --------------------------- --------- -------------
  Semi-major axis              a (”)                       1.06      $0.02$
  Eccentricity                 e                           0.766     $0.004$
  Inclination                  i ($^\circ$)                39        $2$
  Ascending node               $\Omega$ ($^\circ$)         41        $2$
  Longitude of pericenter      $\omega$ ($^\circ$)         116       $2$
  Date of pericenter passage   T (yr)                      1997.62   $0.05$
  Period                       P (yr)                      56.3      $0.1$
  Total mass                   $M_{\rm tot}$ ($M_\odot$)   1.4       $0.1$

  : 99 Her orbital elements, system mass and 1$\sigma$ uncertainties.
The ascending node $\Omega$ is measured anti-clockwise from North. The longitude of pericenter is measured anti-clockwise from the ascending node, and projected onto the sky plane has a position angle of 163$^\circ$ (i.e. is slightly different to 41+116 because the orbit is inclined).[]{data-label="tab:elem"} ![99 Her binary orbit as seen on the sky, with the line of nodes and pericenter indicated. North is up and East is left. The stellar orbits are shown over one (anti-clockwise) orbital period with black dots. Grey dots (primary is the larger grey dot) show the positions at the PACS observation epoch. Black dot sizes are scaled in an arbitrary way such that larger dots are closer to Earth. The arrows indicate the direction of motion and the scale bar indicates the binary semi-major axis of 1.06” (16.5AU).[]{data-label="fig:sys"}](systop.eps){width="50.00000%"} To interpret the *Herschel* observations requires an understanding of the binary configuration, which we address first. Typically, the important orbital elements in fitting binary orbits are the semi-major axis, eccentricity, and period, which yield physical characteristics of the system (if the distance is known). Regular observations of 99 Her date back to 1859 and the binary has completed nearly three revolutions since being discovered. Additional observations since the previous orbital derivation , have allowed us to derive an updated orbit. Aside from a change of 180$^\circ$ in the ascending node [based on spectroscopic data @2006ApJS..162..207A], the orbital parameters have changed little; the main purpose of re-deriving the orbit is to quantify the uncertainties. 
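The projected pericenter position angle quoted in the Table \[tab:elem\] caption can be reproduced by rotating the pericenter direction through the fitted elements into the sky frame. A minimal sketch, using the standard visual-binary convention (PA measured from North through East):

```python
import math

# Orbital elements from Table 1: inclination, ascending node, longitude of pericenter
i, Omega, omega = map(math.radians, (39.0, 41.0, 116.0))

# Unit vector toward pericenter (true anomaly = 0), projected onto the sky
north = math.cos(Omega) * math.cos(omega) - math.sin(Omega) * math.sin(omega) * math.cos(i)
east = math.sin(Omega) * math.cos(omega) + math.cos(Omega) * math.sin(omega) * math.cos(i)

pa_peri = math.degrees(math.atan2(east, north)) % 360
print(round(pa_peri))  # -> 163, not 41 + 116 = 157, because the orbit is inclined
```

Subtracting 90$^\circ$ gives the 73$^\circ$ position angle of the line perpendicular to the pericenter direction used later in the paper.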
The orbit was derived by fitting position angles (PAs) and separations ($\rho$) from the Washington Double Star catalogue [@2011yCat....102026M].[^3] We included three additional observations: a *Hubble Space Telescope* (HST) *Imaging Spectrograph* (STIS) acquisition image [epoch 2000.84, $PA=264 \pm 2^\circ$, $\rho=0.54 \pm 0.02$”, @2004ApJ...606..306B], an adaptive optics image taken using the Lick Observatory Shane 3m telescope with the IRCAL near-IR camera as part of an ongoing search for faint companions of stars in the UNS sample (epoch 2009.41, $PA=309 \pm 2^\circ$, $\rho=1.12 \pm 0.02$”), and a Keck II NIRC2 L’ speckle image taken to look for a third companion (see §\[s:third\], epoch 2011.57, $PA=317 \pm 1^\circ$, $\rho=1.20 \pm 0.014$”). For visual data we set uncertainties of 7$^\circ$ to PAs and 0.5” to separations, for *Hipparcos* data we used 5$^\circ$ and 0.1”, and for speckle observations without quoted uncertainties we used 2$^\circ$ and 0.04”. The resulting orbital elements, shown in Table \[tab:elem\], vary only slightly from those derived by .[^4] The best fit yields $\chi^2 = 190$ with 399 degrees of freedom. The fit is therefore reasonable, and most likely the $\chi^2$ per degree of freedom is less than unity because the uncertainties assigned to the visual data are too conservative. If anything, the uncertainties quoted in Table \[tab:elem\] are therefore overestimated. However, visual data can have unknown systematic uncertainties due to individual observers and their equipment so we do not modify them [@2001AJ....122.3472H]. These data also allow us to derive a total system mass of 1.4$M_\odot$, where we have used a distance of 15.64pc [@2008yCat.1311....0F]. While the total mass is well constrained by visual observations, the mass ratio must be derived from either the differential luminosity of each star or radial velocities.
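As a quick check, the quoted total mass follows from Kepler's third law in solar units once the angular semi-major axis is converted to AU with the quoted distance:

```python
a_arcsec, d_pc, P_yr = 1.06, 15.64, 56.3   # Table 1 elements and Hipparcos distance

a_au = a_arcsec * d_pc          # small-angle: arcsec x pc = AU (~16.6 AU)
M_tot = a_au**3 / P_yr**2       # Kepler III in solar units: M = a^3 / P^2
print(round(M_tot, 1))          # -> 1.4 (solar masses)
```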
We use the spectroscopic mass function derived by @2006ApJS..162..207A, yielding a mass ratio of 0.49, which has an uncertainty of about 10%. The mass ratio from the differential luminosity is 0.58, with a much larger (20%) uncertainty . Using the spectroscopic result, the primary (GJ 704A) has a mass of 0.94$M_\odot$, while the secondary (GJ 704B) is 0.46$M_\odot$. The system configuration is shown in Figure \[fig:sys\] and is inclined such that the primary is currently closer to Earth than the secondary. The position of the B component relative to A on the date it was observed by *Herschel* in late April 2010 was PA=314$^\circ$ at an observed separation of 1.15” (22.6AU when deprojected), indicated by grey circles in the Figure. A Third Component? {#s:third} ------------------ While the STIS images clearly resolve the binary, there is a possible third component with $PA \approx 284^\circ$ and $\rho \approx 0.27"$ that is about 2.4 times as faint as the B component. @2008AN....329...54S also report a third component (epoch 2005.8) at $PA \approx 50^\circ$ and $\rho \approx 0.228"$ (no magnitude is given). However, while they detected the secondary again in mid-2007, they do not report any detection of the putative tertiary [@2010AN....331..286S]. The detected positions are shown as star symbols in sky coordinates in Figure \[fig:pm\], which shows the motion of the 99 Her system. The system proper motion is $\mu_\alpha \cos \delta = -110.32$mas yr$^{-1}$, $\mu_\delta = 110.08$mas yr$^{-1}$ [@2007ASSL..350.....V], and accounts for the motion of the primary assuming the orbit derived in , which is very similar to ours. The small proper motion uncertainty means STIS and @2008AN....329...54S cannot have seen the same object if it is fixed on the sky. There is no clear sign of a third component in the residuals from fitting the orbit of the secondary. ![Motion of the 99 Her system (filled dots) in sky coordinates at three epochs. 
The epochs including the putative third component are enclosed in boxes. The arrow shows the direction of the system center of mass movement and the distance travelled in 5 years, and the grey lines show the path traced out by each star. Star symbols show the position of the third object observed in the STIS data in 2000 (dashed box) and by @2008AN....329...54S in 2005 (dotted box).[]{data-label="fig:pm"}](pm.eps){width="50.00000%"} ![Keck/NIRC2 adaptive optics image of 99 Her at 3.8$\mu$m, cropped to about 1.5” around the primary. North is up and East is left. The saturated A component is at the center of the frame.[]{data-label="fig:keck"}](her99b_fig_v2.eps){width="45.00000%"} To try and resolve this issue we obtained an adaptive optics image of 99 Her at L’ (3.8 $\mu$m) using the NIRC2 camera at Keck II on July 27, 2011, shown in Figure \[fig:keck\]. We adopted the narrow camera (10 mas/pixel) and used a five-point dither pattern with three images obtained at each position consisting of 25 coadds of 0.181 seconds integration. The cumulative integration time for the final co-registered and coadded image is 67.875 seconds. The core of the A component point-spread-function is highly saturated, which degrades the achievable astrometry. We estimate the position of 99 Her A by determining the geometric center of the first diffraction ring. The position of 99 Her B is taken from the centroid of the unsaturated core. The PA and separation are quoted above. There is no detection of the putative 99 Her C within 1.6” of the primary in the combined image if it is only a factor 2.4 fainter than the B component, because it would appear 20 times brighter than the brightest speckles. However, if it were closer to the primary than 0.2” it would currently be too close to detect. 
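The orbital timescales invoked below when assessing the putative tertiary can be checked with Kepler's third law around the 0.94$M_\odot$ primary. A rough sketch, in solar units, using separations from the text:

```python
import math

M_A = 0.94                           # primary mass [Msun]

# Orbit matching the ~0.23" (3.5 AU) projected separation of the candidate
P_inner = math.sqrt(3.5**3 / M_A)    # ~7 yr, so detections years apart cannot
                                     # all hide it behind or beside the primary

# A bound orbit with apocenter beyond the 10"x10" NIRC2 field (>~75 AU)
# has a semi-major axis > 37.5 AU even in the e -> 1 limit
P_outer = math.sqrt(37.5**3 / M_A)   # minimum period for such an orbit
print(round(P_inner), round(P_outer))  # -> 7 237
```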
If the object was fixed on the sky near either the 2000 or 2005 locations, it would have been detected in the individual pointings of the five-point dither since each NIRC2 pointing has a field of view of $10''\times10''$. To be outside the field of view and still bound to the primary, the tertiary must have an apocenter larger than about 75AU (5”). An object in such an orbit would have a period of at least 200 years, so could not have been detected near the star in 2005 and be outside the NIRC2 field of view in 2011. The non-detections by @2010AN....331..286S and NIRC2 make the existence of the tertiary suspect. It is implausible that the object was too close to, or behind, the star both in 2007 and 2011, because at a semi-major axis of 0.23” (3.5AU) from the primary (similar to the projected separation) the orbital period is 7 years. Therefore, the object would be on opposite sides of the primary, and the two detections already rule out an edge-on orbit. Even assuming a circular orbit, such an object is unlikely to be dynamically stable, given the high eccentricity and small pericenter distance (4.1AU) of the known companion. A tertiary at this separation would be subject to both short-term perturbations and possible close encounters. If the mutual inclination were high enough, it would also be subject to Kozai cycles due to the secondary that could result in a high eccentricity and further affect the orbital stability. While it may be worthy of further detection attempts, the existence of this component appears doubtful and we do not consider it further.

IR and Sub-mm Data {#s:data}
==================

Observations {#s:obs}
------------

![image](pacs70.eps){width="33.00000%"} ![image](pacs100.eps){width="33.00000%"} ![image](pacsall160.eps){width="33.00000%"}

*Herschel* Photodetector Array Camera & Spectrometer (PACS) data at 100 and 160$\mu$m were taken in April 2010 during routine DEBRIS observations.
Subsequently, a Spectral & Photometric Imaging Receiver (SPIRE) observation was triggered by the large PACS excess indicating the presence of a debris disk and a likely sub-mm detection. The disk was detected, but not resolved with SPIRE at 250 and 350$\mu$m. A 70$\mu$m PACS image was later obtained to better resolve the disk. Because every PACS observation includes the 160$\mu$m band, we have two images at this wavelength, which are combined to produce a single higher S/N image. All observations were taken in the standard scan-map modes for our survey: mini scan-maps for PACS data and small maps for SPIRE. Data were reduced using a near-standard pipeline with the Herschel Interactive Processing Environment [HIPE Version 7.0, @2010ASPC..434..139O]. We decrease the noise slightly by including some data taken as the telescope is accelerating and decelerating at the start and end of each scan leg. The high level of redundancy provided by PACS scan maps means that the pixel size used to generate maps can be smaller than the natural scale of 3.2”/pix at 70 and 100$\mu$m and 6.4”/pix at 160$\mu$m via an implementation of the “drizzle” method [@2002PASP..114..144F]. Our maps are generated at 1”/pix at 70 and 100$\mu$m and 2”/pix at 160$\mu$m. The benefit of better image sampling comes at the cost of correlated noise [@2002PASP..114..144F], which we discuss below. In addition to correlated noise, two characteristics of the PACS instrument combine to make interpretation of the data challenging. The PACS beam has significant power at large angular scales; about 10% of the energy lies beyond 1 arcminute and the beam extends to about 17 arcminutes (1000 arcsec). While this extent is not a problem in itself, it becomes problematic because PACS data are subject to fairly strong $1/f$ (low frequency) noise and must be high-pass filtered. The result is that a source will have a flux that is 10-20% too low because the “wings” of the source were filtered out.
While this problem can be circumvented with aperture photometry using the appropriate aperture corrections derived from the full beam extent, the uncorrected apertures typically used for extended sources will result in underestimates of the source flux.[^5] Here, we correct the fluxes measured in apertures for 99 Her based on a comparison between PSF fitted and aperture corrected measurement of bright point sources in the DEBRIS survey with predictions from their stellar models [based on the calibration of @2008AJ....135.2245R]. These upward corrections are $16 \pm 5\%$, $19 \pm 5\%$, and $21 \pm 5\%$ at 70, 100, and 160$\mu$m respectively. These factors depend somewhat on the specifics of the data reduction, so are *not* universal. This method assumes that the correction for 99 Her is the same as for a point source, which is reasonable because the scale at which flux is lost due to filtering the large beam is much larger than the source extent. The corrected PACS measurement is consistent with MIPS 70$\mu$m, so we do not investigate this issue further. The beam extent and filtering is also important for resolved modelling because the stellar photospheric contribution to the image is decreased. Therefore, in generating a synthetic star+disk image to compare with a raw PACS observation, the stellar photospheric flux should be decreased by the appropriate factor noted above. Alternatively, the PACS image could be scaled up by the appropriate factor and the true photospheric flux used. Table \[tab:obs\] shows the measured star+disk flux density in each *Herschel* image. Uncertainties for PACS are derived empirically by measuring the standard deviation of the same sized apertures placed at random image locations with similar integration time to the center (i.e. regions with a similar noise level). The SPIRE observations of 99 Her are unresolved. The disk is detected with reasonable S/N at 250$\mu$m, marginally detected at 350$\mu$m, and not detected at 500$\mu$m. 
Fluxes are extracted with PSF fitting to minimise the contribution of background objects. Because all three bands are observed simultaneously (i.e. a single pointing), the PSF fitting implementation fits all three bands at once. A detection in at least one band means that all fluxes (or upper limits) are derived at the same sky position. Additional IR data exist for 99 Her, taken with the Multiband Imaging Photometer for *Spitzer* [MIPS, @2004ApJS..154...25R]. Only the star was detected at 24$\mu$m ($270.3 \pm 0.1$mJy), but this observation provides confirmation of the 99 Her stellar position in the PACS images relative to a background object 1.8 arcmin away to the SE ($PA=120^\circ$) that is visible at 24, 70, and 100$\mu$m. The presence of an excess at 70$\mu$m ($98 \pm 5$mJy compared to the photospheric value of 30mJy) was in fact reported by @2010ApJ...710L..26K. They did not note either the circumbinary nature or that the disk may be marginally resolved by MIPS at 70$\mu$m. Because our study focuses on the spatial structure, we use the higher resolution PACS data at 70$\mu$m, but include the MIPS data to model the SED. Basic image analysis {#s:basic} -------------------- Figure \[fig:pacs\] shows the *Herschel* PACS data. Compared to the beam size, the disk is clearly resolved at all three wavelengths. At 160$\mu$m the peak is offset about 5” East relative to both the 70 and 100$\mu$m images. However, the disk is still visible at 160$\mu$m as the lower contours match the 70 and 100$\mu$m images well. The 160$\mu$m peak is only 2-3$\sigma$ more significant than these contours. While such variations are possible due to noise, in this case the offset is the same in both 160$\mu$m images, so could be real. The fact that the peak extends over several pixels is not evidence that it is real, because the pixels in these maps are correlated (see below). 
If real, this component of the disk or background object cannot be very bright at SPIRE wavelengths because the measured fluxes appear consistent with a blackbody fit to the disk (see §\[s:sed\]). Based on an analysis of all DEBRIS maps (that have a constant depth), the chance of a 3$\sigma$ or brighter background source appearing within 10” of 99 Her at 160$\mu$m is about 5% (Thureau et al in prep). Given that the 160$\mu$m offset is only a 2-3$\sigma$ effect (i.e. could be a 2-3$\sigma$ background source superimposed on a smooth disk), the probability is actually higher because the number of background sources increases strongly with depth. These objects have typical temperatures of 20–40K , so could easily appear in only the 160$\mu$m image, particularly if the disk flux is decreasing at this wavelength.

  Band       Flux (mJy)   Uncertainty   Method
  ---------- ------------ ------------- --------------
  PACS70     93           10            15” aperture
  PACS100    87           10            15” aperture
  PACS160    80           15            17” aperture
  SPIRE250   44           6             PSF fit
  SPIRE350   22           7             PSF fit
  SPIRE500   4            8             PSF fit

  : *Herschel* photometry of 99 Her. The disk is not detected at 500$\mu$m and can be considered a 3$\sigma$ upper limit of 24mJy.[]{data-label="tab:obs"}

We now analyse the PACS images using 2D Gaussian models to estimate the disk size, inclination, and position angle. A 2D Gaussian fits the star-subtracted PACS 100$\mu$m image fairly well, with major and minor full-width half-maxima of 17.7 and 12.8” at a position angle of $78^\circ$. Quadratically deconvolving from the 6.7” FWHM beam assuming a circular ring implies an inclination of $48^\circ$ from face-on and an estimated diameter of 250AU. Gaussian fitting to star-subtracted images at both 70 and 160$\mu$m yields similar results. As noted above, estimation of uncertainties in these parameters is non-trivial due to correlated noise, but made easier by the constant depth of our survey.
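Before turning to the uncertainties, the central values above can be reproduced by quadratic deconvolution of the (assumed Gaussian) beam. A sketch with the 100$\mu$m numbers:

```python
import math

fwhm_maj, fwhm_min = 17.7, 12.8      # fitted Gaussian FWHM [arcsec]
beam, d_pc = 6.7, 15.64              # PACS 100 um beam FWHM, distance

# Quadratic (Gaussian) beam deconvolution
maj = math.sqrt(fwhm_maj**2 - beam**2)     # ~16.4"
mnr = math.sqrt(fwhm_min**2 - beam**2)     # ~10.9"

incl = math.degrees(math.acos(mnr / maj))  # assumes an intrinsically circular ring
diam = maj * d_pc                          # small-angle: arcsec x pc = AU
print(round(incl), round(diam))            # -> 48 256
```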
By inserting the best fit Gaussian to the star-subtracted image of the 99 Her disk from the 100$\mu$m image into 438 other 100$\mu$m maps at high coverage positions offset from the intended target, we obtain a range of fit parameters for hundreds of different realisations of the same noise. This process yields an inclination of $45 \pm 5^\circ$ and PA of $75 \pm 8^\circ$. Repeating the process, but using the best fit Gaussian for the 70$\mu$m image yields an inclination of $44 \pm 6^\circ$ and PA of $68 \pm 9^\circ$. Though the inclination of the disk is similar to the binary, the position angle is significantly different from the binary line of nodes of $41 \pm 2 ^\circ$. This difference means that the disk and binary orbital planes are misaligned. As a check on the above approach, we can correct for the correlated noise directly. @2002PASP..114..144F show that for a map that has sufficiently many dithers (corresponding in our case to many timeline samples across each pixel), a noise “correction” factor of $r/\left(1-1/3r\right)$ can be derived, where $r$ is the ratio of natural to actual pixel scales and is 3.2 for our PACS maps. A correction factor of 3.6 for the measured pixel to pixel noise is therefore required when estimating the uncertainty on a fitted Gaussian. Including this factor at 70$\mu$m and calculating the uncertainty by the standard $\Delta \chi^2$ method yields an inclination of $42 \pm 7^\circ$ and a PA of $68 \pm 9^\circ$. At 100$\mu$m the result is an inclination of $44 \pm 6^\circ$ and a PA of $76 \pm 8^\circ$. These results are therefore almost exactly the same as the empirical method used above and therefore lead to the same conclusion of misalignment. As will become apparent in §\[s:spatial\], there is reason to believe that the disk plane could be perpendicular to the binary pericenter direction. 
The projection of the binary pericenter direction on the sky plane has a PA of $163 \pm 2^\circ$, and a line perpendicular to this has a PA of $73 \pm 2^\circ$. Therefore, the observed disk position angle of about 72$^\circ$ is consistent with being at 90$^\circ$ to the binary pericenter direction.

SED {#s:sed}
===

![SED for the 99 Her system (both stars) showing the stellar and disk models (grey lines) and star+disk model (black line). The blackbody disk model is the solid grey line, and the physical grain model the dashed line. Photometric measurements are shown as black filled circles, and synthetic photometry of the stellar atmosphere as open circles ($U-B$, $B-V$, & $b-y$ colours, and $m1$ and $c1$ Stromgren indices were fitted but are not shown here). Black triangles mark upper limits from IRAS at 60 and 100$\mu$m.[]{data-label="fig:sed"}](F037AB.eps){width="50.00000%"}

The combination of all photometry for 99 Her allows modelling of the spectral energy distribution (SED). The model is separated into two components: a stellar atmosphere and a disk. Because it is fairly bright ($V\sim5$mag), the system is saturated in the 2MASS catalogue. However, sufficient optical photometry for each individual star and the pair exists [@1993AJ....106..773H; @1997yCat.2215....0H; @1997ESASP1200.....P; @2006yCat.2168....0M], as well as infra-red measurements of the AB pair from AKARI and IRAS . These data were used to find the best fitting stellar models via $\chi^2$ minimisation. This method uses synthetic photometry over known bandpasses and has been validated against high S/N MIPS 24$\mu$m data for DEBRIS targets, showing that the photospheric fluxes are accurate to a few percent for AFG-type stars. The stellar luminosities ($L_{\star,A}=1.96L_\odot$, $L_{\star,B}=0.14L_\odot$) and IR fluxes of the individual components are consistent with the fit for the pair ($L_{\star,AB}=2.08L_\odot$). The fit for the AB pair is shown in Figure \[fig:sed\].
The spatial structure of the disk can be modelled with dust at a single radial distance of 120AU (i.e. thin compared to *Herschel’s* resolution, §\[s:spatial\]), so disk SED modelling can be decoupled from the resolved modelling once this radial distance is known. Because we have measurements of the disk emission at only five wavelengths, we cannot strongly constrain the grain properties and size distribution. We fit the data with a blackbody model, and then compare the data with several “realistic” grain models . In fitting a blackbody we account for inefficient grain emission at long wavelengths by including parameters $\lambda_0$ and $\beta$, where the blackbody is modified by a factor $\left( \lambda_0/\lambda \right)^\beta$ for wavelengths longer than $\lambda_0$. The best fitting model has a temperature of 49K and fractional luminosity $L_{\rm disk}/L_\star = 1.4 \times 10^{-5}$. The SPIRE data are near the confusion limit of about 6mJy, so the parameters $\beta$ and $\lambda_0$ are unconstrained within reasonable limits by the data (based on previous sub-mm detections for other disks we fix them to $\lambda_0=210 \mu$m and $\beta=1$ in Figure \[fig:sed\] [@2007ApJ...663..365W]). Assuming that grains absorb and emit like blackbodies, the radial dust distance implied by 49K is 45AU. Because the disk is observed at a radius of 120AU (i.e. is warmer than expected for blackbodies at 120AU), the dust emission has at least some contribution from grains small enough to emit inefficiently in the 70-350$\mu$m wavelength range. Because the SED alone is consistent with a pure blackbody (i.e. with $\beta=0$), we cannot make such a statement without the resolved images. However, actually constraining the grain sizes is difficult because temperature and emission are also affected by composition. We fit the data by generating disk SEDs for grains at a range of semi-major axes and choosing the one with the lowest $\chi^2$. 
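As a check on the blackbody numbers above, the 45AU radius follows from the standard equilibrium-temperature relation (a sketch; 278.3 is the usual coefficient for T in K, L in $L_\odot$ and r in AU, and the primary's luminosity is used):

```python
import math

# Blackbody equilibrium temperature: T = 278.3 * L^0.25 / sqrt(r)
L_star, T_disk = 1.96, 49.0
r_bb = (278.3 / T_disk)**2 * math.sqrt(L_star)
print(round(r_bb))                   # -> 45 (AU), vs the resolved radius of 120 AU

# Long-wavelength modification applied to the blackbody spectrum in the fit
def efficiency(lam_um, lam0=210.0, beta=1.0):
    """(lam0/lam)^beta for lam > lam0, unity otherwise."""
    return min(1.0, (lam0 / lam_um)**beta)
```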
If the dust semi-major axis is different from the observed semi-major axis of 120AU the model parameters are changed and the model recalculated, thus iterating towards the best fit. We model the dust with a standard diameter ($D$) distribution $n(D) \propto D^{2-3q}$ where $q=1.9$ [equivalently $n(M) \propto M^{-q}$ where $M$ is mass @2003Icar..164..334O], with the minimum size set by the blowout limit for the specific composition used (about 1.4$\mu$m) and a maximum size of 10cm. The size distribution probably extends beyond 10cm, but objects larger than 10cm contribute negligibly to the emission because the size distribution slope means that smaller grains dominate. Preliminary tests found that icy grains provided a much better fit than silicates. To refine the grain model so the SED and resolved radius agree, we introduced small amounts of amorphous silicates to the initially icy model. The grains are therefore essentially ice mixed with a small fraction ($f_{\rm sil} = 1.5\%$) of silicate. The icy grain model is shown as a dotted line in Figure \[fig:sed\]. This model has a total dust surface area of 14AU$^2$ and a mass of order 10$M_\oplus$ if the size distribution is extrapolated up to 1000km-sized objects. The parameters of this model are degenerate for the data in hand; for example the size distribution could be shallower and the fraction of silicates higher (e.g. $q=1.83$ and $f_{\rm sil} = 4\%$). If we allow the minimum grain size to be larger than the blowout limit, the disk is well fit by amorphous silicate grains with $q=1.9$ and $D_{\rm bl} = 10\mu$m. The disk spectrum can even be fit with a single size population of 25$\mu$m icy grains. However, the predictions for the flux at millimeter wavelengths depend on the size distribution, with lower fluxes for steeper size distributions. Therefore, grain properties and size distribution can be further constrained in the future with deep (sub)mm continuum measurements.
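That small grains dominate the emitting area while the largest bodies hold the mass follows from moments of $n(D) \propto D^{2-3q}$. A sketch with $q=1.9$, the 1.4$\mu$m blowout size, and the 1000km extrapolation:

```python
def frac(lo, hi, a, b, s):
    """Fraction of the integral of D**s over [a, b] contributed by [lo, hi]."""
    F = lambda x: x**(s + 1) / (s + 1)
    return (F(hi) - F(lo)) / (F(b) - F(a))

a, b = 1.4e-6, 1e6            # blowout size to 1000 km [m]
p = 2 - 3 * 1.9               # n(D) ~ D^-3.7

# Cross-section integrand D^2 n(D): bodies above 10 cm contribute ~0.04%
area_big = frac(0.1, b, a, b, 2 + p)
# Mass integrand D^3 n(D): grains below 10 cm hold ~0.8% of the mass
mass_small = frac(a, 0.1, a, b, 3 + p)
print(f"{area_big:.1e} {mass_small:.1e}")  # -> 4.0e-04 7.7e-03
```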
In summary, it is hard to constrain the grain sizes or properties. There is a difference in the required minimum grain size that depends on composition. Because icy grains are reflective at optical wavelengths, a detection of the disk in scattered light could constrain the albedo of particles, and therefore their composition.

Spatial structure {#s:spatial}
=================

![image](coplanar-models.eps){width="100.00000%"}

The PACS images of the 99 Her disk are resolved, which allows modelling of the spatial distribution of grains that contribute to the observed emission at each wavelength. We compare synthetic images with the *Herschel* observations in several steps:

i) Generate a three dimensional distribution of surface area $\sigma(r,\theta,\phi)$, where the coordinates are centered on the primary star.

ii) Generate a radial distribution of grain emission properties. Because the SED can be modelled with blackbody grains at 49K and the spatial structure modelled with a narrow ring, there is no real need for a radial temperature dependence and the grain properties are only a function of wavelength: $P(\lambda) = B_\nu(49K,\lambda)$. Practically, we use a radial temperature dependence $T \propto r^{-1/2}$ centered on the primary, normalised so that the disk ring temperature is 49K. This approach ensures that temperature differences due to non-axisymmetries (negligible for 99 Her) are automatically taken into account.

iii) Generate a high resolution model as viewed from a specific direction. The emission in a single pixel of angular size $x$ from a single volume element in the three dimensional model $dV$ viewed along a vector $\mathcal{R}$ is $dF_\nu(\lambda,r,\theta,\phi) = P(\lambda) \sigma(r,\theta,\phi) dV$, where $dV=x^2 d^2 d\mathcal{R}$, so $d\mathcal{R}$ is the length of the volume element, and $d$ is the distance to the particles from Earth [@1999ApJ...527..918W].
The emission is derived by integrating along the line of sight $\mathcal{R}$ for each pixel in the synthetic image. The photospheric fluxes for each star (decreased by the factors noted in §\[s:obs\]) are placed in the relevant pixels at this step.

iv) Convolve the high resolution model with a high resolution telescope+instrument beam, for which we use interpolated and rotated PACS images of the star Arcturus.[^6]

v) Degrade the resolution to match the data.

vi) Generate a map of residuals, defined by $(observed-model)/uncertainty$, where the uncertainty is the pixel to pixel RMS for that observation. We compute the model $\chi^2$ from pixels in a square crop around the disk.

A minor consideration is that in the general circumbinary case the disk temperature is not axisymmetric because the disk orbits the center of mass, not the primary. An axisymmetric disk is therefore subject to a temperature asymmetry such that it will be slightly hotter, and therefore brighter, where the distance to the primary is smallest. This “binary offset” asymmetry will rotate with the primary, and will be most pronounced in the coplanar case. The result of this effect is similar to the offset caused by perturbations from an eccentric object [“pericenter glow” @1999ApJ...527..918W]. However, the pericenter glow is offset towards the pericenter of the perturbing object, so does not rotate unless the perturbing object’s pericenter precesses. The offset from the primary and the pericenter glow are completely independent effects. Therefore, if the pericenter glow effect is present, it will either reinforce or cancel the binary offset effect, depending on the relative magnitude and direction of each offset. The magnitude of the binary offset effect is negligibly small ($\lesssim$1%) because the disk radius is much larger than the binary separation. Because our model is centered on the system center of mass this effect is taken into account anyway.
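The forward model of steps i-v can be sketched for an isothermal, optically thin ring. This is a minimal illustration, not the authors' code: the beam convolution (step iv), the sky position-angle rotation, and flux normalisation are omitted, and the grid sizes are arbitrary:

```python
import numpy as np

r0, dr = 120.0, 20.0                  # ring radius and width [AU]
half_open = np.radians(5.0) / 2       # half of the total opening angle
incl = np.radians(45.0)               # inclination from face-on
npix, scale = 101, 4.0                # image pixels and AU per pixel

ax = (np.arange(npix) - npix // 2) * scale
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")  # z = line of sight

# Step i: surface-area density on a grid tilted by the inclination
yd = y * np.cos(incl) - z * np.sin(incl)
zd = y * np.sin(incl) + z * np.cos(incl)
r = np.maximum(np.hypot(x, yd), 1e-3)             # radius in the disk midplane
sigma = np.where(np.abs(r - r0) < dr / 2,
                 np.exp(-0.5 * (zd / (r * np.tan(half_open)))**2), 0.0)

# Steps ii-iii: an isothermal ring, so summing sigma along the line of
# sight gives the (unnormalised) high-resolution image of step iii
image = sigma.sum(axis=2)
```

The ring appears as a tilted ellipse centered on the star, with the emission at the image center essentially zero, as expected for a narrow ring seen at intermediate inclination.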
We discuss the effect of the binary on pericenter glow in §\[s:dyn\].

Fitting the data requires a handful of parameters, some of which are required for all models and some of which are model specific. The disk surface area, temperature, radius, width, and total opening angle are the five physical parameters for a ring. The sky position angle and inclination are two further parameters that set the orientation, but can be fixed if the disk plane is assumed to be aligned with the binary. In addition, each observation has the stellar RA and Dec as parameters to allow for the 2” $1\sigma$ pointing accuracy of *Herschel*. The position at 160$\mu$m is tied to the 100$\mu$m position. There are therefore eleven possible parameters to fit for the resolved observations at 70, 100, and 160$\mu$m. We fix the disk temperature to 49K in all cases.

From the basic analysis (§\[s:basic\]), a simple ring coplanar with the binary does not appear to be a viable option. To emphasise this point we show 70 and 100$\mu$m images of the best fitting coplanar model in Figure \[fig:copl\]. This model was generated by the steps outlined above, and the rightmost three panels are the results of steps iv (convolved model), iii (high resolution model), and vi (residuals). We fix the disk width to 20AU, the opening angle to 5$^\circ$, and the position angle and inclination to the binary plane, so there are six free parameters (surface area, radius, and two pairs of RA/Dec sky positions). While we include the 160$\mu$m data in the fitting, it does not constrain the fit strongly due to low S/N and always shows $\sim$2$\sigma$ residual structure due to the offset peak. For comparison with the models below, the $\chi^2$ value for all three PACS bands is 4278 with 3797 degrees of freedom. The positive and negative residuals (rightmost panels) show that the disk ansae in the model have the wrong orientation at both wavelengths.
It is clear that any structure symmetric about the binary line of nodes will not be consistent with the observations because the position angle is significantly different.

An alternative explanation for the misalignment between the observed position angle and the binary line of nodes could be that the dust does in fact lie in the binary plane, but that the particles are on eccentric orbits with common pericenter directions (i.e. the disk is elliptical and offset from the binary). In principle, the observations can constrain the eccentricity and pericenter direction. However, this model fails because the eccentricity needed to match the observed position angle is too extreme. Obtaining an ellipse that lies in the binary orbital plane and has a position angle and aspect ratio similar to the observations requires eccentricities $\gtrsim$0.4. The eccentricity of these particles is so high that i) the ring is significantly offset from the star and ii) the ring has an extreme pericenter glow asymmetry at all wavelengths, caused by particles residing at different stellocentric distances. Because the PACS 70$\mu$m image shows that the star lies very near the disk center, such a strong offset is ruled out.

We now consider two relatively simple models that account for the misalignment between the disk and binary orbital planes. The first is based on the expected secular evolution of circumbinary particles, and the second is a simple misaligned ring where the disk position angle and inclination are free parameters.

Secularly perturbed polar ring {#s:polar}
------------------------------

In this section we consider a ring inspired by the secular evolution of circumbinary particles. This approach ensures that the disk is stable over the stellar lifetime and encompasses the particle dynamics dictated by the binary. We first outline the dynamics of circumbinary particles, and then show the model for the 99 Her disk.
### Dynamics {#s:dyn}

Particle dynamics are important for evolution and stability in the 99 Her system. A circumbinary disk will have its inner edge truncated, while circumstellar disks around either component can be truncated at their outer edges. In addition, secular perturbations lead to precession of test particles’ nodes coupled with inclination variations. We explore these dynamics using the *Swift HJS* integrator.

In general, disk truncation allows us to place limits on possible locations for disk particles. However, in the case of 99 Her there is no evidence for disk components orbiting only one star, and the apparent circumbinary disk extent lies well beyond the $\sim$30-60AU stability limit at any inclination [@1997AJ....113.1445W; @2011arXiv1108.4144D].

Circumbinary particles also undergo long-term dynamical evolution due to secular perturbations. Because the binary components have comparable masses and the orbit is highly eccentric, the dynamics are not well described by the circular restricted three-body problem, commonly applied in the case of debris disks perturbed by planets. Similar dynamics have previously been explored in the context of the HD 98800 system [@2008MNRAS.390.1377V; @2009MNRAS.394.1721V] and more generally [@2010MNRAS.401.1189F; @2011arXiv1108.4144D]. These studies show that the inclination ($i$) and line of nodes ($\Omega$) of circumbinary particles evolve due to perturbations from the binary. Depending on the binary eccentricity and particle inclination, $\Omega$ can circulate (increase or decrease continuously) or librate (oscillate about 90 or $270^\circ$). Particles with low inclinations stay on low inclination orbits, thus sweeping out a roughly disk or torus-like volume over long timescales. Higher inclination particles are subject to nodal libration and large inclination variations, thus sweeping out large parts of a sphere around the binary.
Most importantly for 99 Her, particles with $\Omega \approx 90^\circ$ (or $270^\circ$) on near-polar orbits will not change much due to secular evolution, and thus sweep out a polar ring.

![Secular evolution of circumbinary particles in inclination ($i$) and line of nodes ($\Omega$) space. Particles begin at dots and move along the curves due to perturbations from the binary. Particles that would appear reflected in the x axis duplicate the spatial distribution so are not shown. Crosses show the current location of particles in the two interpretations of the transient ring model (§\[s:ring\]). Over time these particles will sweep out curves similar to particles 4 and 11. The long term structure of the transient ring will therefore appear similar to either panel 4 or 11 in Figure \[fig:swift\], depending on which inclination is correct.[]{data-label="fig:sec"}](sec.eps){width="50.00000%"}

Figure \[fig:sec\] shows the secular evolution of 23 particles on initially circular orbits in complex inclination space. All particles have initial nodes of 90$^\circ$ relative to the binary pericenter and inclinations spread evenly between 0 and 180$^\circ$, and are integrated for 1Gyr (i.e. there are no other significant effects on such long timescales). At 120AU, the time taken for a particle to complete one cycle of secular evolution (make a loop in Figure \[fig:sec\]) varies in the range 2-7$\times 10^5$ years, with larger loops taking longer. These times will also scale with particle semi-major axis. Particles 1-12, with initial inclinations between 0 and 90$^\circ$, are sufficient to describe the range of spatial structures because we cannot distinguish between prograde and retrograde orbits. The particles can be split into two groups: those with low inclinations whose nodes circulate (1–3) and those with high inclinations whose nodes librate about 90$^\circ$ (4–12).
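The quoted secular periods (2-7$\times 10^5$yr at 120AU) can be scaled to other semi-major axes. A short sketch, assuming the standard quadrupole result that the nodal precession period of a circumbinary particle grows as $a^{7/2}$ (the scaling itself is our assumption; the text only states that the times scale with semi-major axis):

```python
def secular_period_yr(a_au, t_ref_yr, a_ref_au=120.0):
    """Scale a secular period known at a_ref_au to semi-major axis a_au,
    assuming the quadrupole scaling t_sec ∝ a^(7/2)."""
    return t_ref_yr * (a_au / a_ref_au) ** 3.5

# smallest and largest secular loops at 120 AU (from the text)
t_fast, t_slow = 2e5, 7e5
# moving a particle out by a factor of two lengthens both by 2^3.5 ≈ 11
t_fast_240 = secular_period_yr(240.0, t_fast)
```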
The dividing line (separatrix) between these families for the binary eccentricity of 0.76 is $21^\circ$ when $\Omega=90^\circ$ [or $270^\circ$, @2010MNRAS.401.1189F]. While particles in the first group have $i<21^\circ$ when $\Omega=90^\circ$, their inclinations when $\Omega=0^\circ$ (or $180^\circ$) can be as high as $90^\circ$. Thus, particles near the separatrix will sweep out an entire spherical shell during their secular evolution. Similarly, particles near the separatrix but in the second group also sweep out a spherical shell, though the orbital evolution is different.

![image](swift.eps){width="100.00000%"}

To visualise the structures swept out by these families of particles due to secular perturbations, Figure \[fig:swift\] shows the resulting debris structures for particles that follow each of the trajectories 1-12 from Figure \[fig:sec\] (left to right and down). The structures are oriented as they would be seen on the sky in the 99 Her system (i.e. have the same orientation with respect to the binary orbit shown in Figure \[fig:sys\]). Each structure was generated by taking the relevant particle at each time step and spawning 1000 additional particles spread randomly around the orbit. This process was repeated for every time step, thus building up the spatial density of a family of particles that follow a specific curve in Figure \[fig:sec\]. These structures are optically thin, which makes interpreting them somewhat difficult. We have included a scaled version of the binary orbit from Figure \[fig:sys\] in some panels in an attempt to make the orientations clearer.

The first (top left) panel shows a circular orbit coplanar with the binary. The PA is the binary line of nodes, and Figure \[fig:copl\] shows why a disk in the plane of the binary is not a satisfactory match to the observations.
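The $21^\circ$ separatrix inclination quoted above follows from quadrupole-order secular theory. A hedged sketch, using the form of the criterion that we believe corresponds to @2010MNRAS.401.1189F, $\tan^2 i_{\rm crit} = (1-e^2)/(5e^2)$ at $\Omega=90^\circ$ (this explicit formula is our reconstruction, not quoted in the text):

```python
import math

def separatrix_inclination_deg(e):
    """Critical inclination at Omega = 90 deg separating nodal circulation
    from libration, at quadrupole order: tan^2(i_crit) = (1 - e^2) / (5 e^2)."""
    return math.degrees(math.atan(math.sqrt((1.0 - e**2) / (5.0 * e**2))))

i_crit = separatrix_inclination_deg(0.76)  # ~21 deg for the 99 Her binary
```

For the 99 Her eccentricity of 0.76 this reproduces the quoted $21^\circ$; as $e \to 1$ the circulating family shrinks and almost all inclined particles librate about the polar configuration.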
The second and third panels are still symmetric about the binary orbital plane, but have a wider range of inclinations and are an even poorer match to the observations. Panel 3 shows that while particle inclinations are restricted for $\Omega=90,270^\circ$, they can be large for $\Omega=0,180^\circ$ and result in a “butterfly” structure when viewed down the binary pericenter direction. The remaining panels are for particles 4-12, whose nodes librate and for which the plane of symmetry is perpendicular to the binary pericenter direction. In panel 4 the range of nodes and inclinations is so large that a particle sweeps out nearly an entire spherical shell during a cycle of secular evolution (i.e. the particle is near the separatrix). This range decreases as the initial inclination nears a polar orbit, at which point the orbital elements do not evolve and the resulting structure appears in panel 12 as a simple ring. The key difference from the ring in panel 1 is that this ring’s position angle is perpendicular to the sky projection of the binary pericenter direction, and as noted in §\[s:basic\] is therefore similar to the observed PA in the PACS images.

Secular perturbations from the binary also affect the long term evolution of particle eccentricities and pericenter longitudes. These effects are taken into account by our $n$-body approach. However, we noticed that the eccentricities imposed (“forced”) on the particles are lower than would be expected for a lower mass companion. Further $n$-body simulations of coplanar particles show that for 99 Her with a mass ratio of 0.49 the forced eccentricity at 120AU is about 0.03, but if the mass ratio were 0.05 the forced $e$ would be 0.1. The lack of significant eccentricity forcing is apparent in Figure \[fig:swift\]: the structures would be much broader if there were a large range of particle eccentricities.
For example, if the mass of the secondary in the 99 Her system were significantly smaller, the model in panel 1 would become broader and offset from the binary center of mass, resulting in a small pericenter glow effect. This dependence suggests that a circumbinary disk’s structure may help constrain the binary mass ratio in cases where it is uncertain. However, we cannot apply this idea to make a better estimate of the 99 Her mass ratio because the PACS observations do not have enough resolution. In addition, at high inclinations the particle behaviour is more complicated, because polar particles switch between prograde and retrograde orbits and do not follow simple circles in complex eccentricity space.

### Polar ring model

![image](swift-models.eps){width="100.00000%"}

We now use the models from Figure \[fig:swift\] to fit the PACS observations. The model has only seven free parameters: the particle semi-major axis and initial inclination, the surface area of dust, and the same four RA/Dec positions. The dust temperature is fixed to 49K. Using a semi-major axis of 120AU, each panel was compared to the PACS images, adjusting the dust surface area for each model to obtain the least residual structure. Of these we found that panel 9 was the best fit, shown in Figure \[fig:polar\]. These particles follow near-polar orbits so we call this model a “polar ring.” We find $\chi^2=3202$. In terms of $\chi^2$ the results for panels 8 and 10 are similar, but slightly higher. The uncertainty in the initial inclination is therefore about 10$^\circ$, and for the semi-major axis about 10AU. This model is much better than the coplanar model of Figure \[fig:copl\], with no overlapping residual structure at 70 and 100$\mu$m. The particles likely occupy a wider range of orbits than a single semi-major axis with some non-zero eccentricity, which may account for some minor (2$\sigma$) structure in the residuals at the disk ansae at 70$\mu$m.
However, given that this model stems directly from the secular evolution, has very few free parameters, and accounts for the structure in all PACS images, we consider it a plausible explanation.

Transient ring model {#s:ring}
--------------------

A simple circular ring is a natural model to fit to the observations. This model has eight free parameters, with the width of the ring fixed at 20AU and the opening angle fixed to 5$^\circ$. As expected from the simple analysis in §\[s:basic\] the position angle of this ring is not aligned with the binary line of nodes, and is therefore misaligned with the binary orbit. The interpretation depends on the orientation of the best fit. A misaligned ring with polar orbits and the correct line of nodes would be considered further evidence in favour of the above polar ring model. A ring with a non-polar orientation will be spread into a broad structure like one of the panels in Figure \[fig:swift\] by secular perturbations. The ring cannot be long-lived and could therefore be the aftermath of a recent collision, seen after the collision products have sheared into a ring, but before secular perturbations destroy the ring structure. Thus we call this model a “transient ring.”

![image](ring-models.eps){width="100.00000%"}

This model is shown in Figure \[fig:ring\], and is a reasonable match to the PACS observations. However, the residuals at 70$\mu$m show that the ring produces a structure that is slightly too elliptical, compared to the more rectangular structure that is observed and reproduced by the polar ring. This model also has less emission at the stellar position than is observed. For this model $\chi^2=3304$. The disk is inclined 53$^\circ$ from face-on and the PA is 81$^\circ$. The uncertainties are similar to those derived for the Gaussian fits in §\[s:basic\].
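The relative inclinations between the disk and binary planes follow from the standard spherical-trigonometry relation between two planes specified by an inclination and a node. A minimal sketch (the binary orientation used in the example call is a hypothetical placeholder, not the fitted 99 Her orbit):

```python
import math

def mutual_inclination_deg(i1, node1, i2, node2):
    """Angle between two orbital planes given their inclinations and
    ascending nodes (all in degrees):
    cos(i_rel) = cos(i1) cos(i2) + sin(i1) sin(i2) cos(node1 - node2)."""
    i1, node1, i2, node2 = (math.radians(x) for x in (i1, node1, i2, node2))
    c = (math.cos(i1) * math.cos(i2)
         + math.sin(i1) * math.sin(i2) * math.cos(node1 - node2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# disk fit: i = 53 deg from face-on, PA = 81 deg; the binary elements
# below (39 deg, 41 deg) are illustrative stand-ins only
i_rel = mutual_inclination_deg(53.0, 81.0, 39.0, 41.0)
```

Because the sky-plane mirror of an inclined ring is observationally indistinguishable, two relative inclinations are always consistent with a single resolved image, which is the degeneracy exploited in the discussion that follows.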
The minimum relative inclination between the disk and binary orbital planes is therefore 32$^\circ$, with a line of nodes with respect to the binary orbit of 139$^\circ$. However, the inclination between the disk and binary plane could also be 87$^\circ$ if the disk were mirrored in the sky plane, which means that the particles have near-polar orbits. These orbits are nearly the same as panel 12 of Figure \[fig:swift\] (the narrow polar ring) because the line of nodes with respect to the binary orbit is 276$^\circ$. These two interpretations correspond to two points in Figure \[fig:sec\], shown as crosses. Over time the particles would spread around to make two more curves similar to those drawn. The particles in the lower inclination case are close to the separatrix, and would therefore sweep out a near-spherical shell like panel 4 of Figure \[fig:swift\]. In this case, the long term evolution produces structures that have the wrong position angle and are a poor match to the observations. The higher inclination case is very nearly a polar ring and would look very similar to panel 11. Such a result is expected because we found above that the polar ring model works well, and argues in favour of the polar-ring interpretation.

We can in fact improve this simple ring model by increasing the total disk opening angle (i.e. allowing a larger range of inclinations), which emulates the range of inclinations that result from the secular evolution. We find a best fit when the particle inclinations are 25$^\circ$ (total opening angle of 50$^\circ$), where $\chi^2=3210$. This model looks very similar to the preferred polar ring model above, but is not generated in the same way, and will therefore change somewhat due to secular perturbations over time because the disk is not perfectly polar.

Discussion {#sec:disc}
==========

We strongly favour the polar ring model as the best explanation of the disk structure surrounding 99 Her.
The polar ring is stable for the stellar lifetime, and takes the secular dynamics into account. The transient ring model, where the disk orientation is not fixed, also finds that the disk particles can have polar orbits. However, because the ring could be mirrored in the sky plane and appear the same, the ring could be misaligned with the binary orbital plane by about 30$^\circ$. Based on $\chi^2$ and the residuals the polar ring is marginally preferable over the transient ring. However, given that a more realistic model with a range of particle radii and inclinations could improve the fit in each case, we do not assign much importance to the relatively small $\chi^2$ differences. Instead, we consider several constraints on the collisional evolution that argue against the transient ring interpretation. By considering the timescales for collisions and secular evolution, we can estimate the likelihood of observing the products of a collision as a transient ring before it is spread into a broader structure. Based on the observed radius and total cross-sectional area, the collisional lifetime of grains just above the blowout size is about a million years [@2007ApJ...658..569W]. The emission could last much longer if larger objects exist in the size distribution, and the lifetime scales with the maximum size as $\sqrt{D_{\rm max}/D_{\rm bl}}$, so depends on the size of the largest fragment created in the collision. If the largest fragments are at least 1mm in size the lifetime is at least 50Myr, and we would expect the collisional cascade to be detectable for this length of time. The secular precession timescale is about 0.5Myr, and it is reasonable to assume that the ring structure would be erased by secular perturbations within 10 secular timescales. Thus, the collisional products would be observable as a ring for only 5Myr. 
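The timescale bookkeeping above can be made explicit. In the sketch below the blowout diameter $D_{\rm bl}$ is our own assumption, chosen at 0.4$\mu$m so that the quoted numbers (1Myr for blowout grains, 50Myr for $D_{\rm max}=1$mm, 50Gyr for 1km fragments) are reproduced:

```python
import math

T_BLOWOUT_MYR = 1.0  # collisional lifetime of grains just above blowout
D_BL_UM = 0.4        # assumed blowout diameter in microns (not from the text)

def dust_lifetime_myr(d_max_um):
    """Lifetime of the collisional cascade, scaling as sqrt(D_max / D_bl)."""
    return T_BLOWOUT_MYR * math.sqrt(d_max_um / D_BL_UM)

t_ring_myr = 10 * 0.5  # ring erased within ~10 secular times of 0.5 Myr each

# chance of catching the debris while it still looks like a ring
p_mm = t_ring_myr / dust_lifetime_myr(1e3)  # D_max = 1 mm -> 0.1 (1:10)
p_km = t_ring_myr / dust_lifetime_myr(1e9)  # D_max = 1 km -> 1e-4 (1:10,000)
```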
Because the collision time is longer than the secular time, the collision products would spend at most a tenth of their lifetime appearing as a misaligned ring, and the remainder as a broader structure. That is, assuming such a collision did occur, we have less than a 1:10 chance of observing the collision products as a ring that looks like the *Herschel* observations.[^7] While 1:10 is not unreasonable, this estimate does not consider the object that must be destroyed to generate the observed dust or the plausibility of a 1mm maximum size. To produce the observed fractional luminosity, a parent body of at least 600km in diameter must be broken into blowout sized grains. With the more realistic assumption that the collision produced a range of grain sizes (assuming $q=11/6$), the parent body must be larger, about 2000km if grains were as large as the 1mm assumed above. Under the still more realistic assumption of a wide range of fragment sizes, up to 1km say, the parent body would need to be roughly Earth-sized. However, for such large fragments the dust lifetime would be 50Gyr, and the chance of observing the structure as a ring would be very small (1:10,000).

We can estimate the ability of collisions to smash large objects into small pieces by considering their strength and possible collision velocities. The specific energy needed for catastrophic disruption, where the largest collision product is half the original mass (i.e. very large), is roughly $10^{11}$erg/g for objects 2000km in size [@2009ApJ...691L.133S]. The energy needed to disrupt an object so that the collision products are all very small must be larger. The maximum collision energy possible for circular orbits is for a collision between two equal-sized objects on prograde and retrograde orbits. The collision energy assuming such an impact at twice the orbital velocity of a few km/s at 100AU is a few $10^{10}$erg/g.
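The specific-energy comparison can be sketched numerically. Taking "a few km/s" as 3km/s gives a few $\times 10^{10}$erg/g as quoted; the Keplerian speed at 100AU for an assumed total binary mass of $\sim$2.4$M_\odot$ (our assumption) is nearer 4.6km/s, which pushes the head-on, equal-mass case toward the $10^{11}$erg/g disruption threshold:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def impact_specific_energy_erg_g(v_orb_km_s):
    """Head-on impact of two equal masses on opposite circular orbits:
    v_rel = 2 v_orb, so Q = 0.5 * mu * v_rel^2 / M_tot = v_orb^2 / 2."""
    q_j_kg = 0.5 * (v_orb_km_s * 1e3) ** 2
    return q_j_kg * 1e4  # 1 J/kg = 1e4 erg/g

q_few = impact_specific_energy_erg_g(3.0)  # ~4.5e10 erg/g, "a few 10^10"

v_kepler = math.sqrt(G * 2.4 * M_SUN / (100 * AU)) / 1e3  # ~4.6 km/s
q_max = impact_specific_energy_erg_g(v_kepler)  # approaches Q*_D ~ 1e11
```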
Therefore, only in the most optimistic (highest velocity) case is the collision energy sufficient to catastrophically disrupt 2000km objects. In the event of a disruption, the lifetime of the collision products will be very long because the largest remnant is about 1000km in size. In the more realistic case where collision velocities are set by object eccentricities and inclinations, disruption of large objects at large semi-major axes is even more difficult. This difficulty, combined with the smaller amount of starlight intercepted at such distances, means that single collisions only produce a minor increase over the background level of dust [@2005AJ....130..269K]. These probability and collision arguments suggest that a single collision is an extremely unlikely explanation for the origin of the observed dust.

The polar ring model does not have these issues. The secular evolution of particles in the 99 Her system means that particles on polar orbits suffer only minor changes in inclination and node (Fig \[fig:sec\]). These orbits are therefore stable over the stellar lifetime, so the dust could be the steady state collision products of the polar planetesimal belt. Initial misalignment is therefore the only special requirement for the polar ring model. The excellent agreement between the PACS data and a simple model generated by particles on these orbits argues strongly in favour of this interpretation.

The question is then shifted to one of the origin of the misalignment. Most binaries are thought to form through fragmentation and subsequent accretion during collapse of a molecular cloud [for a recent review see @2007prpl.conf..133G]. The resulting binary systems should be aligned with their protoplanetary disks when the separations are of order tens of AU [@2000MNRAS.317..773B]. Given the 16AU separation of the 99 Her system, it therefore seems that interactions during the subsequent phase of dynamical cluster evolution are a more likely cause of a misaligned disk.
There are several ways that such a configuration could arise from interactions in a young stellar cluster. A close encounter in which a binary captures some material from the outer regions of a circumstellar disk hosted by another star seems possible. This “disk exchange” scenario requires an encounter where the binary pericenter is perpendicular to the circumstellar disk plane, and that the encounter distance and geometry captures material into orbits similar to those observed for the debris disk (e.g. most likely a prograde rather than retrograde encounter). An alternative scenario is a stellar exchange reaction, where a binary encounters a single star that harbours a circumstellar disk. During the exchange one of the binary components is captured by the originally single star, and the other leaves [e.g. @2011arXiv1109.2007M]. The post-encounter configuration is then a binary surrounded by a circumbinary disk. If the binary pericenter direction were perpendicular to the disk plane it could represent a young analogue of the 99 Her system. Such an encounter would require that the disk is not irreparably damaged by large stellar excursions during the exchange [@2011arXiv1109.2007M], but may also present a way to clear inner disk regions, thus providing a possible reason that the 99 Her disk resides at 120AU and not closer, where it could still be stable (see § \[s:dyn\]). Both scenarios require some degree of tuning; the encounters must happen with specific geometries to produce the observed relative binary and disk orientations. However, differences in the surface brightness between the different models in Figure \[fig:swift\] mean there could be some selection bias towards more disk-like structures. The advantage of the disk exchange scenario is that the cross section for interaction at a distance of about 100AU is much higher than for stellar exchange, which would need to have an encounter distance similar to the binary semi-major axis. 
With a factor of about ten difference in the encounter impact parameter for each scenario, the close encounter is therefore about 100 times more likely than the exchange (ignoring other constraints on geometry, configuration, etc.). In the absence of detailed simulations of encounter outcomes, some data exist to help distinguish between these two scenarios. The minimum inclination of the stellar pole for the 99 Her primary relative to the binary orbital plane is $20 \pm 10^\circ$ [@1994AJ....107..306H]. The stellar equator is therefore misaligned with the binary plane with 95% confidence, which is a hint that the system may be the result of an exchange. However, the scatter in inclination differences for binaries with separations similar to that of 99 Her is about 20$^\circ$ [@1994AJ....107..306H], which may indicate either that systems with this separation are in fact aligned and the uncertainties were underestimated, or that this scatter is the intrinsic level of misalignment at these separations.

Though 99 Herculis is the first clear case of misalignment between binary and disk planes, the GG Tauri system may show a similar signature. The GG Tau system consists of an Aa/Ab binary surrounded by a circumbinary ring, and a more distant Ba/Bb pair that may be bound. It is not clear whether the inner binary is misaligned with the circumbinary disk, but misalignment is suggested because if the two are aligned the ring’s inner edge is too distant to be set by the binary. However, there could also be problems if they are misaligned, because the expected disk scale height due to perturbations from the binary may be inconsistent with observations. Though uncertain, the possible misalignment between the binary and ring planes shows that GG Tau could be a young analogue of 99 Her-like systems.

Summary
=======

We have modelled the resolved circumbinary debris disk in the 99 Her system. This disk is unusual because it appears misaligned with the binary plane.
It can be explained as either an inclined transient ring due to a recent collision, or more likely as a ring of polar orbits. The transient ring is shown to be implausible from collisional arguments. While the inclined ring cannot exist on long (secular) timescales, the polar ring can. There appear to be two possible formation scenarios for the polar ring model, both of which invoke stellar encounters. The binary may have captured material from another star’s circumstellar disk, or a new binary may have formed in a stellar exchange where one of the systems already contained a circumstellar disk. While many binary and multiple systems are known to have debris disks, none are resolved and have orbits characterised as well as 99 Herculis. Future efforts should characterise this system further to test our interpretation and attempt to find more examples. A sample of resolved circumbinary disks would test whether disk-binary misalignment is a common outcome of star formation and cluster evolution, with implications for planetary systems around both single and binary stars.

Acknowledgments {#acknowledgments .unnumbered}
===============

We are grateful to the referee for a thorough reading of the manuscript, especially for noting that previous 99 Her visual orbits have the wrong ascending node. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory, and the SwiftVis $n$-body visualisation software developed by Mark Lewis. We also thank Herve Beust for use of the HJS code, and Paul Harvey for comments on a draft of this article.

References {#references .unnumbered}
==========

, H. A. & [Willmarth]{}, D. 2006, , 162, 207 [](http://adsabs.harvard.edu/abs/2006ApJS..162..207A) , S. J., [Caliskan]{}, H., [Kocer]{}, D., [Cay]{}, I. H., & [Gokmen Tektunali]{}, H. 2000, , 316, 514 [](http://adsabs.harvard.edu/abs/2000MNRAS.316..514A) , A.
--- abstract: 'When matter is exposed to a high-intensity x-ray free-electron-laser pulse, the x rays excite inner-shell electrons, leading to ionization through various atomic processes and creating high-energy-density plasma, i.e., warm or hot dense matter. The resulting system consists of atoms in various electronic configurations, thermalizing on sub-picosecond to picosecond timescales after photoexcitation. We present a simulation study of x-ray-heated solid-density matter. For this we use XMDYN, a Monte-Carlo molecular-dynamics-based code with periodic boundary conditions, which allows one to investigate non-equilibrium dynamics. XMDYN is capable of treating systems containing light and heavy atomic species with the full electronic configuration space and 3D spatial inhomogeneity. To validate our approach, we compare, for a model system, the electron temperatures and the ion charge-state distribution from XMDYN to results for the thermalized system based on the average-atom model implemented in XATOM, an *ab-initio* x-ray atomic physics toolkit extended to include a plasma environment. Further, we also compare the average charge evolution of diamond with the predictions of a Boltzmann continuum approach. We demonstrate that XMDYN results are in good quantitative agreement with the above-mentioned approaches, suggesting that the current implementation of XMDYN is a viable approach to simulating x-ray-driven non-equilibrium dynamics in solids. In order to illustrate the potential of XMDYN for treating complex systems we present calculations on the triiodo benzene derivative 5-amino-2,4,6-triiodoisophthalic acid (I3C), a compound of relevance for biomolecular imaging, consisting of heavy and light atomic species.' 
author: - Malik Muhammad Abdullah - Anurag - Zoltan Jurek - 'Sang-Kil Son' - Robin Santra bibliography: - 'warm-dens-matter.bib' title: 'A molecular-dynamics approach for studying the non-equilibrium behavior of x-ray-heated solid-density matter' --- \[sec:level1\]Introduction ========================== X-ray free-electron lasers (XFELs) [@pellegrinireview2016; @Christophreview2016] provide intense radiation with a pulse duration down to only tens of femtoseconds. The cross sections for the elementary atomic processes during x-ray–matter interactions are small. Delivering a high x-ray fluence can increase the probabilities of photoionization processes to saturation [@lindayoung2010]. Nonlinear phenomena arise because of the complex multiphoton ionization pathways within molecular or dense plasma environments [@vinko2012; @zastrau; @levy; @zoltan2014; @tachibana2015]. Theory has a key role in revealing the importance of different mechanisms in the dynamics. Many models have been developed for this purpose using both particle and continuum approaches [@hauriege2004; @beata2006; @peyrusse2012; @Scott2001; @bergh2004; @zoltan2004-2; @saalmann2009; @caleman2011-2; @haureige2012]. In order to give a complete description of the evolution of the atomic states in the plasma, one needs to account for the possible occurrence of all electronic configurations of the atoms/ions. A computationally demanding situation arises when a plasma consists of heavy atomic species [@rudek2012; @Fukuzawa2013]. For example, at a photon energy of 5.5keV, the number of electronic configurations accessible in a heavy atom such as xenon ($Z$=54) is about 20 million [@Fukuzawa2013]. If one wants to describe the accessible configuration space of two such atoms, one must deal with $(2\times10^7)^2 = 4\times10^{14}$ electronic configurations. It is clear that following the populations of all electronic configurations in a polyatomic system as a function of time is a formidable task. 
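The combinatorics behind these numbers can be illustrated with a short calculation. As a rough, hedged estimate (not the counting actually used in Ref. [@Fukuzawa2013]), assume each subshell occupied in the neutral ground state can independently hold anywhere from zero up to its ground-state occupation during the ionization dynamics; the number of electronic configurations is then the product of (occupation + 1) over the subshells:

```python
# Rough configuration-space estimate; the subshell occupations are standard
# ground-state values, but the independent-occupation assumption is an
# illustration only, not the selection rule used in the cited work.
def n_configs(occupations):
    out = 1
    for n in occupations:
        out *= n + 1  # each subshell may hold 0..n electrons
    return out

carbon = [2, 2, 2]                              # 1s, 2s, 2p
xenon  = [2, 2, 6, 2, 6, 10, 2, 6, 10, 2, 6]    # 1s ... 5p

print(n_configs(carbon))   # 27
print(n_configs(xenon))    # 70596603, i.e. a few times 10^7
```

This simple product already gives a few times $10^7$ configurations for a single xenon atom, the same order as the $\sim$$2\times10^7$ accessible configurations quoted above; squaring it for a pair of atoms yields the $10^{14}$-scale numbers in the text.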
To avoid this problem, the approximation of using superconfigurations has long been used [@Barshalom; @peyrusse2000; @bauche2015]. Moreover, the approach of using a set of average configurations [@chung2005; @lee1987] and the approach of limiting the available configurations to a pre-selected subset of configurations in predominant relaxation paths [@beata2016] have been applied. The most promising approach to address this challenge is to sample the most important pathways in the unrestricted polyatomic electronic configuration space. This can be realized by using a Monte-Carlo strategy, which is straightforward to implement in a particle approach. In the present study we simulate the effect of individual ultrafast XFEL pulses of different intensities incident on a model system of carbon atoms placed on a lattice and analyze the quasi-equilibrium plasma state of the material reached through ionization and electron plasma thermalization. In order to have a comprehensive description during electron plasma thermalization we include all possible atomic electronic configurations for Monte-Carlo sampling, and no pre-selection of transitions and configurations is introduced. To this end, we use XMDYN [@xatom-xmdyn; @zoltan2014; @tachibana2015], a Monte-Carlo molecular-dynamics-based code. XMDYN gives a microscopic description of a polyatomic system, and phenomena such as sequential multiphoton ionization [@lindayoung2010; @rudek2012], nanoplasma formation [@tachibana2015], thermalization of electrons through collisions and thermal emission [@tachibana2015] emerge as an outcome of a simulation. Probabilities of transitions between atomic states are determined by cross-section and rate data that are calculated by XATOM [@xatom-xmdyn; @sangyoulinda; @sangkil2012], a toolkit for x-ray atomic physics. In XMDYN individual ionization and relaxation paths are generated via a Monte-Carlo algorithm. 
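A minimal sketch of such a Monte-Carlo step — an illustration of the general kinetic-Monte-Carlo idea, not XMDYN's actual implementation — draws the waiting time and the identity of the next atomic event from the total rates of the competing processes:

```python
import math
import random

def kmc_step(rates, rng):
    """Pick the next event. `rates` maps process name -> rate (1/fs).
    Returns (waiting time in fs, name of the chosen process)."""
    total = sum(rates.values())
    dt = -math.log(rng.random()) / total      # exponentially distributed waiting time
    x = rng.random() * total                  # roulette-wheel selection among processes
    acc = 0.0
    for name, rate in rates.items():
        acc += rate
        if x < acc:
            return dt, name
    return dt, name                           # guard against floating-point rounding

# Hypothetical rates for a core-ionized light ion (illustrative numbers only)
rng = random.Random(42)
dt, process = kmc_step({"photoionization": 0.02, "Auger decay": 0.10,
                        "fluorescence": 0.001}, rng)
```

Repeating this step — updating the electronic configuration and the rate table after each event — generates one stochastic realization of an ionization/relaxation pathway; averaging over many realizations recovers the configuration populations without ever enumerating the full configuration space.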
A recent extension of XMDYN to periodic boundary conditions allows us to investigate bulk systems [@abdullah2015; @abdullah2016]. To validate the XMDYN approach towards a free-electron thermal equilibrium, we use an average-atom (AA) extension of XATOM [@sonPhysRevX], which is based on concepts of average-atom models used in plasma physics  [@Rozsnyai1972; @Liberman1979; @Perrot1982; @Peyrusse2006; @WILSON2006658]. AA gives a statistical description of the behavior of atoms immersed in a plasma environment. It calculates plasma properties such as ion charge-state populations and plasma electron densities for a system with a given temperature. We compare the electron temperatures and ion charge-state distributions provided by XMDYN and AA. We also make a comparison between predictions for the ionization dynamics in irradiated diamond obtained by the XMDYN particle approach and results from a Boltzmann continuum approach published recently [@beata2016]. With these comparisons, we demonstrate the potential of the XMDYN code for the description of high-energy-density bulk systems in and out of equilibrium. Finally, we consider a complex system of 5-amino-2,4,6-triiodoisophthalic acid (I3C in crystalline form) consisting of heavy and light atomic species. We show the evolution of average atomic charge states and free electron thermalization. We demonstrate that XMDYN can simulate the dynamics of x-ray-driven complex matter with all the possible electronic configurations without pre-selecting any pathways in the electronic configuration space. \[sec:level1\]THEORETICAL BACKGROUND ==================================== \[sec:level2\]XMDYN: Molecular dynamics with super-cell approach ---------------------------------------------------------------- XMDYN [@xatom-xmdyn] is a computational tool to simulate the dynamics of matter exposed to high-intensity x rays. 
A hybrid atomistic approach [@zoltan2004-2; @xatom-xmdyn] is applied where neutral atoms, atomic ions and ionized (free) electrons are treated as classical particles, with defined position and velocity vectors, charge and mass. The molecular-dynamics (MD) technique is applied to calculate the real-space dynamics of these particles by solving the classical equations of motion numerically. XMDYN treats only those orbitals as being quantized that are occupied in the ground state of the neutral atom. It keeps track of the electronic configuration of all the atoms and atomic ions. XMDYN calls the XATOM toolkit on the fly, which provides rate and cross-section data of x-ray-induced processes such as photoionization, Auger decay, and x-ray fluorescence, for all possible electronic configurations accessible during intense x-ray exposure. Probabilities derived from these parameters are then used in a Monte-Carlo algorithm to generate a realization of the stochastic inner-shell dynamics. XMDYN includes secondary (collisional) ionization and recombination, the two most important processes occurring due to an environment. XMDYN has been validated quantitatively against experimental data on finite samples calculated within open boundary conditions  [@zoltan2014; @tachibana2015]. Our focus here is the bulk properties of highly excited matter. XMDYN uses the concept of periodic boundary condition (PBC) to simulate bulk behavior [@abdullah2015; @abdullah2016]. In the PBC concept, we calculate the irradiation-induced dynamics of a smaller unit, called a super-cell. A hypothetical, infinitely extended system is constructed as a periodic extension of the super-cell. The Coulomb interaction is calculated for all the charged particles inside the super-cell within the minimum image convention [@metropolis]. 
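The minimum-image evaluation mentioned above can be sketched as follows (a hedged illustration in atomic units for a cubic super-cell, not XMDYN's actual routines):

```python
import numpy as np

def min_image_force(r_i, q_i, positions, charges, box):
    """Coulomb force on the particle at r_i from all other charges, using the
    minimum-image convention in a cubic super-cell of edge length `box`.
    Atomic units: e = 1 and 4*pi*eps0 = 1, so F = q_i q_j (r_i - r_j)/|r_i - r_j|^3."""
    d = positions - r_i                    # displacement vectors r_j - r_i
    d -= box * np.round(d / box)           # wrap each component to the nearest image
    r2 = np.sum(d * d, axis=1)
    mask = r2 > 0.0                        # drop the self-interaction term
    inv_r3 = r2[mask] ** -1.5
    contrib = charges[mask][:, None] * d[mask] * inv_r3[:, None]
    return -q_i * contrib.sum(axis=0)      # minus sign because d = r_j - r_i
```

For example, for two opposite unit charges at $x=0.05$ and $x=0.95$ in a unit box, the force on the first points in the $-x$ direction: the nearest periodic image of its partner sits at $x=-0.05$, only $0.1$ away, rather than across the cell.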
Therefore, the total Coulomb force acting on a charge is given by the interaction with other charges within its well-defined neighborhood containing also particles of the surrounding copies of the super-cell. \[sec:supercell\]Impact ionization and recombination ---------------------------------------------------- While core-excited states of atoms decay typically within ten or fewer femtoseconds, electron impact ionization and recombination events occur throughout the thermalization process and are in dynamical balance in thermal equilibrium. The models used in this study treat these processes on different footings, which we review in this section. Within the XMDYN particle approach, electron impact ionization is not a stochastic process (i.e., no random number is needed in the algorithm), but depends solely on the real-space dynamics (spatial location and velocity) of the particles and on the cross section. When a classical free electron is close to an atom/ion, its trajectory is extrapolated back to an infinite distance in the potential of the target ion by using energy and angular momentum conservation. Impact ionization occurs only if the impact parameter at infinity is smaller than the radius associated with the total electron impact ionization cross section. The total cross section is a sum of partial cross sections evaluated for the occupied orbitals, using the asymptotic kinetic energy of the impact electron. In the case of an ionization event the orbital to be ionized is chosen randomly, according to probabilities proportional to the subshell partial cross sections. XMDYN uses the binary-encounter-Bethe (BEB) cross sections [@kimandrudd1994] supplied with atomic parameters calculated with XATOM. Similarly, in XMDYN recombination is a process that evolves through the classical dynamics of the particles. XMDYN identifies, for each electron, the ion whose Coulomb potential is strongest at the electron's position, and calculates for how long this condition is fulfilled. 
Recombination occurs when an electron remains around the same ion for $n$ full periods (e.g., $n=1$) [@xatom-xmdyn; @georgescu07]. While recombination can be identified based on this definition, the electron is still kept classical if its classical orbital energy is higher than the orbital energy of the highest considered orbital $i$ containing a vacancy. When the classical binding becomes stronger, the classical electron is removed and the occupation number of the corresponding orbital is incremented by one. Although treating recombination the above way is somewhat phenomenological (e.g., no cross section derived from inverse processes is used), in particle simulations similar treatments are common [@Christian2005; @Edward2013; @georgescu07]. This process corresponds to three-body (or many-body) recombination as energy of electrons is transferred to other plasma electrons leading to the recombination event. The three-body recombination is the predominant recombination channel in a warm-dense environment. \[sec:supercell\]Electron plasma analysis ----------------------------------------- Electron plasma is formed when electrons are ejected from atoms in ionization events and stay among the ions through an extensive period as, e.g., in bulk matter. The plasma dynamics are governed not only by the Coulomb interaction between the particles but also by collisional ionization, recombination, and so on. XMDYN follows the system from the very first photoionization event through non-equilibrium states until free electron thermalization is reached asymptotically. In order to quantify the equilibrium properties reached, we fit the plasma electron velocity distribution using a Maxwell-Boltzmann distribution, $$f(v) = \sqrt{\left(\frac{1}{2\pi T}\right)^3} 4\pi v^2e^{-\frac{v^2}{2T}},$$ where $T$ represents the temperature (in units of energy), and $v$ is the electron speed. Atomic units are used unless specified. With the function defined in Eq. 
(1) we fit the temperature, which is used later to compare with equilibrium-state calculations. \[sec:level1\]Validation of the methodology =========================================== In order to validate how well XMDYN can simulate free-electron thermalization dynamics, we compare AA, in which full thermalization is assumed, with XMDYN after a thermal equilibrium has been reached. We first consider a model system consisting of carbon atoms. For a reasonable comparison of the results from XMDYN and AA, one should choose a system that can be addressed using both tools. AA does not consider any motion of atomic nuclei. Therefore we had to restrict the translational motion of atoms and atomic ions in the XMDYN simulations as well. In order to do so, we set the carbon mass artificially large, so that atomic movements were negligible throughout the calculations. Further, we increased the carbon-carbon distances to reduce the effect of the neighboring ions on the atomic electron binding energies. In the XMDYN simulations, we chose a super-cell of 512 carbon atoms arranged in a diamond structure, but with a 13.16Å lattice constant (in the case of diamond it is 3.567Å). The number density of the carbon atoms is $\rho_{0}=3.5\times10^{-3} \rm{\AA{}^{-3}}$, which corresponds to a mass density of $0.07 \rm{g/ cm^{3}}$. Plasma was generated by choosing different irradiation conditions typical at XFELs. Three different fluences, $\mathcal{F}_\text{low}\,=\,$6.7$\times10^{9}\,\rm{ph/}\rm{\mu m^{2}}$, $\mathcal{F}_\text{med}\,=\,$1.9$\times10^{11}\,\rm{ph/}\rm{\mu m^{2}}$, and $\mathcal{F}_\text{high}\,=\,$3.8$\times10^{11}\,\rm{ph/}\rm{\mu m^{2}}$, were considered. In all three cases the photon energy and pulse duration were 1 keV and 10 fs (full width at half maximum), respectively. From the XMDYN plasma simulations shown in Fig. \[fig:temp-bulk.eps\], the time evolution of the temperature of the electron plasma is analyzed by fitting to Eq. (1). 
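The fit to Eq. (1) can be sketched as follows (a minimal illustration with synthetic data, not the analysis code actually used; atomic units with $m_e = 1$, so a temperature $T$ corresponds to Gaussian velocity components of variance $T$):

```python
import numpy as np
from scipy.optimize import curve_fit

def maxwell_boltzmann(v, T):
    """Speed distribution of Eq. (1); atomic units, m_e = 1, T in Hartree."""
    return np.sqrt((1.0 / (2.0 * np.pi * T)) ** 3) * 4.0 * np.pi * v**2 \
        * np.exp(-v**2 / (2.0 * T))

# Synthetic "plasma": thermal velocity components with variance T_true
rng = np.random.default_rng(0)
T_true = 2.0                                       # Hartree, roughly 54 eV
v = np.linalg.norm(rng.normal(0.0, np.sqrt(T_true), size=(100_000, 3)), axis=1)

hist, edges = np.histogram(v, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(T_fit,), _ = curve_fit(maxwell_boltzmann, centers, hist, p0=[1.0])
# T_fit recovers T_true to within a few percent
```

Applied to the simulated electron speeds at successive time steps, such a fit traces the temperature evolution shown in Fig. \[fig:temp-bulk.eps\].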
Counterintuitively, right after photon absorption has finished, the temperature is still low, and then it gradually increases although no more energy is pumped into the system. The reason is that during the few tens of femtoseconds irradiation the fast photoelectrons are not yet part of the free electron thermal distribution; initially only the low-energy secondary electrons and Auger electrons that have lost a significant part of their energy in collisions determine the temperature. The fast electrons thermalize on longer timescales as shown in Figs. \[fig:temp-bulk.eps\](b) and (c), contributing to the equilibrated subset of electrons. In all cases equilibrium is reached within 100 fs after the pulse. ![Time evolution of the temperature of the electron plasma within XMDYN simulation during and after x-ray irradiation at different fluences: (a) $\mathcal{F}_\text{low}\,=\,$6.7$\times10^{9}\,\rm{ph/}\rm{\mu m^{2}}$, (b) $\mathcal{F}_\text{med}\,=\,$1.9$\times10^{11}\,\rm{ph/}\rm{\mu m^{2}}$ and (c) $\mathcal{F}_\text{high}\,=\,$3.8$\times10^{11}\,\rm{ph/}\rm{\mu m^{2}}$. In all three cases, the pulse duration is 10 fs FWHM; the pulse was centered at 20 fs, and the photon energy is 1 keV. The black curve represents the Gaussian temporal envelope. Note that in all cases equilibrium is reached within 100 fs after the pulse. []{data-label="fig:temp-bulk.eps"}](temp-bulk.eps){width="7.0cm" height="15cm"} AA calculates only the equilibrium properties of the system, which means that it does not consider the history of the system’s evolution through non-equilibrium states. We first calculate the total energy per atom, $E(T)$, as a function of temperature $T$ within a carbon system of density $\rho_{0}$. $$E(T) = \sum_p \varepsilon_p \tilde{n}_p(\mu,T) \int_{r \leq r_s} \!\!\! 
d^3 r \, \left| \psi_p(\mathbf{r}) \right|^2,$$ where $p$ is a one-particle state index, $\varepsilon_p$ and $\psi_{p}$ are the corresponding orbital energy and orbital, and $\tilde{n}_p$ stands for the fractional occupation numbers at chemical potential $\mu$. Details are found in Ref. \[\]. In this way we obtain a relation between the average energy absorbed per atom, $\Delta{E}=E(T)-E(0)$, and the electron temperature (see Fig. \[fig:Energ-absorb-Bulk\]). From XMDYN the average number of photoionization events per atom, ${n_{\mathrm{ph}}}$, is available for each fluence point, and therefore the energy absorbed on average by an atom is known (= ${n_{\mathrm{ph}}}\times\rm{\omega_{\mathrm{ph}}}$, where $\rm{\omega_{\mathrm{ph}}}$ is the photon energy). Using this value we can select the corresponding temperature that AA yields. This temperature is compared with that fitted from the XMDYN simulation. All these results are in reasonable agreement, as shown in Table \[table:result\_table\]. Later we use this temperature for calculating the charge-state distributions. ![Relation between plasma temperature and energy absorbed per atom in AA calculations for a carbon system of mass density $0.07 \rm{g/ cm^{3}}$.[]{data-label="fig:Energ-absorb-Bulk"}](Energ-absorb-Bulk.eps){width="7.0cm"}

  **Parameters**                  **Low fluence**   **Medium fluence**   **High fluence**
  ------------------------------- ----------------- -------------------- --------------------
  Fluence (ph/$\rm{\mu m^2}$)     $6.7\times10^9$   $1.9\times10^{11}$   $3.8\times10^{11}$
  Energy absorbed per atom (eV)   $29$              $665$                $1170$
  XMDYN temperature (eV)          $7$               $57$                 $91$
  AA temperature (eV)             $6$               $60$                 $83$

Figure \[fig:frac-yield-elec-dis\] shows the kinetic-energy distribution of the electron plasma (in the left panels) and the charge-state distributions (in the right panels) for the three different fluences. 
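The lookup described above — converting the absorbed energy $n_{\mathrm{ph}}\,\omega_{\mathrm{ph}}$ into an AA temperature via the $\Delta E(T)$ relation — amounts to inverting a monotonic tabulated curve. A hedged sketch, using only the three AA points from the table (a real AA scan would tabulate many more):

```python
import numpy as np

# (T, Delta-E) pairs taken from the AA rows of the table above (eV)
T_tab  = np.array([6.0, 60.0, 83.0])
dE_tab = np.array([29.0, 665.0, 1170.0])

def temperature_from_energy(dE_per_atom):
    """Invert Delta-E(T) by linear interpolation; valid because the curve is monotonic."""
    return float(np.interp(dE_per_atom, dE_tab, T_tab))

# Medium-fluence case: 665 eV absorbed per atom
print(temperature_from_energy(665.0))   # 60.0
```

With a densely tabulated $\Delta E(T)$ curve the interpolation error becomes negligible, and the resulting temperature can be compared directly with the value fitted from the XMDYN velocity distribution.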
The charge-state distributions obtained from XMDYN at the final timestep (250 fs) are compared to those obtained from AA at the temperatures specified in Table \[table:result\_table\]. Although similar charge states are populated using the two approaches, differences can be observed: AA yields consistently higher ionic charges than XMDYN (20%–30% higher average charges) for the cases investigated. This probably has the following reasons. XMDYN calls XATOM on the fly to calculate re-optimized orbitals for each electronic configuration. In this way XMDYN accounts for the fact that ionizing an ion of charge $Q$ costs less energy than ionizing an ion of charge $Q+1$. However, in the current implementation of AA, this effect is not considered. At a given temperature, AA uses the same orbitals (and, therefore, the same orbital energies) irrespective of the charge state. A likely consequence is that AA gives more population to higher charge states, simply because their binding energies are underestimated. That could also be the reason why AA produces wider charge-state distributions and predicts a somewhat higher average charge than XMDYN does. The other reason for the discrepancies could be the fact that XMDYN treats only those orbitals as being quantized that are occupied in the ground state of the neutral atom. For carbon, these are the $1s$, $2s$, and $2p$ orbitals. All states above are treated classically in XMDYN, resulting in a continuum of bound states. As a consequence, the density of states is different, and this may yield different orbital populations and therefore different charge-state distributions. Moreover, while free-electron thermalization has been ensured, the bound electrons are not necessarily fully thermalized in XMDYN. In spite of the discrepancies observed, XMDYN and AA equilibrium properties are in reasonably good agreement. 
![ Kinetic-energy distribution of the electron plasma and charge-state distributions from AA and XMDYN simulations (250 fs after the irradiation) for the low fluence (a,b), the medium fluence (c,d), and the high fluence (e,f).[]{data-label="fig:frac-yield-elec-dis"}](frac-yield-elec-dis.eps){width="50.00000%"} ![image](Averageenergy-Diam-Bulk.eps){width="7.25cm" height="5.75cm"} ![image](avg-char-xmdyn-contin.eps){width="14.0cm" height="5.6293cm"} We also performed simulations under the conditions that had been used in a recent publication using a continuum approach [@beata2016]. In these simulations, we do not restrict nuclear motions. A Gaussian x-ray pulse of 10 fs FWHM was used. The intensities considered lie within the regime typically used for high-energy-density experiments: $I_\text{max} = 10^{16}\, \rm{W/cm^2}$ for $\omega_{\rm{ph}}$ = 1000eV, and $I_\text{max} = 10^{18}\, \rm{W/cm^2}$ for $\omega_{\rm{ph}}$ = 5000eV. We employed a super-cell of diamond (mass density = $3.51\,\rm{g/cm^{3}}$) containing 1000 carbon atoms within the PBC framework. In this study, 25 different Monte-Carlo realizations were calculated and averaged for each irradiation case in order to improve the statistics of the results. For a system of 1000 carbon atoms each XMDYN trajectory takes 45 minutes of runtime. The average energy absorbed per atom \[Fig. \[fig:energ-absorb\]\] is $\sim{28}$eV and $\sim{26}$eV, respectively, for the 1000-eV and 5000-eV photon-energy cases, in agreement with Ref. \[\]. Figure \[fig:avg-ch\] shows the time evolution of the average charge for the two different photon energies. Average atomic charge states of +1.1 and +0.9, respectively, were obtained long after the pulse was over. Although the rapid increase of the average ion charge occurs on very similar timescales, the charge values at the end of the calculation are 30% and 40% higher than those in Ref. \[\] for the 1000-eV and 5000-eV cases, respectively \[Fig. \[fig:avg-ch\](a,b)\]. 
We can identify two reasons for such differences in the final charge states. One is that two different formulas for the total impact ionization cross section were used in the two approaches. In Ref. \[\] the cross sections are approximated from experimental ground-state atomic and ionic data [@lennonetal1988], while XMDYN employs the semi-empirical BEB formula taking into account state-specific properties. Figure \[fig:xmdyn-continum-cs\] compares these cross sections for the neutral carbon atom. It can be seen that the cross section and, therefore, the rate of the ionization used by XMDYN are larger, which can shift the final average charge state higher as well. The second reason is the evaluation of the three-body recombination cross section. In Ref. \[\] recombination is defined using the principle of microscopic reversibility, which states that the cross section of impact ionization can be used to calculate the recombination rate [@vikrant2016]. In the current implementation of the Boltzmann code the two-body distribution function is approximated using one-body distribution functions in the evaluation of the rate for three-body recombination, whereas in XMDYN correlations at all levels are naturally captured within the classical framework due to the explicit calculation of the microscopic electronic fields. ![Comparison of impact ionization cross sections for the neutral ground-state carbon atom used in the current work within XMDYN based on the BEB formula [@kimandrudd1994], and the cross sections used in the continuum approach of Ref. \[\] based on experimental data.[]{data-label="fig:xmdyn-continum-cs"}](BEB-Lennon-crosssesctions.eps){width="7.25cm" height="5.75cm"} \[sec:level1\]Application ========================= In order to demonstrate the capabilities of XMDYN we investigate the complex system of I3C in crystalline form (chemical composition: $\rm{C_8H_4I_3NO_4\cdot H_2O}$) [@Beck:ba5137] irradiated by intense x rays. 
I3C contains the heavy atomic species iodine, which makes it a good prototype for investigations of experimental phasing methods based on anomalous scattering [@Guss806; @Hendrickson51; @sangkilprl; @sangkil2013; @Galli2015; @Galli2015-2]. We considered pulse parameters used in an imaging experiment recently performed at the Linac Coherent Light Source (LCLS) free-electron laser [@I3C-team]. The photon energy was 9.7keV and the pulse duration was 10fs FWHM. Two different fluences were considered in the simulations, $\mathcal{F}_{\rm{high}}\,=\,$1.0$\times10^{13}\,\rm{ph/}\rm{\mu m^{2}}$ (estimated to be in the center of the focus) and its half value $\mathcal{F}_{\rm{med}}\,=\,$5.0$\times10^{12}\,\rm{ph/}\rm{\mu m^{2}}$. In these simulations, we do not restrict nuclear motions. The computational cell used in the simulations contained 8 molecules of I3C (184 atoms in total). The time propagation ends 250fs after the pulse. For the analysis 50 XMDYN trajectories are calculated for both fluence cases. These trajectories sample the stochastic dynamics of the system without any restriction of the electronic configuration space, which comprises $(2.0\times10^{7})^{24}$ possible configurations for the subsystem of the 24 iodine atoms alone. The calculation of such an XMDYN trajectory takes approximately 150 minutes on a Tesla M2090 GPU, while the same calculation takes 48 hours on an Intel Xeon X5660 2.80GHz CPU (single core). Figure \[fig:i3c\_charge\] shows the average charge for the different atomic species in I3C as a function of time. Both fluences pump an enormous amount of energy into the system, predominantly through the photoionization of the iodine atoms due to their large photoionization cross section. In both cases almost all the atomic electrons are removed from the light atoms, but mainly via secondary ionization. 
The ionization of iodine is very efficient: already at the weaker fluence $\mathcal{F}_{\rm{med}}$, the iodine atoms lose on average roughly half of their electrons, whereas for the high-fluence case the average atomic charge goes even above +40. Further, we also investigate the free-electron thermalization. The plasma electrons reach thermalization via non-equilibrium evolution within approximately 200fs. The Maxwellian distribution of the kinetic energy of these electrons corresponds to very high temperatures: 365eV for $\mathcal{F}_{\rm{med}}$ and 1keV for $\mathcal{F}_{\rm{high}}$ (see Fig. \[fig:i3c\_thermalization\]). Hence, we have shown that XMDYN is a tool that can treat systems with 3D spatial inhomogeneity, whereas continuum models usually deal with uniform or spherically symmetric samples. If the sample includes heavy atomic species, pre-selecting electronic configurations can affect the dynamics of the system. XMDYN allows for a flexible treatment of the atomic composition of the sample and, particularly, easy access to the electronic structure of heavy atoms with a large electronic configuration space. ![image](i3c_charges.eps){width="14.5cm" height="5.75cm"} ![image](i3c_thermalization.eps){width="14.5cm" height="5.75cm"} \[sec:level1\]Conclusions ========================= We have investigated the electron plasma thermalization dynamics of x-ray-heated carbon systems using the simulation tool XMDYN and compared its predictions to two other conceptually different simulation methods, the average-atom model (AA) and the Boltzmann continuum approach. Both XMDYN and AA are naturally capable of addressing ions with arbitrary electronic configurations, a very common situation in high-energy-density matter generated by, e.g., high-intensity x-ray irradiation. We found very similar quasi-equilibrium temperatures for the two methods. 
Qualitative agreement can be observed between the predicted ion charge-state distributions, although AA tends to yield somewhat higher charges. The reason could be that, in the current implementation, AA uses fixed atomic binding energies irrespective of the atomic electron configuration. We have also compared results from XMDYN and the Boltzmann continuum approach for free electron thermalization dynamics of XFEL-irradiated diamond as a validation of our approach. Thermal equilibrium of the electron plasma is reached within similar times in the two descriptions, although the asymptotic average ion charge states are somewhat different. The discrepancy could be attributed to the different approaches for impact ionization and recombination processes in the two models and to different parametrizations used in the simulation. Moreover, we have considered a complex system, crystalline I3C, containing the heavy atomic species iodine. We calculated the dynamics and evolution of the system from an x-ray-induced non-equilibrium state to a state where the plasma electrons are thermalized and hot dense matter is formed. The atomic electronic configurations for iodine are taken into account in full detail. Therefore, with XMDYN the treatment of systems including heavy atomic species (exhibiting complex inner-shell relaxation pathways) is comprehensive and expected to be reliable. Finally, we note that, in contrast to a Boltzmann continuum approach, it is straightforward within [XMDYN]{} to treat spatially inhomogeneous systems consisting of several or even many atomic species. Acknowledgement {#acknowledgement .unnumbered} =============== We thank Beata Ziaja for fruitful discussions about the Boltzmann continuum approach. We also thank John Spence, Richard Kirian, Henry Chapman, and Dominik Oberthuer, for stimulating the I3C calculations presented in this work. 
This work has been supported by the excellence cluster “The Hamburg Center for Ultrafast Imaging (CUI): Structure, Dynamics and Control of Matter at the Atomic Scale” of the Deutsche Forschungsgemeinschaft.
--- abstract: 'Document retrieval aims at finding the most important documents where a pattern appears in a collection of strings. Traditional pattern-matching techniques yield brute-force document retrieval solutions, which has motivated the research on tailored indexes that offer near-optimal performance. However, an experimental study establishing which alternatives are actually better than brute force, and which perform best depending on the collection characteristics, has not been carried out. In this paper we address this shortcoming by exploring the relationship between the nature of the underlying collection and the performance of current methods. Via extensive experiments we show that established solutions are often beaten in practice by brute-force alternatives. We also design new methods that offer superior time/space trade-offs, particularly on repetitive collections.' author: - Gonzalo Navarro - 'Simon J. Puglisi' - Jouni Sirén bibliography: - 'paper.bib' title: 'Document Retrieval on Repetitive Collections[^1]' --- Introduction ============ The [*pattern matching*]{} problem, that is, preprocessing a text collection so as to efficiently find the occurrences of patterns, is a classic in Computer Science. The optimal suffix tree solution [@Wei73] dates back to 1973. Suffix arrays [@MM93] are a simpler, near-optimal alternative. Surprisingly, the natural variant of the problem called [*document listing*]{}, where one wants to find simply in which texts of the collection (called the [*documents*]{}) a pattern appears, was not solved optimally until almost 30 years later [@Mut02]. Another natural variant, the [*top-$k$ documents*]{} problem, where one wants to find the $k$ [*most relevant*]{} documents where a pattern appears, for some notion of relevance, had to wait for another 10 years [@HSV09; @NN12]. A general problem with the above indexes is their size. 
While for moderate-sized collections (of total length $n$) their linear space (i.e., $\Oh(n)$ words, or $\Oh(n\log n)$ bits) is affordable, the constant factors multiplying the linear term make the solutions prohibitive on large collections. In this aspect, again, the pattern matching problem has had some years of advantage. The first compressed suffix arrays (CSAs) appeared in the year 2000 (see [@NM07]) and since then have evolved until achieving, for example, asymptotically optimal space in terms of high-order empirical entropy and time slightly over the optimal. There has been much research on similarly compressed data structures for document retrieval (see [@NavACMcs14]). Since the foundational paper of Hon et al. [@HSV09], results have come close to using just $\oh(n)$ bits on top of the space of a CSA and almost optimal time. Compressing in terms of statistical entropy is adequate in many cases, but it fails in various types of modern collections. [*Repetitive*]{} document collections, where most documents are similar, in whole or piecewise, to other documents, naturally arise in fields like computational biology, versioned collections, periodic publications, and software repositories (see [@Naviwoca12]). The successful pattern matching indices for these types of collections use grammar or Lempel-Ziv compression, which exploit repetitiveness [@CN12; @FN13]. There are only a couple of document listing indices for repetitive collections [@GKNPS13; @CM13], and none for the top-$k$ problem. Although several document retrieval solutions have been implemented and tested in practice [@NV12; @KN13; @FN13; @GKNPS13], no systematic practical study of how these indexes perform, depending on the collection characteristics, has been carried out. A first issue is to determine under what circumstances specific document listing solutions actually beat brute-force solutions based on pattern matching. 
In many applications documents are relatively small (a few kilobytes) and therefore are unlikely to contain many occurrences of a given pattern. This means that in practice the number of pattern occurrences ($occ$) may not be much larger than the number of documents the pattern occurs in ($docc$), and therefore pattern matching-based solutions may be competitive. A second issue that has been generally neglected in the literature is that collections have different kinds of repetitiveness, depending on the application. For example, one might have a set of distinct documents, each one internally repetitive piecewise, or a set of documents that are in whole similar to each other. The repetition structure can be linear (each document similar to a previous one) as in versioned collections, or even tree-like, or completely unstructured, as in some biological collections. It is not clear how current document retrieval solutions behave depending on the type of repetitiveness. In this paper we carry out a thorough experimental study of the performance of most existing solutions to document listing and top-$k$ document retrieval, considering various types of real-life and synthetic collections. We show that brute-force solutions are indeed competitive in several practical scenarios, and that some existing solutions perform well only on some kinds of repetitive collections, whereas others present a more stable behavior. We also design new and superior alternatives for top-$k$ document retrieval. Background {#section:background} ========== Let $T[1,n]$ be a concatenation of a collection of $d$ documents. We assume each document ends with a special character $\$$ that is lexicographically smaller than any other character of the alphabet. The *suffix array* of the collection is an array ${\ensuremath{\mathsf{SA}}}[1,n]$ of pointers to the suffixes of $T$ in lexicographic order. 
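For concreteness, the suffix array of a small concatenated collection can be built naively, as in the following sketch (quadratic time, for illustration only; practical indexes use linear-time construction and, in this setting, a CSA). The toy collection and the use of `$` as terminator are illustrative assumptions:

```python
def suffix_array(T):
    """Naive suffix array: positions of T sorted by lexicographic suffix order.

    Quadratic time and space; fine for a toy example. With ASCII input the
    terminator '$' is already smaller than letters, matching the convention
    that '$' is lexicographically smallest.
    """
    return sorted(range(len(T)), key=lambda i: T[i:])

# Two documents, "abab$" and "ab$", concatenated into T (0-based positions here).
T = "abab$ab$"
SA = suffix_array(T)
```

Here `SA` is `[7, 4, 5, 2, 0, 6, 3, 1]`: the two `$`-suffixes come first, followed by the suffixes starting with `a` and then those starting with `b`.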
The *document array* ${\ensuremath{\mathsf{DA}}}[1,n]$ is a related array, where ${\ensuremath{\mathsf{DA}}}[i]$ is the identifier of the document containing $T[{\ensuremath{\mathsf{SA}}}[i]]$. Let $B[1,n]$ be a bitvector, where $B[i]=1$ if a new document begins at $T[i]$. We can map text positions to document identifiers by: ${\ensuremath{\mathsf{DA}}}[i] = {\ensuremath{\mathsf{rank}}}_{1}(B,{\ensuremath{\mathsf{SA}}}[i])$, where ${\ensuremath{\mathsf{rank}}}_{1}(B,j)$ is the number of $1$-bits in prefix $B[1,j]$. In this paper, we consider indexes supporting four kinds of queries: 1) ($P$) returns the range $[sp,ep]$, where the suffixes in ${\ensuremath{\mathsf{SA}}}[sp,ep]$ start with pattern $P$; 2) ($sp,ep$) returns ${\ensuremath{\mathsf{SA}}}[sp,ep]$; 3) ($P$) returns the identifiers of documents containing pattern $P$; and 4) ($P,k$) returns the identifiers of the $k$ documents containing the most occurrences of $P$. CSAs support the first two queries. () is relatively fast, while () can be much slower. The main time/space trade-off in a CSA, the *suffix array sample period*, affects the performance of () queries. Larger sample periods result in slower and smaller indexes. Muthukrishnan’s document listing algorithm [@Mut02] uses an array ${\ensuremath{\mathsf{C}}}[1,n]$, where ${\ensuremath{\mathsf{C}}}[i]$ points to the last occurrence of ${\ensuremath{\mathsf{DA}}}[i]$ in ${\ensuremath{\mathsf{DA}}}[1,i-1]$. Given a query range $[sp,ep]$, ${\ensuremath{\mathsf{DA}}}[i]$ is the first occurrence of that document in the range iff ${\ensuremath{\mathsf{C}}}[i] < sp$. A *range minimum query* (RMQ) structure over ${\ensuremath{\mathsf{C}}}$ is used to find the position $i$ with the smallest value in ${\ensuremath{\mathsf{C}}}[sp,ep]$. If ${\ensuremath{\mathsf{C}}}[i] < sp$, the algorithm reports ${\ensuremath{\mathsf{DA}}}[i]$, and continues recursively in $[sp,i-1]$ and $[i+1,ep]$. 
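A minimal sketch of this machinery, using 0-based arrays and replacing the succinct RMQ structure by a linear scan over ${\ensuremath{\mathsf{C}}}$ (a real index would use a constant-time RMQ structure; the toy arrays below are illustrative):

```python
def build_DA(SA, B):
    # DA[i] = rank_1(B, SA[i]): the document holding the suffix T[SA[i]..].
    rank, r = [], 0
    for bit in B:
        r += bit
        rank.append(r)
    return [rank[p] for p in SA]

def build_C(DA):
    # C[i]: position of the previous occurrence of DA[i] in DA[0..i-1], -1 if none.
    last, C = {}, []
    for i, d in enumerate(DA):
        C.append(last.get(d, -1))
        last[d] = i
    return C

def list_documents(DA, C, sp, ep):
    # Muthukrishnan's recursion: DA[i] is the first occurrence of its
    # document in DA[sp..ep] iff C[i] < sp.
    out = set()
    def rec(l, r):
        if l > r:
            return
        i = min(range(l, r + 1), key=lambda j: C[j])  # RMQ on C[l..r]
        if C[i] < sp:
            out.add(DA[i])
            rec(l, i - 1)
            rec(i + 1, r)
    rec(sp, ep)
    return out
```

Note that the comparison `C[i] < sp` always uses the original left end of the query range, not the left end of the current recursive subrange.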
Sadakane [@Sad07] improved the space usage with two observations: 1) if the recursion is done in preorder from left to right, ${\ensuremath{\mathsf{C}}}[i] \ge sp$ iff document ${\ensuremath{\mathsf{DA}}}[i]$ has been seen before, so array ${\ensuremath{\mathsf{C}}}$ is not needed; and 2) array ${\ensuremath{\mathsf{DA}}}$ can also be removed by using () and $B$ instead. Let ${\ensuremath{\mathsf{lcp}}}(S,T)$ be the length of the *longest common prefix* of sequences $S$ and $T$. The LCP array of $T[1,n]$ is an array ${\ensuremath{\mathsf{LCP}}}[1,n]$, where ${\ensuremath{\mathsf{LCP}}}[i] = {\ensuremath{\mathsf{lcp}}}(T[{\ensuremath{\mathsf{SA}}}[i-1],n], T[{\ensuremath{\mathsf{SA}}}[i],n])$. We obtain the *interleaved LCP array* ${\ensuremath{\mathsf{ILCP}}}[1,n]$ by building separate LCP arrays for each of the documents, and interleaving them according to the document array. As ${\ensuremath{\mathsf{ILCP}}}[i] < {\ensuremath{\lvert P \rvert}}$ iff position $i$ contains the first occurrence of ${\ensuremath{\mathsf{DA}}}[i]$ in ${\ensuremath{\mathsf{DA}}}[sp,ep]$, we can use Sadakane’s algorithm with RMQs over ${\ensuremath{\mathsf{ILCP}}}$ instead of ${\ensuremath{\mathsf{C}}}$ [@GKNPS13]. If the collection is repetitive, we can get a smaller and faster index by building the RMQ only over the run heads in ${\ensuremath{\mathsf{ILCP}}}$. Algorithms {#section:algorithms} ========== In this section we review [*practical*]{} methods for document listing and top-$k$ document retrieval. For a more detailed review see, e.g., [@NavACMcs14]. [**Brute force.**]{} These algorithms sort the document identifiers in range ${\ensuremath{\mathsf{DA}}}[sp,ep]$ and report each of them once. stores ${\ensuremath{\mathsf{DA}}}$ in $n \log d$ bits, while retrieves the range ${\ensuremath{\mathsf{SA}}}[sp,ep]$ with () and uses bitvector $B$ to convert it to ${\ensuremath{\mathsf{DA}}}[sp,ep]$. 
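The brute-force step just described reduces to sorting and deduplicating a range of the document array; a minimal sketch (0-based, with the frequency-counting top-$k$ variant that follows the same idea):

```python
from collections import Counter

def brute_list(DA, sp, ep):
    # Document listing: report each distinct identifier in DA[sp..ep] once.
    return sorted(set(DA[sp:ep + 1]))

def brute_topk(DA, sp, ep, k):
    # Top-k: count per-document frequencies in the range, keep the k largest.
    return [doc for doc, _ in Counter(DA[sp:ep + 1]).most_common(k)]
```

The variant based on () would first materialize `DA[sp..ep]` from the suffix array range via rank queries on $B$, which is where its extra cost comes from.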
Both algorithms can also be used for top-$k$ retrieval by computing the frequency of each document identifier and then sorting by frequency. [**Sadakane.**]{} This is a family of algorithms based on Sadakane’s improvements [@Sad07] to Muthukrishnan’s algorithm [@Mut02]. is the original algorithm of Sadakane, while uses an explicit document array instead of retrieving the document identifiers with (). and are otherwise the same, respectively, except that they build the RMQ over ${\ensuremath{\mathsf{ILCP}}}$ [@GKNPS13] instead of ${\ensuremath{\mathsf{C}}}$. [**Wavelet tree.**]{} A *wavelet tree* over a sequence can be used to quickly list the distinct values in any substring, and hence a wavelet tree over ${\ensuremath{\mathsf{DA}}}$ can be a good solution for many document retrieval problems. The best known implementation of wavelet tree-based document listing [@NV12] can use plain, entropy-compressed [@NM07], and grammar-compressed [@LM00] bitvectors in the wavelet tree. Our uses a heuristic similar to the original WT-alpha [@NV12], multiplying the size of the plain bitvector by $0.81$ and the size of the entropy-compressed bitvector by $0.9$, before choosing the smallest one for each level of the tree. For top-$k$ retrieval, combines the wavelet tree used in document listing with a space-efficient implementation [@NV12] of the top-$k$ trees of Hon et al. [@HSV09]. Out of the alternatives investigated by Navarro and Valenzuela [@NV12], we tested the greedy algorithm, LIGHT and XLIGHT encodings for the trees, and sampling parameter $g' = 400$. In the results, we use the slightly smaller XLIGHT. [**Precomputed document listing.**]{}  [@GKNPS13] builds a sparse suffix tree for the collection, and stores the answers to document listing queries for the nodes of the tree. For long query ranges, we compute the answer to the () query as a union of a small number of stored answer sets. The answers for short ranges are computed by using . 
is the original version, using a web graph compressor [@HNspire12.3] to compress the sets. If a subset $S'$ of document identifiers occurs in many of the stored sets, the compressor creates a grammar rule $X \to S'$, and replaces the subset with $X$. We chose block size $b=256$ and storing factor $\beta=16$ as good general-purpose parameter values. We extend in Section \[section:pdl\]. [**Grammar-based.**]{}  [@CM13] is an adaptation of a grammar-compressed self-index [@CN12] for document listing. Conceptually similar to , uses  [@LM00] to parse the collection. For each nonterminal symbol in the grammar, it stores the set of document identifiers whose encoding contains the symbol. A second round of is used to compress the sets. Unlike most of the other solutions, is an independent index and needs no CSA to operate. [**Lempel-Ziv.**]{}  [@FN13] is an adaptation of self-indexes based on LZ78 parsing for document listing. Like , does not need a CSA. [**Grid.**]{}  [@KN13] is a faster but usually larger alternative to . It can answer top-$k$ queries quickly if the pattern occurs at least twice in each reported document. If documents with just one occurrence are needed, uses a variant of to find them. We also tried to use for document listing, but the performance was not good, as it usually reverted to . Extending Precomputed Document Listing {#section:pdl} ====================================== In addition to , we implemented another variant of precomputed document listing [@GKNPS13] that uses  [@LM00] instead of the biclique-based compressor. In the new variant, named , each stored set is represented as an increasing sequence of document identifiers. The stored sets are compressed with , but otherwise is the same as . Due to the multi-level grammar generated by , decompressing the sets can be slower in than in . Another drawback comes from representing the sets as sequences: when the collection is non-repetitive, cannot compress the sets very well. 
On the positive side, compression is much faster and more stable. We also tried an intermediate variant, , that uses -like set compression. While ordinary replaces common substrings $ab$ of length $2$ with grammar rules $X \to ab$, the compressor used in searches for symbols $a$ and $b$ that occur often in the same sets. Treating the sets this way should lead to better compression on non-repetitive collections, but unfortunately our current compression algorithm is still too slow with non-repetitive collections. With repetitive collections, the size of is very similar to . Representing the sets as sequences allows for storing the document identifiers in any desired order. One interesting order is the top-$k$ order: store the identifiers in the order they should be returned by a () query. This forms the basis of our new structure for top-$k$ document retrieval. In each set, document identifiers are sorted by their frequencies in decreasing order, with ties broken by sorting the identifiers in increasing order. The sequences are then compressed by . If document frequencies are needed, they are stored in the same order as the identifiers. The frequencies can be represented space-efficiently by first run-length encoding the sequences, and then using differential encoding for the run heads. If there are $b$ suffixes in the subtree corresponding to the set, there are $\Oh(\sqrt{b})$ runs, so the frequencies can be encoded in $\Oh(\sqrt{b} \log b)$ bits. There are two basic approaches to using the structure for top-$k$ document retrieval. We can set $\beta = 0$, storing the document sets for all suffix tree nodes above the leaf blocks. This approach is very fast, as we need only decompress the first $k$ document identifiers from the stored sequence. It works well with repetitive collections, while the total size of the document sets becomes too large with non-repetitive collections. 
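The frequency encoding described above can be sketched as follows: run-length encode the non-increasing frequency sequence, then store the run heads differentially (the succinct integer coding of the resulting values is omitted in this sketch):

```python
def encode_freqs(freqs):
    # Run-length encode freqs, then store run heads as differences.
    if not freqs:
        return [], []
    runs = []                              # [head, length] pairs
    for f in freqs:
        if runs and runs[-1][0] == f:
            runs[-1][1] += 1
        else:
            runs.append([f, 1])
    heads = [runs[0][0]] + [runs[i][0] - runs[i - 1][0]
                            for i in range(1, len(runs))]
    lengths = [length for _, length in runs]
    return heads, lengths

def decode_freqs(heads, lengths):
    out, f = [], 0
    for d, length in zip(heads, lengths):
        f += d
        out.extend([f] * length)
    return out
```

With $\Oh(\sqrt{b})$ runs, the two short arrays produced here are what gets stored in $\Oh(\sqrt{b} \log b)$ bits.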
We tried this approach with block sizes $b = 64$ ( without frequencies and with frequencies) and $b = 256$ ( and ). Alternatively, we can build the structure normally with $\beta > 1$, achieving better compression. Answering queries is now slower, as we have to decompress multiple document sets with frequencies, merge the sets, and determine the top $k$. We tried different heuristics for merging only prefixes of the document sequences, stopping when a correct answer to the top-$k$ query could be guaranteed. The heuristics did not generally work well, making brute-force merging the fastest alternative. We used block size $b = 256$ and storing factors $\beta = 2$ () and $\beta = 4$ (). Smaller block sizes increased both index size and query times, as the number of sets to be merged was generally larger. Experimental Data {#section:data} ================= We did extensive experiments with both real and synthetic collections.[^2] The details of the collections can be seen in Table \[table:collections\] in the Appendix, where we also describe how the search patterns were obtained. Most of our document collections were relatively small, around 100 MB in size, as the implementation uses 32-bit libraries, while requires large amounts of memory for index construction. We also used larger versions of some collections, up to 1 GB in size, to see how collection size affects the results. In practice, collection size was more important in top-$k$ document retrieval, as increasing the number of documents generally increases the $docc/k$ ratio. In document listing, document size is more important than collection size, as the performance of depends on the $occ/docc$ ratio. [**Real collections.**]{} and are repetitive collections generated from a Finnish language Wikipedia archive with full version history. The collection consists of either $60$ pages (small) or $280$ pages (large), with a total of $8834$ or $65565$ revisions. 
In ${\textsf{Page}}$, all revisions of a page form a single document, while each revision becomes a separate document in ${\textsf{Revision}}$. is a nonrepetitive collection of $7000$ or $90000$ pages from a snapshot of the English language Wikipedia. is a repetitive collection containing the genomes of $100000$ or $227356$ influenza viruses. is a nonrepetitive collection of $143244$ protein sequences used in many document retrieval papers (e.g., [@NV12]). As the full collection is only 54 MB, there is no large version of . [**Synthetic collections.**]{} To explore the effect of collection repetitiveness on document retrieval performance in more detail, we generated three types of synthetic collections, using files from the Pizza & Chilli corpus[^3]. is similar to . Each collection has $1$, $10$, $100$, or $1000$ base documents, $100000$, $10000$, $1000$, or $100$ variants of each base document, respectively, and mutation rate $p = 0.001$, $0.003$, $0.01$, $0.03$, or $0.1$. We generated the base documents by mutating a sequence of length $1000$ from the DNA file with zero-order entropy preserving point mutations, with probability $10p$. We then generated the variants in the same way with mutation rate $p$. is similar to . We read $10$, $100$, or $1000$ base documents of length $10000$ from the English file, and generated $1000$, $100$, or $10$ variants of each base document, respectively. The variants were generated by applying zero-order entropy preserving point mutations with probability $0.001$, $0.003$, $0.01$, $0.03$, or $0.1$ to the base document, and all variants of a base document were concatenated to form a single document. We also generated collections similar to by making each variant a separate document. These collections are called . Experimental Results {#section:experiments} ==================== We implemented , , and ourselves[^4], and modified existing implementations of , , , and for our purposes. All implementations were written in C++. 
Details of our test machine are in the Appendix. As our CSA, we used RLCSA [@Maekinen2010], a practical implementation of a CSA that compresses repetitive collections well. The () support in RLCSA includes optimizations for long query ranges and repetitive collections, which is important for and . We used suffix array sample periods $8, 16, 32, 64, 128$ for non-repetitive collections and $32, 64, 128, 256, 512$ for repetitive ones. For algorithms using a CSA, we broke the ($P$) and ($P,k$) queries into a ($P$) query, followed by a ($[sp,ep]$) query or ($[sp,ep],k$) query, respectively. The measured times do not include the time used by the () query. As this time is common to all solutions using a CSA, and negligible compared to the time used by and , the omission does not affect the results. [**Document listing with real collections.**]{} Figure \[figure:doclist\] contains the results for document listing with real collections. For most of the indexes, the time/space trade-off is based on the SA sample period. ’s trade-off comes from a parameter specific to that structure involving RMQs (see [@FN13]). has no trade-off. Of the small indexes, is usually the best choice. Thanks to the () optimizations in RLCSA and the small documents, beats and , which are faster in theory due to using () more selectively. When more space is available, is a good choice, combining fast queries with moderate space usage. Of the bigger indexes, one storing the document array explicitly is usually even faster than . works well with and , but becomes too large or too slow elsewhere. [**Top-$k$ document retrieval.**]{} Results for top-$k$ document retrieval on real collections are shown in Figures \[figure:topk-small\] and \[figure:topk-large\]. Time/space trade-offs are again based on the suffix array sample period, while also uses other parameters (see Section \[section:pdl\]). 
We could not build with $\beta = 0$ for or the large collections, as the total size of the stored sets was more than $2^{32}$, which was too much for our compressor. was only built for the small collections, while construction used too much memory on the larger Wikipedia collections. On , dominates the other solutions. On , both and have good trade-offs with $k=10$, while and beat them with $k=100$. On , some variants, , and all offer good trade-offs. On , the brute-force algorithms win clearly. with $\beta=0$ is faster, but requires far too much space ($60$-$70$ bpc — off the chart). [**Document listing with synthetic collections.**]{} Figure \[figure:synthetic\] shows our document listing results with synthetic collections. Due to the large number of collections, the results for a given collection type and number of base documents are combined in a single plot, showing the fastest algorithm for a given amount of space and a mutation rate. Solid lines connect measurements that are the fastest for their size, while dashed lines are rough interpolations. The plots were simplified in two ways. Algorithms providing a marginal and/or inconsistent improvement in speed in a very narrow region (mainly and ) were left out. When and had very similar performance, only one of them was chosen for the plot. On , was a good solution for small mutation rates, while was good with larger mutation rates. With more space available, became the fastest algorithm. and were often slightly faster than , when there was enough space available to store the document array. On and , was usually a good mid-range solution, with being usually smaller than . The exceptions were the collections with $10$ base documents, where the number of variants ($1000$) was clearly larger than the block size ($256$). With no other structure in the collection, was unable to find a good grammar to compress the sets. 
At the large end of the size scale, algorithms using an explicit ${\ensuremath{\mathsf{DA}}}$ were usually the fastest choice. Conclusions {#section:conclusions} =========== Most document listing algorithms assume that the total number of occurrences of the pattern is large compared to the number of document occurrences. When documents are small, such as Wikipedia articles, this assumption generally does not hold. In such cases, brute-force algorithms usually beat dedicated document listing algorithms, such as Sadakane’s algorithm and wavelet tree-based ones. Several new algorithms have been proposed recently. is a fast and small solution, effective on non-repetitive collections, and with repetitive collections, if the collection is structured (e.g., incremental versions of base documents) or the average number of similar suffixes is not too large. Of the two variants, has a more stable performance, while is faster to build. is a small and moderately fast solution when the collection is repetitive but the individual documents are not. works well when repetition is moderate. We adapted the structure for top-$k$ document retrieval. The new structure works well with repetitive collections, and is clearly the method of choice on the versioned . When the collections are non-repetitive, brute-force algorithms remain competitive even on gigabyte-sized collections. While some dedicated algorithms can be faster, the price is much higher space usage. Appendix {#appendix .unnumbered} ======== Test Environment {#test-environment .unnumbered} ---------------- All implementations were written in C++ and compiled on g++ version 4.6.3. Our test environment was a machine with two 2.40 GHz quad-core Xeon E5620 processors (12 MB cache each) and 96 GB memory. Only one core was used for the queries. The operating system was Ubuntu 12.04 with Linux kernel 3.2.0. 
Collections {#collections .unnumbered} ----------- [lrrcccccc]{} Collection & & & Documents & $n/d $ & Patterns & ${\ensuremath{\overline{ occ }}}$ & ${\ensuremath{\overline{ docc }}}$ & $occ/docc$\ & 110 MB & 2.58 MB & 60 & 1919382 & 7658 & 781 & 3 & 242.75\ & 1037 MB & 17.45 MB & 280 & 3883145 & 20536 & 2889 & 7 & 429.04\ & 110 MB & 2.59 MB & 8834 & 13005 & 7658 & 776 & 371 & 2.09\ & 1035 MB & 17.55 MB & 65565 & 16552 & 20536 & 2876 & 1188 & 2.42\ & 113 MB & 49.44 MB & 7000 & 16932 & 18935 & 1904 & 505 & 3.77\ & 1034 MB & 482.16 MB & 90000 & 12050 & 19805 & 17092 & 4976 & 3.44\ & 137 MB & 5.52 MB & 100000 & 1436 & 1000 & 24975 & 18547 & 1.35\ & 321 MB & 10.53 MB & 227356 & 1480 & 1000 & 59997 & 44012 & 1.36\ & 54 MB & 25.19 MB & 143244 & 398 & 10000 & 160 & 121 & 1.33\ & 95 MB & & 100000 & & 889–1000\ & 95 MB & & 10–1000 & & 7538–15272\ & 95 MB & & 10000 & & 7537–15271\ Patterns {#patterns .unnumbered} -------- [**Real collections.**]{} For and , we downloaded a list of Finnish words from the Institute for the Languages in Finland, and chose all words of length $\ge 5$ that occur in the collection. For , we used search terms from an MSN query log with stop words filtered out. We generated $20000$ patterns according to term frequencies, and selected those that occur in the collection. For , we extracted $100000$ random substrings of length $7$, filtered out duplicates, and kept the $1000$ patterns with the largest $occ/docc$ ratios. For , we extracted $200000$ random substrings of length $5$, filtered out duplicates, and kept the $10000$ patterns with the largest $occ/docc$ ratios. [**Synthetic collections.**]{} For , patterns were generated with a similar process as for and : take $100000$ substrings of length $7$, filter out duplicates, and choose the $1000$ with the largest $occ/docc$ ratios. For and , patterns were generated from the MSN query log in the same way as for . 
[^1]: This work is funded in part by: Fondecyt Project 1-140796 (first author); Basal Funds FB0001, Conicyt, Chile (first and third authors); the Jenny and Antti Wihuri Foundation, Finland (third author); and by the Academy of Finland through grants 258308 and 250345 (CoECGR) (second author). [^2]: See <http://www.cs.helsinki.fi/group/suds/rlcsa/> for datasets and full results. [^3]: <http://pizzachili.dcc.uchile.cl/> [^4]: Available at <http://www.cs.helsinki.fi/group/suds/rlcsa/>
--- abstract: 'We consider pressure-driven flows of electrolyte solutions in small channels or capillaries in which tracer particles are used to probe velocity profiles. Under the assumption that the double layer is thin compared to the channel dimensions, we show that the flow-induced streaming electric field can create an apparent slip velocity for the motion of the particles, even if the flow velocity still satisfies the no-slip boundary condition. In this case, tracking of particle would lead to the wrong conclusion that the no-slip boundary condition is violated. We evaluate the apparent slip length, compare with experiments, and discuss the implications of these results.' author: - | Eric Lauga\ [*Division of Engineering and Applied Sciences, Harvard University*]{},\ [*29 Oxford Street, Cambridge, MA 02138.*]{} title: ' **Apparent slip due to the motion of suspended particles in flows of electrolyte solutions**' --- Introduction ============ The no-slip boundary condition of fluid mechanics states that the velocity of a viscous flow vanishes near a stationary solid surface [@batchelor]. Although it has been a crucial ingredient of our understanding of fluid mechanics for more than a century, it has been much debated in the past [@Goldstein], and, in the case of liquids, a complete physical picture for its origin has yet to be given. The ongoing debate stems from the fact that it is an assumption which cannot be derived from first principles. It has been shown that on length scales much larger than the scale of surface heterogeneities, the no-slip condition might be a macroscopic consequence of inevitable microscopic roughness [@Richardson; @Jansons], but the case of perfectly smooth surfaces has yet to be explained. In particular, the physico-chemical properties of both the fluid and the solid surface certainly are important. 
Only a few experimental studies have addressed the no-slip condition in the past [@Schnell; @Churaev], and it is only the recent advances in the controlled fabrication of micro- and nano-devices and in the corresponding measurement techniques that have allowed the problem to be reconsidered. Over the last few years, a number of pressure-driven flow [@Watanabe; @Cheng; @Meinhart; @Breuer], shear-flow [@Pit], and squeeze-flow experiments [@Baudry; @Craig; @Bonaccurso; @Cottin; @Granick; @Granick2002; @Bonaccurso2] showing a response interpretable as some degree of slip for partially wetting liquids have been reported. Molecular dynamics simulations of Lennard-Jones liquids have also shown that slip can occur, but only at unrealistically high shear rates [@Nature; @Barrat]. Fluid slip is usually quantified by a slip length $\lambda$. Let us consider for simplicity a unidirectional flow past a solid surface. Following Navier [@Navier], the slip length linearly relates the surface slip velocity to the shear rate of the fluid evaluated at the surface $$\label{navier} u=\lambda \frac{\partial u}{\partial n}\cdot $$ The slip length can also be interpreted as the fictitious distance below the surface at which the velocity would be equal to zero if extrapolated linearly: the no-slip boundary condition is equivalent to $\lambda=0$ and a no-shear boundary condition is equivalent $\lambda=\infty$. Consider pressure-driven flow in a two-dimensional channel of height $2h$. 
If we assume that the boundary condition on the channel walls ($z=\pm h$) is given by the Navier condition, the axial velocity profile in the channel is $$\label{slip} U_{\rm slip}(z)= -\frac{h^2}{2\mu}\frac{\d p}{\d x}\left[1-\frac{z^2}{h^2} +\frac{2\lambda}{h}\right],$$ which is a Poiseuille flow augmented by a finite plug velocity, whose flow rate ${Q_{\rm slip}}$ is given in non-dimensional form by $$\label{rate} \frac{Q_{\rm slip}}{Q_{\rm no\mbox{\footnotesize -}slip}}=1+\frac{3\lambda}{h}\cdot$$ Experimentalists have usually addressed the issue of fluid slip in two distinct ways. The first consists in performing indirect measurements, such as pressure-drop versus flow rate or squeezing rate versus resistance, and then using such measurements to infer a slip length. This procedure is indirect in the sense that it assumes that the flow resembles the slip-modified profile above, and then the flow-rate relation, or an equivalent, is used to determine $\lambda$ [@Watanabe; @Cheng; @Breuer; @Baudry; @Craig; @Bonaccurso; @Cottin; @Granick; @Granick2002; @Bonaccurso2]. The second way consists in performing direct velocity measurements in the fluid. We are only aware of two such previous works. Pit [*et al.*]{} [@Pit] measured velocities in shear flow of hexadecane over a smooth surface using a technique based on fluorescence recovery after photobleaching (see also [@Leger]). The measurements were performed down to 80 nm from the solid surface and averaged over a few tens of microns. Fluid slip was observed with $\lambda\sim 100$ nm in the case of lyophobic surfaces. Tretheway & Meinhart (2001) [@Meinhart] used micro-particle image velocimetry (PIV) techniques to measure the velocities of tracer nanoparticles (radius 150 nm) in pressure-driven channel flow of water. Measurements were made down to 450 nm from the solid surface and cross-correlated to increase signal-to-noise ratios. 
Results consistent with the no-slip condition were obtained in completely wetting conditions, but slip with $\lambda\sim1$ $\mu$m was obtained when the channel walls were treated to be hydrophobic. In this paper, we wish to draw attention to some of the possible consequences of this latter type of particle-based measurement. We address theoretically a prototypical pressure-driven flow experiment in small channels in the case where small tracer particles are used to probe the fluid velocity. We show that if electrical effects for both the channel and the particles are properly taken into account, it is possible for the particles to behave as if they were advected by a flow with a finite non-zero slip length, even if the velocity profile in the fluid surrounding the particle does not violate the no-slip condition. In the following section we summarize some important background electrostatics and hydrodynamics results, derive the formulae in the case of two-dimensional channels and introduce the electroviscous effect. In section \[suspended\] we present a physical picture for the effect we report, derive the expressions for the apparent slip lengths and give the conditions for the occurrence of such slip. Finally, in section \[discussion\] we discuss implications of these results along with estimates of their order of magnitude under typical experimental conditions and compare with experiments. Flow of an electrolyte solution {#physical} =============================== The physical picture for the effect we wish to introduce relies on the following known facts. Surface charge and electrostatics {#surfacecharge} --------------------------------- A solid surface in contact with an electrolyte solution will in general acquire a net charge, due for example to the ionization of surface groups, ion adsorption and/or dissolution. 
This surface charge is a thermodynamic property of the solid-electrolyte pair and the reader is referred to [@saville; @israelachvili] for detailed presentations of the phenomenon. The equilibrium surface potential is called the zeta potential $\zeta$. Such surface charges are screened by a diffusive cloud of counter-ions in the solution. At equilibrium, the electrostatic potential $\psi$ in the electrolyte satisfies the Poisson-Boltzmann equation which quantifies the balance between purely electrostatic interactions and diffusion [@saville], $$\label{PB} \nabla^2 \psi =\frac{2en_0}{\e}\sinh \left(\frac{e\psi}{k_BT}\right),$$ where we consider here for simplification only the case of monovalent 1:1 ions, e.g. Na$^+$ and Cl$^-$ or OH$^-$ and H$^+$. A convenient approximation usually made to solve this equation is the Debye-Hückel approximation [@saville; @rice; @hunter; @probstein] of small field strength, $|e\psi| \ll k_BT$, in which case the equation simplifies to the linearized Poisson-Boltzmann equation $$\label{debye} \nabla^2 \psi =\kappa^2\psi, \quad \kappa^{-1}=\left(\frac{\e k_BT}{2e^2n_0}\right)^{1/2},$$ where $\kappa^{-1}$ is the Debye screening length: it is the typical length scale in the solution over which counter-ions screen the charged solid surface, and beyond which the net charge density is essentially zero. However, the Debye-Hückel approximation is restricted to low surface potentials, typically below 20 mV, which is a severe restriction. Let us consider for simplicity the case of a two-dimensional channel of height $2h$ in the $z$-direction and let us instead derive the solution to the full Poisson-Boltzmann equation for any value of the zeta potential at the wall $\zeta_w$, but in the limit where the channel dimensions are much larger than the double layers, $\kappa h\gg 1$. This limit is appropriate for channel sizes down to $h\approx 5$ $\mu$m in the case of pure water, or even $h\approx 50$ nm in the case of tap water. 
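As a quick numerical check of these length scales, the definition of $\kappa^{-1}$ above can be evaluated directly. The short sketch below (standard SI constants, room temperature) reproduces the $\kappa^{-1}\approx 300$ nm and $\kappa^{-1}\approx 3$ nm values quoted for pure and tap water.

```python
import math

# Physical constants (SI units)
e   = 1.602176634e-19          # elementary charge [C]
kB  = 1.380649e-23             # Boltzmann constant [J/K]
eps = 80 * 8.8541878128e-12    # permittivity of water [F/m]
NA  = 6.02214076e23            # Avogadro's number [1/mol]
T   = 300.0                    # temperature [K]

def debye_length(n0_mol_per_l):
    """Debye screening length kappa^-1 for a 1:1 electrolyte,
    with bulk concentration n0 given in mol/l."""
    n0 = n0_mol_per_l * NA * 1e3   # convert mol/l -> ions per m^3
    return math.sqrt(eps * kB * T / (2 * e**2 * n0))

# Pure water (~1e-6 mol/l) vs tap water (~1e-2 mol/l)
for n0 in (1e-6, 1e-2):
    print(f"n0 = {n0:g} mol/l  ->  kappa^-1 = {debye_length(n0)*1e9:.0f} nm")
```

With $h\approx 5$ $\mu$m and pure water this gives $\kappa h\approx 16$, consistent with the thin-double-layer limit invoked in the text.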
Let us define the dimensionless potential $\phi=e\psi/k_BT$ and the dimensionless vertical coordinate $\bar{z}=z/h$. In this case, the Poisson-Boltzmann equation becomes $$\label{newPB} \frac{1}{(\kappa h )^2} \frac{\d^2 \phi}{ \d\z^2} =\sinh \phi,$$ with the boundary conditions $\phi(\z=\pm 1)=\phi_w=e\zeta_w/k_BT$. Since $1/\kappa h\ll 1$, the solution to this equation involves boundary layers near $\z=\pm 1$. The outer solution $\phi_{\rm \,out}$ is found by taking the limit $1/\kappa h=0$, and we find $\phi_{\rm \,out}=0$. The inner solution $\phi_{\rm\, in}$ is valid near the boundaries for $\kappa h(1-|\z|)={\cal O}(1)$, in which case the equation reduces to the Poisson-Boltzmann equation near an infinite plane, whose solution is [@hunter] $$\label{inner} \tanh\left(\frac{\phi_{\rm\, in}(\z)}{4} \right) = \tanh \left(\frac{\phi_w}{4} \right) e^{-\kappa h (1-|\z|)}.$$ Finally, since $\phi_{\rm \,out}=0$, the inner solution is also equal to the composite solution $\phi(\z)$, uniformly valid throughout the channel as $\kappa h \to \infty$, at leading order in $1/\kappa h$. For convenience, this solution can be rewritten as $$\label{comp} \phi(\z)=2\ln \left(\frac{1+t_w e^{-\kappa h (1-|\z|)}}{1-t_w e^{-\kappa h (1-|\z|)}} \right),$$ where we have defined $t_w=\tanh (e\zeta_w/4k_B T)$. Hydrodynamics and electrokinetics --------------------------------- When a pressure-driven flow occurs in the channel, the fluid velocity is unidirectional ${\bf U}=U(z){\bf e}_x$, where ${\bf e}_x$ is the streamwise direction. 
In the absence of electrical effects, the fluid velocity is simply given by Poiseuille’s pressure-driven formula [@batchelor], which we will denote $U_{{\rm PD}}$: $$\label{PD} U_{\rm PD} (z)=-\frac{h^2}{2\mu}\frac{\d p}{\d x}\left[1-\frac{z^2}{h^2} \right]\cdot$$ Furthermore, if an external, or induced, electric field ${\bf E}_{S}=E_{S}{\bf e}_x$ is also applied to the channel, the presence of a net charge density near the solid surface moving in response to the field leads to an additional velocity component known as electroosmotic flow (EOF) [@saville]. It is directed in the $x$-direction and is given by $$\label{EOF} U_{{\rm EOF}}(z)=\frac{\e E_{S}}{\mu}\Big[\psi(z)-\zeta_w \Big],$$ which is valid for any value of $\zeta_w$. Streaming potential and electroviscous effect {#stream} --------------------------------------------- As the electrolyte solution flows down a pressure gradient, the cloud of counter-ions is advected by the flow and a streaming current is established. If no short-circuit is present between the two ends of the capillary, accumulation of charge sets up a potential difference along the channel, termed the “streaming potential”. This potential, or equivalently the induced electric field, opposes the mechanical transfer of charge by creating a reverse conduction current through the bulk solution such that the total net electric current is zero. This induced axial electric field scales with the applied pressure gradient and leads to the creation of an induced electroosmotic back-flow which effectively slows down the fluid motion in the capillary: a smaller flow rate for a given pressure drop is obtained than in the regular Poiseuille case, as if the liquid had a higher shear viscosity than expected. Consequently this effect is usually referred to as the primary “electroviscous effect” [@burgeen; @rice; @levine; @hunter; @probstein]. 
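The composite Poisson-Boltzmann solution and the electroosmotic profile above can be checked numerically. In the sketch below the values of $\zeta_w$, $E_S$ and $\kappa h$ are illustrative assumptions; the check verifies that $U_{\rm EOF}$ vanishes at the wall, where $\psi=\zeta_w$, and reaches the plug value $-\e E_S\zeta_w/\mu$ in the bulk, where $\psi\to 0$.

```python
import math

e, kB, T = 1.602e-19, 1.381e-23, 300.0  # SI units
eps = 80 * 8.854e-12                    # permittivity of water [F/m]
mu  = 1.0e-3                            # viscosity of water [Pa s]

def phi(zbar, phi_w, kappa_h):
    """Composite solution of the Poisson-Boltzmann equation:
    phi = e psi / kB T as a function of zbar = z/h, valid for kappa_h >> 1."""
    t_w = math.tanh(phi_w / 4)
    x = t_w * math.exp(-kappa_h * (1 - abs(zbar)))
    return 2 * math.log((1 + x) / (1 - x))

def u_eof(zbar, zeta_w, E_S, kappa_h):
    """Electroosmotic velocity U_EOF = eps E_S (psi - zeta_w) / mu."""
    psi = phi(zbar, e * zeta_w / (kB * T), kappa_h) * kB * T / e
    return eps * E_S * (psi - zeta_w) / mu

zeta_w, E_S, kappa_h = -0.05, 100.0, 50.0   # illustrative values
print(u_eof(1.0, zeta_w, E_S, kappa_h))     # at the wall: vanishes (no slip)
print(u_eof(0.0, zeta_w, E_S, kappa_h))     # in the bulk: plug value -eps*E_S*zeta_w/mu
```

At $\z=\pm 1$ the composite solution returns $\phi_w$ exactly, since $2\ln[(1+t_w)/(1-t_w)]=4\,{\rm artanh}\,t_w=\phi_w$, so the electroosmotic velocity satisfies the no-slip condition by construction.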
Let us consider the pressure-driven flow in a channel of height $2h$ and width $w\gg h$ of the electrolyte solution, with electrostatic potential given by the composite solution above. We calculate below the value of the steady-state streaming electric field $E_S {\bf e}_x$ induced by the flow. #### Pressure-driven current First, the pressure-driven motion of the screening cloud of counter-ions near the charged surface leads to an advection-of-charge electric current $I_{S}^{PD}$ given by $$\label{ISPDbl} I_{S}^{\rm PD}=\int_{-h}^hw\rho_e(z)U_{{\rm PD}}(z)\d z=\frac{2\e wh k_B T}{\mu e} \left(\frac{\d p}{\d x} \right)I_1,$$ where we have used the electrostatic equation to relate the net charge density in the liquid to the electrostatic potential, ${\rho_e}= -\e \nabla^2 \psi$, and where $I_1$ is given by $$\label{I1} I_1=\phi_w-\int_0^1\phi(\z)\d \z,$$ with the same dimensionless notations as in section \[surfacecharge\]. In the limit where $\kappa h \gg 1$, plugging the composite solution into this expression leads to $$\label{I1limit} I_1=\phi_w - \frac{2}{\kappa h} \int_0^{\kappa h}\ln \left(\frac{1+t_we^{-x} }{1-t_we^{-x} }\right)\d x,$$ so that $$\label{ISPD} I_{S}^{\rm PD}=\frac{2\e wh\zeta_w}{\mu }\left(\frac{\d p}{\d x}\right)\left[1+{\cal O}\left(\frac{1}{\kappa h}\right)\right].$$ #### Electroosmotic current If an electric field is induced by the flow, the streaming current has a second component $I_S^{{\rm EOF}}$, given by the advection of counter-ions by the induced electroosmotic flow $$\label{ISEOFbl} I_S^{{\rm EOF}}=\int_{-h}^hw\rho_e(z)U_{\rm EOF}(z)\d z =\frac{2wE_S}{h\mu}\left(\frac{\e k_BT}{e}\right)^2I_2,$$ where $I_2$ is given by $$\label{I2} I_2=\int_0^1\left(\frac{\d\phi}{\d\z}\right)^2\d\z.$$ In the limit where $\kappa h \gg 1$, the boundary-layer solution leads to the leading-order expression for $I_2$ in powers of $1/\kappa h$, $$\label{I2limit} I_2=\frac{8\kappa h t_w^2 (1-e^{-2\kappa h})}{(1-t_w^2)(1-t_w^2e^{-2\kappa h})},$$ so that $$\label{ISEOF} I_S^{{\rm EOF}}=\frac{16 w 
\kappa E_S }{\mu}\left(\frac{\e k_B T}{e}\right)^2 \left(\frac{t_w^2}{1-t_w^2}\right) \left[1+ {\cal O}\left(\frac{1}{\kappa h}\right)\right]\cdot$$ #### Conduction current Finally, in response to the electric field, a conduction current $I_C$ is set up in the bulk of the solution; if we denote by $\sigma$ the ionic conductivity of the electrolyte (assumed to be constant), the conduction current is given by $$\label{} I_C=2hw\sigma E_S.$$ #### Induced electric field If we investigate the steady-state motion of the electrolyte solution, we require that there be no net electric current $$\label{} I_{S}^{\rm PD}+I_S^{{\rm EOF}}+I_{C}=0,$$ which leads to the formula for the flow-induced streaming electric field $$\label{ES} E_S=-\frac{\d p}{\d x}\left(\frac{\e\zeta_w}{\sigma\mu}\right)\left[1+\frac{8\kappa}{\sigma\mu h}\left(\frac{\e k_B T}{e}\right)^2 \left(\frac{t_w^2}{1-t_w^2}\right)\right]^{-1} +{\cal O}\left(\frac{1}{\kappa h}\right)\cdot$$ As expected, the induced field $E_S$ is proportional to the applied pressure gradient[^1]. Note that within the Debye-Hückel approximation, the induced electric field can be calculated exactly for all values of $\kappa h$ [@saville; @rice; @hunter; @probstein] and we find $$\label{ESdebye} E_S=\frac{\d p}{\d x} \left( \frac{\tanh\kappa h}{\kappa h} -1\right) \left[ \frac{\sigma\mu}{\e \zeta_w}+\frac{\e\zeta_w\kappa}{4h}\left(\frac{\sinh 2\kappa h-2\kappa h}{(\cosh\kappa h)^2} \right)\right]^{-1}\cdot$$ In the limits where $e|\zeta_w|/k_BT\ll 1$ ([*i.e.*]{} $t_w\ll 1$) and $\kappa h \gg 1$, the two expressions agree and reduce to $$\label{} E_S= -\frac{\d p}{\d x}\left(\frac{\e\zeta_w}{\sigma\mu}\right)\left[1+\frac{(\e\zeta_w)^2\kappa}{2\sigma\mu h}\right]^{-1}\cdot$$ Velocity of a suspended particle and apparent slip {#suspended} ================================================== Physical picture ---------------- We now consider an experiment in which the above electric effects are present. 
We elect to use small tracer particles to probe the velocity profile, including possible fluid slip, as illustrated in Figure \[figure\]. For the same reason as for the capillary surfaces, these particles will usually be charged in solution. As they are advected by the fluid motion, they will also feel the influence of the induced streaming electric field: consequently their velocity will not only reproduce that of the fluid but will also include an induced electrophoretic component [@saville], proportional to their zeta potential and the streaming electric field. If the zeta potential of a particle has a sign opposite to that of the capillary surface, the particle will be slowed down by the electric field. On the contrary, if the particle possesses a potential of the same sign as the capillary surface, its electrophoretic component will be in the streamwise direction; furthermore, if its zeta potential is large enough, the electrophoretic velocity of the particle will be able to overcome the induced electroosmotic back-flow. It then follows that the induced electric field has a significant potential implication: if one were to conduct an experiment in such conditions without taking these electrical effects into account, these particles would go faster than the expected Poiseuille pressure-driven profile, leading to the incorrect conclusion that the velocity profile has a non-zero slip velocity at the wall. Thus, even if the flow satisfies the no-slip condition, measurements of particle velocities would lead to non-zero apparent slip lengths. We shall quantify this mechanism in the following sections. Particle velocity ----------------- ![Schematic representation of the flow between two parallel plates with charged surfaces (zeta potential $\zeta_w$) and a charged suspended particle (zeta potential $\zeta_p$); in the case illustrated, $\zeta_w<0$ and $\zeta_p<0$. 
The channel height is $2h$, the particle radius is $a$, the smallest wall-particle distance is $d$, and the screening length is $\kappa^{-1}$.[]{data-label="figure"}](channel.eps){width=".7\textwidth"} We consider the presence of a single solid spherical particle of radius $a\ll h$ suspended in a two-dimensional channel of height $2h$ where a pressure-driven flow occurs, as illustrated in Figure \[figure\]; the particle is located at a distance $d=h-|z|$ from the closest wall. We also assume for simplicity that the presence of the particle does not modify the nature of ionic groups in solution (1:1 monovalent ions), so that the screening lengths $\kappa^{-1}$ for the charged particle and the charged channel surface are the same, as defined in section \[surfacecharge\]. The particle velocity ${\bf U}_{\rm P}(z)$ will in general be $$\label{Up} {\bf U}_{\rm P}(z)= {\bf U}_{\rm hydro}(z)+{\bf U}_{\rm elec}(z)+{\bf U}_{\rm k_B T},$$ which includes three contributions. #### Hydrodynamic contribution The first component is the hydrodynamic contribution $$\label{hydro} {\bf U}_{\rm hydro}(z)=\left[1-{\cal O}\left(\frac{a}{d}\right) \right] U_{\rm PD}(z){\bf e}_x,$$ where $U_{\rm PD}$ is the local pressure-driven fluid velocity. It is modified by the presence of solid walls which slow down the motion of the suspended particle. Although the analysis is in general difficult [@Happel], walls lead to a leading-order correction to the particle velocity of the order of the ratio of the particle size to the distance to the walls, ${\cal O} (a/d)$; this is true as long as the particle does not come too close to the wall, in which case a different contribution arises from lubrication forces. We will assume in this paper that the particle is located sufficiently far from the walls ($a\ll d=h-|z|$) so that the influence of the walls can be neglected. Such a requirement would also have to be verified in an experiment, otherwise the presence of the wall would hinder some component of the measured slip velocity. 
Note that if walls were not present, a correction to the velocity accounting for the finite size of the particle and the spatial variations of the fluid velocity would also be present, but only at second order in the ratio of the particle size to the length scale over which flow variations occur [@hinch]. #### Electrical contribution In general the particle will be charged, with a zeta potential $\zeta_p$ which we assume to be uniform. Consequently, its velocity will include a contribution from electrical forces, ${\bf U}_{\rm elec}(z)$. This velocity has two components $$\label{} {\bf U}_{\rm elec}(z)={\bf U_{\rm EPH}}+U_{\rm drift}(z)\,{\bf e}_z,$$ where ${\bf U_{\rm EPH}}$ is an electrophoretic velocity due to the presence of an external electric field and $U_{\rm drift}(z)$ is a vertical drift due to the electrostatic interactions between the charged particle and the charged walls. Such drift will only be significant if the double layers around the particle and along the channel walls overlap, and will be exponentially screened otherwise [@saville]. We will assume that the double layers do not overlap in practice, $\kappa d \gtrsim {\cal O}(1)$, so that this drift can be neglected. When the electric field ${\bf E}_S=E_S{\bf e}_x$ is aligned with the channel direction, the electrophoretic velocity ${\bf U}_{\rm EPH}=U_{\rm EPH}\,{\bf e}_x$ is given by $$\label{phoretic} U_{\rm EPH}=\frac{\e E_S (f(\kappa a)\zeta_p-\zeta_w)} {\mu}\left[1-{\cal O}\left(\frac{a^3}{d^3}\right) \right]\cdot$$ This velocity first includes the “pure” electrophoretic mobility of the particle [@saville; @hunter; @AnnRev], characterized by the function $f(x)$, which satisfies $f(0)=2/3$ (Hückel’s result for thick screening length) and $f(\infty)=1$ (Smoluchowski’s result for thin screening length). 
Note that we can use these classical electrophoretic formulae because, since $\kappa h \gg 1$, the ion distribution in the double layer around the particle is not modified by the local shear flow. The velocity also includes the electroosmotic back-flow resulting from the motion of excess charges near the channel walls and proportional to the wall zeta potential $\zeta_w$. Furthermore, the presence of a wall always influences the electrophoretic mobility at cubic order in the ratio of the particle size to the distance to the wall, as long as double layers do not overlap [@Ennis; @Yariv]; since we already assumed the particle to be located far from the wall, we will neglect the wall influence here as well. #### Thermal contribution Finally, the particle velocity has a random contribution ${\bf U}_{\rm k_B T}$ due to thermal motion, which can be significant. A solid spherical particle of radius $a$, located far from boundaries, has a diffusivity $D$ given by the Stokes-Einstein relation $D=k_B T/ 6\pi \mu a$ [@saville], corresponding to a root mean square velocity on the order of $U_{k_BT} \sim D/a \sim k_B T/ 6\pi \mu a^2$. At 25$^\circ$C in water, $a= 10$ nm leads to $U_{k_BT}\sim 1$ mm/s; this value is of the same order as the fluid velocity in a circular capillary of radius $R\sim 100$ $\mu$m and flow rate $Q\sim 1$ $\mu$L/min, typical values for microfluidic devices. Consequently, we cannot assume that the Péclet number, $Pe=U/U_{k_B T}=Ua/D$, is necessarily large, and thermal motion cannot in general be neglected. However, in the experiments reported to date, velocity measurements are cross-correlated (as in [@Meinhart]) or averaged (as in [@Pit]) so that the random thermal motion disappears, and we will therefore not consider it in this paper. 
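The thermal-velocity estimate above is easy to reproduce. The sketch below computes $U_{k_BT}\sim D/a$ from the Stokes-Einstein diffusivity and compares it with an assumed flow velocity of 1 mm/s; the particle radius and flow speed are the illustrative values used in the text.

```python
import math

kB, mu, T = 1.381e-23, 1.0e-3, 298.0   # SI units, water at 25 C

def u_thermal(a):
    """Thermal velocity scale U ~ D/a, with D = kB T / (6 pi mu a)
    the Stokes-Einstein diffusivity of a sphere of radius a."""
    D = kB * T / (6 * math.pi * mu * a)
    return D / a

a = 10e-9                       # 10 nm particle radius
U_flow = 1e-3                   # typical flow velocity ~1 mm/s (see text)
Pe = U_flow / u_thermal(a)      # Peclet number comparing advection to diffusion
print(f"U_kT ~ {u_thermal(a)*1e3:.1f} mm/s, Pe ~ {Pe:.2f}")
```

Since $U_{k_BT}\sim 1/a^2$, thermal motion dominates for the smallest tracers, which is why cross-correlation or averaging of the measurements is needed.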
#### Summary Under the previous assumptions, we can write the velocity of the particle as $$\label{summary} U_{\rm P}(z) =U_{{\rm PD}}(z)+ \frac{\e E_{S}}{\mu}(f(\kappa a)\zeta_p-\zeta_w) + {\cal O}\left(\frac{a}{d}\right),$$ where the velocity should be understood as an ensemble average over different experimental realizations. Apparent slip length -------------------- We now calculate the apparent slip length $\lambda$ that would be inferred by tracking particle motion in a pressure-driven flow. In the limit $\kappa h \gg 1$, the streaming electric field is given by the expression derived in section \[stream\], so that the particle velocity becomes, at leading order in $a/d$ and $1/\kappa h$, $$\label{large1} U_{\rm P}(z) = -\frac{h^2}{2\mu}\frac{\d p}{\d x}\left\{1-\frac{z^2}{h^2} +\frac{2\zeta_w(f(\kappa a )\zeta_p-\zeta_w)(\e)^2}{\sigma\mu h^2} \left[1+\frac{8\kappa}{\sigma\mu h}\left(\frac{\e k_B T}{e}\right)^2 \left(\frac{t_w^2}{1-t_w^2}\right)\right]^{-1}\right\} \cdot$$ Comparing with the velocity profile in a flow satisfying the partial-slip boundary condition, we see that the particle behaves as if it were passively advected by a pressure-driven flow with a finite slip length $\lambda$ given by $$\label{slip1} \frac{\lambda}{h}=\frac{\zeta_w(f(\kappa a)\zeta_p-\zeta_w)( \e e)^2}{\sigma\mu (eh)^2+{8\kappa h}\left({\e k_B T}\right)^2 \left(\frac{t_w^2}{1-t_w^2}\right)}\cdot$$ The condition for a positive apparent slip, $\lambda> 0$, is therefore $$\label{condlarge1} \zeta_w(f(\kappa a)\zeta_p- \zeta_w)>0.$$ This result can also be understood in the following way: (1) the particle and the wall must have the same charge sign, $\zeta_w\zeta_p>0$; this is usually the case in water where surfaces typically acquire negative charge, for example due to the ionization of sulfate or carboxylic surface groups; (2) the particle zeta potential must be sufficiently large, $|\zeta_p|>|\zeta_w|/f(\kappa a)$ (or, equivalently, the wall zeta potential must be sufficiently small). 
If this condition is not met, the slip length is in fact a “stick” length ($\lambda<0$) and the particle goes slower than the liquid. Finally, note that within the Debye-Hückel limit $t_w\ll 1$, the slip length becomes $$\label{slipdebye} \frac{\lambda}{h}=\frac{2\zeta_w(f(\kappa a)\zeta_p-\zeta_w)(\e)^2}{2\sigma\mu h^2+(\e\zeta_w)^2\kappa h} \cdot$$ Discussion ========== The results presented in the previous section allow one to calculate, for a given set of experimentally determined material and fluid parameters, the amount of apparent slip in the particle velocity which is due to the streaming potential. We present in this section some general observations on the slip-length formula as well as an estimate for the order of magnitude of the effect in water and a comparison with available experimental slip measurements. Variations of the slip length ----------------------------- All the variables in the slip-length formula can be made to vary independently except for the screening length $\kappa^{-1}$ and the bulk conductivity $\sigma$, which both depend on the ionic strength of the solution. A simple estimate for the bulk conductivity of a 1:1 solution is $ \sigma= {2 b n_0e^2}$ (see e.g. [@probstein]), where $n_0$ is the bulk ion concentration and $b$ is the ion mobility, which we approximate by the mobility of a spherical particle, $b^{-1} \approx 6\pi \mu \ell$ where $\ell$ is the effective ion size. Using the definition of the screening length, we see that the conductivity and the screening length are related by $$\label{cond2} \sigma \approx \frac{\e k_B T}{6 \pi \mu \ell} \kappa^2.$$ Furthermore, since the conductivity $\sigma$ and the viscosity $\mu$ only appear as their product, this estimate shows that the apparent slip length is in fact independent of the fluid viscosity. Moreover, since $\kappa\sim n_0^{1/2}$ and $\sigma\sim n_0$, and since $f(\kappa a)$ varies only weakly with $\kappa$, we see that $\lambda$ is a decreasing function of the ionic strength. 
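These scalings can be illustrated numerically. The sketch below evaluates the Debye-Hückel slip length above, together with the conductivity estimate, for three ionic strengths; the wall and particle zeta potentials (25 and 50 mV) and the channel half-height (15 $\mu$m) are illustrative values chosen for this sketch, not taken from a specific experiment.

```python
import math

e, kB, T, NA = 1.602e-19, 1.381e-23, 300.0, 6.022e23  # SI units
eps = 80 * 8.854e-12     # permittivity of water [F/m]
mu  = 1.0e-3             # viscosity of water [Pa s]
ell = 2e-10              # effective ion size [m]

def slip_length(n0_mol_per_l, zeta_w, zeta_p, h, f=2/3):
    """Apparent slip length in the Debye-Huckel limit,
    lambda/h = 2 zw (f zp - zw) eps^2 / (2 sigma mu h^2 + (eps zw)^2 kappa h),
    with the conductivity estimate sigma = eps kB T kappa^2 / (6 pi mu ell)."""
    n0 = n0_mol_per_l * NA * 1e3                       # ions per m^3
    kappa = math.sqrt(2 * e**2 * n0 / (eps * kB * T))  # inverse Debye length
    sigma = eps * kB * T * kappa**2 / (6 * math.pi * mu * ell)
    num = 2 * zeta_w * (f * zeta_p - zeta_w) * eps**2
    den = 2 * sigma * mu * h**2 + (eps * zeta_w)**2 * kappa * h
    return h * num / den

# lambda decreases as the ionic strength (and hence sigma) increases
for n0 in (1e-6, 1e-4, 1e-2):
    lam = slip_length(n0, 0.025, 0.050, 15e-6)
    print(f"n0 = {n0:g} mol/l -> lambda = {lam*1e9:.3g} nm")
```

With these parameters the apparent slip is positive, since $f\zeta_p\approx 33$ mV exceeds $\zeta_w=25$ mV, and falls roughly as $1/n_0$, from sub-nanometer values in pure water to negligible values at tap-water ionic strengths.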
Also, it is clear that the slip length always decreases with the channel size. Finally, we note that the apparent slip length vanishes for two values of the wall zeta potential: $\zeta_w=0$ and $\zeta_w=\zeta_p/f(\kappa a)$. Consequently, in between these two values, the slip length reaches a maximum value $\lambda^*$ when the wall zeta potential is equal to $\zeta_w=\zeta_w^*$, [*i.e.*]{} $\d \lambda / \d\zeta_w(\zeta_w^*)=0$. This is illustrated in Figure \[maximum\] (left). ![Left: variation of the apparent slip length $\lambda$ for pure water as a function of the wall zeta potential $\zeta_w$ for $\zeta_p=50$ mV, $n_0=10^{-6}$ mol l$^{-1}$ (pure water), $\kappa h=10$ and $\kappa a \ll 1$; the slip length reaches a maximum $\lambda^*$ for $\zeta_w=\zeta_w^*$. Right: maximum value of the apparent slip length $\lambda^*$ as a function of the particle zeta potential $\zeta_p$ for $\kappa h=10$, $\kappa a \ll 1$ and three values of the ionic strength: $n_0=10^{-6}$ mol l$^{-1}$ (pure water, $\kappa^{-1}\approx 300$ nm, solid line), $n_0=10^{-4}$ mol l$^{-1}$ ($\kappa^{-1}\approx 30$ nm, dashed line), $n_0=10^{-2}$ mol l$^{-1}$ (tap water, $\kappa^{-1}\approx 3$ nm, dotted line).[]{data-label="maximum"}](maximum.eps "fig:"){width=".48\textwidth"} ![Left: variation of the apparent slip length $\lambda$ for pure water as a function of the wall zeta potential $\zeta_w$ for $\zeta_p=50$ mV, $n_0=10^{-6}$ mol l$^{-1}$ (pure water), $\kappa h=10$ and $\kappa a \ll 1$; the slip length reaches a maximum $\lambda^*$ for $\zeta_w=\zeta_w^*$. 
Right: maximum value of the apparent slip length $\lambda^*$ as a function of the particle zeta potential $\zeta_p$ for $\kappa h=10$, $\kappa a \ll 1$ and three values of the ionic strength: $n_0=10^{-6}$ mol l$^{-1}$ (pure water, $\kappa^{-1}\approx 300$ nm, solid line), $n_0=10^{-4}$ mol l$^{-1}$ ($\kappa^{-1}\approx 30$ nm, dashed line), $n_0=10^{-2}$ mol l$^{-1}$ (tap water, $\kappa^{-1}\approx 3$ nm, dotted line).[]{data-label="maximum"}](variation.eps "fig:"){width=".48\textwidth"} Order of magnitude for water ---------------------------- Let us address here the case of water at room temperature ($T=300$ K, $\epsilon$=80, $\ell \approx 2$ Å). We have calculated numerically the maximum apparent slip lengths which could be obtained in an experiment, $\lambda^*$, as a function of the particle zeta potential $\zeta_p$. The results are displayed in Figure \[maximum\] (right). We first note that $\lambda^*$ increases with $|\zeta_p|$. Furthermore, the maximum slip length can take values as low as molecular sizes or below and, in the case of pure water, can be as high as hundreds of nanometers. The data for the low values of $|\zeta_p|$ display a power-law behavior, which we can analyze as follows. Let us consider the general slip-length formula. The two terms in its denominator will be of the same order of magnitude if $t_w$ is larger than a critical value $\tilde{t}_w$ which is given by $$\label{critical} \tilde{t}_w\approx \left(\frac{1}{1+ \frac{48\pi \ell \e k_B T}{e^2 \kappa h}}\right)^{1/2},$$ where we have used the conductivity estimate to relate the conductivity to the screening length. The smallest value of $\tilde{t}_w$ will be obtained, say, for $\kappa h \approx 10$, in which case we get $\tilde{t}_w \approx 0.86$ which corresponds to a critical wall zeta potential $\tilde{\zeta}_w\approx 135$ mV. 
Consequently, when $\zeta_w\lesssim \tilde{\zeta}_w$, the slip length can be simplified to $$\label{} \frac{\lambda}{h}=\frac{\zeta_w(f(\kappa a)\zeta_p-\zeta_w)(\e)^2}{\sigma\mu h^2},$$ from which it is straightforward to obtain $$\label{exp} \zeta_w^*=\frac{f(\kappa a)}{2}\zeta_p \,,\quad {\lambda^*}=\frac{(\e f(\kappa a)\zeta_p)^2}{4\sigma\mu h}\cdot$$ The exponent 2 in this expression agrees well with the power-law data presented in Figure \[maximum\] (right). Comparison with experiments --------------------------- Two comparisons with experimental results can now be given. First, we wish to comment on the general order of magnitude of the slip lengths obtained. For a review of the pressure-driven flow experiments in capillaries which report some degree of slip, as summarized in the introduction, the reader is referred to [@LaugaStone]. The orders of magnitude of the maximum slip lengths given by our mechanism (tens to hundreds of nanometers) are consistent with the slip lengths measured in the indirect pressure-driven slip experiments of [@Churaev; @Cheng; @Breuer]. Of course, the effect we report here does not directly apply to their pressure drop versus flow rate measurements, but the comparison shows that both effects are comparable in magnitude and therefore the apparent slip mechanism could have important consequences on experimental probing of the no-slip boundary condition. We also wish to address specifically the experiment of Tretheway & Meinhart [@Meinhart], to which our study directly applies. The channels used in their experiment have height $2h=30~\mu$m and width $2w=300~\mu$m; the separation of scales $w\gg h$ allows us to approximate the flow by that between two parallel plates with $h= 15$ $\mu$m. 
Details of the electrical characteristics of the water used in the experiment were not reported, but the water was deionized; we will therefore assume that the ion concentration was small and will take it to be that of pure water $n_0\approx 10^{-6}$ mol l$^{-1}$ for which $\kappa^{-1}\approx 300$ nm, so that $\kappa h \approx 50$. Particles with radius $a=$150 nm were used in the P.I.V. system, so that $\kappa a \approx 1/2$, for which we will approximate $f(\kappa a)\approx 2/3$. If we assume $|\zeta_p|=10$ mV, we obtain that $\lambda^*$ is essentially zero. If however $|\zeta_p|=50$ mV, we get $\lambda^*\approx 1$ nm and $|\zeta_p|=200$ mV leads to $\lambda^*\approx 18$ nm. Although beyond molecular size, these values are much too small to explain the data reported in [@Meinhart] where $\lambda \approx 1$ $\mu$m. As a consequence, we can conclude that the effect reported here is probably not responsible for the large slip length observed in [@Meinhart]. Alternative mechanisms would have to be invoked to explain the data, such as the presence of surface attached bubbles [@LaugaStone]. Conclusion ========== We have reported in this paper the following new mechanism. When small charged colloidal particles are used in a pressure-driven flow experiment to probe the profile of the velocity field of an electrolyte solution (e.g. P.I.V. in water), their velocities may include an “apparent slip” component even though the velocity field in the fluid does not violate the no-slip boundary condition. This apparent slip is in fact an electrophoretic velocity for the particles which are subject to the streaming potential, [*i.e.*]{}, the flow-induced potential difference that builds up along the channel due to the advection of free screening charges by the flow. A similar effect is expected to occur in shear-driven flows. The expected maximum orders of magnitude for the apparent slip lengths were given under normal conditions in water. 
Although the effect was found to be too small to explain the data reported in [@Meinhart], its magnitude is consistent with other indirect investigations of fluid slip in pressure-driven flow experiments. As a consequence, the analysis presented here could be a useful tool for experimentalists by allowing them to estimate quantitatively the importance of this apparent slip in their experiments. The idea that free passive particles could go faster than the surrounding flowing liquid, although counter-intuitive at first, is in fact not unnatural: a similar phenomenon occurs in electrophoresis where, beyond the double layer, the ambient liquid is at rest. We also note from the slip-length expression and the scalings presented above that the effect increases when the ionic strength of the solution, and therefore its conductivity, decreases; this is because the flow of an electrolyte with low ion concentration will necessarily lead to the induction of a large streaming electric field to counteract the advection-of-charge electric current. The model chosen for the calculations used several simplifying assumptions. Our calculations were two-dimensional and we neglected in the model the effect of surface conductance as well as interactions between particles. We also assumed that the streaming electric field was uniform on the length scale of the particle and its double layer. We do not expect that relaxing these assumptions would change qualitatively the physical picture introduced in this paper. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Shelley Anna, Michael Brenner, Henry Chen, Todd Squires, Howard Stone, and Abraham Stroock for useful suggestions and stimulating discussions. Funding by the Harvard MRSEC is acknowledged. Batchelor, G.K. 1967 An Introduction to Fluid Dynamics. [*Cambridge University Press, Cambridge*]{}. Goldstein S. 1938 Modern Development in Fluid Dynamics, vol. [**II**]{}, 677-680, [*Clarendon Press, Oxford*]{}. 1973 [*J. Fluid Mech.*]{} [**59**]{}, 707-719. 
1988 [*Phys. Fluids*]{} [**31**]{}, 15-17.
1956 [*J. Appl. Phys.*]{} [**27**]{}, 1149-1152.
1984 [*J. Colloid. Int. Sci.*]{} [**97**]{}, 574-581.
1999 [*J. Fluid Mech.*]{} [**381**]{}, 225-238.
2002 [*Phys. Rev. E*]{} [**65**]{}, 031206.
2002 [*Phys. Fluids*]{} [**14**]{}, L9-L12.
Choi, C.-H., Westin, K.J.A. & Breuer, K.S. 2003 [*Phys. Fluids*]{} [**15**]{}, 2897-2902.
2000 [*Phys. Rev. Lett.*]{} [**85**]{}, 980-983.
2001 [*Langmuir*]{} [**17**]{}, 5232-5236.
2001 [*Phys. Rev. Lett.*]{} [**87**]{}, 054504.
2002 [*Phys. Rev. Lett.*]{} [**88**]{}, 076103.
2002 [*Eur. Phys. J. E*]{} [**9**]{}, 47-53.
2001 [*Phys. Rev. Lett.*]{} [**87**]{}, 096105.
2002 [*Phys. Rev. Lett.*]{} [**88**]{}, 106102.
2003 [*Phys. Rev. Lett.*]{} [**90**]{}, 144501.
1997 [*Nature*]{} [**389**]{}, 360-362.
1999 [*Phys. Rev. Lett.*]{} [**82**]{}, 4671-4674.
1823 [*Mémoires de l’Académie Royale des Sciences de l’Institut de France*]{} [**VI**]{}, 389-440.
2003 [*C.R. Phys.*]{} [**4**]{}, 241-249.
1989 Colloidal Dispersions. [*Cambridge University Press, Cambridge*]{}.
1964 [*J. Phys. Chem.*]{} [**68**]{}, 1084-1091.
1965 [*J. Phys. Chem.*]{} [**69**]{}, 4017-4024.
1975 [*J. Colloid. Int. Sci.*]{} [**52**]{}, 136-149.
1982 Zeta potential in colloid science, principles and applications. [*Academic Press, New York*]{}.
1994 Physicochemical Hydrodynamics: An Introduction. [*John Wiley & Sons, New York*]{}.
Israelachvili, J. 1992 Intermolecular and Surface Forces. [*Academic Press, London*]{}.
Happel, J.R. & Brenner, H. 1965 Low Reynolds Number Hydrodynamics. [*Prentice Hall, Englewood Cliffs, NJ*]{}.
1988 Hydrodynamics at low Reynolds numbers: a brief and elementary introduction, in [*Disorder and mixing*]{}, ed. E. Guyon, J.-P. Nadal, and Y. Pomeau (Kluwer Academic), 43-55.
1977 [*Ann. Rev. Fluid Mech.*]{} [**9**]{}, 321-337.
Keh, H.J. & Anderson, J.L. 1985 [*J. Fluid Mech.*]{} [**153**]{}, 417-439.
Ennis, J. & Anderson, J.L. 1997 [*J. Colloid Interface Science*]{} [**185**]{}, 497-514.
Yariv, E.
& Brenner, H. 2003 [*J. Fluid Mech.*]{} [**484**]{}, 85-111.
2003 [*J. Fluid Mech.*]{} [**489**]{}, 55-77.
[^1]: The effect of the streaming electric field on the properties of the flow (the “electroviscous” effect) can be understood by evaluating the total flow rate from both contributions and rewriting it in the form of an effective Poiseuille flow rate with a different effective shear viscosity $\mu_{\rm eff}$ [@hunter]. We find that $\mu < \mu_{\rm eff}$ so that, from the standpoint of flow rate versus pressure drop, the electrical effect effectively increases the bulk viscosity of the solution.
--- abstract: 'Using classical electrodynamics we determine the angular dependence of the light intensities radiated in second and third harmonic generation by spherical metal clusters. Forward and backward scattering is analyzed in detail. Resonance effects in the integrated intensities are also studied. Our work treats the case of intermediate cluster sizes. Thus it completes the scattering theory of spherical clusters for nonlinear optics, between [*Rayleigh*]{}-type analysis for small spheres and geometrical optics for spheres much larger than the wavelength. Since the particle size sensitivity of [*Mie*]{}-scattering is increased by nonlinearity, the results can be used to extract sizes of small particles from nonlinear optics.' address: 'Institut für Theoretische Physik der Freien Universität Berlin, Arnimallee 14, 14195 Berlin, Germany.' author: - 'J. Dewitz, W. Hübner, and K. H. Bennemann' date: 'September 30, 1994' title: 'Theory for Nonlinear Mie-Scattering from Spherical Metal Clusters' --- Introduction ============ In this paper we investigate the nonlinear interaction of light with small metal particles. In nonlinear optics one expects a more pronounced sensitivity of the radiated yield to the light wavelength and cluster size, due to the effects of the nonlinear sources. The linear interaction of light with spherical objects changes characteristically with the size of the sphere. The limiting cases are [*Rayleigh*]{} scattering for spheres small compared with the wavelength and reflection by spheres much larger than the wavelength (the limit of geometrical optics). If the size of the sphere and the wavelength of light have comparable magnitudes, the intensities scattered by spherical metal clusters are strongly enhanced. Mie [@Mie] first calculated this classical effect and found that most of the incident light intensity is scattered in the forward direction, because of interference of waves originating from the front and the back of the sphere.
This so-called [*Mie-effect*]{} increases with increasing sphere size. Bohren and Huffman [@BH] pointed out that the extinction efficiency $Q_{ext}$, defined as the sum of scattering and absorption efficiencies ($Q_{ext}=Q_{sc}+Q_{abs}$), is characterized, as a function of the sphere radius, by an interference structure and a ripple structure. The interference structure arises from regular interference of the incident and forward-scattered light and dominates the envelope of $Q_{ext}$, see Fig. (\[linQsc133.0\]). Superimposed on this is a sharp and highly irregular ripple structure originating from resonant electromagnetic surface modes of the sphere. In general, the resonances depend on the size parameter $ka$, which compares the radius of the sphere $a$ and the wavelength expressed by the absolute value of the wave vector $k$ of the incident light. Thus resonances also occur when varying the wavelength of the incident light. For small spheres only one peak or resonance appears, at $\omega_p/\sqrt{3}$ with $\omega_p$ the plasma frequency and the $1/\sqrt{3}$ factor caused by the shape of the sphere; this is the dipole term. The larger the sphere, the more resonances appear around the $\omega_p/\sqrt{3}$ peak (if $ka>1$), reflecting the surface-mode resonances or higher multipoles, see [@Martin]. The nonlinear interaction of light with small particles was recently studied by Östling [*et al.*]{} [@Stampf] using a simplified model to obtain the intensities in second and third harmonic generation, SHG and THG, respectively. The calculations yield an enhancement of the $total$ intensities by a factor of 5000 in SHG and of 200000 in THG in comparison to a plane metal surface for spheres with sizes $a>\lambda_p/2$ in a wide frequency range. Here, $\lambda_p$ is the wavelength corresponding to $\omega_p$ via the relation $c=\lambda\cdot\omega/2\pi$ with $c$ the speed of light in vacuum. This holds especially if the diameter of the sphere roughly equals $\lambda_p$.
The correlation between the scattering behavior and the properties of the sphere enables us to obtain the size and complex refractive index of the sphere, and in the case of an ensemble of spheres even the size distribution, from the scattered intensities. It is a goal of our investigation to show that this is improved by higher harmonics. In this paper we extend the theory of ref. [@Stampf] to obtain the [*angular dependence*]{} of the radiated second and third harmonic intensities and compare the linear and nonlinear results, examining their characteristic features in the higher harmonics. In particular, the angle-resolved intensities and the degree of polarization of the scattered SHG and THG yields are investigated. In addition, we thoroughly study parameter dependences of the ratio of forward and backward scattering and analyze the origin of the resonance structures. Our main result is an enhanced size sensitivity of the higher harmonics compared to linear $Mie$-scattering, which accentuates the characteristic features of the linear [*Mie-effect*]{} in higher harmonic scattering. Similarities between the linear scattering and THG reflect that the scattering is dominated by the same multipoles in both cases. In contrast, different multipoles contribute to SHG, resulting in distinct changes of the angular dependence. Because of the more pronounced forward scattering, enhanced backward scattering vanishes in the higher harmonics. The following section II contains the appropriate theoretical calculations. In section III we present the numerical results, starting with polar plots of the intensities in SHG and THG, plots of the degree of polarization, the ratio of forward and backward scattering, and the integrated intensities as a function of the size of the spheres. In section IV we give a summary of the results.
Theory ====== First, we use Mie theory to calculate the linear scattering and the radial component of the polarization $\sigma\left(\theta,\phi\right)$ at the surface of the sphere. Then, we determine the radiated intensities in second and third harmonic by matching the electromagnetic fields and the $n$-th power of $\sigma\left(\theta,\phi\right)$ with regard to the appropriate boundary conditions, as described by Östling [*et al.*]{} [@Stampf]. In section II.D the characteristic quantities of the radiation are determined. These quantities can be compared with experiments. Linear scattering by spheres ---------------------------- According to the spherical symmetry of the problem we expand the fields in the form $$\begin{aligned} {\bf E}^i\left({\bf x}\right)&=&\sum_{l,m}C\left(l\right)\left[a^i_M \left(l,m\right) f^i_l\left(k_1r\right){\bf X}_{l,m}\left(\theta,\phi\right)\right.\nonumber\\ & &\left.\qquad\qquad\qquad+\frac{m}{\left|m \right|}a^i_E\left(l,m\right)\frac{1}{\epsilon\left(\omega\right)k}\nabla \times f^i_l\left(k_1r\right){\bf X}_{l,m}\left(\theta, \phi\right)\right] \label{Elin}\end{aligned}$$ Therein, ${\bf X}_{l,m}$ is a vector spherical harmonic as introduced by Jackson [@Jackson] with $C\left(l\right)=i^l\sqrt{4\pi\left(2l+1\right)}, k=\omega/c$ and $k_1=\sqrt{\epsilon\left(\omega\right)}k$ . The multipole coefficients $a_M^i\left(l,m\right)$ and $a_E^i\left(l,m\right)$ refer to the magnetic (transverse electric) and electric (transverse magnetic) multipoles. The index $i$ specifies the incident ($i\equiv inc$), the scattered ($i\equiv sc$) or the internal ($i\equiv in$) fields. For incident waves of positive and negative helicity we have $a_M^{inc}\left(l,\pm1\right)=a_E^{inc}\left(l, \pm1\right)=1$. In this paper we use a superposition of both to give linear polarization. 
The spherical Hankel functions $f^{sc}_l\left(kr\right)=h_l^{\left(1\right)}\left(kr\right)$ and Bessel functions $f^{inc,in}_l\left(kr\right)=j_l\left(kr\right)$ describe the radial part of the field outside and inside the sphere. The magnetic field is given by the Maxwell-equation for harmonic fields $${\bf B}=-i\,c/\omega\cdot{\bf\nabla}\times{\bf E}\nonumber.$$ Using the boundary conditions at the surface of the sphere\ $${\bf n}\times\left({\bf E}^{sc}+{\bf E}^{inc}\right)={\bf n}\times{\bf E}^{in} \label{GrenzbedlinE}$$ and $${\bf n}\times\left({\bf B}^{sc}+{\bf B}^{inc}\right)={\bf n}\times{\bf B}^{in}, \label{GrenzbedlinB}$$ we obtain the expansion coefficients of the scattered wave $$\begin{aligned} a_E^{sc}\left(l,\pm1\right)&=&\left.\frac{j_l\left(kr\right)\frac{\partial}{\partial r}\left[rj_l\left(k_1r\right)\right]-\epsilon\left(\omega\right)j_l\left(k_1r\right)\frac{\partial}{\partial r}\left[rj_l\left(kr\right)\right]}{\epsilon\left(\omega\right)j_l\left(k_1r\right)\frac{\partial}{\partial r}\left[rh_l^{\left(1\right)}\left(kr\right)\right]-h_l^{\left(1\right)}\left(kr\right)\frac{\partial}{\partial r}\left[rj_l\left(k_1r\right)\right]}\right|_{r=a},\nonumber\\ a_M^{sc}\left(l,\pm1\right)&=&\left.\frac{j_l\left(kr\right)\frac{\partial}{\partial r}\left[rj_l\left(k_1r\right)\right]-j_l\left(k_1r\right)\frac{\partial}{\partial r}\left[rj_l\left(kr\right)\right]}{j_l\left(k_1r\right)\frac{\partial}{\partial r}\left[rh_l^{\left(1\right)}\left(kr\right)\right]-h_l^{\left(1\right)}\left(kr\right)\frac{\partial}{\partial r}\left[rj_l\left(k_1r\right)\right]}\right|_{r=a} \label{linkoeff}\end{aligned}$$ at the surface of the sphere with radius $a$.
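Coefficient formulas of this type are straightforward to evaluate with standard spherical Bessel routines. The sketch below (Python with SciPy; a hypothetical re-implementation in the equivalent Bohren-Huffman normalization, not the code used in this work, and restricted to a real refractive index) computes the Mie coefficients and checks the small-sphere (Rayleigh) limit of the scattering efficiency:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_ab(m, x, lmax):
    """Mie coefficients a_n, b_n (Bohren-Huffman form) for real refractive
    index m and size parameter x = ka, built from Riccati-Bessel functions
    psi(z) = z j(z) and xi(z) = z h1(z)."""
    n = np.arange(1, lmax + 1)
    mx = m * x
    jx, jmx, yx = spherical_jn(n, x), spherical_jn(n, mx), spherical_yn(n, x)
    psi_x, psi_mx = x * jx, mx * jmx
    dpsi_x = jx + x * spherical_jn(n, x, derivative=True)
    dpsi_mx = jmx + mx * spherical_jn(n, mx, derivative=True)
    xi_x = x * (jx + 1j * yx)
    dxi_x = dpsi_x + 1j * (yx + x * spherical_yn(n, x, derivative=True))
    a = (m * psi_mx * dpsi_x - psi_x * dpsi_mx) / (m * psi_mx * dxi_x - xi_x * dpsi_mx)
    b = (psi_mx * dpsi_x - m * psi_x * dpsi_mx) / (psi_mx * dxi_x - m * xi_x * dpsi_mx)
    return a, b

def q_sc(m, x, lmax):
    """Scattering efficiency Q_sc = (2/x^2) sum (2n+1)(|a_n|^2 + |b_n|^2)."""
    a, b = mie_ab(m, x, lmax)
    n = np.arange(1, lmax + 1)
    return (2.0 / x**2) * np.sum((2 * n + 1) * (np.abs(a)**2 + np.abs(b)**2))

# small-sphere check: Q_sc approaches the Rayleigh result
# (8/3) x^4 |(m^2-1)/(m^2+2)|^2 for a water-like sphere (m = 1.33)
m, x = 1.33, 0.1
q_rayleigh = (8.0 / 3.0) * x**4 * abs((m**2 - 1) / (m**2 + 2))**2
assert abs(q_sc(m, x, 5) / q_rayleigh - 1) < 0.05
```

An analogous routine accepting a complex refractive index would be needed to reproduce the Fe and Ni results discussed later; the normalization here differs from the multipole convention of Eq. (\[linkoeff\]).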
From the continuity of the electrical displacement at the surface of a perfect conductor $${\bf n}\cdot\left({\bf D}^{sc}+{\bf D}^{inc}\right)={\bf n}\cdot{\bf D}^{in}, \label{GrenzbedMetall}$$ the surface charge results as $$\sigma\left(\theta,\phi\right)=\frac{1}{4\pi}Re\left[\left({\bf E}^{sc}+{\bf E}^{inc}-{\bf E}^{in}\right)\cdot{\bf n}\right]e^{-i\omega t}, \label{sigma}$$ where ${\bf n}={\bf r}/\left|{\bf r}\right|$ and $Re$ denotes the real part. Furthermore, we expand $\sigma\left(\theta,\phi\right)$ in spherical harmonics $$\sigma\left(\theta,\phi\right)=\frac{1}{2}\sum_{l,m=\pm1}a_{l,m}^{\left(1\right)}Y_{l,m}\left(\theta,\phi\right)e^{-i\omega t}+c.c.\quad. \label{sigmaent}$$ The expansion coefficients result from the orthogonality of the spherical harmonics as $$a_{l,\pm1}^{\left(1\right)}=\frac{1}{4\pi}\left(1-\frac{1}{\epsilon\left(\omega\right)}\right)\frac{C\left(l\right)i\sqrt{l\left(l+1\right)}}{ka}\left(j_l\left(ka\right)+a_E^{sc}\left(l,\pm1\right)h_l^{\left(1\right)}\left(ka\right)\right). \label{al1}$$ Sources of the higher harmonic radiation ---------------------------------------- In analogy to the linear case we expand the $n$-th power of the surface charge $\sigma$ in terms of spherical harmonics: $$\sigma^n\left(\theta,\phi\right)=\frac{1}{2}\sum_{l,m}a_{l,m}^{\left(n\right)}Y_{l,m}\left(\theta,\phi\right)e^{-ni\omega t}+c.c.\quad .
\label{sigmahochn}$$ Neglecting time-independent terms we obtain the coefficients in the case of second harmonic generation as $$\begin{aligned} a_{l,2}^{\left(2\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,1}^{\left(1\right)}a_{l_2,1}^{\left(1\right)}\int Y_{l,2}^*Y_{l_1,1}Y_{l_2,1}d\Omega\;,\nonumber\\ % a_{l,-2}^{\left(2\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,-1}^{\left(1\right)}a_{l_2,-1}^{\left(1\right)}\int Y_{l,-2}^*Y_{l_1,-1}Y_{l_2,-1}d\Omega\;,\nonumber\\ % a_{l,0}^{\left(2\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,1}^{\left(1\right)}a_{l_2,-1}^{\left(1\right)}\int Y_{l,0}^*Y_{l_1,1}Y_{l_2,-1}d\Omega\;, \label{al2}\end{aligned}$$ and for third harmonic generation (also neglecting terms with $e^{-i\omega t}$) $$\begin{aligned} a_{l,1}^{\left(3\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,-1}^{\left(1\right)}a_{l_2,2}^{\left(2\right)}\int Y_{l,1}^*Y_{l_1,-1}Y_{l_2,2}d\Omega\;,\nonumber\\ % a_{l,-1}^{\left(3\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,1}^{\left(1\right)}a_{l_2,-2}^{\left(2\right)}\int Y_{l,-1}^*Y_{l_1,1}Y_{l_2,-2}d\Omega\;,\nonumber\\ % a_{l,3}^{\left(3\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,1}^{\left(1\right)}a_{l_2,2}^{\left(2\right)}\int Y_{l,3}^*Y_{l_1,1}Y_{l_2,2}d\Omega\;,\nonumber\\ % a_{l,-3}^{\left(3\right)}&=&\frac{1}{2}\sum_{l_1=1}^{\infty}\sum_{l_2=1}^{\infty}a_{l_1,-1}^{\left(1\right)}a_{l_2,-2}^{\left(2\right)}\int Y_{l,-3}^*Y_{l_1,-1}Y_{l_2,-2}d\Omega\;. \label{al3}\end{aligned}$$ The integrals can be expressed by the 3j-symbols and yield the coupling of the multipoles. Because of conservation of angular momentum, only coefficients with $m=0,\pm2$ in SHG and $m=\pm1,\pm3$ in THG differ from zero.
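The overlap integrals above are Gaunt coefficients, so the selection rules can be checked directly with a computer-algebra system. A short sketch (Python with SymPy; the index values are illustrative, not taken from the paper) uses $Y_{l,2}^{*}=(-1)^{2}Y_{l,-2}$ to express the SHG coupling integral:

```python
from sympy.physics.wigner import gaunt

# Integral of Y_{l,2}^* Y_{l1,1} Y_{l2,1} over the unit sphere,
# i.e. the coupling entering a_{l,2}^{(2)}; since Y_{l,2}^* = Y_{l,-2},
# it is the Gaunt coefficient with m-values (-2, 1, 1).
def shg_overlap(l, l1, l2):
    return gaunt(l, l1, l2, -2, 1, 1)

# m-conservation, the triangle rule, and parity (l + l1 + l2 even)
# select the allowed multipole couplings:
nonzero = shg_overlap(2, 1, 1)   # two dipole terms feed the l = 2 SHG term
zero = shg_overlap(3, 1, 1)      # vanishes by parity
```

This makes the statement above explicit: the summations in Eqs. (\[al2\]) and (\[al3\]) only pick up terms obeying the angular-momentum selection rules.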
Higher harmonic radiated fields ------------------------------- In the case of higher harmonics the electric and magnetic fields inside and outside the sphere and the sources are matched by the boundary conditions $${\bf n}\cdot\left({\bf D}^{out}-{\bf D}^{in}\right)=4\pi\sigma^n\left( \theta,\phi\right)\;, \label{GrenzbednonlinD}$$ and $${\bf n}\times\left({\bf E}^{out}-{\bf E}^{in}\right)=0\;. \label{GrenzbednonlinE}$$ As a result of spherical symmetry, only transverse magnetic waves are generated by the oscillating surface charge. Thus the fields in the nonlinear case are $${\bf E}^i\left(\theta,\phi\right)=\sum_{l,m}\frac{m}{\left|m\right|}A_E^{\left(n \right)}\left(l,m\right)\frac{1}{\epsilon\left(n\omega\right)k}\nabla\times f^ i_l\left(k_1r\right){\bf X}_{l,m}\left(\theta,\phi\right)\quad , \label{Enonlinear}$$ where $i\equiv out$ or $i\equiv in$, respectively, and $k=n\omega/c, k_1= \sqrt{\epsilon\left(n\omega\right)}k, f_l^{in}\left(kr\right)=j_l\left(k_1r \right)$ and $f_l^{out}=h_l^{\left(1\right)}\left(kr\right)$. The boundary conditions give the coefficients of the radiated field $$A^{\left(n\right)}_E\left(l,m\right)=\frac{\frac{\partial}{\partial r}\left[ rj_l\left(k_1r\right)\right]}{\epsilon\left(n\omega\right)j_l\left(k_1r\right) \frac{\partial}{\partial r}\left[rh_l^{\left(1\right)}\left(kr\right)\right]- h_l^{\left(1\right)}\left(kr\right)\,\frac{\partial}{\partial r}\left[rj_l \left(k_1r\right)\right]}\frac{\pi ka}{\sqrt{l\left(l+1\right)}}\,a_{l,m}^{ \left(n\right)}. \label{AE(l,m)}$$ In this case we have $k_1=\sqrt{\epsilon\left(n\omega\right)}k$. 
Calculation of quantities characterizing the radiation ------------------------------------------------------ To study the angular dependence of the scattered field we use the quantity $ \left|E_{\phi}\left(\theta,\phi\right)\right|^2+\left|E_{\theta}\left(\theta, \phi\right)\right|^2$ according to Born and Wolf [@BW], where $E_{\theta} \left(\theta,\phi\right)$ and $E_{\phi}\left(\theta,\phi\right)$ are the tangential components of ${\bf E}^{sc}\left(\theta,\phi\right)$ in the linear case and of ${\bf E}^{out}\left(\theta,\phi\right)$ in the nonlinear case. This definition is equivalent to the absolute value of the radial part of the Poynting vector $\left|{\bf n}\cdot\left({\bf E}\times{\bf H}\right)\right|$. The following formulas represent $\left|E_{\phi}\left(\theta,\phi\right) \right|^2$ and $\left|E_{\theta}\left(\theta,\phi\right)\right|^2$ in the far field approximation. We obtain after evaluating the $m$-summation in the linear case $$\begin{aligned} \left|E_{\phi}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=1}^{\infty}C \left(l\right)\left(\frac{dP_l^1\left(\cos\theta\right)}{d\theta}a_M^{sc} \left(l,1\right)+\frac{P_l^1\left(\cos\theta\right)}{\sin\theta}a_E^{sc}\left( l,1\right)\right)\right|^2\cdot\sin^2\phi\;,\nonumber\\ % \left|E_{\theta}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=1}^{\infty} C\left(l\right)\left(\frac{P_l^1\left(\cos\theta\right)}{\sin\theta}a_M^{sc} \left(l,1\right)+\frac{dP_l^1\left(\cos\theta\right)}{d\theta}a_E^{sc}\left(l, 1\right)\right)\right|^2\cdot\cos^2\phi\;, \label{Ephithetalin}\end{aligned}$$ for second harmonic generation $$\begin{aligned} \left|E_{\phi}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=1}^{\infty} \sqrt{4\pi\left(2l+1\right)}\left(\frac{dP_l^0\left(\cos\theta\right)}{ d\theta}A_E^{\left(2\right)}\left(l,0\right)\right.\right.\nonumber\\ & &\qquad\qquad\qquad\left.\left.+\frac{dP_l^2\left(\cos\theta \right)}{d\theta}2K\left(l\right)A_E^{\left(2\right)}\left(l,2\right)\cos \left( 
2\phi\right)\right)\right|^2\;,\nonumber\\ % \left|E_{\theta}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=2}^{\infty}\sqrt{4\pi\left(2l+1\right)}\frac{dP_l^2\left(\cos\theta\right)}{d\theta}A_E^{\left(2\right)}\left(l,2\right)\right|^2\cdot\sin^2\left(2\phi\right)\;, \label{EphithetaSHG}\end{aligned}$$ and third harmonic generation $$\begin{aligned} \left|E_{\phi}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=1}^{\infty}\sqrt{4\pi\left(2l+1\right)}\left(\frac{P_l^1\left(\cos\theta\right)}{\sin\theta}A_E^{\left(3\right)}\left(l,1\right)\sin\phi\right.\right.\nonumber\\ & &\qquad\qquad\qquad\left.\left.+\frac{P_l^3\left(\cos\theta\right)}{\sin\theta}K\left(l\right)A_E^{\left(3\right)}\left(l,3\right)\sin\left(3\phi\right)\right)\right|^2\;,\nonumber\\ % \left|E_{\theta}\left(\theta,\phi\right)\right|^2&=&\left|\sum_{l=1}^{\infty}\sqrt{4\pi\left(2l+1\right)}\left(\frac{dP_l^1\left(\cos\theta\right)}{d\theta}\right.\right. A_E^{\left(3\right)}\left(l,1\right)\cos\phi\nonumber\\ & &\qquad\qquad\qquad+\left.\left.\frac{dP_l^3\left(\cos\theta\right)}{d\theta}K\left(l\right)A_E^{\left(3\right)}\left(l,3\right)\cos\left(3\phi\right)\right)\right|^2\;, \label{EphithetaTHG}\end{aligned}$$ where $K\left(l\right)$ are $l$- and $m$-dependent factors. Note that the angular functions $dP_l^m\left(\cos\theta\right)/d\theta$ with $m=0,2$ vanish at $\theta=0,\pi$, so that there is no radiation in either the forward or the backward direction in the second harmonic case. Furthermore, the $\phi$-dependence of the linear scattering and THG is described by the interval ($0,\pi$), and by $\left(0,\frac{\pi}{2}\right)$ in SHG, according to the symmetries of the sine and cosine terms.
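The vanishing of the SHG intensity exactly along the beam axis can be verified symbolically. The sketch below (Python with SymPy; an illustrative check, not part of the original calculation) differentiates $P_l^m(\cos\theta)$ for the even $m$ entering Eq. (\[EphithetaSHG\]) and confirms that the derivative vanishes at the poles:

```python
import sympy as sp

theta = sp.symbols('theta')

# For even m the theta-derivative of P_l^m(cos(theta)) vanishes at
# theta = 0 and theta = pi, so the SHG field of Eq. (EphithetaSHG)
# has no direct forward or backward component.
for l in (2, 3, 4):
    for m in (0, 2):
        f = sp.assoc_legendre(l, m, sp.cos(theta))
        df = sp.diff(f, theta)
        assert sp.simplify(df.subs(theta, 0)) == 0
        assert sp.simplify(df.subs(theta, sp.pi)) == 0
```

For $m=1$, in contrast, the corresponding angular functions stay finite at the poles, which is why the linear and THG intensities do radiate in the forward and backward directions.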
We will plot the degree of polarization defined as $$P\left(\theta\right)=\frac{I_{\parallel}-I_{\perp}}{I_{\parallel}+I_{\perp}}\;, \label{polarisation}$$ with $$I_{\parallel}=\left|E_\theta\left(\theta,\phi=\frac{\pi}{2}\right)\right|^2+\left|E_\phi\left(\theta,\phi=\frac{\pi}{2}\right)\right|^2\;, \label{Ipar}$$ and $$I_{\perp}=\left|E_\theta\left(\theta,\phi=0\right)\right|^2+\left|E_\phi\left(\theta,\phi=0\right)\right|^2\;. \label{Isen}$$ To measure the asymmetry of forward and backward scattering in the $Mie$-range, we introduce the quantity $$R=\frac{I_{forw}-I_{back}}{I_{forw}+I_{back}}\;, \label{VorwRueck}$$ which we call the “degree of Mie-asymmetry”. In the linear case $I_{forw}$ and $I_{back}$ are the scattering intensities taken at $\theta=0$ and $\theta=\pi$, respectively. As these quantities are identically zero in SHG, we use for $I_{forw}$ and $I_{back}$ the maxima of the scattering intensities along the direction of propagation of the incident wave for $\phi=0,\frac{\pi}{4},\frac{\pi}{2}$ and $\theta$ covering the interval $\left(0,\pi\right)$. In THG the angular dependence of the radiated intensities is more complicated compared to the linear case, and we take the maximum for $\phi=0$ and $\theta$ ranging from 0 to $\pi$. Finally, we calculate the angle-integrated scattered intensities. We obtain in the linear case $$Q_{sc}^{\left(1\right)}=\frac{1}{\pi\left(ka\right)^2}\sum_{l,m}\frac{2l+1}{l\left(l+1\right)}\left(\left|a^{sc}_E\left(l,m\right)\right|^2+\left|a_M^{sc}\left(l,m\right)\right|^2\right)\quad , \label{Qsclin}$$ and for the $n$-th harmonic $$Q^{\left(n\right)}_{sc}=\frac{1}{\pi\left(ka\right)^2}\sum_{l,m}\left|A_E^{\left(n\right)}\left(l,m\right)\right|^2\quad. \label{Qscnonlin}$$ Here $Q^{\left(n\right)}_{sc}$ is in units of the geometric cross section of the sphere $\pi a^2$.
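As an illustration of how the degree of Mie-asymmetry behaves in the linear case, the sketch below evaluates $R$ from the standard forward- and backward-scattering amplitudes of Mie theory (Python with SciPy, Bohren-Huffman normalization; a self-contained example with assumed water-like parameters, not the code used in this work):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_ab(m, x, lmax):
    """Mie coefficients a_n, b_n for real refractive index m, size x = ka."""
    n = np.arange(1, lmax + 1)
    mx = m * x
    jx, jmx, yx = spherical_jn(n, x), spherical_jn(n, mx), spherical_yn(n, x)
    psi_x, psi_mx = x * jx, mx * jmx
    dpsi_x = jx + x * spherical_jn(n, x, derivative=True)
    dpsi_mx = jmx + mx * spherical_jn(n, mx, derivative=True)
    xi_x = x * (jx + 1j * yx)
    dxi_x = dpsi_x + 1j * (yx + x * spherical_yn(n, x, derivative=True))
    a = (m * psi_mx * dpsi_x - psi_x * dpsi_mx) / (m * psi_mx * dxi_x - xi_x * dpsi_mx)
    b = (psi_mx * dpsi_x - m * psi_x * dpsi_mx) / (psi_mx * dxi_x - m * xi_x * dpsi_mx)
    return a, b

def mie_asymmetry(m, x, lmax):
    """R = (I_f - I_b)/(I_f + I_b) from the forward and backward
    scattering amplitudes (common prefactors cancel in the ratio)."""
    a, b = mie_ab(m, x, lmax)
    n = np.arange(1, lmax + 1)
    i_f = abs(np.sum((2 * n + 1) * (a + b)))**2
    i_b = abs(np.sum((2 * n + 1) * (-1.0)**n * (a - b)))**2
    return (i_f - i_b) / (i_f + i_b)

# the forward dominance (Mie-effect) grows with the size parameter
r_small = mie_asymmetry(1.33, 0.5, 10)
r_large = mie_asymmetry(1.33, 5.0, 20)
assert r_large > r_small
```

The growth of $R$ with $ka$ in this linear sketch is the baseline against which the stronger forward enhancement of the higher harmonics is compared below.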
In this formulation the optical theorem, which links the extinction efficiency to the normalized scattering amplitude in forward direction, has the form $$Q_{ext}=\frac{2}{\pi\left(ka\right)^2}\cdot\left(\left|E_{\theta}\left(\theta=0\right)\right|^2+\left|E_{\phi}\left(\theta=0\right)\right|^2\right)\;, \label{optictheo}$$ with $$Q_{ext}=\frac{1}{\pi\left(ka\right)^2}\sum_{l,m}\left(2l+1\right)\left\{Re\left[a_E^{sc}\left(l,m\right)\right]+Re\left[a_M^{sc}\left(l,m\right)\right]\right\} \label{Qext}.$$ Numerical results ================= We present numerical results for the [*angular dependence*]{} of the radiated intensities obtained using Eqs. (\[Ephithetalin\])-(\[EphithetaTHG\]). The [*degree of polarization*]{} characterizes the radiation, for example its angular dependence. The [*degree of Mie-asymmetry*]{} $R$, calculated using Eq. (\[VorwRueck\]), gives us the strength of the asymmetry along the direction of propagation according to the [*Mie-effect*]{}. The [*integrated intensities*]{} give us the resonances as a function of cluster size. In all cases we compare the linear results with the results of second and third harmonic generation as a function of the size parameter $ka$ and the material properties, referring to iron and nickel at a fixed optical wavelength of 617 nm. To check the numerical accuracy we compare the linear results with those of ref. [@BH]. We find excellent agreement. Furthermore, we check the optical theorem. By determining $\Delta Q\equiv\left|Q_{ext} - 2\cdot\left(\left|E_{\theta}\right|^2+\left|E_{\phi}\right|^2\right)_{\theta=0}/\left[\pi\left(ka\right)^2\right]\right|$ as a function of $ka$ we find that it is satisfied to an accuracy better than $10^{-12}$.\ As input parameters we take the complex refractive indices measured by Johnson and Christy [@JohnChrist].
In the linear case, we also use for comparison with other calculations the refractive index of water droplets, as given by Bohren and Huffman [@BH]. These values are listed in Tab. I. The refractive index is constant in all figures unless otherwise specified. Thus varying the size parameter $ka$ means varying the size of the sphere. Angular dependence of the intensities ------------------------------------- First, we show polar plots of $\left|E_{\phi}\right|^2+\left|E_{\theta}\right|^2$ (see Figs. \[lin3d\] - \[shg3d\]). In general, the shape of the plots is governed by the values of the coefficients in the series expansions Eqs. (\[Ephithetalin\])-(\[EphithetaTHG\]). It is well known [@Jackson] that in the linear series (\[Ephithetalin\]) only terms with $l\leq ka$ contribute significantly. For $l>ka$ the terms decrease very rapidly, whereas for $l\ll ka$ they have comparable magnitudes. We restrict our calculation to $l<\left[{\rm Max}\left(\tilde{n}\cdot ka,\tilde{n}'\cdot ka\right)+15\right]$ with a maximum value of $l$ of 50, where $\tilde{n}$ and $\tilde{n}'$ are the real and imaginary parts of the complex refractive index. This gives satisfactory convergence of the series up to $ka<10$ in all harmonics. To characterize the angular dependence, the range of size parameters up to 5 is sufficient. In this range the $l$-values of the dominating terms are a little smaller than $ka$ and no terms with $l\ll ka$ exist. Terms with $l\leq ka$ can be very different from each other, in contrast to terms with $l\ll ka$, which have comparable magnitudes. Thus, a pronounced transition range from pure dipole scattering to pure [*Mie*]{}-scattering exists. Of course we cannot reach the transition from [*Mie*]{}-scattering to the optical limit of reflection, due to the numerical limit of a maximum value of $l$ of 50. The geometry of the scattering is specified in the inset of Fig.
\[lin3d\] with the direction of propagation of the incident wave being parallel to the positive y-axis and polarization along the positive z-axis. Fig. \[lin3d\] (a) shows [*Rayleigh*]{} scattering according to the dipole term with $l=1$. The characteristic $\cos^2\theta$-dependence appears along the x-z-plane. The other plots for linear optics (Figs. \[lin3d\]) show the well known results [@Mie; @BH; @BW]. For a value of $ka=1$ asymmetry of forward and backward scattering appears according to the [*Mie-effect*]{}. The ratio of forward to backward scattering $I_{forw}/I_{back}$ increases strongly with increasing size parameter $ka$, beginning at a value of $ka\approx 1$. “New” maxima grow out in the backward direction and move to the forward direction with increasing $ka$. The $\phi$-dependence is not as striking and not as complicated as the $\theta$-dependence: since we have a superposition of the form $A\left(\theta\right)\cdot\cos^2\phi+B\left(\theta\right)\cdot\sin^2\phi$, the scattering behavior is dominated by the strong increase in forward scattering described by the $\theta$-dependence. The fact that the $\phi$-dependence is described by the interval $\left(0,\pi\right)$ is most important for the linear case and harmonics of odd order, and differs from SHG and harmonics of even order, where the $\phi$-dependence is fully described by the interval ($0,\frac{\pi}{2}$). The polar plots in the case of THG are quite similar to the linear case up to values of $ka\approx 2$. The differences between Figs. \[lin3d\] (b) and \[thg3d\] (b) with $ka=1$ reflect the stronger increase in the ratio of forward to backward scattering in THG with increasing $ka$. The plots in Figs. \[lin3d\] (c) and \[thg3d\] (c) with $ka=2$ are very similar apart from one more maximum appearing in the third harmonic case for $\theta\approx\frac{\pi}{2}$. For $ka=5$ the intensities parallel to the direction of polarization are much larger in third harmonic than in the linear case.
Note the different scales of the axes for different $ka$. The terms for $m=3$ in THG are negligible compared to the terms with $m=1$. So the differences in the magnitudes of the linear and third harmonic intensities are caused by the coefficients $a^{sc}_E\left(l,1\right), a^{sc}_M\left(l,1\right)$ and $A^{\left(3\right)}_E\left(l,1\right)$ only. The angular dependence of the intensities in second harmonic is very different from the linear and third harmonic cases. The vanishing direct forward and backward intensities and the $\cos^2\left(2\phi\right)$ and $\sin^2\left(2\phi\right)$ behavior produce the club-shaped structure, which is shown in Figure \[shg3d\]. But the main features of the linear and third harmonic plots appear also in second harmonic. The plots become asymmetrical with respect to $\theta$ in the range of $ka\approx1$. The ratio of forward to backward scattering increases with $ka$ and lies between the linear and the third harmonic values. Values of the forward to backward scattering ratio for different $ka$-values are listed in Tab. II. In general, harmonics of even order will show an angular dependence like SHG, because the Legendre polynomials with $m\neq 1$ vanish at $\theta=0,\pi$ and $\phi$ will appear in the cosine and sine terms in connection with $n=0,2,\ldots,2p$ ($p$ integer). Analogously, harmonics of odd order will behave similarly to the linear case. Polarization ------------ To learn more about the angular dependence, especially the $ka$-dependence of the intensity maxima, we calculate the degree of polarization using Eq. (\[polarisation\]). First, we plot the polarization in the case of [*Rayleigh*]{} scattering (Fig. \[linpol\] (a)). For convenience and in agreement with the preceding section, each of the figures in Fig. \[linpol\] contains the linear and third harmonic curves. The plots of [*Rayleigh*]{} scattering (Fig. \[linpol\] (a)) are identical.
With increasing size parameter the maximum of the polarization moves to smaller angles (the forward direction), in agreement with the intensity maxima. In third harmonic generation they move faster than in the linear case (Fig. \[linpol\] (b)). For $ka=2$ (Fig. \[linpol\] (c)) there is one more maximum in THG, in agreement with Figs. \[lin3d\] (c) and \[thg3d\] (c). Increasing the size parameter up to 5 destroys the correlation between the peaks in the polarization plots and the polar plots in THG. For example, the “double” peaks in the third harmonic plot with $ka=5$ cannot be identified with “double” peaks in the polar plots, but they reflect that the intensities at $\phi=0$ are comparable to those at $\phi=\frac{\pi}{2}$, in contrast to the linear case. In the linear case the polarization, even for values higher than $ka=5$, reproduces the position of the peaks in the polar plots and is mainly perpendicular ($P\left(\theta\right)>0$). This is an artifact of the chosen refractive index and is not characteristic for linear [*Mie*]{}-scattering in general. For imaginary parts of the refractive index close to zero, the polarization is mainly perpendicular. The polarization plots in second harmonic generation reflect the different shape of the polar plots. Since the $P\left(\theta=0\right)$ and $P\left(\theta=\pi\right)$ values do not exist and the limits $\lim_{\theta\rightarrow 0}P\left(\theta\right)$ and $\lim_{\theta\rightarrow\pi}P\left(\theta\right)$ are different, the plot in the [*Rayleigh*]{} range is asymmetric. The decrease of $\left|P\right|$ towards zero with increasing $ka$ is characteristic for SHG. The square of the absolute value of the $\theta$-component of the electric field, $\left|E_{\theta}\right|^2$, is identically zero for $\phi=0,\pi$. Thus $P\left(\theta\right)$ is a measure of the importance of the ($m=2$)-term in $\left|E_{\phi}\left(\theta,\phi\right)\right|^2$ with respect to the ($m=0$)-term.
Thus the ($m=0$)-term can be neglected with increasing $ka$. Only for very small $ka$ does the structure in the polarization plots correlate with the shape of the polar plots. The first “new” maximum appears as a zero in the polarization. For larger $ka$, however, there is no correlation any more. Forward vs. backward scattering ------------------------------- By computing $R$, defined in Eq. (\[VorwRueck\]), which is a measure of the difference between the forward and backward intensities, as a function of the size parameter $ka$ and the real or imaginary part of the refractive index $N=\tilde{n}+i\cdot\tilde{n}'$, we want to study the development of the asymmetry of forward to backward intensities. Of particular interest will be enhanced backward scattering. Figure \[linvr\] (a) shows the increase of forward scattering with the size of the sphere in linear scattering. The oscillations correlate with intensity maxima in the backward direction resulting from maxima of the coefficients, see Probert-Jones [@ProbJones]. Enhanced backward scattering appears only in the small range of $0.5<ka<1$ in the case of iron. The curve for water droplets shows no enhanced backscattering, but the overall behavior is the same as for metals. Increasing the imaginary part of the refractive index diminishes the forward scattering. In the case of $ka=1$ and $\tilde{n}'>5$ an enhancement of the backward scattering occurs. The $\tilde{n}$ dependence of $R$ is similar. In the higher harmonic case the [*Mie-effect*]{} is strongly enhanced. The limit of one is reached earlier than in the linear case. The higher the order, the stronger is the enhancement of the forward scattering. Regions of enhanced backward scattering are hard to find. In the case of second harmonic generation, they exist only for small $\tilde{n}$ or for $\tilde{n}'$ around 5, and small $ka$, whereas we could not find enhanced backward scattering for metal clusters ($\tilde{n}'\gg 0$) in third harmonic generation so far.
Integrated intensities
----------------------

[*Mie*]{}-resonances appear with increasing radius $a$ of the sphere or absolute value of the wave vector $k$ of the incident light. In this section we will only deal with the size-dependent resonances. In Fig. \[linQsc133.0\] we show the scattering efficiency $Q_{sc}$ of water droplets as a function of the size parameter. The main features are the interference structure, built up by interference between the incident wave and forward scattered light, and the ripple structure, reflecting resonant surface modes. Furthermore, the results suggest the extinction paradox $\lim_{ka \rightarrow\infty}Q_{ext}\left(ka\right)=2$. The ripple structure correlates with resonances in the coefficients $a_E^{sc}\left(l,m\right)$ and $a_M^{sc}\left(l,m\right)$. They are resonant if their imaginary part is zero. Fig. \[kres\] shows the first resonance of $a_E^{sc}\left(13,1\right)$ for a real refractive index with $\tilde{n}=1.5$ and $\tilde{n}'=0$. The resonance of $a_M^{sc}\left(13,1\right)$ occurs for $ka\approx 11$ (see Chýlek [@Chylek]). For large size parameters the distance between the ripples can be expressed directly by the refractive index. Finite imaginary parts of the refractive index would damp the resonance. Then the values of the coefficients at the resonant point would be smaller than 1. Even if we take $\tilde{n}' \approx 0.1$, the ripple structure in Fig. \[kres\] does not appear. Accordingly, there is no ripple structure in the scattering efficiency of Fe and Ni as a function of size, as shown in Fig. \[QscKA\] (a). In higher harmonics no structure correlated with resonances of the coefficients can be detected, even for vanishing imaginary part of the refractive index, up to a numerical accuracy of $10^{-10}$. Each coefficient $a^{\left(n\right)}_{l,m}$ is a combination of all linear electric multipole coefficients, determined by Eqs. (\[al2\]) and (\[al3\]), respectively.
Correspondingly, the size dependence of the coefficients $A_E^{\left(n\right)}\left(l,m\right)$ has many small peaks (Fig. \[kres\] (b) and (c)). In contrast, the size resonances caused by multipole combinations, known as the interference structure in linear scattering, are more pronounced in SHG in the case of metals (compare Fig. \[QscKA\] (a) and (b)). In THG (Fig. \[QscKA\] (c)) the first peak is even more dominant and the resonances are visible only at enhanced resolution. In both cases the position of the resonances depends only weakly on the refractive index, but the number of peaks in SHG is twice that in THG if $0<ka<10$. Decreasing the imaginary part of the refractive index down to zero changes the behavior drastically. Instead of a dominating first peak, the scattering efficiency now shows a continuous increase for $0<ka<20$, with oscillations stronger than in the case of $\tilde{n}'>1$. In all three cases the absolute values grow with the absolute value of the refractive index if metals are considered.

Summary
=======

By extending the classical model introduced by Östling [*et al.*]{} [@Stampf], we calculated the angular dependence of the second and third harmonic intensities radiated by spherical metal clusters. Therein the $n$-th power of the surface charge density induced by linearly polarized light is used as the source of the fields radiated in higher harmonics [@add]. The source represents the discontinuity of the electric displacement at the surface of the sphere. We find that the forward [*Mie*]{}-scattering is even more strongly enhanced in the nonlinear case (see Table II). The nonlinear optical response yields a stronger size sensitivity. Higher multipoles contribute already at smaller size parameters $ka$. For example, we find that the ratio of the light intensity perpendicular to the forward direction to that in the forward direction is much larger in the third harmonic than in the linear case.
Since the values of the Legendre polynomials with azimuthal quantum numbers $m=0,\pm2$ vanish for $\theta=0,\pi$, and due to the $\phi$-dependence of the form $A\cdot\cos^2\left(2\phi\right)+B\cdot\sin^2\left(2\phi\right)$, see Eq. (\[EphithetaSHG\]), the angular dependence of the second harmonic intensities is very different from the linear and third harmonic distributions. In particular, direct forward and backward scattering vanishes. In contrast, the linear [*Mie*]{}-scattering results (terms with $m=\pm1$ only) and THG (terms with $m=\pm1,\pm3$) are similar. Notably, the terms with $m=\pm3$ in THG and $m=0$ in SHG can be neglected, since their absolute values are much smaller than the other contributions. The more pronounced [*Mie-effect*]{} prevents enhanced backward scattering in higher harmonics (Figs. \[nhgvrKA\]). An experimental investigation of the nonlinear [*Mie*]{}-scattering, to compare with our theory, would be interesting. [*Mie*]{}-resonances play an important role in the field of photonic band structure in high-refractive-index materials (see John [@John]), but so far they have been studied only in the linear case. Applications of the theory to the study of fullerenes and to problems in biophysics (detection and growth modes of tumor cells) would also be interesting. We will extend the theory to ellipsoidal objects to solve the classical problem of a one-to-one correspondence between scattering profile and particle shape. Furthermore, the expected curvature sensitivity of the nonlinear optical response, even for particles with sizes much smaller than the wavelength, should be detectable in the nonlinear scattering. Thus, the higher harmonics are a particularly useful probe for detecting small particle sizes and shapes.
  ------ ----- ----- -----
  $ka$   lin   SHG   THG
  1      1.4   2.7   15
  2.1    8.7   10    113
  4.8    64    136   206
  ------ ----- ----- -----

  ----------- ------------- -------------- ------------- --------------
              $\tilde{n}$   $\tilde{n}'$   $\tilde{n}$   $\tilde{n}'$
  $\omega$    2.88          3.05           1.99          4.02
  $2\omega$   1.69          2.06           2.01          2.18
  $3\omega$   1.49          1.41           1.29          1.89
  ----------- ------------- -------------- ------------- --------------

[99]{} G. Mie, Ann. Phys. (Leipzig) [**25**]{}, 377 (1908). C. F. Bohren and D. R. Huffman: [*Absorption and Scattering of Light by Small Particles*]{} (Wiley, New York, 1983). S. S. Martinos, Phys. Rev. B [**31**]{}, 2029 (1985). D. Östling, P. Stampfli and K. H. Bennemann, Z. Phys. D [**28**]{}, 169-175 (1993). J. D. Jackson: [*Classical Electrodynamics*]{} (Wiley, New York, 1975). M. Born and E. Wolf: [*Principles of Optics*]{} (Pergamon Press, Oxford, 1975). P. B. Johnson and R. W. Christy, Phys. Rev. B [**9**]{}, 5056-5070 (1974). J. R. Probert-Jones, J. Opt. Soc. Am. A [**1**]{}, 822 (1984). P. Chýlek, J. Opt. Soc. Am. [**66**]{}, 285-287 (1976). In addition, the following approximations are made in Ref. [@Stampf]: (a) the nonlinear susceptibilities are taken to be frequency independent, $\chi^{\left(n\right)}\left(\omega\right)=$const., (b) the surface charge is assumed to be $\sigma\left(n\omega\right)=c\cdot\sigma^n\left(\omega\right)$, and (c) the proportionality factor $c$ has not been determined by Östling [*et al.*]{}, since they focus on relative intensities. It would be of interest to compare these approximations with a detailed microscopic theory of the nonlinear susceptibilities $\chi^{\left(n\right)}\left(\omega\right)$. S. John, [*Localization of Light*]{}, Physics Today, May 1991.
---
abstract: 'The local distribution of exciton levels in disordered cyanine-dye-based molecular nano-aggregates has been elucidated using fluorescence line narrowing spectroscopy. The observation of a Wigner-Dyson-type level spacing distribution provides direct evidence of the existence of level repulsion of strongly overlapping states in the molecular wires, which is important for the understanding of the level statistics, and therefore the functional properties, of a large variety of nano-confined systems.'
author:
- 'R. Augulis'
- 'A. V. Malyshev'
- 'V. A. Malyshev'
- 'A. Pugžlys'
- 'P. H. M. van Loosdrecht'
- 'J. Knoester'
title: |
  Quest for Order in Chaos: Hidden Repulsive Level Statistics\
  in Disordered Quantum Nanoaggregates
---

[^1] [^2] [^3]

One of the current dreams in the field of molecular optics is the full understanding of nature's way to harvest and use photonic energy, which ultimately could enable the development and design of highly efficient functional optical devices using molecular arrangements as building blocks. One of the crucial elements of such photonic assemblies is the 'wires' which transport the energy between the different functional units of the devices. Natural systems often utilize structures of coupled aggregated pigments to transport energy in the form of excitonic excitations. [@vanAmerongen00; @Renger01; @Berlin06; @Scholes06] Such structures can also be mimicked in synthetic systems, greatly assisting studies aiming to understand their fundamental properties. An important class of synthetic species, on which we focus here, is found in the so-called one-dimensional (1D) $J$-aggregates based on, for instance, pseudoisocyanine, porphyrin, and benzimidazole carbocyanine dyes. [@Kobayashi96; @Knoester02; @Pugzlys06] Synthetic as well as natural systems usually exhibit a substantial degree of disorder, arising from the environment and from vibrations and disorder within the systems themselves.
In general, the presence of disorder in gapped systems leads to the formation of highly localized states inside the optical, electronic or magnetic energy gap of the unperturbed system; [*i.e.*]{}, to a tail of the density of states inside the gap, generally referred to as the Lifshits tail. [@Lifshits88] There are many systems whose optical properties are governed by exciton-like excitations that are highly susceptible to disorder, leading to localization and level repulsion phenomena. These include conjugated oligomer aggregates [@Spano06] and polymers, [@Hadzii99] molecular $J$-aggregates, [@Kobayashi96; @Knoester02; @Pugzlys06] semiconductor quantum wells and quantum dots, [@Takagahara03] gold nanoparticles, [@Kuemmeth08] semiconductor quantum wires, [@Akiyama98] as well as photosynthetic light harvesting complexes [@vanAmerongen00; @Renger01] and proteins [@Berlin06] (see Ref.  for a recent overview). In all these systems, excitons are confined in at least one dimension at the nanometer scale. The physical and transport properties of most of the above mentioned systems are predominantly determined by the states residing in the vicinity of the energy gap, [*i.e.*]{}, the gap excitation itself and the Lifshits tail below it, even at finite temperatures. [@Malyshev03] The localization of the exciton states within the Lifshits tail gives rise to a local (hidden) statistics of the levels, which deviates substantially from the overall statistics. [@Malyshev95; @Malyshev01a] Therefore, understanding the physical properties of these systems requires the use of statistical approaches; the energy level distributions become an important part of the theory and of the interpretation of the experimental data. For non-interacting systems it is well known that the presence of disorder leads to an energy spectrum with a Poissonian energy spacing distribution.
In less trivial cases of interacting systems, the situation naturally becomes more complex, leading to the concept of level repulsion, [*i.e.*]{}, a vanishing probability to find two quantum states with the same energy. The level statistics of such a system is known as Wigner-Dyson statistics (see the excellent textbook by Mehta, Ref. , for an overview). Level repulsion phenomena in nano-confined materials have recently drawn considerable attention, in particular concerning localized Wannier excitons in disordered quantum wells [@Haacke01; @Intonti01] and wires, [@Feltrin03] and in disordered graphene quantum dots, [@Libisch09] as well as concerning vibronic states in polyatomic molecules. [@Krivohuz08] Time-resolved resonant Rayleigh scattering [@Haacke01] and near-field spectroscopy [@Intonti01; @Feltrin03] have been used to study them. In Ref. , an alternative method has been proposed to analyze the level statistics – low-temperature time-resolved selectively excited exciton fluorescence spectroscopy, widely known as fluorescence line narrowing (FLN) spectroscopy. Under narrow (compared to the $J$-band width) excitation within the $J$-band, the fluorescence spectrum consists of a sharp intense peak at the excitation energy and, growing in time, a red-shifted feature resulting from relaxation within the exciton band. It is this feature that contains information about the level statistics of the spatially overlapping states. Here we apply a variant of this method to reveal the repulsive statistics of levels residing in the Lifshits tail [@Lifshits88] of $J$-aggregates of pseudoisocyanine (PIC) with a chloride counter-ion (PIC-Cl). In contrast to the earlier proposal to use time-dependent FLN, [@Malyshev07] we show here that steady-state FLN, which is much simpler to realize, can also be utilized to extract the desired information.
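The contrast between the Poissonian statistics of uncorrelated levels and the Wigner-Dyson statistics of repelling levels can be made concrete with a small random-matrix sketch (ours, purely illustrative; it is not part of the analysis in this work):

```python
# Nearest-neighbor spacing statistics: GOE random matrices (level repulsion,
# Wigner-Dyson) versus an uncorrelated spectrum (Poisson, exponential spacings).
import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(n=400, realizations=20):
    """Normalized nearest-neighbor spacings from the central part of the
    spectrum of real symmetric Gaussian (GOE) matrices."""
    out = []
    for _ in range(realizations):
        a = rng.standard_normal((n, n))
        e = np.linalg.eigvalsh((a + a.T) / np.sqrt(2.0))
        s = np.diff(e[n // 4 : 3 * n // 4])   # bulk: nearly uniform level density
        out.append(s / s.mean())              # unfold by the local mean spacing
    return np.concatenate(out)

s_goe = goe_spacings()
s_poi = rng.exponential(size=s_goe.size)

for name, s in (("GOE", s_goe), ("Poisson", s_poi)):
    print(f"{name:8s} P(s < 0.2) = {np.mean(s < 0.2):.3f}")
```

For repelling levels the probability of small normalized spacings is strongly suppressed (the Wigner surmise gives roughly 0.03 for $s<0.2$), whereas the Poissonian value is about 0.18.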
We extract the conditional probability distribution of the repelled states from the experimental data and show that this distribution is Wigner-Dyson-like, indicating zero probability for zero energy spacing, which is a fingerprint of level repulsion. Figure \[Non selective spectra\] shows the absorption $A$ and fluorescence $F$ spectra of $J$-aggregates of PIC-Cl, the latter measured after excitation using 400 nm light. Both spectra exhibit an intense peak arising from the dominant exciton transitions (the $J$-band) and a much less intense and broad shoulder located on the red side of the $J$-band. We relate this red feature to aggregates in the vicinity of the substrate, as the relative intensity of this shoulder decreases upon increasing thickness of the aggregated film. For completeness, we note that the overall absorption spectrum of our samples (see inset in Fig. \[Non selective spectra\]) is found to be in good agreement with earlier results. [@Renge97]

![ Low temperature steady-state absorption (solid line) and fluorescence (dotted line) spectra of $J$-aggregates of PIC-Cl in the neighborhood of the $J$-band. The fluorescence spectrum was measured after off-resonance excitation far in the blue tail at temperature $T = 4$ K. The inset shows the absorption spectrum in a wider spectral range. []{data-label="Non selective spectra"}](fig1.eps){width="\columnwidth"}

Before turning to the main experimental results, we briefly sketch some of the theoretical background relevant for the present study; more details on the model and on the energy level structure of the disordered $J$-aggregates have been discussed in Ref. . A typical realization of the calculated low-energy level structure and wavefunctions for a one-dimensional aggregate of 300 chromophores is depicted in Fig. \[FirstStates-color\].
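A toy version of such a level-structure calculation can be written down directly. The sketch below is ours, not the authors' code; it assumes, consistent with the bare band edge $E_b/J=-2.404\approx-2\zeta(3)$ quoted in the text, a chain with all dipole-dipole couplings $-J/|n-m|^3$ ($J$ set to 1), and uses the disorder degree $\sigma=0.1\,J$ stated in the text:

```python
# 1D Frenkel chain with dipole-dipole couplings -J/|n-m|^3 (J = 1) and
# Gaussian diagonal disorder; disorder pushes localized Lifshits-tail
# states below the clean band edge at -2*zeta(3) ~ -2.404.
import numpy as np

N, sigma = 300, 0.1
rng = np.random.default_rng(1)

idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
H0 = np.zeros((N, N))
mask = dist > 0
H0[mask] = -1.0 / dist[mask]**3

e_clean = np.linalg.eigvalsh(H0)[0]           # ~ -2.404 up to finite-size terms

lowest, pr = [], []
for _ in range(10):                           # a few disorder realizations
    w, v = np.linalg.eigh(H0 + np.diag(sigma * rng.standard_normal(N)))
    lowest.append(w[0])
    pr.append(1.0 / np.sum(v[:, 0]**4))       # participation ratio ~ N*

print(f"clean edge {e_clean:.4f}; lowest disordered level {min(lowest):.4f}; "
      f"median localization size {np.median(pr):.0f} sites")
```

Eigenstates of the disordered Hamiltonian lying below the clean band edge are the tail states; their participation ratio gives a rough estimate of the localization size $N^*$.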
In this calculation, we used a Gaussian disorder distribution of the chromophore energies with standard deviation $\sigma=0.1 J$ (the disorder degree from now on), $J$ being the transfer interaction between chromophores (for more details see Ref. ).

![A typical realization of the exciton wave functions $\varphi_{\nu n}$ ($\nu =1\ldots 14$) in the neighborhood of the bare exciton band edge $E_b/J = -2.404$. The Lifshits tail of the DOS ($E < E_b$) is shaded. The origin of the energy is chosen at $E_0 = 0$; the baseline of each wavefunction represents its energy in units of $J$. Wave functions are in arbitrary units. $N^*$ denotes the typical localization size of the tail states. Filled red curves are $s$-like states, which overlap weakly. Some of them appear even slightly above the bare band edge. Filled blue curves are $p$-like states, which overlap well with their $s$-like partner state lying below. Higher grey-shaded states are band states. They are delocalized to a larger extent as compared to the tail states. []{data-label="FirstStates-color"}](fig2.eps){width="\columnwidth"}

Without disorder the wavefunctions are fully delocalized, and the lowest state is located at $-2.404J$. The presence of disorder leads to localization of the wavefunctions within so-called segments, and to the appearance of highly localized states within the band gap, [*i.e.*]{}, in the Lifshits tail (grey shaded area in Fig. \[FirstStates-color\]). These are the states of our primary interest, since they determine the optical properties and transport in the $J$-aggregates. They originate from localization in well-like fluctuations of the site potential on the molecules. The optically dominant states resemble $s$-like wavefunctions, which have no nodes within their localization segments. The $s$-like states lying deep in the Lifshits tail usually appear as singlets and are localized by the so-called optimal fluctuations of the site energy.
[@Lifshits88] Close to the band edge, however, the $s$-like states often have partners localized within the same localization segment. The latter look like $p$-states, having one node within their localization segment. Manifolds like these form the local (hidden) structure of the tail of the density of states. Since these states are localized on the same segment, one may expect level repulsion to occur for them, as is indeed observed. In contrast, states from distant (non-overlapping) manifolds can be arbitrarily close in energy. Optical experiments probe the states with a finite transition dipole moment. For the $s$-like states, the transition dipole moment scales as $\sqrt{N^\ast}$, where $N^\ast$ is the typical localization length of the states. This enhancement of the dipole moment is known as superradiant enhancement. [@Fidder90] Typically, the $p$-like states have a transition dipole moment which is several times smaller than that of the $s$-like states. [@Malyshev07] Nevertheless, since the $p$-states are not perfectly antisymmetric, they do have a finite transition dipole moment, and these states can be optically excited too. Therefore optical experiments can be used to probe the level statistics by studying the relaxation between $p$- and $s$-like levels. [@Malyshev07]

![ Low-temperature ($T = 4$ K) fluorescence spectra of $J$-aggregates of PIC-Cl measured while selectively exciting within the $J$-band. The vertical lines show the positions of the red feature maxima. The excitation wavelengths are indicated along the right vertical axis. []{data-label="FLN spectra"}](fig3.eps){width="\columnwidth"}

To experimentally study the level statistics in the neighborhood of the exciton band edge, we performed steady-state resonance fluorescence measurements using a narrow excitation line to excite states within the $J$-band. Figure \[FLN spectra\] shows a number of such spectra recorded at low temperature using different excitation energies.
The spectra show a strong peak at the excitation wavelength, together with a broad red-shifted emission band separated from the main peak by a pronounced dip. The red-shifted emission originates from relaxation of the initially excited exciton states into states of the Lifshits tail of the DOS. The position of the maximum of this band remains almost unchanged while the $J$-aggregates are excited on the blue side of the $J$-band. For red-side excitation, the peak position moves to the red, the line shape changes considerably, and the dip washes out. This is a consequence of the changes in the relaxation pathways, since red-side excitation predominantly excites $s$-like states. The dip close to the excitation energy in the blue-side-excited spectra shows that energy relaxation into states close to the excitation energy is substantially suppressed, hinting at the occurrence of level repulsion. The sheer existence of the dip, however, is not enough to draw conclusions about the level statistics. The problem is that the relaxation process from the initially excited states to the lower lying levels is phonon assisted and, hence, the line shape is determined by the product of the level spacing distribution function and the phonon spectral density. Since the phonon spectral density vanishes for zero energy, one expects the spectral intensity to vanish at the excitation energy, even without level repulsion. Moreover, one should also bear in mind that, together with the fluorescence of the relaxed excitons, two more processes contribute to the red feature and affect its line shape: the phonon side-band fluorescence [@Malyshev07] and surface-mediated fluorescence in our thin samples. All three contributions to the red-shifted feature are spectrally superposed and must be separated in order to extract the signal we are interested in.
We note, however, that if the observed red-shifted feature would originate solely from the phonon side-band, its line shape and position would be virtually independent of the excitation wavelength, clearly in contradiction to the experimental observations. In the analysis of the observed spectra we limit ourselves to those spectra measured using excitation on the blue side of the $J$-band, since it is here that one expects the $p$-like states to contribute most strongly. In order to discriminate the true relaxation-mediated fluorescence (RMF) from the surface-mediated and the phonon side-band fluorescence we use a simple subtraction method. For this, we consider the differential spectrum between two experimental spectra with close excitation wavelengths $\lambda_2 > \lambda_1$, defined by $$\label{Differential spectrum} \Delta F(\lambda_1,\lambda_2,\lambda) = F(\lambda_{2},\lambda) - \beta\,F(\lambda_{1},\lambda-\lambda_{2} + \lambda_{1})\ ,$$ where the second term on the right hand side is the $F(\lambda_1,\lambda)$ spectrum shifted in wavelength to match its excitation peak position with that of the $F(\lambda_{2},\lambda)$ spectrum. In addition, this term is rescaled by a factor $\beta$ in order to cancel the red tail in the spectra; any feature that is not wavelength dependent is suppressed in the difference spectrum (\[Differential spectrum\]), and the resulting difference spectrum represents just the RMF differential signal $\Delta R(\lambda_1,\lambda_2,\lambda)$: $$\label{After subtraction} \Delta R(\lambda_1,\lambda_2,\lambda) = R(\lambda_{2},\lambda) - \beta\,R(\lambda_{1},\lambda)\ .$$ At the next step, we calculated the quantum efficiency of the red-shifted feature and found that it did not exceed 0.3 for most blue excitations. For the spectra we will use in the fitting procedure, the efficiency is even smaller, around 0.1.
This means that excitons make only one step of relaxation; moreover, the major contribution to this process comes from the intra-segment hops (see the discussion in Section 2). Then the theoretical RMF line shape is $R(\lambda_e,\lambda) \sim S(\lambda - \lambda_e) P_{sp}(\lambda_e,\lambda - \lambda_e)$, and we can relate two RMF spectra taken for different (close) excitation wavelengths $\lambda_{2} > \lambda_{1}$ as: $$R(\lambda_{2},\lambda) \approx \frac{S(\lambda-\lambda_{2})}{S(\lambda-\lambda_{1})}\, R(\lambda_{1},\lambda) \ , \label{spectra_relation}$$ where we assumed that the energy spacing distribution function varies much more slowly than the phonon spectral density, an assumption which, as will be seen, is consistent with the final results. Substituting Eq. (\[spectra\_relation\]) into (\[After subtraction\]), we arrive at a relationship between the differential and ordinary RMF spectra: $$\Delta R(\lambda_1,\lambda_{2},\lambda) \approx g(\lambda) R(\lambda_{2},\lambda) \ , \label{difference_spectrum}$$ where $$g(\lambda)= 1-\beta\,\frac{S(\lambda-\lambda_{1})}{S(\lambda-\lambda_{2})} \ , \label{correction_function}$$ i.e., the lineshape of the RMF spectrum can be extracted from the lineshape of the differential RMF spectrum by dividing it by the known correction function, provided the factor $\beta$ is adjusted to cancel the long red tail. It is important that the correction function is almost constant for wavelengths that are far from the excitation wavelengths ($\lambda-\lambda_{1,2}\gg|\lambda_{2}-\lambda_{1}|$). Hence, it does not change the shape of the distant features in the long red tail of the experimental fluorescence spectra. Such features will therefore be canceled in the difference spectrum, which will contain only the contribution of the RMF. Finally, applying the above reasoning and formulae to the experimental FLN spectra, we can recover the RMF lineshape $R(\lambda_{e},\lambda)$ according to formula (\[difference\_spectrum\]).
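The algebra behind Eqs. (\[After subtraction\])-(\[correction\_function\]) is easy to verify numerically. The sketch below is ours and uses synthetic spectra, not the measured data: it builds two RMF lineshapes obeying Eq. (\[spectra\_relation\]) and checks that dividing the difference spectrum by $g(\lambda)$ recovers $R(\lambda_2,\lambda)$:

```python
# Synthetic check of the subtraction method: if R1 and R2 are related by the
# ratio of spectral densities, then Delta R / g reproduces R2 exactly.
import numpy as np

lam1, lam2 = 568.5, 569.0              # excitation wavelengths (nm), as in the text
lam = np.linspace(571.0, 585.0, 400)   # detection grid, red of both excitations

S = lambda d: d**-3.0                  # Debye-like spectral density used in the paper

# an arbitrary smooth positive lineshape standing in for R(lambda_2, lambda)
R2 = np.exp(-0.5 * ((lam - 575.0) / 2.0)**2) + 0.05

R1 = S(lam - lam1) / S(lam - lam2) * R2            # Eq. (spectra_relation)

beta = 1.06                                        # rescaling factor from the text
dR = R2 - beta * R1                                # Eq. (After subtraction)
g = 1.0 - beta * S(lam - lam1) / S(lam - lam2)     # Eq. (correction_function)

R_recovered = dR / g
print("max relative error:", np.max(np.abs(R_recovered / R2 - 1.0)))
```

For these parameters the grid stops before the wavelength where $g(\lambda)$ crosses zero (roughly 588 nm), beyond which the division becomes ill-conditioned.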
![ The conditional probability of the nearest level spacing distribution $P_{sp}$ (diamonds), obtained by dividing the experimental curve (circles) by the Debye-like spectral density $S(\lambda) \propto \lambda^{-3}$, together with the calculated level spacing distribution $P_{sp}$ (dashed curve). The experimental RMF spectrum (circles) shows the data after applying the subtraction method (described in the text) to eliminate the contribution of the long red tail resulting from non-RMF transitions. Also shown are the theoretical RMF spectrum calculated for the Debye-like spectral density $S(\lambda) \propto \lambda^{-3}$ (solid curve) and the absorption spectrum (dotted curve). []{data-label="P12_alpha=3"}](fig4.eps){width="\columnwidth"}

The described procedure has been applied to two spectra recorded using excitation in the blue part of the absorption spectrum ($\lambda_1=568.5$ nm and $\lambda_2=569$ nm). The result, assuming a Debye-like spectral density $S(\lambda)\propto\lambda^{-3}$, is shown in Fig. \[P12\_alpha=3\]. The extracted RMF spectrum, obtained using $\beta=1.06$, is shown in the figure by the open circles. Clearly, the non-RMF contributions, leading to the long red tail of the fluorescence spectrum, are, as expected, nearly fully eliminated. Superimposed on this experimental spectrum is the result of a simulation of the RMF fluorescence spectrum, using a Gaussian disorder with standard deviation $\sigma=0.2$, again assuming a Debye model for the phonon spectral density, and an exciton-phonon scattering strength of $W_0 = 22.4\,J$ (see Ref.  for details), which reproduces the Stokes shift presented in Fig. \[Non selective spectra\].

![ Open circles represent the experimental red-shifted feature obtained after applying the subtraction approach (described in the text) to eliminate the contribution of the long red tail resulting from non-RMF transitions.
The solid curve is the theoretical RMF spectrum calculated for the Debye spectral density $S(\lambda)\propto \lambda^{-3}$, while the dashed-dotted and dashed curves are the RMF spectra for $S(\lambda) \propto \lambda$ and $S(\lambda) \propto \lambda^{-2}$, respectively. The dotted curve denotes the absorption spectrum. []{data-label="DifferentAlphas"}](fig5.eps){width="\columnwidth"}

It should be noted that calculations performed with non-Debye models for the phonon spectral density do not lead to a satisfactory agreement with the experimental data. This is clearly demonstrated in Fig. \[DifferentAlphas\]. The apparent validity of the Debye model corroborates the results of Ref. , where this model has been successfully used to explain the temperature dependence of the $J$-band width and the radiative lifetime of $J$-aggregates of the dye pseudoisocyanine with different counter-ions. Figure \[P12\_alpha=3\] shows the principal result of the present work: the conditional distribution of the nearest-level spacing, $P_{sp}(\lambda_e,\lambda - \lambda_e)$ (diamonds), obtained after dividing the extracted RMF spectrum presented in Fig. \[P12\_alpha=3\] (open circles) by the phonon spectral density $S(\lambda) \sim \lambda^{-3}$. We see that $P_{sp}(\lambda_e,\Delta\lambda)$ tends to zero as $\Delta\lambda \to 0$. This is a clear signature of repulsive statistics of the nearest level spacing, i.e., it is of Wigner-Dyson type. Although the observed distribution is strikingly close to a Wigner-Dyson one, it remains difficult to determine whether it is truly the Wigner-Dyson distribution, which requires a linear decrease to zero probability at zero spacing; this behavior is consistent with the data but not fully proven by the current experiments and analysis.
To conclude, we experimentally studied the statistics of the low-energy spectrum of disordered molecular nanoaggregates of pseudoisocyanine with the chloride counter-ion in the neighborhood of the exciton band edge. The fluorescence line narrowing technique, which allows probing the local energy level distribution, [@Malyshev07] has been exploited for this goal. We found a clear signature of a Wigner-Dyson-like distribution for the nearest level spacing, originating from the exciton states localized on the same segment of the aggregate and thus undergoing quantum-mechanical level repulsion. This is the first direct experimental proof of the existence of a hidden structure of the exciton low-energy spectrum, the region which dominates the aggregate optical response and low-temperature transport. Finally, we note that our finding has a wider applicability than the simple 1D Frenkel exciton system considered here. The reason is that the nature of the band edge states (mostly in the Lifshits tail) is shared by a large variety of systems, such as gold nanoparticles, [@Kuemmeth08] quantum wells and quantum wires. [@Alessi00; @Klochikhin04; @Feltrin04a]

[**Acknowledgments.**]{} This work is part of the research program of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). A. V. M. and V. A. M. acknowledge support from NanoNed, a national nanotechnology programme coordinated by the Dutch Ministry of Economic Affairs. A. V. M. also acknowledges support from the program Ramón y Cajal (Ministerio de Ciencia y Tecnolog[í]{}a de Espa[ñ]{}a).

[99]{} H. van Amerongen, L. Valkunas and R. van Grondelle, [*Photosynthetic Excitons*]{} (World Scientific, Singapore, 2000); R. van Grondelle and V. I. Novoderezhkin, Phys. Chem. Chem. Phys. [**8**]{}, 793 (2006). T. Renger, V. May, and O. Kühn, Phys. Rep. [**343**]{}, 137 (2001). Y. Berlin, A. Burin, J. Friedrich, and J.
Köhler, Phys. Life Rev. [**3**]{}, 262 (2006); [**4**]{}, 64 (2007). G. D. Scholes and G. Rumbles, Nature Mat. [**5**]{}, 683 (2006); G. D. Scholes, ACS Nano [**2**]{}, 523 (2008). See contributions to [*J-aggregates*]{}, ed. T. Kobayashi (World Scientific, Singapore, 1996). J. Knoester, in [*Organic Nanostructures: Science and Applications*]{}, eds. V. M. Agranovich and G. C. La Rocca (IOS Press, Amsterdam, 2002); J. Knoester, [*Int. J. Photochem.*]{} 2006 (Hindawi, New York, 2006). A. Pugžlys, R. Augulis, P. H. M. van Loosdrecht, C. Didraga, V. A. Malyshev, and J. Knoester, J. Phys. Chem. B [**110**]{}, 20268 (2006). B. Halperin and M. Lax, Phys. Rev. [**148**]{}, 722 (1966); I. M. Lifshits, Zh. Eksp. Teor. Fiz. [**53**]{}, 743 (1967) \[Sov. Phys. JETP [**26**]{}, 462 (1968)\]; I. M. Lifshits, S. A. Gredeskul, and L. A. Pastur, [*Introduction to the Theory of Disordered Systems*]{} (Wiley, New York, 1988). F. C. Spano, Annu. Rev. Phys. Chem. [**57**]{}, 217 (2006). See, e.g., contributions to [*Semiconducting Polymers - Chemistry, Physics, and Engineering*]{}, eds. G. Hadziioannou and P. van Hutten (VCH, Weinheim, 1999). See contributions to [*Quantum Coherence, Correlation and Decoherence in Semiconductor Nanostructures*]{}, ed. T. Takagahara (Elsevier Science, USA, 2003). F. Kuemmeth, K. I. Bolotin, S. F. Shi, and D. C. Ralph, Nano Lett. [**8**]{}, 4506 (2008). H. Akiyama, J. Phys.: Condens. Matter [**10**]{}, 3095 (1998); X.-L. Wang and V. Voliotis, J. Appl. Phys. [**99**]{}, 121301 (2006). A. V. Malyshev, V. A. Malyshev, and F. Dom[í]{}nguez-Adame, Chem. Phys. Lett. [**371**]{}, 417 (2003); J. Phys. Chem. B [**107**]{}, 4418 (2003); A. Malyshev, Phys. Stat. Sol. (c) [**3**]{}, 3539 (2006). V. Malyshev and P. Moreno, Phys. Rev. B [**51**]{}, 14587 (1995). A. V. Malyshev and V. A. Malyshev, Phys. Rev. B [**63**]{}, 195111 (2001). M. L. Mehta, [*Random Matrices*]{}, 3rd edition (Elsevier, Amsterdam, 2004). V. Savona and R. Zimmermann, Phys.
Rev. B [**60**]{}, 4928 (1999); V. Savona, S. Haacke, and B. Deveaud, Phys. Rev. Lett. [**84**]{}, 183 (2000); S. Haacke, Rep. Prog. Phys. [**64**]{}, 737 (2001). F. Intonti, V. Emiliani, C. Lienau, T. Elsaesser, V. Savona, E. Runge, R. Zimmermann, R. Nützel, and K. H. Ploog, Phys. Rev. Lett. [**87**]{}, 76801 (2001); C. Lienau, F. Intonti, T. Guenther, T. Elsaesser, V. Savona, R. Zimmermann, and E. Runge, Phys. Rev. B [**69**]{}, 085302 (2004). A. Feltrin, R. Idrissi Kaitouni, A. Crottini, J. L. Staehli, B. Deveaud, V. Savona, X. L. Wang, and M. Ogura, Phys. Stat. Sol. (c) [**0**]{}, 1417 (2003); A. Feltrin, R. Idrissi Kaitouni, A. Crottini, M.-A. Dupertuis, J. L. Staehli, B. Deveaud, V. Savona, X. L. Wang, and M. Ogura, Phys. Stat. Sol. (c) [**1**]{}, 506 (2004); A. Feltrin, R. Idrissi Kaitouni, A. Crottini, M.-A. Dupertuis, J. L. Staehli, B. Deveaud, V. Savona, X. L. Wang, and M. Ogura, Phys. Rev. B [**69**]{}, 205321 (2004). F. Libisch, C. Stampfer, and J. Burgdörfer, Phys. Rev. B [**79**]{}, 115423 (2009). M. Krivohuz, J. Cao, and S. Mukamel, J. Phys. Chem. B [**112**]{}, 15999 (2008). A. V. Malyshev, V. A. Malyshev, and J. Knoester, Phys. Rev. Lett. [**98**]{}, 087401 (2007). I. Renge and U. P. Wild, J. Phys. Chem. A [**101**]{}, 7977 (1997). H. Fidder, J. Knoester, and D. A. Wiersma, Chem. Phys. Lett. [**171**]{}, 529 (1990); J. Chem. Phys. [**95**]{}, 7880 (1991). M. Bednarz, V. A. Malyshev, and J. Knoester, J. Lumin. [**112**]{}, 411 (2005); M. Bednarz and P. Reineker, J. Lumin. [**119-120**]{}, 482 (2006). D. J. Heijs, V. A. Malyshev, and J. Knoester, Phys. Rev. Lett. [**95**]{}, 177402 (2005). M. Grassi Alessi, F. Fragano, A. Patané, M. Capizzi, E. Runge, and R. Zimmermann, Phys. Rev. B [**61**]{}, 10985 (2000). A. Klochikhin, A. Reznitsky, B. Dal Don, H. Priller, H. Kalt, C. Klingshirn, S. Permogorov, and S. Ivanov, Phys. Rev. B [**69**]{}, 085308 (2004). A. Feltrin, J. L. Staehli, B. Deveaud, and V. Savona, Phys. Rev. B [**69**]{}, 233309 (2004).
[^1]: Present address: Chemistry Center, Lund University, Getingevägen 60, S-22241, Lund, Sweden [^2]: On leave from Ioffe Physiko-Technical Institute, 26 Politechnicheskaya str., 194021 St.-Petersburg, Russia [^3]: Present address: Photonics Institute, Vienna University of Technology, Gusshausstrasse 27/387, 1040, Vienna, Austria
Department of Statistics,\ North Carolina State University,\ Raleigh, NC 27695\ ridgeway@stat.ncsu.edu [We apply the algebraic theory of infinite classical lattices from Part I to write an axiomatic theory of measurements, based on Mackey’s axioms for quantum mechanics. The axioms give a complete theory of measurements in the sense of Haag and Kastler, taking the traditional form of a logic of propositions provided with a classical spectral theorem. The results are expressed in terms of probability distributions of individual measurements. As applications, we give a separation theorem for states by the set of observables and discuss its relationship to the equivalence of ensembles in the thermodynamic-limit program. We also introduce a weak equivalence of states based on the theory.]{} MSC 46A13 (primary), 46M40 (secondary) [**[I Introduction]{}**]{} There are two standard approaches to the study of infinite lattice systems, the algebraic approach from quantum field theory (QFT) ([@brat87], [@emch72], [@emch84], [@sega47]) and the thermodynamic limit (TL) ([@lanf73], [@ruel69]). In Part I of this series [@ridg05], we presented an algebraic theory of infinite classical lattices, constructed using the axioms of Haag and Kastler [@haag64] from QFT. We showed that the two approaches may be regarded as two aspects of a single theory, linked by a unique relation between their states based on expectation values. The kinds of questions they can ask are different, however. TL theory is designed to study states, especially the equilibrium states, and the expectation values they assign to observables. We shall find that with the algebraic theory, one may study the statistical properties of the individual measurement. This is therefore the setting for a theory of measurement. In this paper, we show that the abstract system of algebraic observables of the theory satisfies the Mackey axioms I-VI from quantum mechanics.
This will permit us to base the axiomatization on a logic of propositions provided with a classical spectral theorem. Following Birkhoff and von Neumann, the theory is then centered on the question, “If I measure a certain quantity on a lattice prepared in a given state, what is the probability the outcome will lie in a fixed interval $(a,b)$?” ([@birk36], [@jauc69], [@mack63]). [**[II Classical measurements ]{}**]{} [**A. The Haag-Kastler frame.**]{} In agreement with Haag and Kastler, we should treat measurements such that their “state and operation are defined in terms of laboratory procedures” [@haag64 p.850]. For this purpose, we view the lattice as representing a finite system immersed homogeneously in an (infinite) surround which acts as a generalized temperature bath. We denote by $\mathcal P$ the set of all possible systems, indexed by $\mathbf J$. According to the axioms, the algebraic structure is derived from the local [*texture*]{}, [*i.e.,*]{} the pairing of each system with the set of functions representing measurements on that system. We denote the configuration space of the lattice by $\Omega$, written as the Cartesian product of the single-site configurations, so that for any system $\Lambda_t$, $t \in \mathbf J$, we may write $\Omega = \Omega_{\Lambda_t} \times \Omega_{\Lambda_t^{\prime}}$. To each system $\Lambda_t$, we assign the set ${\mbox{$\mathfrak{W}(\mathfrak{A}^{t})$}}$ of functions on $\Omega$ representing measurements on $\Lambda_t$ and the compact set $E_t$ of states on ${\mbox{$\mathfrak{W}(\mathfrak{A}^{t})$}}$. The axioms then direct formation of the algebraic theory from the texture. 
We showed that for any compact convex set $K$ of algebraic states, we may construct the triple $\{X, {\mbox{${\mathcal C}(X)$}}, {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}\}$, dependent on $K$, where the Segal algebra ${\mbox{${\mathcal C}(X)$}}$ is the set of continuous functions on a compact space $X$, and ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ is the set of states on ${\mbox{${\mathcal C}(X)$}}$. The triple has the following structure: 1. ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ is isomorphic with $K$. 2. All of the local functions $({\mbox{$\mathfrak{W}(\mathfrak{A}^{t})$}})$ representing measurements on finite systems of the lattice map to unique points in ${\mbox{${\mathcal C}(X)$}}$. Functions measuring the same physical quantity on different systems map to the same point in ${\mbox{${\mathcal C}(X)$}}$. 3. $X$ is homeomorphic with the set $\partial_e {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ of extremal points of ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$. 4. By the Riesz representation theorem, for every state $\zeta \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$, there exists a unique Radon probability measure $\sigma$ on $X$ such that $$\zeta(f) = \int_X f(x) d\sigma(x) \hspace{3mm} \forall f \in {\mbox{${\mathcal C}(X)$}}$$ [ ]{}This is formally the expectation value of a point $f \in {\mbox{${\mathcal C}(X)$}}$ for a lattice in state $\zeta$. Physically, it is the expectation value of any local measurement that maps to $f$. Note especially that this is a decomposition theorem, [*i.e.*]{} it decomposes any state $\zeta \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ into an integral over the pure states of ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$. In this paper we shall only be concerned with a particular choice of $K$, the compact convex set of all stationary states of the lattice. 
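As a purely illustrative finite-dimensional analogue of properties 1-4 (all names below are invented; the paper's $X$ is an infinite compact Stonean space and $\sigma$ a Radon measure), a state is a probability weighting of the pure states, and the integral in eq. (2.1) reduces to a finite sum:

```python
# Toy triple {X, C(X), K C(X)} on a FINITE phase space (illustration only).
X = ["x1", "x2", "x3"]                        # stand-ins for pure states
sigma = {"x1": 0.5, "x2": 0.3, "x3": 0.2}     # probability "measure" on X

def zeta(f):
    """State as expectation functional: zeta(f) = sum_x f(x) * sigma(x)."""
    return sum(f(x) * sigma[x] for x in X)

def delta(x):
    """Dirac (pure) state concentrated on x."""
    return lambda g: g(x)

f = lambda x: {"x1": 1.0, "x2": 2.0, "x3": 4.0}[x]   # an observable

# Finite analogue of eq. (2.1): zeta(f) equals the sigma-weighted
# sum over the pure (Dirac) states, i.e. the decomposition of zeta.
decomposed = sum(sigma[x] * delta(x)(f) for x in X)
assert abs(zeta(f) - decomposed) < 1e-9       # zeta(f) = 1.9
```

The decomposition property is exactly what makes every state an "ensemble" of pure states in the finite picture.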
For this case, we showed the crucial additional fact that \(5) the space $X$ is a Stonean (compact extremally disconnected) topological space (Theorem V.3). The point is that this triple is an algebraic theory well-defined by these five properties [*without any reference to an underlying structure.*]{} We apply Mackey’s axioms to [*this*]{} structure and derive our theory of measurements in terms of it. The similarity of eq. (2.1) to the integration over phase space in ordinary CSM to obtain expectation values might lead to the question of whether the Mackey axioms could be applied directly to the classical problem with its configurational phase space. However, usually the set of continuous functions is not large enough to represent the observables of a classical problem. For example, the $({\mbox{$\mathfrak{W}(\mathfrak{A}^{t})$}})$ are the sets of all bounded measurable functions compatible with the preparation of systems for measurement. Furthermore, a phase space with a Stonean topology, which will be essential in the following, excludes most interesting mechanical problems. We shall have occasion to use the term “microcanonical state” for the lattice. This term clearly refers to local states. There are two ways of describing states of the infinite system. One is in terms of ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$, the states (positive linear functionals of norm 1) on the algebra ${\mbox{${\mathcal C}(X)$}}$. Part I gives another way, namely, in terms of an inverse limit object of the $((E_t)_{t \in J})$, denoted there by ${\mbox{$E_{\infty}$}}$. The elements of ${\mbox{$E_{\infty}$}}$ are threads $(\mu_t)_{t \in J}$ giving the local state of each finite system in the lattice. It is shown in Part I that the two sets ${\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ and ${\mbox{$E_{\infty}$}}$ are isomorphic.
The “microcanonical state” refers to a state on ${\mbox{${\mathcal C}(X)$}}$ identified with a thread $(\mu_t) \in {\mbox{$E_{\infty}$}}$ in which all local states are microcanonical. [**B. Measurements**]{} We adopt Segal’s interpretation of the algebraic observables. In Segal’s terminology, the elements of ${\mbox{${\mathcal C}(X)$}}$ are the [*observables*]{}, and the values $f(x),$ $ x \in X,$ the [*spectral values*]{} of $f \in {\mbox{${\mathcal C}(X)$}}$. They are the only possible values of any measurement $f^t \in \mathfrak{W}(\mathfrak{A}^t)$ representable by $f$. The mathematical states ${\mbox{$\zeta_{\mu}$}}\in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ define ensembles or [*distributions*]{} of the [*pure states*]{} $X$, so that the [*expectation values*]{} of measurements are the quantities ${\mbox{$\zeta_{\mu}$}}(f)$. A description based on this terminology requires something conceptually close to the following algebraic picture of the classical measurement. In the preparation for measurement, the lattice is brought into a given state $\zeta \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$. The measurement begins with an instantaneous isolation that leaves it in a MC state $x_{\mu} \in X $ randomly chosen from the ensemble defined by $\zeta$. The [*outcome*]{} of the measurement $f \in {\mbox{${\mathcal C}(X)$}}$ is the MC average $f(x_{\mu})$, the result of time averaging, say. Its [*expectation value*]{} is $\zeta(f)$, the integral over the possible outcomes $x \in X$. [**[III Mackey’s axioms ]{}**]{} This description of a measurement is readily turned into an axiomatic theory based on Mackey’s axioms for a quantum theory [@mack63]. It will have the traditional form of a logic of propositions introduced by Birkhoff and von Neumann ([@birk36]). For the commutative case, a Mackey system is defined by six axioms. 
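The algebraic picture of a measurement just described is easy to mimic numerically. The hedged sketch below (a finite toy ensemble with invented values, not the paper's construction) draws an MC state at random from the ensemble defined by $\zeta$, records the outcome $f(x)$, and checks that the empirical mean of many measurements approaches the expectation value $\zeta(f)$:

```python
import random

random.seed(0)

X = [0, 1, 2]                      # toy microcanonical (pure) states
sigma = [0.2, 0.5, 0.3]            # ensemble defined by the prepared state
f = lambda x: float(x * x)         # observable; the outcome is f(x)

def measure():
    """One measurement: isolation picks x from the ensemble; outcome is f(x)."""
    x = random.choices(X, weights=sigma)[0]
    return f(x)

outcomes = [measure() for _ in range(200_000)]
empirical = sum(outcomes) / len(outcomes)
expectation = sum(w * f(x) for x, w in zip(X, sigma))   # zeta(f) = 1.7

# The sample mean of outcomes converges to the expectation value zeta(f).
assert abs(empirical - expectation) < 0.02
```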
In the following, we construct such a system from our triple $\{X, {\mbox{${\mathcal C}(X)$}}, \mathcal{K}{\mbox{${\mathcal C}(X)$}}\}$. A theorem in Mackey gives sufficient conditions for a lattice of observables and its states to display his six axioms [@mack63 p.68]. The next two propositions satisfy these conditions. The first pertains to observables. The mapping $\phi: {\mbox{${\mathfrak P}$}}\rightarrow {\mbox{$\mathcal{B}$}}(X)$ by $\phi({\mbox{$\chi^{(X)}_{F}$}}) = F$ is a lattice-isomorphism from the class ${\mbox{${\mathfrak P}$}}$ of idempotents of ${\mbox{${\mathcal C}(X)$}}$ onto the topology ${\mathcal B}(X)$ of $X$. Hence, ${\mbox{${\mathfrak P}$}}$ is a complete Boolean algebra. Observe first that ${\mbox{$\chi^{(X)}_{F}$}} \in {\mbox{${\mathcal C}(X)$}}$ iff $F$ is clopen (=closed-and-open). It was shown in Part I that all open sets are clopen (Theorem VI.3). Hence, the idempotents are exactly the characteristic functions ${\mbox{$\chi^{(X)}_{F}$}}$, $F \in {\mathcal B}(X)$. Then $\phi$ is clearly 1:1 and onto, i.e., ${\mbox{${\mathfrak P}$}}= {\mathcal B}(X)$. Also, $\forall E, F \in {\mbox{$\mathcal{B}$}}(X)$, $\phi({\mbox{$\chi^{(X)}_{E}$}} \bigvee {\mbox{$\chi^{(X)}_{F}$}}) = \phi({\mbox{$\chi^{(X)}_{E \bigcup F}$}}) = E \bigcup F = \phi({\mbox{$\chi^{(X)}_{E}$}}) \bigcup \phi({\mbox{$\chi^{(X)}_{F}$}})$, and $\phi({\mbox{$\chi^{(X)}_{E}$}} \bigwedge {\mbox{$\chi^{(X)}_{F}$}}) = \phi({\mbox{$\chi^{(X)}_{E \bigcap F}$}}) = E \bigcap F = \phi({\mbox{$\chi^{(X)}_{E}$}})$ $ \bigcap \phi({\mbox{$\chi^{(X)}_{F}$}})$. The complementation is defined by $({\mbox{$\chi^{(X)}_{F}$}})^{\prime} = {\bf 1} - {\mbox{$\chi^{(X)}_{F}$}} = {\mbox{$\chi^{(X)}_{F^{\prime}}$}}$ and hence $\phi(({\mbox{$\chi^{(X)}_{F}$}})^{\prime}) = F^{\prime}$. Hence, $\phi$ is a lattice isomorphism. 
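As an aside, on a finite set the correspondence of Proposition III.1 can be checked exhaustively. The toy verification below (not part of the proof) confirms that characteristic functions under pointwise max, min, and complement mirror union, intersection, and complement of sets:

```python
from itertools import chain, combinations

X = (0, 1, 2)

def chi(F):
    """Characteristic function chi_F, encoded as a 0/1 tuple over X."""
    return tuple(1 if x in F else 0 for x in X)

# All subsets of X (the finite analogue of the clopen sets B(X)).
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

for E in subsets:
    for F in subsets:
        join = tuple(max(a, b) for a, b in zip(chi(E), chi(F)))
        meet = tuple(min(a, b) for a, b in zip(chi(E), chi(F)))
        assert join == chi(E | F)          # chi_E v chi_F  <->  E u F
        assert meet == chi(E & F)          # chi_E ^ chi_F  <->  E n F
    comp = tuple(1 - a for a in chi(E))
    assert comp == chi(frozenset(X) - E)   # 1 - chi_E  <->  complement
```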
For completeness, note simply that for an arbitrary net $({\mbox{$\chi^{(X)}_{F_i}$}})$, $\bigcup F_i \in {\mathcal B}(X)$ (clopen), so that $\bigvee {\mbox{$\chi^{(X)}_{F_i}$}} = {\mbox{$\chi^{(X)}_{\bigcup F_i}$}} \in {\mbox{${\mathfrak P}$}}$. One shows similarly that the lattice ${\mbox{${\mathfrak P}$}}$ is distributive. The distributive property for infinite operations is given by Semadeni [@sema71 Proposition 16.6.3]. The completeness of the lattice ${\mbox{${\mathfrak P}$}}$ is equivalent to having $X$ Stonean [@port88 6.2.4], as pointed out in Part I. It is analogous to the completeness of the lattice of projections of the von Neumann algebra in algebraic QFT [@take79 Proposition V.1.1]. The distributive lattices are exactly those with a set representation [@port88 Birkhoff-Stone theorem, p. 104]. Since the distributive property assures the pairwise compatibility of measurements, this Proposition confirms that we are dealing with a classical theory. The second condition pertains to states. Denote by $\mathcal{S}$ the set of all restrictions $\{ {\mbox{$\zeta_{\mu}$}}|_{{\mbox{${\mathfrak P}$}}}, \mu \in {\mbox{$E_{\infty}$}}\}$. Then $\mathcal{S}$ is a full and strongly convex set of states on ${\mbox{${\mathfrak P}$}}$. [ ]{}[*Proof.*]{} The state ${\mbox{$\zeta_{\mu}$}}\in \mathcal{S}$ is a state on ${\mbox{${\mathfrak P}$}}$ in Mackey’s sense if, in addition to ${\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_{\emptyset}) = 0$ and ${\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_X) = 1$, one has that for all sets of questions $(\chi^{(X)}_{F_n}) \in {\mbox{${\mathfrak P}$}}$ with $F_i \bigcap F_j = \emptyset \hspace{2mm} \forall i \neq j$, ${\mbox{$\zeta_{\mu}$}}(\bigvee \chi^{(X)}_{F_n} ) = {\mbox{$\zeta_{\mu}$}}({\mbox{$\chi^{(X)}_{\bigcup F_n}$}}) = \sum {\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_{F_n})$. Certainly for all finite subsets of $(\chi^{(X)}_{F_n})$, ${\mbox{$\zeta_{\mu}$}}(\bigvee_{i=1}^k \chi^{(X)}_{F_{n_i}} ) = \sum_{i=1}^k {\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_{F_{n_i}})$.
The result then follows by continuity. Now note that for any pair $\chi^{(X)}_E$, $\chi^{(X)}_F$, if $\chi^{(X)}_E$ is not $ \leq \chi^{(X)}_F$, then there exists $x_{\mu} \in X$ such that $\chi^{(X)}_E(x_{\mu}) = 1$ and $\chi^{(X)}_F(x_{\mu}) = 0.$ But $\delta_{x_{\mu}} \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$, while $\delta_{x_{\mu}}(\chi^{(X)}_E) = 1$ and $\delta_{x_{\mu}}(\chi^{(X)}_F) = 0$. Hence, $\mathcal{S}$ is a [*full*]{} set of states, i.e., if ${\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_E) \leq {\mbox{$\zeta_{\mu}$}}(\chi^{(X)}_F)$ for all ${\mbox{$\zeta_{\mu}$}}\in \mathcal{S}$, then $\chi^{(X)}_E \leq \chi^{(X)}_F$. Finally, the set of states $\mathcal{S}$ is [*strongly convex*]{} in Mackey’s sense if for any sequence $(t_n) \in [0,1]$ such that $\sum _1^{\infty} t_n = 1$ and any set $(\zeta_{\mu_n}) \in \mathcal{S}$, $\sum_1^{\infty} t_n \zeta_{\mu_n} \in \mathcal{S}$. Certainly $\sum_1^{\infty} t_n \zeta_{\mu_n}$ is a positive linear functional on ${\mbox{${\mathcal C}(X)$}}$ by continuity. Furthermore, $\| \sum_1^{\infty} t_n \zeta_{\mu_n} \| = \sup_{ \| f \| \leq 1} \sum_1^{\infty} t_n \zeta_{\mu_n}(f) = \sum_1^{\infty} t_n \zeta_{\mu_n}(\chi_X) = \sum_1^{\infty} t_n = 1.$ Hence, $\sum_1^{\infty} t_n \zeta_{\mu_n} \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$. The Mackey axioms are in terms of a class of functions of the following form. Denote by ${\mbox{$\mathcal{B}$}}$ the Borel sets of the real line ${\mbox{$\mathbf{\sf R}$}}$. The function $Q:{\mathcal B} \rightarrow {\mbox{${\mathfrak P}$}}$ is called a [**[*${\mbox{${\mathfrak P}$}}$-valued measure*]{}**]{} on ${\mbox{$\mathbf{\sf R}$}}$ iff the following obtain: \(a) $ Q(\emptyset) = 0,$ $ Q($[**[R]{}**]{}) = 1; \(b) If $(B_n)$ is any family in ${\mathcal B}$, and $B_i \cap B_j = \emptyset \hspace{2mm}\forall i \neq j$, then $Q(\cup B_n) = \bigvee Q(B_n)$. [ ]{}Note that $\bigvee Q(B_n) \in {\mbox{${\mathfrak P}$}}$ because ${\mbox{${\mathfrak P}$}}$ is complete. 
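A hedged finite sketch of this definition (with an invented observable standing in for $f$, and the finite range of $f$ standing in for ${\mbox{$\mathbf{\sf R}$}}$) checks conditions (a) and (b) directly, the join of characteristic functions being their pointwise maximum:

```python
X = (0, 1, 2, 3)
f = {0: 1.0, 1: 2.0, 2: 2.0, 3: 5.0}          # invented observable on X
RANGE = {1.0, 2.0, 5.0}                        # stands in for R here

def Q(B):
    """Q^f(B) = chi_{[f in B]}, encoded as a 0/1 tuple over X."""
    return tuple(1 if f[x] in B else 0 for x in X)

# (a): Q(empty) = 0 and Q(R) = 1
assert Q(set()) == (0, 0, 0, 0)
assert Q(RANGE) == (1, 1, 1, 1)

# (b): additivity over disjoint sets, with the join as pointwise max
B1, B2 = {1.0}, {2.0}
join = tuple(max(a, b) for a, b in zip(Q(B1), Q(B2)))
assert Q(B1 | B2) == join == (1, 1, 1, 0)
```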
Let $\mathcal O$ be the set of all ${\mbox{${\mathfrak P}$}}$-valued measures on $({\mbox{$\mathbf{\sf R}$}}, {\mbox{$\mathcal{B}$}})$. $\mathcal O$ is the set of observables of the Mackey system $(\mathcal{O}, \mathcal{S}, {\mbox{$\mathcal{B}$}})$. With Propositions III.1 and III.2, we have proven the following. The triple $\{\mathcal{O}, \mathcal{S}, {\mbox{$\mathcal{B}$}}\}$ is a Mackey system, satisfying Axioms I-VI. [ ]{}It is noteworthy that with ${\mbox{${\mathfrak P}$}}$ a complete lattice, the system $\{X, {\mbox{${\mathcal C}(X)$}}, \mathcal{K}{\mbox{${\mathcal C}(X)$}}\}$ likewise satisfies the axioms of Piron from QFT [@piro64]. [**[IV The theory of measurement ]{}**]{} We divide discussion into two sections, dealing respectively with observables and states. [**A. Observables**]{} The role of the quasilocal observables depends on their identification with the elements of $\mathcal O$. Observe first that $\mathcal{O}$ is a large set. In fact, if $f \in {\mbox{${\mathcal C}(X)$}}$ is any observable, and $B \in {\mathcal B}$, define $Q^f: {\mbox{$\mathcal{B}$}}\rightarrow {\mbox{${\mathfrak P}$}}$ by $Q^f(B) \equiv Q^f_B = \chi_B \circ f = {\mbox{$\chi^{(X)}_{[f \in B]}$}}$. Recall that $\bigvee {\mbox{$\chi^{(X)}_{B_i}$}} = {\mbox{$\chi^{(X)}_{\cup B_i}$}}$. Hence $Q^f \in \mathcal{O}$. Axiom VI says that all of $\mathcal O$ is of this form: For any $f \in {\mbox{${\mathcal C}(X)$}}$, define $Q^f: {\mbox{$\mathcal{B}$}}\rightarrow {\mbox{${\mathcal C}(X)$}}$ by $Q^f_B = {\mbox{$\chi^{(X)}_{[f \in B]}$}}$. Then $Q^f \in {\mathcal O}$. Conversely, if $Q \in \mathcal{O}$ is any ${\mbox{${\mathfrak P}$}}$-valued measure, then there exists a function $f \in {\mbox{${\mathcal C}(X)$}}$ such that $Q = Q^f$. Thus, ${\mathcal O} = {\mbox{${\mathcal C}(X)$}}$. This gives a classical spectral theorem for ${\mbox{${\mathcal C}(X)$}}$ as follows: For any $f \in {\mbox{${\mathcal C}(X)$}}$, define $Q^f(\lambda) = \chi^{(X)}_{[f \leq \lambda]}$ for all $\lambda \in {\mbox{$\mathbf{\sf R}$}}$. Then one may write any $f \in {\mbox{${\mathcal C}(X)$}}$ in the following integral form: $$f = \int_{-\infty}^{\infty} \lambda dQ^f(\lambda)$$ Furthermore, for any continuous Borel function of $f$, $$g \circ f = \int_{-\infty}^{\infty} g(\lambda) dQ^f(\lambda).$$ [ ]{}[*Proof.*]{} Eq. (4.1) follows from the fact that for all $x \in X,$ $Q^f(.)(x)$ is a nondecreasing function on ${\mbox{$\mathbf{\sf R}$}}$ [@hewi65 Theorem III.8.7]. Eq. (4.2) is by Mackey’s axiom III. [ ]{}Using the language of Hilbert spaces, we call the ${\mbox{${\mathfrak P}$}}$-valued measure $Q^f$ the [*spectral measure*]{} corresponding to the observable $f$, and eq. (4.1) the [*spectral decomposition*]{} of $f$. Birkhoff and von Neumann motivated their logic of quantum mechanics with the epistemological judgment that “Before a phase-space can become imbued with reality, its elements and subsets must be correlated in some way with experimental propositions”, [*i.e.,*]{} with the Borel sets of the real line ${\mbox{$\mathbf{\sf R}$}}$ and its products ${\mbox{$\mathbf{\sf R}$}}^n$ [@birk36 p.825]. The designation of the space $X$ as the algebraic theory’s “phase space” is their terminology. Each spectral measure $Q^f \in {\mathcal O}$ defines a correlation of the Borel sets in ${\mbox{$\mathbf{\sf R}$}}$ with sets in the algebraic phase space $X$ as follows: For any $f \in {\mbox{${\mathcal C}(X)$}}$, the measure $Q^f \in {\mathcal O}$ is a lattice homomorphism on ${\mathcal B}$ into the lattice ${\mbox{${\mathfrak P}$}}$, transforming the operations $(\subseteq, \bigcup, \bigcap, ^{\prime})$ to $(\leq, \bigvee, \bigwedge, ^{\prime})$ and preserving set inclusion.
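As an aside, on a finite set the spectral integrals in eqs. (4.1) and (4.2) become literal finite sums over the spectral values. The illustrative sketch below (an invented observable, not the paper's construction) reconstructs $f$ from the characteristic functions of its level sets:

```python
X = (0, 1, 2, 3)
f = {0: 1.0, 1: 2.0, 2: 2.0, 3: 5.0}          # invented observable

spectrum = sorted(set(f.values()))             # spectral values of f

def dQ(lam):
    """Spectral 'increment' chi_{[f = lam]} for an isolated spectral value."""
    return {x: (1.0 if f[x] == lam else 0.0) for x in X}

# Finite analogue of eq. (4.1): f = sum_lam lam * dQ^f(lam)
recon = {x: sum(lam * dQ(lam)[x] for lam in spectrum) for x in X}
assert recon == f

# Finite analogue of eq. (4.2): g o f = sum_lam g(lam) * dQ^f(lam)
g = lambda t: t * t
gof = {x: sum(g(lam) * dQ(lam)[x] for lam in spectrum) for x in X}
assert gof == {x: g(f[x]) for x in X}
```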
Hence, for any $f \in {\mbox{${\mathcal C}(X)$}}$, the composite $\phi \circ Q^f: {\mbox{$\mathcal{B}$}}\rightarrow {\mbox{$\mathcal{B}$}}(X)$ is a lattice homomorphism on the Borel sets of ${\mbox{$\mathbf{\sf R}$}}$ into ${\mbox{${\mathcal B}(X)$}}$, where $\phi$ is the isomorphism defined in Proposition III.1. Recall that ${\mbox{${\mathfrak P}$}}$ is a complete lattice. One has $Q^f_{B_1 \bigcap B_2} = {\mbox{$\chi^{(X)}_{[f \in B_1] \bigcap [f \in B_2]}$}} = {\mbox{$\chi^{(X)}_{[f \in B_1]}$}} \bigwedge {\mbox{$\chi^{(X)}_{[f \in B_2]}$}}$ and $Q^f_{B_1 \bigcup B_2} = {\mbox{$\chi^{(X)}_{[f \in B_1 \bigcup B_2]}$}} = {\mbox{$\chi^{(X)}_{[f \in B_1]}$}} \bigvee {\mbox{$\chi^{(X)}_{[f \in B_2]}$}}$ for the meet and join, and $Q^f_{B^{\prime}} = {\bf 1} - {\mbox{$\chi^{(X)}_{[f \in B]}$}}$ for complementation. Furthermore, $E, F \in {\mbox{$\mathcal{B}$}}$ with $E \subset F$ is mapped to $Q^f_E \leq Q^f_F$. [ ]{}This establishes the role of the algebraic observables in the theory. [**B. States**]{} The probability of the set $[f \in B]$ in the initial state ${\mbox{$\zeta_{\mu}$}}\in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ is just the expectation value with respect to the probability measure $\sigma_{\mu}$ of the random variable ${\mbox{$\chi^{(X)}_{[f \in B]}$}}$: $${\mbox{$\zeta_{\mu}$}}({\mbox{$\chi^{(X)}_{[f \in B]}$}}) = \int_X {\mbox{$\chi^{(X)}_{[f \in B]}$}} d\sigma_{\mu} = \int_{[f \in B]} d\sigma_{\mu} = \sigma_{\mu}([f \in B])$$ [ ]{}This is the probability that the measurement finds the system in an MC state $x$ belonging to the set $[f \in B] \subseteq X$ when the lattice is in state ${\mbox{$\zeta_{\mu}$}}$.
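This probability is just the pushforward of $\sigma_{\mu}$ under $f$. A toy check (invented weights and observable) confirms that it behaves as a probability measure on the Borel sets of the line:

```python
X = ("a", "b", "c", "d")
sigma = {"a": 0.1, "b": 0.4, "c": 0.2, "d": 0.3}   # measure of the state
f = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 5.0}       # invented observable

def p(B):
    """p(Q^f, zeta, B) = zeta(chi_{[f in B]}) = sigma([f in B])."""
    return sum(sigma[x] for x in X if f[x] in B)

assert p(set()) == 0.0                             # p(empty) = 0
assert abs(p({1.0, 2.0, 5.0}) - 1.0) < 1e-12       # total mass 1
assert abs(p({1.0}) + p({2.0}) - p({1.0, 2.0})) < 1e-12   # additivity
assert abs(p({2.0}) - 0.6) < 1e-12                 # sigma(b) + sigma(c)
```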
It is given in terms of the spectral measures $Q^f$ by Mackey’s Axiom I as follows: For any observable $f \in {\mbox{${\mathcal C}(X)$}}$ and any state ${\mbox{$\zeta_{\mu}$}}\in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$, the function $p$ defined by $$p(Q^f, {\mbox{$\zeta_{\mu}$}},B) = {\mbox{$\zeta_{\mu}$}}({\mbox{$\chi^{(X)}_{[f \in B]}$}})$$ [ ]{}is a probability measure on $({\mbox{$\mathbf{\sf R}$}}, {\mbox{$\mathcal{B}$}})$. [ ]{}Following Haag and Kastler we call an algebraic theory a [*complete*]{} theory of measurement if for all Borel sets $F \in \mathcal{B}(X)$, and for all algebraic states ${\mbox{$\zeta_{\mu}$}}\in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$ one can write the probability of finding the system in an MC state $x \in F \subseteq X$, given that it is initially in the state ${\mbox{$\zeta_{\mu}$}}$. To show that we have a complete theory in this sense, set $f = {\mbox{$\chi^{(X)}_{F}$}}$. Then $f \in {\mbox{${\mathcal C}(X)$}}$, and from eq. (4.3), ${\mbox{$\zeta_{\mu}$}}({\mbox{$\chi^{(X)}_{[f \in (1/2,3/2)]}$}}) = \sigma_{\mu}(\{x \in F\})$. We have treated the measurements themselves as represented by local observables ${\mbox{$\mathfrak{W}(\mathfrak{A}^{t})$}}$ and their states $E_t$ referred to a particular system $\Lambda_t$. Haag and Kastler regard operations of the form $f = {\mbox{$\chi^{(X)}_{F}$}}$ as filters, passing the MC states in $F$ and blocking the rest. Correspondingly, they call the probabilities $\sigma_{\mu}(\{x \in F\})$ [*transmission probabilities.*]{} [**[V Applications ]{}**]{} We conclude with two applications of the axiomatic theory. They pertain especially to results on the equivalence of ensembles in the thermodynamic-limit program. The first is a very basic question for the axiomatic theory itself: is the theory’s set of observables ${\mbox{${\mathcal C}(X)$}}$ large enough?
Given any two states ${\mbox{$\zeta_{\mu}$}}, \zeta_{\nu} \in {\mbox{$\mathcal{K}{\mbox{${\mathcal C}(X)$}}$}}$, is there an observable $f \in {\mbox{${\mathcal C}(X)$}}$ such that ${\mbox{$\zeta_{\mu}$}}(f) \neq \zeta_{\nu}(f)$? Since the mapping in Proposition IV.3 is into, not onto, the correlation provided by a measure $Q^f \in {\mathcal O}$ does not in general define a state ${\mbox{$\zeta_{\mu}$}}$ on all of ${\mbox{$\mathcal{B}$}}(X)$. Nevertheless, it has the ability to distinguish two states ${\mbox{$\zeta_{\mu}$}}$ and $\zeta_{\nu}$ by measurements, as follows. If $p(Q^f, {\mbox{$\zeta_{\mu}$}}, B) = p(Q^g, {\mbox{$\zeta_{\mu}$}}, B)$ for all $\mu \in {\mbox{$E_{\infty}$}}$, $B \in {\mbox{$\mathcal{B}$}}$, then $f=g$. Conversely, if $p(Q^f, {\mbox{$\zeta_{\mu}$}},B) = p(Q^f, \zeta_{\nu}, B)$ for all $f \in {\mbox{${\mathcal C}(X)$}}$, $B \in {\mbox{$\mathcal{B}$}}$, then the states ${\mbox{$\zeta_{\mu}$}}= \zeta_{\nu}$. [ ]{}[*Proof.*]{} Axiom III. [ ]{}That is, states separate observables, and observables separate states. ${\mbox{${\mathcal C}(X)$}}$ contains points that do not represent measurements because ${\mbox{${\mathfrak W}$}}$ is the completion of ${\mbox{${\mathfrak W}^{\infty}$}}$. Nevertheless, the set W = $\psi_K \circ \Delta_K({\mbox{${\mathfrak W}^{\infty}$}})$ is (strongly) dense in ${\mbox{${\mathcal C}(X)$}}$ (Part I, Corollary II.12, Theorem II.15), so that if any $f \in {\mbox{${\mathcal C}(X)$}}$ separates the states ${\mbox{$\zeta_{\mu}$}}, \zeta_{\nu}$, we may construct a convergent sequence of functions $(g_n) \in$ W such that $g_n \rightarrow f$, and it does contain points that separate these states. It is noteworthy that this separation property does not conflict with results on the equivalence of ensembles. These theorems have to do with the convergence of sequences of local observables when the lattice is in one of the standard ensembles (MC, canonical, grand canonical).
They show that the sequences converge in probability to the same limit functions for all three ensembles [@lanf73 Theorem A5.8], [*i.e.,*]{} as the sizes of systems get larger and larger, measurements of any quantity give the [*same values*]{} in the three ensembles except possibly on sets of configurations of decreasing probability. But in general, $f^t \overset{P}{\rightarrow} f$ does not assure that $\int f^t d\mu_t \rightarrow \int f d\mu$ unless there exists an integrable dominating function $g$ such that $|f^t| \leq |g|$ for all $t \in \mathbf{J}$ (Lebesgue Dominated Convergence Theorem [@loev63 Theorem 7.2.C]). Thus, agreement of the limit functions does not assure agreement of the limits of their expectation values. In physical terms, the dominating function has the effect of excluding large fluctuations from the limiting value of an observable. For the second application, we show that there is a weak equivalence of states if one allows some experimental error in measurements. Specifically, suppose the expectation value of a particular observable $f \in {\mbox{${\mathcal C}(X)$}}$ in a given initial state ${\mbox{$\zeta_{\mu}$}}$ is only determined (or estimated) to within an accuracy of ${\mbox{$\zeta_{\mu}$}}(f) \pm \varepsilon$. Then the measurement cannot be used to separate ${\mbox{$\zeta_{\mu}$}}$ from any state $\zeta_{\nu}$ in the wk\*-neighborhood of ${\mbox{$\zeta_{\mu}$}}$ defined by the basic open set ${\mathcal N}({\mbox{$\zeta_{\mu}$}}; f, \varepsilon) = \{\zeta_{\nu}: |{\mbox{$\zeta_{\mu}$}}(f) - \zeta_{\nu}(f)| < \varepsilon\}$. With repeated measurements, one can estimate the relative frequency (or probability) of a set $[f \in B]$, $B \in {\mbox{$\mathcal{B}$}}$, to any degree of precision. However, no finite number of measurements can eliminate this uncertainty entirely. States close together in this sense are essentially physically equivalent. [Acknowledgement]{}.
The author wishes to express his gratitude to Rudolf Haag for his many suggestions during the writing of this manuscript. [9999999]{} Bohr, N. Discussion with Einstein on epistemological problems in atomic physics. In Albert Einstein: Philosopher-Scientist, vol. I, 3rd ed. Editor P. A. Schilpp. La Salle: Open Court 1969. Birkhoff, G., von Neumann, J.: The logic of quantum mechanics. Ann. Math. [**37**]{}, 823-843 (1936). Bratteli, O., Robinson, D. W.: Operator Algebras and Quantum Statistical Mechanics. I. $C$\*- and $W$\*-algebras, Symmetry Groups, Decomposition of States. 2nd Edition. New York: Springer 1987. Emch, G. G.: Algebraic Methods in Statistical Mechanics and Quantum Field Theory. New York: Wiley 1972. Emch, G. G.: Mathematical and Conceptual Foundations of 20th-Century Physics. New York: North-Holland, 1984. Haag, R.: Local Quantum Physics. Fields, Particles, Algebras. New York: Springer 1996. Haag, R., Kastler, D.: An algebraic approach to quantum field theory. J. Math. Phys. [**5**]{}, 848-861 (1964). Hewitt, E., Stromberg, K.: Real and Abstract Analysis. Berlin: Springer 1965. Jauch, J. M., Piron, C.: On the structure of quantum proposition systems. Helv. Phys. Acta [**42**]{}, 842 (1969). Lanford, O. E.: Entropy and Equilibrium States in Classical Statistical Mechanics. In Lecture Notes in Physics 20: Statistical Mechanics and Mathematical Problems, Ed. A. Lenard. New York: Springer 1973. Loève, M.: Probability Theory. 3rd Ed. Princeton: Van Nostrand 1963. Mackey, G. W.: Mathematical Foundations of Quantum Mechanics. New York: Benjamin 1963. Piron, C.: Axiomatique quantique. Helv. Phys. Acta [**37**]{}, 439-468 (1964). Porter, J. R., Wood, R. G.: Extensions and Absolutes of Hausdorff Spaces. New York: Springer 1988. Ridgeway, D.: An algebraic theory of infinite classical lattices I: General theory. arXiv math-ph/0501041. Ruelle, D.: Thermodynamic Formalism. Reading: Addison-Wesley 1978. Ruelle, D.: Statistical Mechanics. Rigorous Results. Reading: Benjamin 1969.
Segal, I. E.: Postulates for general quantum mechanics. Ann. Math. [**48**]{}, 930-948 (1947) Semadeni, Z.: Banach Spaces of Continuous Functions. Warszawa: PWN—Polish Scientific Publishers 1971 Takesaki, M.: Theory of Operator Algebras I. New York: Springer 1979.
--- abstract: 'Elliptical galaxies contain X-ray emitting gas that is subject to continuous ram pressure stripping over timescales comparable to cluster ages. The gas in these galaxies is not in perfect hydrostatic equilibrium. Supernova feedback, stellar winds, or active galactic nuclei (AGN) feedback can significantly perturb the interstellar medium (ISM). Using hydrodynamical simulations, we investigate the effect of subsonic turbulence in the hot ISM on the ram pressure stripping process in early-type galaxies. We find that galaxies with more turbulent ISM produce longer, wider, and more smoothly distributed tails of the stripped ISM than those characterised by weaker ISM turbulence. Our main conclusion is that even very weak internal turbulence, at the level of $\la 15\%$ of the average ISM sound speed, can significantly accelerate the gas removal from galaxies via ram pressure stripping. The magnitude of this effect increases sharply with the strength of turbulence. As most of the gas stripping takes place near the boundary between the ISM and the intracluster medium (ICM), the boost in the ISM stripping rate is due to the “random walk” of the ISM from the central regions of the galactic potential well to larger distances, where the ram pressure is able to permanently remove the gas from galaxies. The ICM can be temporarily trapped inside the galactic potential well due to the mixing of the turbulent ISM with the ICM. The galaxies with more turbulent ISM, yet still characterised by very weak turbulence, can hold larger amounts of the ICM. We find that the total gas mass held in galaxies decreases with time slower than the mass of the original ISM, and thus the properties of gas retained inside galaxies, such as metallicity, can be altered by the ICM over time. This effect increases with the strength of the turbulence, and is most significant in the outer regions of galaxies.' date: - Released 2009 Xxxxx XX - 'Accepted ... Received ..; in original form ..' 
title: 'Ram pressure stripping in elliptical galaxies: I. the impact of the interstellar medium turbulence' --- hydrodynamics – methods: numerical – galaxies: clusters: general – galaxies: evolution – galaxies: ISM – galaxies: intergalactic medium INTRODUCTION ============ Ram pressure stripping removes gas from galaxies moving relative to the ICM [@1972ApJ...176....1G]. Numerous theoretical studies have examined the consequences of this effect for galaxy and cluster evolution by quantifying the amount of gas lost from galaxies, including star formation in galaxies and their ram-pressure stripping tails, and determining the metal enrichment of the ICM by stripping metal-rich gas from galaxies. Previous theoretical investigations of ram pressure stripping did not include all non-thermal energy components in the ISM and ICM. In general, non-thermal components include turbulent kinetic energy, magnetic fields, and cosmic rays. Only a few theoretical studies incorporated some of these components. The first simulations of ram pressure stripping including the dynamical effects of magnetic fields were presented in @2012arXiv1203.1343R for late-type galaxies. We aim to systematically investigate how non-thermal components of the ISM and ICM affect ram pressure stripping in elliptical galaxies. This is the first paper in a series on this subject, and it focuses on the effect of the turbulent ISM on the ram pressure stripping rates, the morphologies of the stripping tails, and the mixing between the ISM and ICM. Although observational constraints on the turbulence properties of the hot ISM in early-type galaxies are still uncertain, there is little doubt that the hot ISM is characterised by weak turbulence and randomly oriented weak magnetic fields. Recent X-ray observations have begun to place meaningful constraints on the magnitude of turbulent motions in the hot gas of massive early-type galaxies, showing that the turbulence is subsonic. 
Stellar winds, supernovae, and active galactic nuclei are considered to be the main energy sources for these turbulent motions [@1996MNRAS.279..229M; @2009ApJ...699..923B; @2011ApJ...728..162D]. We study the morphology of the ram pressure stripping tails. Sharp edges characteristic of ram pressure stripping have been detected in X-ray maps of a galaxy falling into the Fornax cluster [@2005ApJ...621..663M], and X-ray tails are sometimes observed in ellipticals undergoing ram pressure stripping [@2008ApJ...688..208R; @2008ApJ...688..931K; @2006ApJ...644..155M]. When the strength of stripping is not significant, or the duration of the process is short, galaxies are likely to show only somewhat elongated gas distributions instead of long tails [e.g., @2010MNRAS.405.1624M]. In addition to the morphology of the gas distribution, ram pressure stripping can also be probed by tracking how well the ISM and ICM are mixed together for different ISM stripping rates. Most previous simulations focused on how material is expelled from galaxies into the ICM. Here we also examine how much mass can be mixed into galaxies from the ICM due to the random motions of the ISM during ram pressure stripping. A direct consequence of this mixing can be a significant change of the metallicity of the galactic gas. The organisation of the paper is as follows. We describe the simulation setup in Section 2. In Section 3, we present the results of our simulations, emphasising the differences in the impact of various strengths of turbulent motions on the ram pressure stripping rates and tail morphologies. Finally, we present conclusions and discussion in Section 4. SIMULATIONS =========== Initial conditions ------------------ Our galaxy model consists of dark matter halo and stellar mass distributions as well as the hot ISM. We assume that the gravitational field is dominated by the static distributions of the stellar and dark matter masses. 
Since galaxy cluster environments have strong tidal fields, we assume that the galactic gravitational field is truncated at the truncation radius $R_{t}=100$ kpc. Therefore, the gas experiences no gravitational acceleration once it escapes beyond $R_{t}$. The stellar mass distribution is described by a spherical Jaffe model, while the total mass distribution follows a $r^{-2}$ law [e.g., @2009MNRAS.393..491C; @2009ApJ...699...89C; @2010ApJ...711..268S]. Inside the effective radius of the stellar mass distribution, the mass of the dark matter halo is equal to the stellar mass. This setup has an effective radius of 3 kpc and a total stellar mass of ${\sim \rm 10^{11} M_{\odot}}$. We set up the initial distribution of the ISM in hydrostatic equilibrium for the gravitational potential described above. However, this distribution is weakly perturbed by a stirring process, which is explained in Section 2.3. Therefore, the precision of implementing the hydrostatic equilibrium condition is not critical in our initialisation. We note that satisfying the exact hydrostatic equilibrium condition is intrinsically difficult with finite-volume methods with explicit time-stepping [@2002ApJS..143..539Z] because gravitational acceleration acts as a sink term in the momentum equation and can cause the growth of an instability [@Lian20101909]. The initial ISM temperature profile has the following form $$T(r) = \left\{ \begin{array}{rl} T_{i} & \mbox{ if $r < r_{i}$} \\ 2T_{0}/(1 + (r / r_{0})^{\beta}) & \mbox{ otherwise}, \end{array} \right.$$ where $T_{i}=8 \times 10^{6}$ K, $r_{i}=50.9$ kpc, $T_{0}=1.3\times 10^{7}$ K, $\beta=-3$, and $r_{0}=66.6$ kpc. The total mass of the ISM is ${\sim 4.4 \times 10^{10} M_{\odot}}$ inside $R_{t}$. The ICM is initialised with a constant density and temperature beyond $R_{t}$. The ICM temperature and density are equal to their corresponding ISM values at $R_{t}$. 
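The piecewise temperature profile above can be evaluated directly; a minimal sketch in Python, using the parameter values quoted in the text (the function name is ours):

```python
# Initial ISM temperature profile, using the parameters quoted in the text.
T_I = 8.0e6      # K, plateau temperature T_i
R_I = 50.9       # kpc, plateau radius r_i
T_0 = 1.3e7      # K
BETA = -3.0
R_0 = 66.6       # kpc

def ism_temperature(r_kpc):
    """Piecewise ISM temperature profile T(r), in Kelvin."""
    if r_kpc < R_I:
        return T_I
    return 2.0 * T_0 / (1.0 + (r_kpc / R_0) ** BETA)

# Consistency check: at the truncation radius R_t = 100 kpc the profile
# lands within about 0.5% of the 2e7 K ICM temperature quoted below,
# matching the stated boundary condition at R_t.
print(ism_temperature(10.0))   # inside the central plateau: 8e6 K
print(ism_temperature(100.0))  # ~2e7 K
```

With $\beta=-3$ the profile rises monotonically outward from the plateau toward $2T_{0}$, so the ISM smoothly approaches the ICM temperature at $R_{t}$.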
We assume that the ICM mean molecular weight and equation of state are the same as those of the ISM. This assumption simplifies the simulations by allowing us to use a single-phase fluid. The temperature and density of the ICM are $2 \times 10^{7}$ K and $3 \times 10^{-28}$ ${\rm g ~ cm^{-3}}$, respectively. The size of the simulation box is about 1150 kpc along the $x$-axis, which is the direction of the inflowing ICM. The length of the box is about 500 kpc in both the $y$- and $z$-directions. We make all zones cube-shaped by including more cells along the $x$-axis. Numerical methods ----------------- We use the [FLASH3]{} adaptive mesh refinement code with the most recent patches to solve the Euler equations with gravity [@2000ApJS..131..273F; @2009JCoPh.228..952L]. We employ a directionally unsplit staggered mesh hydro solver, Roe’s approximate Riemann solver with a van Leer flux limiter, and the ideal gas equation of state with solar metallicity, and we assume the gas to be fully ionised. We use two passive tracers – a passive fluid and passive particles. The passive advection fluid (hereafter called “colour”) is used to track the ISM mass fraction in cells. If a cell contains only the ISM, the value of this advection quantity is 1. Using this colour quantity we can measure mixing between two different materials [e.g., @2008ApJ...680..336S]. We can also determine where the mixed gas originated from by assigning different tag numbers to passive ISM and ICM particles that follow the fluid [@2009AnRFM..41..375T; @2007MNRAS.374..787H]. We distribute 8168 particles, which correspond to the ISM, uniformly within $R_{t}$ at the initial time. We use the colour quantity as a refinement variable because we focus on resolving the mixing patterns of the ISM after it is stripped from the galaxy. 
Since there is no well-defined rule to choose specific refinement conditions or refinement variables [@1989JCoPh..82...64B; @2010JCoAM.233.3139L for discussion], we adopt the standard refinement method in the [FLASH]{} code[^1]. Regions exhibiting stronger variations in the passive scalar magnitude are more finely resolved. Specifically, we use [refine\_cutoff]{}=0.8, [derefine\_cutoff]{}=0.2, and [refine\_filter]{}$=10^{-2}$. The refinement level outside the truncation radius is allowed to vary between 3 and 6, but it is fixed at 5 for smaller distances. The maximum spatial resolution is 1 kpc outside $R_{t}$, and 2 kpc inside this radius. Stirring and inflow -------------------

------- --------------------------- ---------------------
Name    ISM injection energy        ISM 1D RMS velocity
        (${\rm cm^{2} ~ s^{-3}}$)   (Mach number)
Run 0   $2.5 \times 10^{-8}$        0.022
Run 1   $2.5 \times 10^{-7}$        0.038
Run 2   $5.0 \times 10^{-7}$        0.048
Run 3   $1.0 \times 10^{-6}$        0.067
Run 4   $2.0 \times 10^{-6}$        0.093
Run 5   $4.0 \times 10^{-6}$        0.119
------- --------------------------- ---------------------

: Simulation runs[]{data-label="tab:run"}

We use a stirring module in the [FLASH3]{} code [@1988CF.....16..257E; @DBLP:journals/ibmrd/FisherKLDPCCCFPAARGASRGN08; @2010ApJ...713.1332R] and modify it to restrict kinetic energy injection to the region inside $R_{t}$. We investigate six different strengths of turbulent motions as shown in Table \[tab:run\]. The injection energies quoted in Table \[tab:run\] correspond to the energy per unit mass per mode. We consider 152 driving modes, and inject energy on scales between 49 and 50 kpc (see Appendix for possible effects of the injection scales). Our random forcing scheme uses a stochastic Ornstein-Uhlenbeck process with a correlation timescale of 0.01 Gyr. 
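The Ornstein-Uhlenbeck forcing just described can be illustrated with a one-dimensional sketch. The actual [FLASH3]{} stirring module drives many Fourier modes and differs in implementation detail; the block below is only a schematic of the damping-plus-random-kick structure of an OU process, with names and step size of our own choosing:

```python
import math
import random

def ou_update(a, dt, t_corr, sigma, rng):
    """One Ornstein-Uhlenbeck step for a single driving-mode amplitude.

    The damping factor keeps the amplitude correlated over t_corr while
    the random kick maintains a stationary variance of sigma**2.
    """
    f = math.exp(-dt / t_corr)
    return a * f + sigma * math.sqrt(1.0 - f * f) * rng.gauss(0.0, 1.0)

rng = random.Random(42)
t_corr = 0.01   # Gyr, the correlation timescale quoted in the text
dt = 0.001      # Gyr, an illustrative (assumed) time step
sigma = 1.0     # arbitrary amplitude units

a = 0.0
samples = []
for _ in range(200000):
    a = ou_update(a, dt, t_corr, sigma, rng)
    samples.append(a)

# The long-run statistics approach zero mean and unit variance.
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

The stationary variance follows from $\mathrm{Var}' = f^{2}\,\mathrm{Var} + \sigma^{2}(1-f^{2})$, whose fixed point is $\sigma^{2}$ regardless of the step size.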
These parameters result in turbulent motions with mass-weighted root-mean-square 1D Mach numbers of approximately 0.022, 0.038, 0.048, 0.067, 0.093, and 0.119 for Runs 0 to 5, respectively, before the onset of the ICM inflow at 0.5 Gyr. The inflow velocity of the ICM is maintained at $\sim 170 {\rm km / s}$, which corresponds to Mach 0.25 with respect to the ICM sound speed. Even though galaxies in groups and clusters can move faster than the speed of sound, we note that the average speed over an entire orbit can vary between subsonic and supersonic depending on the properties of galaxies and clusters. Moreover, velocities of early-type galaxies are likely to be biased toward lower values than those of spiral galaxies. Here, we focus on the subsonic case as a simple model. If there is no obstacle, it takes about 6.6 Gyr for this flow to cross the whole simulation domain along the $x$-axis. All boundaries, except for the low-$x$ (inflow) boundary, are outflow boundaries. We also test nine times higher ram pressure in Runs 0 and 5 by adopting an inflow velocity of Mach 0.75. These simulations (hereafter, Runs 0h and 5h) allow us to find out how strongly the effects of the turbulent ISM change depending on the strength of the ram pressure. We consider two different stirring cases. In Case A, we continuously stir the ISM with the constant injection energy as explained above. This case corresponds to the situation when the turbulent energy sources, such as active galactic nuclei or supernova explosions, continue to operate inside the galaxies. In Case B, the turbulent energy injection is stopped after 0.5 Gyr, at which point the inflow of the ICM begins. All simulations run up to 6 Gyr. RESULTS ======= We performed twelve simulations for six different strengths and two different durations of the turbulent energy injection, that is, Cases A and B. We also simulated four strong ram pressure stripping cases for two strengths of turbulence driving in Cases A and B. 
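As a cross-check of the inflow setup described above, the quoted Mach number and box-crossing time follow directly from the box size and ICM temperature; a sketch, where the mean molecular weight $\mu \approx 0.6$ for a fully ionised plasma is our assumption rather than a value stated in the text:

```python
import math

KPC_KM = 3.0857e16       # km per kiloparsec
GYR_S = 3.156e16         # seconds per gigayear
K_B = 1.380649e-16       # Boltzmann constant, erg/K
M_P = 1.6726e-24         # proton mass, g

def crossing_time_gyr(box_kpc, v_km_s):
    """Time for the inflow to cross the box along the x-axis."""
    return box_kpc * KPC_KM / v_km_s / GYR_S

def sound_speed_km_s(T_kelvin, mu=0.6, gamma=5.0 / 3.0):
    """Adiabatic sound speed of an ionised plasma (mu ~ 0.6 assumed)."""
    return math.sqrt(gamma * K_B * T_kelvin / (mu * M_P)) / 1.0e5

t_cross = crossing_time_gyr(1150.0, 170.0)   # box length and inflow speed
mach = 170.0 / sound_speed_km_s(2.0e7)       # ICM temperature from the text
print(round(t_cross, 1), round(mach, 2))     # prints: 6.6 0.25
```

Both numbers reproduce the values quoted in the text (a $\sim$6.6 Gyr crossing time and a Mach 0.25 inflow).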
We now present the results of these simulations, focusing on relative differences among the runs. Overall evolution ----------------- Figure \[fig:3D\] shows the time sequence of the evolution of the ISM in a galaxy that is subject to ram pressure stripping. The left column corresponds to Run 1 and the right one to Run 4. Run 0 corresponds to extremely weak stirring and can be thought of as a reference non-turbulent case. Both columns are for Case A, where the stirring energy is continuously supplied to the ISM. From top to bottom, each row corresponds to 0.75, 2, 4, and 6 Gyr. This figure demonstrates that the turbulence is well developed before the ICM wind begins to interact with the galaxy. The ram pressure produces a long turbulent tail, and the tail properties vary with the strength of the ISM turbulence. In particular, Run 1 reveals a more discontinuous tail than Run 4. This difference becomes more evident at later times. Moreover, Run 4 produces a longer and broader tail than Run 1. This is due to the fact that the stripping from the outer ISM layers is enhanced in Run 4. We quantify these morphological differences in the following subsection. ![image](fig1_small.eps){width="165mm"} ![image](fig2a_small.eps) ![image](fig2b_small.eps) Tail morphology --------------- In order to quantify the evolution of the spatial distribution of the ISM, we use colour weighting (see above). The colour quantity $C$ represents the ISM fraction in cells, and we use it to obtain the mass of the ISM in each cell. 
We then employ the following equations to describe the evolution of the tail properties $$\langle x_{\rm ISM}\rangle = \frac{\sum_{i} C_{i} \rho_{i} V_{i} x_{i}}{\sum_{i} C_{i} \rho_{i} V_{i}}, \label{eq:xavg}$$ $$\delta x_{\rm ISM} = \sqrt{ \frac{\sum_{i} C_{i} \rho_{i} V_{i} ( x_{i} -\langle x_{\rm ISM}\rangle )^{2}}{\sum_{i} C_{i} \rho_{i} V_{i}} }, \label{eq:xstd}$$ where the index $i$ labels cells, and $\rho$, $V$, and $x$ correspond to the density, cell volume, and cell $x$-coordinate, respectively. The quantity ${\rm\langle x_{\rm ISM}\rangle}$ traces the overall shift of the ISM mass after it is stripped from the galaxy. Finally, ${\rm \delta x_{\rm ISM}}$, and its $y$-direction counterpart ${\rm \delta y_{\rm ISM}}$, quantify the widths of the ISM distributions along the $x$ and $y$ axes. ![image](fig3a.eps) ![image](fig3b.eps) Figure \[fig:color\_global\] shows the evolution of ${\rm\langle x_{\rm ISM}\rangle}$, ${\rm \delta x_{\rm ISM}}$, and ${\rm \delta y_{\rm ISM}}$. These measurements quantify what is shown in Figure \[fig:3D\]. When the stirring process is continuous (Case A; left panel), Run 4 creates a longer tail than Run 1, and this difference increases with time up to 60 kpc at 6 Gyr. However, the difference is much smaller in Case B runs (right panel). This comparison also confirms that the initial expansion of the ISM caused by the injected energy from the stirring process is not the cause of the difference between Runs 1 and 4 in Case A. If the initial expansion were the main reason for the differences among different runs in Case A, the same effect should be found in Case B too. Yet, in the absence of the continuous stirring (Case B), no significant differences are seen among different runs. Therefore, the continuous supply of the turbulent energy causes the differences found between Cases A and B. 
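The tail statistics defined in the equations above are simply the first and second colour-weighted moments of the cell positions; a minimal sketch (the function name is ours):

```python
def tail_moments(colour, rho, vol, x):
    """Colour-weighted mean position and dispersion of the ISM.

    Implements the two moment definitions from the text: each cell i is
    weighted by C_i * rho_i * V_i, i.e. the ISM mass in that cell.
    """
    w = [c * r * v for c, r, v in zip(colour, rho, vol)]
    wtot = sum(w)
    x_mean = sum(wi * xi for wi, xi in zip(w, x)) / wtot
    var = sum(wi * (xi - x_mean) ** 2 for wi, xi in zip(w, x)) / wtot
    return x_mean, var ** 0.5

# Toy example: two pure-ISM cells of equal mass at x = 0 and x = 2 kpc
# give <x_ISM> = 1 kpc and delta_x_ISM = 1 kpc.
print(tail_moments([1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [0.0, 2.0]))
```

Because the weights carry the colour fraction, cells dominated by the ICM contribute little: repeating the example with `colour = [1.0, 0.0]` collapses both moments onto the first cell.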
![image](fig4a_small.eps) ![image](fig4b_small.eps)

Name    ${\rm \Delta t_{10}}$ (Gyr)   ${\rm \Delta t_{20}}$ (Gyr)   ${\rm \Delta t_{30}}$ (Gyr)
------- ----------------------------- ----------------------------- -----------------------------
Run 0   0.746                         2.314                         4.010
Run 1   0.729                         1.192                         3.721
Run 2   0.709                         1.079                         3.433
Run 3   0.672                         0.977                         2.815
Run 4   0.619                         0.896                         2.132
Run 5   0.546                         0.789                         1.242

: Times corresponding to the removal of 10%, 20%, and 30% of the ISM in Case A (measured from the onset of the ICM inflow at 0.5 Gyr).[]{data-label="tab:time_scale"}

Figure \[fig:color\_global\] shows that stronger turbulence in Case A results in wider dispersions of the ISM parallel and perpendicular to the direction of the ram pressure. Even though in Case B the energy is not supplied continuously to the ISM, the evolution of ${\rm \delta x_{ISM}}$ is similar to what we find in Case A. We do not find a significant effect of turbulence on ${\rm \delta y_{ISM}}$ in Case B. Although Case B does not exhibit significant deviations among different runs, the general trends in the tail evolution in Case B are the same as in Case A. In particular, at around 2 Gyr, the initial stripping generates a short and narrow tail immediately behind the galaxy (see Figure \[fig:3D\]). This is due to a converging ICM flow behind the galaxy. This narrowing is a transient feature and the tail widens after 2 Gyr. Origin of the ISM in the tails ------------------------------ The origin of the ISM stripped away from the galaxy can be used to understand what kind of material is transported to the ICM by ram pressure stripping. As explained in the previous section, we map the initial positions of the particles to their temporal positions. This allows us to check where exactly the stripped material in the tail came from. 
Figure \[fig:particle\_global\] shows the distributions of particle surface densities in the two-dimensional space defined by the initial and current particle positions measured with respect to the galactic centre. The left panel corresponds to 2 Gyr and the right one to 4.5 Gyr. Initially, all particles were distributed uniformly inside the truncation radius. The results presented in Figure \[fig:particle\_global\] allow us to make two points. First, most of the stripped gas originates from the outer parts of the galaxy near the truncation radius, and, second, higher turbulence levels enable gas removal from deeper layers of the galactic atmosphere. Regarding the first point, the gas originally located near the truncation radius is always transported to the largest radii, independently of the turbulence level. Regarding the second point, a significant amount of the ISM initially residing between 60 and 80 kpc is stripped beyond 200 kpc at 2 Gyr in Run 4. This makes the distribution of the particle density in Run 4 appear wider at a given distance beyond 200 kpc than in Run 1. This is again because the more vigorous ISM turbulence in Run 4 more efficiently transports the ISM from small radii to the ISM-ICM interface, from which the gas is permanently removed. Figure \[fig:particle\_global\] also shows that the narrow part of the tail found in Run 1 (see Figure \[fig:3D\]) is caused by inefficient stripping of the ISM. At 4.5 Gyr, Run 1 has fewer particles at $\sim$300 kpc than Run 4, which leads to a narrower tail. This low-density structure of the tail is caused by the inefficient stripping of the ISM that initially resided at radii larger than 60 kpc. Evolution of the ISM mass retained in the galaxy ------------------------------------------------ Since we use the passive scalar quantity to identify the ISM and ICM separately, we can follow the evolution of the gas that originally belonged to the ISM. 
Figure \[fig:mass\_color\_ISM\] shows the evolution of the ISM mass in four different radial bins inside ${R_{t}}$. We find significant differences between Cases A and B. A continuous supply of the turbulent energy enhances internal mixing of the ISM, resulting in an increase in the net mass loss rate of the ISM. This effect is not seen in Case B. We find that the strength of turbulence has a noticeable effect on the distribution of the intrinsic ISM inside ${\rm R_{t}}$ in Case A. As shown in Figure \[fig:mass\_color\_ISM\], in Case A at 6 Gyr, Run 0 retains about ${\rm 2.87 \times 10^{10} ~ M_{\odot}}$ of the intrinsic ISM, while Run 5 has about ${\rm 1.51 \times 10^{10} ~ M_{\odot}}$, i.e. about 2 times less. Table \[tab:time\_scale\] summarises the time scales of 10, 20, and 30% ISM mass loss after the inflow of the ICM produces ram pressure in Case A. In Case B, although we find the same general trend as in Case A, the difference is smaller. ![Evolution of the total gas mass inside $R_{t}=100$ kpc for Case A. From top to bottom, each panel shows the mass in four different radial zones: $r <$ 100 kpc, $r <$ 20 kpc, 20 kpc $\leq r <$ 50 kpc, and 50 kpc $\leq r <$ 100 kpc. The dotted line corresponds to 0.5 Gyr, when the inflow of the ICM starts to enter the simulation box. The colour coding of the different lines is the same as in Figure \[fig:color\_global\].[]{data-label="fig:mass_ISM"}](fig5_small.eps) ![Evolution of the ratio of the intrinsic ICM mass over the intrinsic ISM mass inside ${R_{t}}$ for Case A. The colour scheme is the same as in Figure \[fig:color\_global\].[]{data-label="fig:ICM_over_ISM"}](fig6_small.eps) ![Evolution of the intrinsic ISM mass ([*top*]{}) and the total gas mass ([*bottom*]{}) inside ${R_{t}}$ with respect to the masses for Run 0 in Case A. 
The colour scheme is the same as in Figure \[fig:color\_global\].[]{data-label="fig:mass_ISM_ICM"}](fig7.eps) This evolution of the intrinsic ISM mass hints at the possibility that, in Case A, a significant amount of the inflowing ICM penetrates the galaxy and mixes with the ISM. The volume initially occupied by the intrinsic ISM can be partially refilled by the inflowing ICM if the ISM-ICM mixing is efficient within ${R_{t}}$. Evolution of the total gas mass inside the galaxy ------------------------------------------------- In Figure \[fig:mass\_ISM\], we show the evolution of the total gas mass, including both the ISM and ICM, inside $R_{t}$. We find that the ICM temporarily accumulates mainly over $50 \leq r < 100$ kpc in Case A. The total mass of the gas increases up to ${\rm \sim 3.3 \times 10^{10} ~ M_{\odot}}$ over $50 \leq r < 100$ kpc as the ICM compresses the ISM and then blends with the ISM. However, the ICM caught in the galaxy is eventually expelled after $\sim$1 Gyr by the combined action of the ram pressure stripping and the continuous supply of turbulent energy. As shown in Figure \[fig:mass\_color\_ISM\], the mass of the intrinsic ISM does not depend sensitively on the level of turbulence in Case B. Similarly, for the total gas mass within $R_{t}$, we do not observe strong trends with the turbulence strength in Case B, and therefore we do not show Case B in Figure \[fig:mass\_ISM\]. However, the overall evolution of the total mass as a function of radius in Case B is similar to that in Case A. ![image](fig8a_small.eps) ![image](fig8b_small.eps) The fractional change in the total gas mass within $R_{t}$ is smaller than the fractional change in the intrinsic ISM retained within $R_{t}$ (cf. the top panel in Figure \[fig:mass\_ISM\] and the top left panel in Figure \[fig:mass\_color\_ISM\], respectively). This suggests that the ICM mixes with the intrinsic ISM. 
Figure \[fig:ICM\_over\_ISM\] summarises how much mass is contributed by the ICM and ISM inside $R_{t}$ as a function of time for Case A. As the strength of the turbulent motions in the ISM increases, the fraction of the ICM penetrating into the galaxy increases, resulting in about 1.5 times more ICM mass than ISM mass inside $R_{t}$ in Run 5 at 6 Gyr. Considering only the outer region of the galaxy over $50 \leq r < 100$ kpc, this fraction is about 2.6 in Run 5, while it becomes about 1.6 in Run 0. Figure \[fig:mass\_ISM\_ICM\] presents the relative differences in the mass loss caused by different strengths of the ISM turbulence. At 6 Gyr, Run 5 retains almost half as much ISM as Run 0, which corresponds to weak ISM turbulence. Yet, considering the total mass including the ISM and ICM inside $R_{t}$, Run 5 has only 20% less mass than Run 0, because more ICM is blended with the ISM in Run 5 than in Run 0. Importantly, the differences among different runs show a strongly non-linear dependence on the ISM velocity dispersion. In Run 4, the intrinsic ISM mass is about 25% less than in Run 0 at 6 Gyr, while Run 5 retains only half as much intrinsic ISM as Run 0. Ram pressure stripping for higher ICM inflow velocity ----------------------------------------------------- By comparing results from Runs 0 and 5 to those from Runs 0h and 5h, we investigate how the increasing strength of the ram pressure alters the effects of the turbulent ISM on the stripping efficiency. Figure \[fig:mass\_color\_ISM\_high\_speed\] shows that the increased ram pressure of the ICM enhances stripping in both Runs 0 and 5. Because the ram pressure is nine times stronger in Run 0h than in Run 0, the mass of the ISM left inside the galaxy is much lower in Run 0h than in Run 0. Similarly, Run 5h retains $\sim$13 times less gas than Run 5 at $\sim$3.5 Gyr. The increased inflow speed significantly increases the efficiency of the initial stripping in the outer 50 kpc $\leq r <$ 100 kpc region. 
As the stripping process continues, some amount of the ISM originally located at the centre of the galaxy (i.e., $r < 20$ kpc) is gradually moved to the outer regions by the turbulent ISM, and is then stripped from the galaxy. Therefore, the increase in the stripping efficiency is larger in Run 5h than in Run 0h. For example, the top left panel in Figure \[fig:mass\_color\_ISM\_high\_speed\] shows that Run 0h retains $\sim$3 times more intrinsic ISM than Run 5h at $\sim$3 Gyr after the onset of stripping. Comparison of Cases A and B in Figure \[fig:mass\_color\_ISM\_high\_speed\] shows that the continuous supply of the turbulence energy amplifies the efficiency of ram pressure stripping. In Run 5h, Case A retains about 4 times less ISM than Case B at around 3.5 Gyr. The difference between Cases A and B is particularly striking in the central regions ($r < 20$ kpc). We note that we continuously supply the turbulence energy in Case A even after the substantial amount of the ISM is stripped at around 2 Gyr. This might not be a realistic assumption for turbulence driven by stellar and/or AGN processes. After a large amount of the cold ISM has been removed from a galaxy, the galaxy may not be able to generate strong turbulence through processes such as star formation, supernova explosions, and AGN feedback. Consequently, the strong ram pressure stripping results beyond $\sim$3 Gyr may not be reliable. However, we argue that these cases serve to bracket the range of possible solutions. Discussion and Conclusions ========================== We show that the continuous supply of a small to moderate amount of turbulent kinetic energy to the ISM enhances the ISM mass loss rate in elliptical galaxies experiencing ram pressure stripping, and increases the penetration of the ICM into the galaxies (see Figure \[fig:mass\_ISM\_ICM\]). 
The spatial distribution of the stripped ISM can be wider and more extended along the direction of galaxy motion (see Figures \[fig:3D\], \[fig:color\_global\], and \[fig:particle\_global\]) when AGN feedback and/or stellar processes such as star formation are present. Our results imply that early-type galaxies characterised by a turbulent ISM should efficiently disperse their ISM throughout galaxy clusters. The origin of the stripped ISM in the tails shows that ram pressure stripping combined with turbulent motions in the ISM boosts the mixing between the central and outer regions of the galaxy. This implies that the distributions of gas properties in the tails can be used to infer the distribution of the intrinsic ISM properties, such as gas metallicity, inside galaxies. Since the distant part of the tail in Run 5 is more mixed with the central gas of the galaxy than in Run 0, we expect the stripped ISM to show weaker gradients of the gas properties along the tail in Run 5 than in Run 0. For example, there might be a gradient in the metallicity distribution along the ram pressure stripping tail. In general, the ISM in the central regions of early-type galaxies is more metal-rich than in the outer regions [e.g., @2011MNRAS.418.2744M; @2011ApJ...729...53H]. Therefore, at very low levels of ISM turbulence in early-type galaxies, the stripping tail contains only low-metallicity ISM, contributing negligibly to the ICM metal enrichment [see, @2008SSRv..134..363S; @2008ApJ...688..931K for discussion of the ICM enrichment efficiency]. However, we note that this depends on the initial metallicity distribution in the galaxy. If a galaxy has a shallow metallicity gradient before experiencing ram pressure stripping, stripping can produce a flat metallicity distribution along the tail and lead to more significant ICM enrichment even when the ISM turbulence is weak. 
The evolution of the mass inside the galaxy implies that a significant fraction of the gas mass measured in observations can be explained by the ICM gas that got temporarily incorporated into the ISM. As Figure \[fig:ICM\_over\_ISM\] shows, galaxies with strong turbulent motions in the ISM easily blend the inflowing ICM with the ISM. Therefore, the properties of the hot X-ray emitting ISM in a galaxy experiencing ram pressure stripping might have been altered by the inflowing ICM, in particular in the outer regions. For example, the metallicity of the ICM in low-redshift galaxy clusters is about 0.5 $Z_{\odot}$ [@2009ApJ...698..317A] and can be much lower than that of the ISM [@2006ApJ...639..136H; @2009ApJ...696.2252J]. Thus, mixing of the ICM with galactic gas can alter the metallicity of the ISM and the metallicity gradient inside cluster galaxies. This contamination can be particularly significant in the outer regions of galaxies when the ISM is turbulent. Interestingly, if the galaxy has an initially flat metallicity profile at the level of 2 $Z_{\odot}$, and if the metallicity of the ICM is about 0.5 $Z_{\odot}$, then ram pressure stripping will lower the mass-weighted metallicity to around 0.9 and 1.9 $Z_{\odot}$ in 50 $\leq r <$ 100 kpc and $r <$ 20 kpc, respectively (with the mass ratios shown in Figure \[fig:ICM\_over\_ISM\] for Run 5 at 6 Gyr). This specific case illustrates how ram pressure stripping in the presence of a turbulent ISM can steepen ISM metallicity profiles. This steepening effect is expected to be more pronounced as the strength of the ISM turbulence increases. The models presented here allow one to study the effects of turbulence on the ram pressure stripping process via a conceptually simple approach. The advantage of this approach lies in providing a clear intuitive picture of how the turbulent ISM affects the gas stripping. 
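The mass-weighted metallicity quoted in the worked example above follows from simple two-component mixing; a minimal sketch (the function name is ours, and the 2.6 ICM-to-ISM mass ratio is the outer-region value for Run 5 at 6 Gyr quoted earlier):

```python
def mixed_metallicity(z_ism, z_icm, icm_to_ism_ratio):
    """Mass-weighted metallicity of ISM mixed with ICM.

    With r = M_ICM / M_ISM the mixed value is
    Z = (M_ISM * Z_ISM + M_ICM * Z_ICM) / (M_ISM + M_ICM)
      = (Z_ISM + r * Z_ICM) / (1 + r).
    """
    r = icm_to_ism_ratio
    return (z_ism + r * z_icm) / (1.0 + r)

# Outer region (50 <= r < 100 kpc) of Run 5 at 6 Gyr: an initially flat
# 2 Z_sun ISM mixed with 0.5 Z_sun ICM at a mass ratio of ~2.6.
print(round(mixed_metallicity(2.0, 0.5, 2.6), 2))  # prints 0.92
```

The result, $\approx 0.92\,Z_{\odot}$, matches the "around 0.9" quoted for the outer region; the $1.9\,Z_{\odot}$ central value corresponds to a much smaller ICM admixture there.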
These models form a framework for future studies that will relax some of the assumptions made in the present work. Our current simulations do not include a few important physical processes that are required to make detailed observational predictions for the ram pressure stripping process. First, we do not include radiative cooling processes [see, @2008SSRv..134..155K for a review], which will lead to the formation of dense cold gas clouds [e.g., @2007ApJ...671..190S; @2010ApJ...717..147S; @2010ApJ...722..412Y]. Second, self-gravity of the gas is not included. Self-gravity can alter the evolution of the stripped ISM by accelerating the collapse of these dense cold gas clouds. Third, the spatial resolution of our simulations is not high enough to fully cover the extremely broad inertial range of the turbulent ISM [see @1998AnRFM..30..539M; @2011RPPh...74d6901B for a review]. Fourth, we have neglected magnetic fields, which may affect the efficiency of mixing of the ISM and ICM, suppress viscosity and thermal conduction between the stripping tail of the cold gas and the hot ICM, and introduce non-trivial dynamical effects. Finally, the energy sources of the turbulence in our simulations are not directly controlled by the relevant astrophysical processes such as star formation and AGN. Continuous mass loss by ram pressure stripping can affect star formation and AGN feedback, increasing or decreasing the energy injected into the turbulent ISM. Our main concern is the fact that our model currently does not take into account a possible coupling between the efficiency of stirring of the gas by star formation and AGN and the efficiency of stripping. For example, it is conceivable that enhanced stellar or AGN feedback could increase the level of turbulence, accelerate the mass removal from the galaxy, and thus reduce the fuel supply for these feedback processes and the efficiency of the ram pressure stripping process. 
Consequently, less gas would be available to fuel AGN and star formation, and the stirring efficiency would decrease. Our model currently does not incorporate such a mechanism. However, we show that the efficiency of ram pressure stripping depends sensitively on the duration of stirring, and our models for the continuous (Case A) and initial (Case B) stirring likely bracket the range of possibilities. In future work, we will relax some of the assumptions and simplifications made here. The second paper in this series will investigate the effect of a weakly magnetised turbulent ISM in elliptical galaxies on the ram pressure stripping process. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to Karen Yang and Dongwook Lee for useful discussions. We thank the referee (Eugene Churazov) for his valuable comments that improved this manuscript. MR acknowledges NSF grant 1008454. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. C., [Biviano]{} A., [Mazure]{} A., 1998, A&A, 331, 439 M. E., [Bregman]{} J. N., [Butler]{} S. C., [Mullis]{} C. R., 2009, ApJ, 698, 317 A., [Rees]{} M. J., 1992, MNRAS, 255, 346 D., [Livio]{} M., [O’Dea]{} C. P., 1994, ApJ, 437, 83 M. J., [Colella]{} P., 1989, Journal of Computational Physics, 82, 64 L. G., [Benson]{} A. J., 2010, ApJ, 716, 810 A., [Nordlund]{} [Å]{}., 2011, Reports on Progress in Physics, 74, 046901 J. N., [Parriott]{} J. R., 2009, ApJ, 699, 923 G. E., [Smith]{} R. 
K., [Foster]{} A., [Cottam]{} J., [Loewenstein]{} M., [Mushotzky]{} R., [Shafer]{} R., 2012, ApJ, 747, 32 E., [Forman]{} W., [Vikhlinin]{} A., [Tremaine]{} S., [Gerhard]{} O., [Jones]{} C., 2008, MNRAS, 388, 1062 E., [Tremaine]{} S., [Forman]{} W., [Gerhard]{} O., [Das]{} P., [Vikhlinin]{} A., [Jones]{} C., [B[ö]{}hringer]{} H., [Gebhardt]{} K., 2010, MNRAS, 404, 1165 L., [Morganti]{} L., [de Zeeuw]{} P. T., 2009a, MNRAS, 393, 491 L., [Ostriker]{} J. P., [Proga]{} D., 2009b, ApJ, 699, 89 L. P., [O’Sullivan]{} E., [Jones]{} C., [Giacintucci]{} S., [Vrtilek]{} J., [Raychaudhury]{} S., [Nulsen]{} P. E. J., [Forman]{} W., [Sun]{} M., [Donahue]{} M., 2011, ApJ, 728, 162 J., [Zhuravleva]{} I., [Werner]{} N., [Kaastra]{} J. S., [Churazov]{} E., [Smith]{} R. K., [Raassen]{} A. J. J., [Grange]{} Y. G., 2012, A&A, 539, A34 W., [Mair]{} M., [Kapferer]{} W., [van Kampen]{} E., [Kronberger]{} T., [Schindler]{} S., [Kimeswenger]{} S., [Ruffert]{} M., [Mangete]{} O. E., 2006, A&A, 452, 795 B. G., [Scalo]{} J., 2004, ARA&A, 42, 211 V., [Pope]{} S. B., 1988, Computers and Fluids, 16, 257 Fisher R. T., Kadanoff L. P., Lamb D. Q., Dubey A., Plewa T., Calder A., Cattaneo F., Constantin P., Foster I. T., Papka M. E., Abarzhi S. I., Asida S. M., Rich P. M., Glendenin C. C., Antypas K., Sheeler D. J., Reid L. B., Gallagher B., Needham S. G., 2008, IBM Journal of Research and Development, 52, 127 B., [Olson]{} K., [Ricker]{} P., [Timmes]{} F. X., [Zingale]{} M., [Lamb]{} D. Q., [MacNeice]{} P., [Rosner]{} R., [Truran]{} J. W., [Tufo]{} H., 2000, ApJS, 131, 273 J. E., [Gott]{} III J. R., 1972, ApJ, 176, 1 D., [Krause]{} M., [Alexander]{} P., 2007, MNRAS, 374, 787 J. A., 2006, ApJ, 647, 910 P. J., [Buote]{} D. A., 2006, ApJ, 639, 136 P. J., [Buote]{} D. A., [Brighenti]{} F., [Gebhardt]{} K., [Mathews]{} W. G., 2012, ArXiv e-prints/1205.0256 P. J., [Buote]{} D. A., [Canizares]{} C. R., [Fabian]{} A. C., [Miller]{} J. M., 2011, ApJ, 729, 53 H. S., [Lee]{} M. 
G., 2008, ApJ, 676, 218 P., [K[ö]{}ppen]{} J., [Palou[š]{}]{} J., [Combes]{} F., 2009, A&A, 500, 693 J., [Irwin]{} J. A., [Athey]{} A., [Bregman]{} J. N., [Lloyd-Davies]{} E. J., 2009, ApJ, 696, 2252 J. S., [Paerels]{} F. B. S., [Durret]{} F., [Schindler]{} S., [Richter]{} P., 2008, Space Sci. Rev., 134, 155 W., [Kronberger]{} T., [Ferrari]{} C., [Riser]{} T., [Schindler]{} S., 2008, MNRAS, 389, 1405 W., [Sluka]{} C., [Schindler]{} S., [Ferrari]{} C., [Ziegler]{} B., 2009, A&A, 499, 87 D.-W., [Kim]{} E., [Fabbiano]{} G., [Trinchieri]{} G., 2008, ApJ, 688, 931 T., [Yi]{} S. K., [Khochfar]{} S., 2011, ApJ, 729, 11 T., [Kapferer]{} W., [Ferrari]{} C., [Unterguggenberger]{} S., [Schindler]{} S., 2008, A&A, 481, 337 D., [Deane]{} A. E., 2009, Journal of Computational Physics, 228, 952 S., 2010, Journal of Computational and Applied Mathematics, 233, 3139 Lian C., Xia G., Merkle C. L., 2010, Computers & Fluids, 39, 1909 M., [Dosaj]{} A., [Forman]{} W., [Jones]{} C., [Markevitch]{} M., [Vikhlinin]{} A., [Warmflash]{} A., [Kraft]{} R., 2005, ApJ, 621, 663 M., [Jones]{} C., [Forman]{} W. R., [Nulsen]{} P., 2006, ApJ, 644, 155 W. G., [Brighenti]{} F., 2003, ARA&A, 41, 191 L., [Mastropietro]{} C., [Wadsley]{} J., [Stadel]{} J., [Moore]{} B., 2006, MNRAS, 369, 1021 I. G., [Frenk]{} C. S., [Font]{} A. S., [Lacey]{} C. G., [Bower]{} R. G., [Mitchell]{} N. L., [Balogh]{} M. L., [Theuns]{} T., 2008, MNRAS, 383, 593 E. T., [Allen]{} S. W., [Werner]{} N., [Taylor]{} G. B., 2010, MNRAS, 405, 1624 E. T., [Werner]{} N., [Simionescu]{} A., [Allen]{} S. W., 2011, MNRAS, 418, 2744 P., [Mahesh]{} K., 1998, Annual Review of Fluid Mechanics, 30, 539 D., [Shukurov]{} A., 1996, MNRAS, 279, 229 I., [Babul]{} A., 1999, MNRAS, 309, 161 S., [Stinson]{} G., [Couchman]{} H. M. P., [Bailin]{} J., [Wadsley]{} J., 2011, MNRAS, 415, 257 P. E. J., 1982, MNRAS, 198, 1007 G. A., [Hatch]{} N. A., [Simionescu]{} A., [B[ö]{}hringer]{} H., [Br[ü]{}ggen]{} M., [Fabian]{} A. 
C., [Werner]{} N., 2010, MNRAS, 406, 354 K., [Vollmer]{} B., 2003, A&A, 402, 879 C., [Dursi]{} J. L., 2010, Nature Physics, 6, 520 S., [Nulsen]{} P., [Forman]{} W. R., [Jones]{} C., [Machacek]{} M., [Murray]{} S. S., [Maughan]{} B., 2008, ApJ, 688, 208 E., [Br[ü]{}ggen]{} M., 2007, MNRAS, 380, 1399 E., [Br[ü]{}ggen]{} M., 2008, MNRAS, 388, L89 M., [Bruggen]{} M., [Lee]{} D., [Shin]{} M.-S., 2012, ArXiv e-prints/1203.1343 M., [Oh]{} S. P., 2010, ApJ, 713, 1332 J. S., [Fabian]{} A. C., [Smith]{} R. K., 2011, MNRAS, 410, 1797 J. S., [Fabian]{} A. C., [Smith]{} R. K., [Peterson]{} J. R., 2010, MNRAS, 402, L11 S., [Diaferio]{} A., 2008, Space Sci. Rev., 134, 363 S., [Kapferer]{} W., [Domainko]{} W., [Mair]{} M., [van Kampen]{} E., [Kronberger]{} T., [Kimeswenger]{} S., [Ruffert]{} M., [Mangete]{} O., [Breitschwerdt]{} D., 2005, A&A, 435, L25 M.-S., [Ostriker]{} J. P., [Ciotti]{} L., 2010, ApJ, 711, 268 M.-S., [Ostriker]{} J. P., [Ciotti]{} L., 2012, ApJ, 745, 13 M.-S., [Stone]{} J. M., [Snyder]{} G. F., 2008, ApJ, 680, 336 S., [Rieke]{} M. J., [Rieke]{} G. H., 2010, ApJ, 717, 147 M., [Donahue]{} M., [Voit]{} G. M., 2007, ApJ, 671, 190 T. E., [Cora]{} S. A., [Tissera]{} P. B., [Abadi]{} M. G., [Lagos]{} C. D. P., 2010, MNRAS, 408, 2008 S., [Bryan]{} G. L., 2008, ApJ, 684, L9 S., [Bryan]{} G. L., [van Gorkom]{} J. H., 2007, ApJ, 671, 1434 F., [Bodenschatz]{} E., 2009, Annual Review of Fluid Mechanics, 41, 375 B., [Soida]{} M., [Otmianowska-Mazur]{} K., [Kenney]{} J. D. P., [van Gorkom]{} J. H., [Beck]{} R., 2006, A&A, 453, 883 K., [Frank]{} A., [Cunningham]{} A. J., 2010, ApJ, 722, 412 M., [Dursi]{} L. J., [ZuHone]{} J., [Calder]{} A. C., [Fryxell]{} B., [Plewa]{} T., [Truran]{} J. W., [Caceres]{} A., [Olson]{} K., [Ricker]{} P. M., [Riley]{} K., [Rosner]{} R., [Siegel]{} A., [Timmes]{} F. 
X., [Vladimirova]{} N., 2002, ApJS, 143, 539 Effect of turbulence driving scales =================================== The properties of the ISM turbulence can affect the stripping efficiency. In particular, the stripping efficiency can depend on the outer turbulence driving scale $l_{turb}$. This is expected because the effective diffusion coefficient is $\sim l_{turb} \, v_{turb}$, where $v_{turb}$ is a characteristic turbulence velocity. In order to investigate this dependence, we perform an additional simulation that corresponds to Run 5 for Case A, but where the outer turbulence scale is reduced from $\sim$ 50 kpc to $\sim$ 25 kpc, and where the energy injection rate per mode is adjusted such that the total amount of the turbulent energy injection within $R_{t}$ is the same as in the original Run 5 for Case A. As shown in Figures \[fig:mass\_ISM\_add\] and \[fig:mass\_ICM\_over\_ISM\_add\], the new simulation shows less stripping and weaker mixing between the ICM and ISM than in the original Run 5. The mass-weighted root-mean-square 1D Mach number of the ISM in the new simulation is about 0.1, which is almost the same as in the original Run 5. This Mach number corresponds to the state of the ISM before the onset of the ICM inflow. Since the energy injection rate is unchanged, and the thermal energy dominates over the time-integrated dissipation of the injected energy, the Mach number does not change significantly. Since the new simulation lacks large-scale motions in the ISM, the diffusion of the ISM with the ICM becomes inefficient. At 6 Gyr, about 20% more of the ISM survives stripping in the new simulation than in the original Run 5. Figure \[fig:mass\_ICM\_over\_ISM\_add\] shows that the suppression of gas stripping is relatively strong at intermediate radii. Specifically, for $20 \le r < 50$ kpc, a factor of two less ISM is retained in this shell at 6 Gyr. 
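As a back-of-the-envelope illustration of the scaling invoked above, the sketch below shows that halving $l_{turb}$ at fixed $v_{turb}$ halves the effective diffusion coefficient $D \sim l_{turb}\,v_{turb}$ and hence doubles the diffusion time $t_{diff} \sim R^2/D$ over a shell of radius $R$. All numbers are illustrative assumptions, not values taken from the simulations.

```python
# Toy illustration of the mixing-efficiency scaling D ~ l_turb * v_turb:
# halving the driving scale at fixed v_turb halves D and doubles the
# diffusion time t_diff ~ R^2 / D.  Numbers below are purely illustrative.

KPC = 3.086e21   # cm per kpc
KM = 1.0e5       # cm per km
GYR = 3.156e16   # s per Gyr

def diffusion_time(l_turb_kpc, v_turb_kms, r_kpc):
    """Return t_diff ~ R^2 / (l_turb * v_turb) in Gyr."""
    d = (l_turb_kpc * KPC) * (v_turb_kms * KM)   # effective D in cm^2/s
    t = (r_kpc * KPC) ** 2 / d                   # seconds
    return t / GYR

v_turb = 50.0  # km/s: ~Mach 0.1 for an assumed hot-ISM sound speed ~500 km/s
t_50 = diffusion_time(50.0, v_turb, 50.0)
t_25 = diffusion_time(25.0, v_turb, 50.0)
print(t_50, t_25 / t_50)   # ~1 Gyr, and exactly a factor of 2 longer
```

With these assumed numbers the diffusion time over a 50 kpc shell is of order a Gyr, comparable to the multi-Gyr stripping timescales discussed above, which is why a change in $l_{turb}$ has a visible effect by 6 Gyr.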
![Evolution of the intrinsic ISM mass for Run 5 with a smaller turbulence driving scale ([*dashed line*]{}) in Case A, compared to the original run ([*solid line*]{}). The dotted line corresponds to 0.5 Gyr, when the inflow of the ICM starts to enter the simulation box. []{data-label="fig:mass_ISM_add"}](fig9.eps) Figure \[fig:2D\_ICM\_density\] shows the ICM density distribution in a plane centred on the galaxy and parallel to the direction of the ICM inflow. Comparison of the panels in the same columns demonstrates that mixing is reduced in the new simulation: the area occupied by lower density gas is larger, and the penetration of the ICM deeper into the galactic potential is suppressed. ![Evolution of the ratio of the intrinsic ICM mass to the intrinsic ISM mass inside ${R_{t}}$ in Case A for Run 5 with a smaller turbulence driving scale ([*dashed line*]{}), compared to the original run ([*solid line*]{}). The dotted line corresponds to 0.5 Gyr, when the inflow of the ICM starts to enter the simulation box. []{data-label="fig:mass_ICM_over_ISM_add"}](fig10.eps) ![image](fig11_small.eps) [^1]: <http://www.asci.uchicago.edu/site/flashcode/user_support/flash3_ug_3p3/node14.html#SECTION05163000000000000000>
--- abstract: 'The peak of the de-absorbed energy distribution of the TeV emitting blazars, all of the BL Lacertae (BL) class, can reach values up to $\sim 10$ TeV. In the context of synchrotron-self Compton (SSC) models of relativistic uniformly moving blobs of plasma, such high energy peak emission can be reproduced only by assuming Doppler factors of $\delta \sim 50$. However, such high values strongly disagree with the unification of FR I radio galaxies and BLs. Additionally, the recent detections of slow, possibly sub-luminal velocities in the sub-pc scale jets of the TeV BLs MKN 421 and MKN 501 suggest that the jets in these sources decelerate very early to mildly relativistic velocities ($\Gamma\sim$ a few). In this work we examine the possibility that the relativistic flows in the TeV BLs are longitudinally decelerating. In this case, modest Lorentz factors ($\Gamma \sim 15$), decelerating down to values compatible with the recent radio interferometric observations, can reproduce the $\sim $ few TeV peak energy of these sources. Furthermore, such decelerating flows are shown to reproduce the observed broadband BL - FR I luminosity ratios.' address: 'Laboratory for High Energy Astrophysics, NASA Goddard Space Flight Center, Code 661, Greenbelt, MD 20771, US' author: - 'Markos Georganopoulos & Demosthenes Kazanas' title: Decelerating Flows in TeV Blazars --- galaxies: active — quasars: general — radiation mechanisms: nonthermal — X-rays: galaxies Introduction ============ The blazar TeV emission reaches the Earth partially absorbed by the Diffuse InfraRed Background (DIRB) radiation which pair-produces with the TeV photons [@stecker92]. The de-absorbed spectra depend on the source redshifts and the still elusive energy distribution of the DIRB. However, for all expected DIRB forms, both the intrinsic peak energy $E_p$ and peak luminosity $L_p$ of the TeV spectral component are higher than those observed. 
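The effect of de-absorption can be illustrated with a toy calculation. The sketch below multiplies an assumed observed SED by $e^{\tau(E)}$, where the pair-production optical depth $\tau(E)$ is a purely illustrative power law (not a real DIRB opacity model), and shows that the inferred intrinsic peak of $E^2F(E)$ sits at higher energy than the observed one.

```python
# Toy de-absorption: F_int(E) = F_obs(E) * exp(tau(E)), where tau(E) is
# the pair-production optical depth on the DIRB.  Both the observed SED
# and tau(E) below are illustrative assumptions; the point is only that
# a tau(E) rising with energy shifts the inferred peak of E^2 F(E) up.

import math

def tau_toy(E_TeV):
    return 0.5 * E_TeV          # toy opacity, linear in energy (assumed)

E_grid = [0.1 + 0.01 * i for i in range(491)]          # 0.1 .. 5.0 TeV
obs = [e * e * math.exp(-2.0 * e) for e in E_grid]     # toy observed E^2 F(E)
intr = [f * math.exp(tau_toy(e)) for e, f in zip(E_grid, obs)]

E_obs = E_grid[obs.index(max(obs))]
E_intr = E_grid[intr.index(max(intr))]
print(E_obs, E_intr)   # the de-absorbed peak lies at higher energy
```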
Even for the nearby ($z=0.031$) MKN 421, $E_p$ can increase by a factor of $\sim 10$ after de-absorption to $\sim$ a few TeV [@dejager02]. The de-absorbed spectrum of H1426+428 at z=0.129 is even more extreme, characterized by $E_p \sim 10$ TeV [@aharonian02]. The BL synchrotron spectra exhibit a break at energies $\epsilon'_{b} \sim 10^{-4}-10^{-6}$ (primes denote energies on the flow rest frame, all energies normalized to $m_ec^2$) such that the greatest fraction of the synchrotron luminosity is at energies $\epsilon' > \epsilon'_{b}$. For this reason and because of the Klein-Nishina (K-N) decrease in the Compton scattering cross section, electrons with $\gamma \gsim 1/\epsilon'_{b}$ will inverse Compton scatter a decreasing fraction of the available photons; as a result, the maximum IC luminosity will occur at energies $\epsilon'_p \simeq 1/ \epsilon'_{b}$, or $\epsilon'_{p}\epsilon'_{b}\simeq 1$. The values of $\epsilon'_p, \, \epsilon'_{b}$ observed at the lab frame are $\epsilon_p = \delta \epsilon'_p$ and $\epsilon_{b} = \delta \epsilon'_{b}$, yielding the following relation between $\delta$ and the observed energies $\epsilon_p, \, \epsilon_{b}$ $$\delta \gsim (\epsilon_{b}\; \epsilon_{p})^{1/2} = 40\;(\nu_{b,16} \;E_{p,\,10\,\rm TeV})^{1/2}, \label{d_constr}$$ where $\nu_{b,16}$ is the observed synchrotron break frequency in units of $10^{16}$ Hz, $E_{p,\,10\,\rm TeV}$ is the energy of the de-absorbed TeV peak in units of 10 TeV and $\delta=1/\Gamma(1-\beta\cos\theta)$, with $\beta$ the dimensionless flow speed and $\theta$ its angle to the observer’s line of sight. These values of $\delta (\simeq \Gamma)$ are in strong conflict [@chiaberge00] with the unification of BLs and FR I radio galaxies [@urry95], which requires Lorentz factors $\Gamma \sim 3-7$. 
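The lower bound on $\delta$ derived above can be checked numerically: converting the observed break frequency and IC peak energy to $m_ec^2$ units and taking the square root of their product reproduces the quoted normalization of $\delta \simeq 40$ for $\nu_b = 10^{16}$ Hz and $E_p = 10$ TeV.

```python
# Numerical check of the bound delta >~ (eps_b * eps_p)^(1/2), with both
# photon energies normalized to the electron rest energy m_e c^2.

H = 6.626e-27        # Planck constant, erg s
MEC2 = 8.187e-7      # electron rest energy m_e c^2, erg
ERG_PER_TEV = 1.602  # erg per TeV

def delta_min(nu_b_hz, E_p_tev):
    eps_b = H * nu_b_hz / MEC2            # synchrotron break, m_e c^2 units
    eps_p = E_p_tev * ERG_PER_TEV / MEC2  # de-absorbed IC peak, m_e c^2 units
    return (eps_b * eps_p) ** 0.5

print(delta_min(1e16, 10.0))   # ~40, as in the relation above
```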
Additional constraints for the inner jet flow of the TeV blazars come from the small, possibly sub-luminal apparent velocities observed interferometrically in the sub-pc scale jets of MKN 421 and 501 [@marscher99; @piner99; @edwards02], suggestive of a decelerating flow in the inner jet of TeV blazars [@marscher99]. Decelerating relativistic flows: application in TeV blazars =========================================================== Motivated by the above analysis and the unification of the multiwavelength spectra of the hotspots of FR II radio galaxies and quasars in terms of decelerating relativistic flows [@georganopoulos03], we propose that the same considerations are applicable in resolving the conflict of the high $\delta$’s with TeV blazar unification. We therefore assume that a power law electron distribution is injected at the base of a relativistic flow that decelerates while at the same time cooling radiatively. The highest frequencies of its synchrotron component originate preferentially at its fast base where the electrons are more energetic and its Lorentz factor largest. As both the flow velocity and electron energy drop with radius, the locally emitted synchrotron spectrum shifts to lower energies while its beaming pattern becomes wider. The observed synchrotron spectrum is the convolution of the comoving emission from each radius weighted by the beaming amplification at each radius. At small angles the observed spectrum is harder than that observed at larger angles. This is the result of the progressively smaller $\Gamma$ of the flow with distance in combination with the concomitant lower electron energies due to cooling. The inverse Compton emission of such a flow behaves in a more involved way: Electrons will upscatter the locally produced synchrotron seed photons, giving rise to a local SSC emission with $\delta-$dependence similar to that of synchrotron. 
However, the electrons of a given radius scatter not only the locally produced synchrotron photons, but also those produced downstream in the flow. The energy density of the latter will appear Doppler boosted in the fast (upstream) part of the flow by $\sim \Gamma_{rel}^2$ [@dermer95], where $\Gamma_{rel}$ is the relative Lorentz factor between the fast and slow part of the flow. With their maximum energy being lower (because of cooling) and their energy density amplified, they can now contribute to the IC emission at energies higher than expected on the basis of uniform velocity models (see section 1). Also the $\delta-$dependence of this [*upstream Compton (UC)*]{} emission will be different from that of SSC and more akin to that of external Compton (EC) [@dermer95]. ![image](f1_georganopoulos.ps){width="2.75in"} ![image](f2_georganopoulos.ps){width="2.75in"} In the left panel of fig. 1 we plot the SED for a flow decelerating from $\Gamma=15$ to $\Gamma=4$ for two angles $\theta=3^{\circ}$ and $\theta= 6^{\circ}$ over a distance $Z = 2\times 10^{16}$ cm. The radius of the cylindrical flow is set to $R = Z = 2\times 10^{16}$ cm, while the electron distribution at the base of the flow is $n(\gamma)\propto \gamma^{-2}$, $\gamma \leq 3\times 10^7$ and the magnetic field $B=0.1$ G, half its equipartition value. At $\theta=3^{\circ}$ this model achieves a peak energy for the high energy component at $\sim 10$ TeV, using a modest initial Lorentz factor of $\Gamma=15$. Note the strong dependence of the synchrotron peak energy on $\theta$ discussed above. Finally, note that, in contrast to uniform velocity homogeneous SSC models, the Compton component is more sensitive to orientation than synchrotron, an indication that UC scattering dominates the observed $\sim$ TeV luminosity. We now turn our attention to the problem of the unification of BLs with FR I sources. 
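Rough numbers for the decelerating-flow picture above can be obtained from the standard relativistic expressions, using the $\Gamma = 15 \to 4$ values of the fig. 1 model (the formulas are textbook kinematics; only the parameter choices come from the model).

```python
# Kinematics of the fig. 1 model (Gamma decelerating from 15 to 4).
# Gamma_rel between two collinear flows is G1*G2*(1 - b1*b2); the
# downstream synchrotron photon energy density seen by the fast base
# is boosted by ~Gamma_rel^2.

import math

def beta(g):
    return math.sqrt(1.0 - 1.0 / g**2)

def doppler(g, theta_deg):
    """Doppler factor delta = 1 / (Gamma * (1 - beta cos(theta)))."""
    return 1.0 / (g * (1.0 - beta(g) * math.cos(math.radians(theta_deg))))

g_fast, g_slow = 15.0, 4.0
g_rel = g_fast * g_slow * (1.0 - beta(g_fast) * beta(g_slow))
print(g_rel, g_rel**2)   # ~2.0 and ~4.1: the UC seed-photon boost factor

# Doppler factors at the two viewing angles of fig. 1:
for th in (3.0, 6.0):
    print(th, doppler(g_fast, th), doppler(g_slow, th))
```

The fast base's Doppler factor drops from $\sim 18.6$ to $\sim 8.7$ between $3^{\circ}$ and $6^{\circ}$, while the slow end's barely changes ($\sim 7.6$ to $\sim 6.7$), consistent with the stronger orientation dependence of the emission from the fast part of the flow noted in the text.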
A comparison of a sample of FR I nuclei to BLs of similar extended radio power shows that FR I’s are overluminous by a factor of $10-10^4$ compared to their expected luminosity [@chiaberge00; @trussoni03], under the assumption that BLs are characterized by Lorentz factors $\Gamma\sim15$, and that FR I’s are seen under an average angle of $60^{\circ}$. In particular the average BL to FR I nucleus luminosity ratio at radio, optical and X-ray bands was found to be: $\log (L_{BL}/L_{FR\;I})_R \approx 2.4$, $\log (L_{BL} /L_{FR\;I})_{opt} \approx 3.9$, $\log (L_{BL}/L_{FR\;I})_{X} \approx 3.5$. In the right panel of fig. 1 we plot as vertical bars the luminosity separation of BLs and FR Is according to [@chiaberge00; @trussoni03]. We also plot the SED of a decelerating flow with physical parameters similar to the one described above but with $\gamma_{max}= 2\times 10^{5}$ to produce an SED synchrotron peak similar to those of the intermediate BLs that correspond in extended radio power to the FR Is of [@chiaberge00; @trussoni03]. As can be seen, the range of the model SEDs between $\theta=60^{\circ}$ (FR I) and $\theta=1/\Gamma$ (BL) reproduce well the observed range in luminosities, while a uniform velocity model of $\Gamma = 15$ (as demanded by fits of the $\gamma-$ray spectra) would produce at $\theta=60^{\circ}$ an SED many orders of magnitude smaller than shown in the figure. [999]{} Stecker, F.W., De Jager, O. C., Salamon, M. H., 1992, ApJ, [**390**]{}, L49 de Jager, O. C., & Stecker, F. W. 2002, ApJ, [**566**]{}, 738 Aharonian, F. et al. 2002, A&A, [**384**]{}, L23 Krawczynski, H., Coppi, P. S., & Aharonian, F. 2002, MNRAS, [**336**]{}, 721 Chiaberge, M. et al. 2000, A&A, [**358**]{}, 104 Urry, C. M., & Padovani, P. 1995, PASP, [**107**]{}, 803 Marscher, A. P. 1999, Astrop. Phys., [**11**]{}, 19 Piner, B. G. et al. 1999, ApJ, [**525**]{}, 176 Edwards, P. G. & Piner, B. G. 2002, ApJ, [**579**]{}, L67 Georganopoulos, M. & Kazanas, D. 
2003, ApJ, [**589**]{}, L5 Trussoni, E. et al. 2003, A&A, [**403**]{}, 889 Dermer, C. D. 1995, ApJ, [**446**]{}, L63
--- abstract: 'A new method of verifying the subnormality of unbounded Hilbert space operators based on an approximation technique is proposed. Diverse sufficient conditions for subnormality of unbounded weighted shifts on directed trees are established. An approach to this issue via consistent systems of probability measures is invented. The role played by determinate Stieltjes moment sequences is elucidated. Lambert’s characterization of subnormality of bounded operators is shown to be valid for unbounded weighted shifts on directed trees that have sufficiently many quasi-analytic vectors, which is a new phenomenon in this area. The cases of classical weighted shifts and weighted shifts on leafless directed trees with one branching vertex are studied.' address: - 'Katedra Zastosowań Matematyki, Uniwersytet Rolniczy w Krakowie, ul. Balicka 253c, PL-30198 Kraków' - 'Instytut Matematyki, Uniwersytet Jagielloński, ul. Łojasiewicza 6, PL-30348 Kraków, Poland' - 'Department of Mathematics, Kyungpook National University, Daegu 702-701, Korea' - 'Instytut Matematyki, Uniwersytet Jagielloński, ul. Łojasiewicza 6, PL-30348 Kraków, Poland' author: - Piotr Budzyński - 'Zenon Jan Jab[ł]{}oński' - Il Bong Jung - Jan Stochel title: Unbounded subnormal weighted shifts on directed trees --- [^1] Introduction ============ The theory of bounded subnormal operators was originated by P. Halmos in [@hal1]. Nowadays, its foundations are well-developed (see [@con2]; see also [@c-f] for a recent survey article on this subject). The theory of unbounded symmetric operators had been established much earlier (see [@jvn] and the monograph [@stone]). In view of Naimark’s theorem, these particular operators resemble [*unbounded*]{} subnormal operators, i.e., operators having normal extensions in (possibly larger) Hilbert spaces. The first general results on unbounded subnormal operators appeared in [@bis] and [@foi] (see also [@sli]). 
A systematic study of this class of operators was undertaken in the trilogy [@StSz3; @StSz1; @StSz4]. The theory of unbounded subnormal operators has intimate connections with other branches of mathematics and quantum physics (see [@sz2; @at-cha1; @at-cha2] and [@jor; @stob; @sz1; @j-s]). It has been developed in two main directions, the first is purely theoretical (cf.[@m-s; @jin; @StSz2; @e-v; @vas1; @dem1; @dem2; @dem3; @vas2; @vas3; @al-vas]), the other is related to special classes of operators (cf. [@c-j-k; @kou; @k-t1; @k-t2]). In this paper, we will focus our attention mostly on the class of weighted shifts on directed trees. The notion of a weighted shift on a directed tree generalizes that of a weighted shift on the $\ell^2$ space, the classical object of operator theory (see e.g., the monograph [@nik] on the unilateral shift operator, [@shi] for a survey article on bounded unilateral and bilateral weighted shifts, and [@ml] for basic facts on unbounded ones). In a recent paper [@j-j-s], we have studied some fundamental properties of weighted shifts on directed trees. Although considerable progress has been made in this field, a number of fundamental questions have not been answered. Our aim in this paper is to continue investigations along these lines with special emphasis put on the issue of subnormality of unbounded operators, the case which is essentially more complicated and not an easy extension of the bounded one. The main difficulty comes from the fact that the celebrated Lambert characterization of subnormality of bounded operators (cf. [@Lam]) is no longer valid for unbounded ones (see Section \[subs1\]; see also [@j-j-s4] for a surprising counterexample). A new criterion (read: sufficient condition) for subnormality of unbounded operators has been invented recently in [@c-s-sz]. By using it, we will show that subnormality is preserved by the operation of taking a certain limit (see Theorem \[tw1\]). 
This enables us to perform the approximation procedure relevant to unbounded weighted shifts on directed trees. What we get is Theorem \[main\], which is the main result of this paper. It provides a criterion for subnormality of unbounded weighted shifts on directed trees written in terms of consistent systems of measures (which is new even in the case of bounded operators). Roughly speaking, for bounded and some unbounded operators having a dense set of $C^\infty$-vectors, the assumption that $C^\infty$-vectors generate Stieltjes moment sequences implies subnormality. As discussed in Section \[subs1\], there are unbounded operators for which this is not true (the reverse implication is always true, cf. Proposition \[necess-gen\]). It is a surprising fact that there are non-hyponormal operators having a dense set of $C^\infty$-vectors generating Stieltjes moment sequences. These are carefully constructed weighted shifts on a leafless directed tree with one branching vertex (cf. [@j-j-s4]). The same operators do not satisfy the consistency condition $2^\circ$ of Lemma \[charsub2\] and none of them has a consistent system of measures. Under some additional assumption, the criterion for subnormality formulated in Theorem \[main\] becomes a full characterization (cf. Corollary \[necessdet2\]). This is the case in the presence of quasi-analytic vectors (cf. Theorem \[main-0\]), which is the first result of this kind (see Section \[cfs\] for more comments). It is worth mentioning that our method of proving Theorem \[main\] depends essentially on the passage through weighted shifts that may have zero weights. The assumption that all basic vectors coming from vertices of the directed tree are $C^\infty$-vectors diminishes the class of weighted shifts to which Theorem \[main\] can be applied. 
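For the reader's convenience, recall the classical Hankel-matrix characterization: $(a_n)_{n=0}^\infty$ is a Stieltjes moment sequence if and only if the Hankel matrices $(a_{i+j})_{i,j}$ and $(a_{i+j+1})_{i,j}$ are positive semi-definite. The toy check below uses this standard criterion (not the criteria developed in this paper) on the sequence $a_n = n!$, which arises as $\|S^n e_0\|^2$ for the classical unilateral weighted shift with weights $\lambda_k = \sqrt{k}$ and is the moment sequence of the measure $e^{-x}\,dx$ on ${\mathbb R}_+$.

```python
# Classical Hankel test for a Stieltjes moment sequence: both Hankel
# matrices [a_{i+j}] and [a_{i+j+1}] must be positive semi-definite.
# We check 3x3 leading determinants for a_n = n! = ||S^n e_0||^2, where
# S is the weighted shift with weights lambda_k = sqrt(k).

from math import factorial

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

a = [factorial(n) for n in range(7)]                    # a_n = n!
h0 = [[a[i + j] for j in range(3)] for i in range(3)]   # [a_{i+j}]
h1 = [[a[i + j + 1] for j in range(3)] for i in range(3)]  # [a_{i+j+1}]
print(det3(h0), det3(h1))   # both positive, consistent with Stieltjes
```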
Note that there are weighted shifts on directed trees with nonzero weights, whose squares have trivial domain (directed trees admitting such pathological weighted shifts are the largest possible, cf. [@j-j-s3]). Unfortunately, the known criteria for subnormality that can be applied to such operators seem to be useless (see Section \[cfsub\] for more comments). It was shown in [@j-j-s2] that, in most cases, a normal extension of a nonzero subnormal weighted shift on a directed tree ${{\mathscr T}}$ with nonzero weights cannot be modelled as a weighted shift on a directed tree $\hat {{\mathscr T}}$ (no relationship between ${{\mathscr T}}$ and $\hat{{\mathscr T}}$ is required); the only exceptional cases are those in which the directed tree ${{\mathscr T}}$ is isomorphic either to ${\mathbb Z}$ or to ${\mathbb Z}_+$. Though our Theorem \[main\] provides only sufficient conditions for subnormality of weighted shifts on directed trees, in the case of classical weighted shifts it gives the full characterization (cf. Section \[cws\]). The case of leafless directed trees with one branching vertex is discussed in Section \[obv\] (see [@j-j-s4] for new phenomena that occur for weighted shifts on such simple directed trees). Preliminaries ============= Notation and terminology ------------------------ Let ${\mathbb Z}$, ${\mathbb R}$ and ${\mathbb C}$ stand for the sets of integers, real numbers and complex numbers respectively. Define $$\begin{aligned} \text{${\mathbb Z}_+ = \{0,1,2,3,\ldots\}$, ${\mathbb N}= \{1,2,3,4,\ldots\}$ and ${\mathbb R}_+ = \{x \in {\mathbb R}\colon x {\geqslant}0\}$.} \end{aligned}$$ We write ${{\mathfrak B}({\mathbb R}_+)}$ for the $\sigma$-algebra of all Borel subsets of ${\mathbb R}_+$. The closed support of a positive Borel measure $\mu$ on ${\mathbb R}_+$ is denoted by ${\mathrm{supp}\,\mu}$. We write $\delta_0$ for the Borel probability measure on ${\mathbb R}_+$ concentrated at $0$. 
We denote by ${\mathrm{card}(Y)}$ the cardinal number of a set $Y$. Let $A$ be an operator in a complex Hilbert space ${\mathcal H}$ (all operators considered in this paper are linear). Denote by ${{\EuScript D}(A)}$ and $A^*$ the domain and the adjoint of $A$ (in case it exists). Set ${{\EuScript D}^\infty(A)} = \bigcap_{n=0}^\infty{{\EuScript D}(A^n)}$; members of ${{\EuScript D}^\infty(A)}$ are called [*$C^\infty$-vectors*]{} of $A$. A linear subspace ${\mathcal E}$ of ${{\EuScript D}(A)}$ is said to be a [*core*]{} of $A$ if the graph of $A$ is contained in the closure of the graph of the restriction $A|_{{\mathcal E}}$ of $A$ to ${\mathcal E}$. If $A$ is closed, then ${\mathcal E}$ is a core of $A$ if and only if $A$ coincides with the closure of $A|_{{\mathcal E}}$. A closed densely defined operator $N$ in ${\mathcal H}$ is said to be [*normal*]{} if $N^*N=NN^*$ (equivalently: ${{\EuScript D}(N)}={{\EuScript D}(N^*)}$ and $\|N^*h\|=\|Nh\|$ for all $h \in {{\EuScript D}(N)}$). For other facts concerning unbounded operators (including normal ones) that are needed in this paper we refer the reader to [@b-s; @weid]. A densely defined operator $S$ in ${\mathcal H}$ is said to be [*subnormal*]{} if there exists a complex Hilbert space ${\mathcal K}$ and a normal operator $N$ in ${\mathcal K}$ such that ${\mathcal H}\subseteq {\mathcal K}$ (isometric embedding) and $Sh = Nh$ for all $h \in {{\EuScript D}(S)}$. It is clear that subnormal operators are closable and their closures are subnormal. In what follows, ${\boldsymbol B({\mathcal H})}$ stands for the $C^*$-algebra of all bounded operators $A$ in ${\mathcal H}$ such that ${{\EuScript D}(A)}={\mathcal H}$. We write $\operatorname{\mbox{\sc lin}}{\mathcal F}$ for the linear span of a subset ${\mathcal F}$ of ${\mathcal H}$. Directed trees -------------- Let ${{\mathscr T}}=(V,E)$ be a directed graph (i.e., $V$ is the set of all vertices of ${{\mathscr T}}$ and $E$ is the set of all edges of ${{\mathscr T}}$). 
If for a given vertex $u \in V$, there exists a unique vertex $v\in V$ such that $(v,u)\in E$, then we say that $u$ has a parent $v$ and write ${\operatorname{{\mathsf{par}}}(u)}$ for $v$. Since the correspondence $u \mapsto {\operatorname{{\mathsf{par}}}(u)}$ is a partial function (read: a relation) in $V$, we can compose it with itself $k$-times ($k \in {\mathbb N}$); the result is denoted by $\operatorname{{\mathsf{par}}}^k$ ($\operatorname{{\mathsf{par}}}^0$ is the identity mapping on $V$). A vertex $v$ of ${{\mathscr T}}$ is called a [*root*]{} of ${{\mathscr T}}$, or briefly $v \in {\operatorname{{\mathsf{Root}}}({{\mathscr T}})}$, if there is no vertex $u$ of ${{\mathscr T}}$ such that $(u,v)$ is an edge of ${{\mathscr T}}$. Note that if ${{\mathscr T}}$ is connected and each vertex $v \in V^\circ:=V\setminus {\operatorname{{\mathsf{Root}}}({{\mathscr T}})}$ has a parent, then the set ${\operatorname{{\mathsf{Root}}}({{\mathscr T}})}$ has at most one element (cf. [@j-j-s Proposition 2.1.1]). If ${\operatorname{{\mathsf{Root}}}({{\mathscr T}})}$ is a one-point set, then its unique element is denoted by $\operatorname{{\mathsf{root}}}$. We say that a directed graph ${{\mathscr T}}$ is a [*directed tree*]{} if ${{\mathscr T}}$ is connected, has no circuits and each vertex $v \in V^\circ$ has a parent ${\operatorname{{\mathsf{par}}}(v)}$. Let ${{\mathscr T}}=(V,E)$ be a directed tree. Set ${\operatorname{{\mathsf{Chi}}}(u)} = \{v\in V\colon (u,v)\in E\}$ for $u \in V$. A member of ${\operatorname{{\mathsf{Chi}}}(u)}$ is called a [*child*]{} (or [*successor*]{}) of $u$. We say that ${{\mathscr T}}$ is [*leafless*]{} if $V = V^\prime$, where $V^\prime:=\{u \in V \colon {\operatorname{{\mathsf{Chi}}}(u)} \neq \varnothing\}$. It is clear that every leafless directed tree is infinite. A vertex $u \in V$ is called a [*branching vertex*]{} of ${{\mathscr T}}$ if ${\mathrm{card}({\operatorname{{\mathsf{Chi}}}(u)})} {\geqslant}2$. 
It is well-known that (see e.g., [@j-j-s Proposition 2.1.2]) if ${{\mathscr T}}$ is a directed tree, then ${\operatorname{{\mathsf{Chi}}}(u)} \cap {\operatorname{{\mathsf{Chi}}}(v)} = \varnothing$ for all $u, v\in V$ such that $u \neq v$, and $$\begin{aligned} \label{roz} V^\circ= \bigsqcup_{u\in V} {\operatorname{{\mathsf{Chi}}}(u)}. \end{aligned}$$ (The symbol “$\bigsqcup$” denotes disjoint union of sets.) For a subset $W \subseteq V$, we put ${\operatorname{{\mathsf{Chi}}}(W)} = \bigsqcup_{v \in W} {\operatorname{{\mathsf{Chi}}}(v)}$ and define ${\operatorname{{\mathsf{Chi}}}^{\langle 0\rangle}(W)} = W$, ${\operatorname{{\mathsf{Chi}}}^{\langle n+1\rangle}(W)} = {\operatorname{{\mathsf{Chi}}}({\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(W)})}$ for $n\in {\mathbb Z}_+$ and ${{\operatorname{{\mathsf{Des}}}(W)}} = \bigcup_{n=0}^\infty {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(W)}$. By induction, we have $$\begin{aligned} \label{n+1} {\operatorname{{\mathsf{Chi}}}^{\langle n+1\rangle}(W)} & = \bigcup_{v \in {\operatorname{{\mathsf{Chi}}}(W)}} {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(\{v\})}, \quad n \in {\mathbb Z}_+, \\ \label{chmn} {\operatorname{{\mathsf{Chi}}}^{\langle m\rangle}({\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(W)})} & = {\operatorname{{\mathsf{Chi}}}^{\langle m+n\rangle}(W)}, \quad m,n \in {\mathbb Z}_+. \end{aligned}$$ We shall abbreviate ${\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(\{u\})}$ and ${{\operatorname{{\mathsf{Des}}}(\{u\})}}$ to ${\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)}$ and ${{\operatorname{{\mathsf{Des}}}(u)}}$ respectively. We now state some useful properties of the functions ${\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(\cdot)}$ and ${{\operatorname{{\mathsf{Des}}}(\cdot)}}$. 
If ${{\mathscr T}}$ is a directed tree, then $$\begin{aligned} \label{num4} {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)} &= \{w \in V\colon \operatorname{{\mathsf{par}}}^n(w)=u\}, \quad n \in {\mathbb Z}_+,\, u \in V, \\ \label{dzinn2} {\operatorname{{\mathsf{Chi}}}^{\langle n+1\rangle}(u)} & = \bigsqcup_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(v)}, \quad n \in {\mathbb Z}_+,\, u \in V, \\ \label{num1} {\operatorname{{\mathsf{Chi}}}^{\langle n+1\rangle}(u)} & = \bigsqcup_{v \in {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)}} {\operatorname{{\mathsf{Chi}}}(v)}, \quad n \in {\mathbb Z}_+,\, u \in V, \\ {{\operatorname{{\mathsf{Des}}}(u)}} & = \bigsqcup_{n=0}^\infty {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)}, \quad u\in V, \label{num3} \\ {{\operatorname{{\mathsf{Des}}}(u_1)}} \cap {{\operatorname{{\mathsf{Des}}}(u_2)}} & = \varnothing, \quad u_1, u_2 \in {\operatorname{{\mathsf{Chi}}}(u)},\, u_1 \neq u_2,\, u \in V. \label{num3+} \end{aligned}$$ Equality follows by induction on $n$. Combining with the fact that the sets ${\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)}$, $u \in V$, are pairwise disjoint for every fixed integer $n {\geqslant}0$, we get . Equality follows from the definition of ${\operatorname{{\mathsf{Chi}}}^{\langle n+1\rangle}(u)}$ and . Using the definition of $\operatorname{{\mathsf{par}}}$ and the fact that ${{\mathscr T}}$ has no circuits, we deduce that the sets ${\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)}$, $n \in {\mathbb Z}_+$, are pairwise disjoint. Hence, holds. Assertion can be deduced from and . \[przem\] If ${{\mathscr T}}$ is a directed tree with root, then $V = {{\operatorname{{\mathsf{Des}}}(\operatorname{{\mathsf{root}}})}} = \bigsqcup_{n=0}^\infty {\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(\operatorname{{\mathsf{root}}})}$. 
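The combinatorial identities above are easy to verify on a finite example. The sketch below (the tree is an arbitrary toy, not one considered in the paper) computes the generations $\operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(u)$ and the descendant set $\operatorname{{\mathsf{Des}}}(u)$ for a small rooted directed tree and checks the disjoint-union decomposition $V = \operatorname{{\mathsf{Des}}}(\operatorname{{\mathsf{root}}}) = \bigsqcup_n \operatorname{{\mathsf{Chi}}}^{\langle n\rangle}(\operatorname{{\mathsf{root}}})$.

```python
# Finite sanity check of Chi^<n>(u) and Des(u) on a toy rooted directed
# tree, given as a map u -> Chi(u).  The tree is an arbitrary example.

children = {
    "root": ["a", "b"],
    "a": ["c", "d"],
    "b": [],
    "c": [],
    "d": ["e"],
    "e": [],
}

def chi_n(u, n):
    """Chi^<n>(u): the n-th generation of descendants of u."""
    level = {u}
    for _ in range(n):
        level = {w for v in level for w in children[v]}
    return level

def des(u):
    """Des(u): union of all generations Chi^<n>(u), n = 0, 1, 2, ..."""
    out, n = set(), 0
    while True:
        level = chi_n(u, n)
        if not level:            # finite tree: generations eventually empty
            return out
        out |= level
        n += 1

levels = [chi_n("root", n) for n in range(4)]
print(levels)                        # the generations are pairwise disjoint
print(des("root") == set(children))  # V = Des(root) for a rooted tree
```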
Weighted shifts on directed trees --------------------------------- In what follows, given a directed tree ${{\mathscr T}}$, we tacitly assume that $V$ and $E$ stand for the sets of vertices and edges of ${{\mathscr T}}$ respectively. Denote by $\ell^2(V)$ the Hilbert space of all square summable complex functions on $V$ with the standard inner product ${\langlef,g\rangle} = \sum_{u \in V} f(u) \overline{g(u)}$. For $u \in V$, we define $e_u \in \ell^2(V)$ to be the characteristic function of the one-point set $\{u\}$. Then $\{e_u\}_{u\in V}$ is an orthonormal basis of $\ell^2(V)$. Set ${{\mathscr{E}_V}}= \operatorname{\mbox{\sc lin}}\{e_u\colon u \in V\}$. Given ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V^\circ} \subseteq {\mathbb C}$, we define the operator ${S_{\boldsymbol \lambda}}$ in $\ell^2(V)$ by $$\begin{aligned} \begin{aligned} {{\EuScript D}({S_{\boldsymbol \lambda}})} & = \{f \in \ell^2(V) \colon \varLambda_{{\mathscr T}}f \in \ell^2(V)\}, \\ {S_{\boldsymbol \lambda}}f & = \varLambda_{{\mathscr T}}f, \quad f \in {{\EuScript D}({S_{\boldsymbol \lambda}})}, \end{aligned} \end{aligned}$$ where $\varLambda_{{\mathscr T}}$ is the mapping defined on functions $f\colon V \to {\mathbb C}$ via $$\begin{aligned} \label{lamtauf} (\varLambda_{{\mathscr T}}f) (v) = \begin{cases} \lambda_v \cdot f\big({\operatorname{{\mathsf{par}}}(v)}\big) & \text{ if } v\in V^\circ, \\ 0 & \text{ if } v=\operatorname{{\mathsf{root}}}. \end{cases} \end{aligned}$$ We call ${S_{\boldsymbol \lambda}}$ a [*weighted shift*]{} on the directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$. Now we select some properties of weighted shifts on directed trees that will be needed in this paper (see Propositions 3.1.2, 3.1.3, 3.1.8, 3.4.1, 3.1.7 and 3.1.10 in [@j-j-s]). In what follows, we adopt the convention that $\sum_{v\in\varnothing} x_v=0$. 
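On a finite tree all domain questions disappear, and the defining formula \eqref{lamtauf} can be exercised directly. The Python sketch below (tree and weights are hypothetical, for illustration only) shows how $S_{\boldsymbol\lambda}$ pushes the mass sitting at a vertex to its children, scaled by the weights:

```python
# Finite directed tree and weights (both hypothetical, for illustration only):
# parent is the parent map; lam[v] is the weight attached to a non-root vertex v.
parent = {"a": "root", "b": "root", "c": "a"}
V = ["root", "a", "b", "c"]
lam = {"a": 2.0, "b": 3.0, "c": 0.5}

def S(f):
    """(Lambda_T f)(v) = lam[v] * f(par(v)) for v != root, and 0 at the root."""
    return {v: (lam[v] * f[parent[v]] if v in parent else 0.0) for v in V}

e_root = {v: (1.0 if v == "root" else 0.0) for v in V}
Sf = S(e_root)
print(Sf)  # the mass at the root is pushed to its children a, b, scaled by lam
print(sum(abs(x) ** 2 for x in Sf.values()))  # 2^2 + 3^2 = 13.0
```

Here $S e_{\mathsf{root}} = 2e_a + 3e_b$, so $\|S e_{\mathsf{root}}\|^2 = \sum_{v\in\mathsf{Chi}(\mathsf{root})}|\lambda_v|^2 = 13$, in accordance with the formulas collected in the proposition below.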
\[bas\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V^\circ}$. Then the following assertions hold[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is closed, 2. $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}})}$ if and only if $\sum_{v\in{\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 < \infty$[*;*]{} if $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}})}$, then $$\begin{aligned} \label{eu} {S_{\boldsymbol \lambda}}e_u = \sum_{v\in{\operatorname{{\mathsf{Chi}}}(u)}} \lambda_v e_v \quad \text{and} \quad \|{S_{\boldsymbol \lambda}}e_u\|^2 = \sum_{v\in{\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2, \end{aligned}$$ 3. $\overline{{{\EuScript D}({S_{\boldsymbol \lambda}})}}=\ell^2(V)$ if and only if ${{\mathscr{E}_V}}\subseteq {{\EuScript D}({S_{\boldsymbol \lambda}})}$, 4. if $\overline{{{\EuScript D}({S_{\boldsymbol \lambda}})}}=\ell^2(V)$, then ${{\mathscr{E}_V}}$ is a core of ${S_{\boldsymbol \lambda}}$, 5. ${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$ if and only if $\alpha_{{{\boldsymbol\lambda}}}:=\sup_{u\in V}\sum\nolimits_{v\in{\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 < \infty$[*;*]{} if ${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$, then $\|{S_{\boldsymbol \lambda}}\|^2=\alpha_{{{\boldsymbol\lambda}}}$, 6. if $\overline{{{\EuScript D}({S_{\boldsymbol \lambda}})}}=\ell^2(V)$, then ${{\mathscr{E}_V}}\subseteq {{\EuScript D}({S_{\boldsymbol \lambda}}^*)}$ and $$\begin{aligned} \label{sl*} {S_{\boldsymbol \lambda}}^*e_u= \begin{cases} \overline{\lambda_u} e_{{\operatorname{{\mathsf{par}}}(u)}} & \text{if } u \in V^\circ, \\ 0 & \text{if } u = \operatorname{{\mathsf{root}}}, \end{cases} \quad u \in V, \end{aligned}$$ 7. ${S_{\boldsymbol \lambda}}$ is injective if and only if ${{\mathscr T}}$ is leafless and $\sum_{v\in{\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 > 0$ for every $u\in V$, 8. 
if $\overline{{{\EuScript D}({S_{\boldsymbol \lambda}})}}=\ell^2(V)$ and $\lambda_v \neq 0$ for all $v \in V^\circ$, then $V$ is at most countable. Backward extensions of Stieltjes moment sequences ------------------------------------------------- We say that a sequence $\{t_n\}_{n=0}^\infty$ of real numbers is a [*Stieltjes moment sequence*]{} if there exists a positive Borel measure $\mu$ on ${\mathbb R}_+$ such that $$\begin{aligned} t_{n}=\int_0^\infty s^n \operatorname{d}\mu(s),\quad n\in {\mathbb Z}_+, \end{aligned}$$ where $\int_0^\infty$ means integration over the set ${\mathbb R}_+$; $\mu$ is called a [*representing measure*]{} of $\{t_n\}_{n=0}^\infty$. A Stieltjes moment sequence is said to be [*determinate*]{} if it has only one representing measure. By the Stieltjes theorem (cf.[@sh-tam Theorem  1.3] or [@ber Theorem 6.2.5]), a sequence $\{t_n\}_{n=0}^\infty \subseteq {\mathbb R}$ is a Stieltjes moment sequence if and only if the sequences $\{t_n\}_{n=0}^\infty$ and $\{t_{n+1}\}_{n=0}^\infty$ are positive definite (recall that a sequence $\{t_n\}_{n=0}^\infty \subseteq {\mathbb R}$ is said to be [*positive definite*]{} if $\sum_{k,l=0}^n t_{k+l} \alpha_k \overline{\alpha_l} {\geqslant}0$ for all $\alpha_0,\ldots, \alpha_n \in {\mathbb C}$ and $n \in {\mathbb Z}_+$). It is clear from the definition that $$\begin{aligned} \label{st+1} \text{if $\{t_n\}_{n=0}^\infty$ is a Stieltjes moment sequence, then so is $\{t_{n+1}\}_{n=0}^\infty$.} \end{aligned}$$ The converse is not true in general. For example, the sequence of the form $\{t_{n}\}_{n=0}^\infty=\{t_0,1, 0, 0, \ldots\}$ is never a Stieltjes moment sequence, but $\{t_{n+1}\}_{n=0}^\infty = \{1, 0, 0, \ldots\}$ is (see Lemma \[bext\] below for more detailed discussion of this issue). Moreover, if $\{t_n\}_{n=0}^\infty$ is an indeterminate Stieltjes moment sequence, then so is $\{t_{n+1}\}_{n=0}^\infty$ (see Lemma \[bext\]; see also [@sim Proposition 5.12]). The converse implication fails to hold (cf. 
[@sim Corollary 4.21]; see also [@j-j-s4]). The question of backward extendibility of Hamburger moment sequences has well-known solutions (see e.g., [@wri] and [@sz]). Below, we formulate a solution of a variant of this question for Stieltjes moment sequences (see [@j-j-s Lemma 6.1.2] for the special case of compactly supported representing measures; see also [@cur Proposition 8] for a related matter). \[bext\] Let $\{t_n\}_{n=0}^\infty$ be a Stieltjes moment sequence and let $\vartheta$ be a positive real number. Set $t_{-1}=\vartheta$. Then the following are equivalent[*:*]{} 1. $\{t_{n-1}\}_{n=0}^\infty$ is a Stieltjes moment sequence, 2. $\{t_{n-1}\}_{n=0}^\infty$ is positive definite, 3. there is a representing measure $\mu$ of $\{t_n\}_{n=0}^\infty$ such that[^2] $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}\vartheta$. Moreover, if [*(i)*]{} holds, then the mapping ${\mathscr M}_0(\vartheta) \ni \mu \to \nu_{\mu} \in {\mathscr M}_{-1}(\vartheta)$ defined by $$\begin{aligned} \label{nu} \nu_{\mu}(\sigma) = \int_\sigma \frac 1 s \operatorname{d}\mu(s) + \Big(\vartheta - \int_0^\infty \frac 1 s \operatorname{d}\mu(s)\Big) \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ is a bijection with the inverse $ {\mathscr M}_{-1}(\vartheta) \ni \nu \to \mu_{\nu} \in{\mathscr M}_0(\vartheta)$ given by $$\begin{aligned} \label{mu} \mu_{\nu} ( \sigma) = \int_\sigma s \operatorname{d}\nu (s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ where ${\mathscr M}_0(\vartheta)$ stands for the set of all representing measures $\mu$ of $\{t_n\}_{n=0}^\infty$ such that $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}\vartheta$, and ${\mathscr M}_{-1}(\vartheta)$ for the set of all representing measures $\nu$ of $\{t_{n-1}\}_{n=0}^\infty$. In particular, $\nu_{\mu}(\{0\})=0$ if and only if $\int_0^\infty \frac 1 s \operatorname{d}\mu(s)=\vartheta$. 
If [*(i)*]{} holds and $\{t_n\}_{n=0}^\infty$ is determinate, then $\{t_{n-1}\}_{n=0}^\infty$ is determinate, the unique representing measure $\mu$ of $\{t_n\}_{n=0}^\infty$ satisfies the inequality $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}\vartheta$, and $\nu_{\mu}$ is the unique representing measure of $\{t_{n-1}\}_{n=0}^\infty$. Equivalence (i)$\Leftrightarrow$(ii) follows from the Stieltjes theorem. (iii)$\Rightarrow$(i) Clearly, if $\mu \in {\mathscr M}_0(\vartheta)$, then $t_{n-1}= \int_0^\infty s^n \operatorname{d}\nu_{\mu}(s)$ for all $n \in {\mathbb Z}_+$, which means that $\{t_{n-1}\}_{n=0}^\infty$ is a Stieltjes moment sequence and $\nu_{\mu} \in {\mathscr M}_{-1}(\vartheta)$. (i)$\Rightarrow$(iii) Take $\nu \in {\mathscr M}_{-1}(\vartheta)$. Setting $\mu:=\mu_{\nu}$ (cf. ), we see that $$\begin{aligned} \label{tnrep} t_n = t_{(n+1)-1} = \int_0^\infty s^n s\operatorname{d}\nu(s) = \int_0^\infty s^n \operatorname{d}\mu(s), \quad n \in {\mathbb Z}_+. \end{aligned}$$ It is clear that $\mu(\{0\})=0$ and thus $$\begin{aligned} \int_0^\infty \frac 1 s \operatorname{d}\mu(s) & = \int_{(0,\infty)} \operatorname{d}\nu(s) = \nu((0,\infty)) \\ & = \int_{[0,\infty)} s^0 \operatorname{d}\nu(s) - \nu(\{0\}) = \vartheta - \nu(\{0\}), \end{aligned}$$ which implies that $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}\vartheta$. This, combined with , shows that $\mu \in {\mathscr M}_0(\vartheta)$. 
Since $\nu({\mathbb R}_+)=\vartheta$, we deduce from and the definition of $\mu$ that $$\begin{aligned} \nu_{\mu}(\sigma) &= \int_{\sigma\setminus \{0\}} \frac 1 s \operatorname{d}\mu(s) + \Big(\vartheta-\int_0^\infty \frac 1 s \operatorname{d}\mu(s)\Big) \delta_0(\sigma \cap \{0\}) \\ &= \nu(\sigma\setminus \{0\}) + \Big(\vartheta- \nu((0, \infty))\Big) \delta_0(\sigma \cap \{0\}) \\ &= \nu(\sigma\setminus \{0\}) + \nu(\{0\}) \delta_0(\sigma \cap \{0\}) = \nu(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ which yields $\nu_{\mu} = \nu$. We have proved that, under the assumption (i), the mapping ${\mathscr M}_0(\vartheta) \ni \mu \to \nu_{\mu} \in {\mathscr M}_{-1}(\vartheta)$ is well-defined and surjective. Its injectivity follows from the equality $$\begin{aligned} \mu(\sigma) = \mu(\sigma \setminus \{0\}) = \int_{\sigma \setminus \{0\}} s \operatorname{d}\nu_{\mu}(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, \mu \in {\mathscr M}_0(\vartheta). \end{aligned}$$ This yields the determinacy part of the conclusion. Let us discuss some consequences of Lemma \[bext\]. Suppose that $\{t_n\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence with a representing measure $\mu$. If $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) = \infty$ (e.g., when $\mu(\{0\}) > 0$), then the sequence $\{\vartheta, t_0, t_1, \ldots\}$ is never a Stieltjes moment sequence. In turn, if $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) < \infty$, then the sequence $\{\vartheta, t_0, t_1, \ldots\}$ is a determinate Stieltjes moment sequence if $\vartheta {\geqslant}\int_0^\infty \frac 1 s \operatorname{d}\mu(s)$, and it is not a Stieltjes moment sequence if $\vartheta < \int_0^\infty \frac 1 s \operatorname{d}\mu(s)$. 
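The threshold phenomenon just described can be traced on a concrete measure. In the Python sketch below, $\operatorname{d}\!\mu = 2s\operatorname{d}\!s$ on $[0,1]$ (a hypothetical example, not taken from the text), so $t_n = 2/(n+2)$ and $\int_0^\infty \frac1s \operatorname{d}\!\mu = 2$; the backward extension $\nu_\mu$ of \eqref{nu} places the excess $\vartheta - 2$ in an atom at $0$:

```python
from fractions import Fraction

# Hypothetical example: d mu = 2s ds on [0, 1], so t_n = 2/(n + 2) and
# int 1/s d mu = int_0^1 2 ds = 2.  Backward extension works iff theta >= 2.
def t(n):
    return Fraction(2, n + 2)

theta = Fraction(5, 2)          # choose any theta >= 2
atom_at_zero = theta - 2        # nu_mu = (1/s) d mu + (theta - 2) delta_0

def t_ext(n):
    """Moments of nu_mu; by construction t_ext(n) = t(n - 1) with t(-1) = theta."""
    if n == 0:
        return Fraction(2) + atom_at_zero   # nu_mu(R_+) = 2 + (theta - 2) = theta
    return Fraction(2, n + 1)               # int_0^1 s^n * 2 ds; the atom adds 0

assert t_ext(0) == theta
assert all(t_ext(n) == t(n - 1) for n in range(1, 10))
print([t_ext(n) for n in range(4)])         # theta, t_0, t_1, t_2
```

For $\vartheta < 2$ the atom would have negative mass, matching the assertion that no backward extension exists below the threshold $\int_0^\infty \frac1s\operatorname{d}\!\mu$.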
Under the assumptions of Lemma \[bext\], if $\{t_{n-1}\}_{n=0}^\infty$ is a Stieltjes moment sequence and $t_0 > 0$, then $t_n > 0$ for all $n \in {\mathbb Z}_+$ and $$\begin{aligned} \sup_{n \in {\mathbb Z}_+}\frac{t_n^2}{t_{2n+1}} {\leqslant}\int_0^\infty \frac{1}{s} \operatorname{d}\mu(s) {\leqslant}\vartheta, \quad \mu \in {\mathscr M}_0(\vartheta). \end{aligned}$$ Indeed, since $t_0 > 0$ and $\mu(\{0\})=0$, we verify that $t_n > 0$ for all $n \in {\mathbb Z}_+$. By the Cauchy-Schwarz inequality, we have $$\begin{aligned} t_n^2 = \Big(\int_{(0,\infty)} s^{-\nicefrac12}s^{n+\nicefrac12} \operatorname{d}\mu(s)\Big)^2 {\leqslant}\int_0^\infty \frac{1}{s} \operatorname{d}\mu(s) \int_0^\infty s^{2n+1} \operatorname{d}\mu(s), \quad n \in {\mathbb Z}_+. \end{aligned}$$ Note that if $\{t_n\}_{n=0}^\infty$ is indeterminate, then there is a smallest $\vartheta$ for which the sequence $\{t_{n-1}\}_{n=0}^\infty$ is a Stieltjes moment sequence (see [@j-j-s4] for more details). A General Setting for Subnormality ================================== \[cfsub\]Criteria for subnormality ---------------------------------- The only known general characterization of subnormality of unbounded Hilbert space operators is due to Bishop and Foiaş (cf. [@bis; @foi]; see also [@FHSz] for a new approach via sesquilinear selection of elementary spectral measures). Since this characterization refers to semispectral measures (or elementary spectral measures), it seems to be useless in the context of weighted shifts on directed trees. The other known criteria for subnormality require the operator in question to have an invariant domain (with the exception of [@sz4]). Since a closed subnormal operator with an invariant domain is automatically bounded (see [@las Lemma 2.2(ii)], see also [@ota; @oka]) and a weighted shift operator ${S_{\boldsymbol \lambda}}$ on a directed tree is always closed (cf. 
Proposition \[bas\](i)), we have to find a smaller subspace of ${{\EuScript D}({S_{\boldsymbol \lambda}})}$ which is an invariant core of ${S_{\boldsymbol \lambda}}$. This will enable us to apply the aforesaid criteria for subnormality of operators with invariant domains in the context of weighted shift operators on directed trees (see Section \[sf-a\]). We begin by recalling a characterization of subnormality invented in [@c-s-sz]. \[chsub\] Let $S$ be a densely defined operator in a complex Hilbert space ${\mathcal H}$ such that $S({{\EuScript D}(S)}) \subset {{\EuScript D}(S)}$. Then the following conditions are equivalent[*:*]{} 1. $S$ is subnormal, 2. for every finite system $\{a_{p,q}^{i,j}\}_{p,q = 0, \ldots, n}^{i,j=1, \ldots, m} \subset {\mathbb C}$, if $$\begin{aligned} \label{1} \sum_{i,j=1}^m \sum_{p,q=0}^n a_{p,q}^{i,j} \lambda^p \bar \lambda^q z_i \bar z_j {\geqslant}0, \quad \lambda, z_1, \ldots, z_m \in {\mathbb C}, \end{aligned}$$ then $$\begin{aligned} \sum_{i,j=1}^m \sum_{p,q=0}^n a_{p,q}^{i,j} {\langleS^p f_i,S^q f_j\rangle} {\geqslant}0, \quad f_1, \ldots, f_m \in {{\EuScript D}(S)}. \end{aligned}$$ Using the above characterization, we show that some weak-type limit procedure preserves subnormality (this can also be done with the help of either [@StSz1 Theorem 3] or [@StSz2 Theorem 37]; however these two characterizations take more complicated forms). This is a key tool for proving Theorem \[main\]. \[tw1\] Let $\{S_{\omega}\}_{\omega \in \varOmega}$ be a net of subnormal operators in a complex Hilbert space ${\mathcal H}$ and let $S$ be a densely defined operator in ${\mathcal H}$. Suppose that there is a subset ${\mathcal X}$ of ${\mathcal H}$ such that 1. ${\mathcal X}\subseteq {{\EuScript D}^\infty(S)} \cap \bigcap_{\omega \in \varOmega}{{\EuScript D}^\infty(S_{\omega})}$, 2. ${\mathcal F}:= \operatorname{\mbox{\sc lin}}\bigcup_{n=0}^\infty S^n({\mathcal X})$ is a core of $S$, 3. 
${\langleS^m x,S^n y\rangle} = \lim_{\omega \in \varOmega} {\langleS_{\omega}^m x,S_{\omega}^n y\rangle}$ for all $x,y \in {\mathcal X}$ and $m,n \in {\mathbb Z}_+$. Then $S$ is subnormal. Set ${\mathcal F}_{\omega}=\operatorname{\mbox{\sc lin}}\bigcup_{n=0}^\infty S_{\omega}^n({\mathcal X})$ for $\omega \in \varOmega$. It is clear that $S_{\omega}|_{{\mathcal F}_{\omega}}$ is a subnormal operator in $\overline{{\mathcal F}_{\omega}}$ with an invariant domain. Take a finite system $\{a_{p,q}^{i,j}\}_{p,q = 0, \ldots, n}^{i,j=1, \ldots, m}$ of complex numbers satisfying . Let $f_1, \ldots, f_m$ be arbitrary vectors in ${\mathcal F}$. Then for every $i \in \{1, \ldots, m\}$, there exists a positive integer $r$ and a system $\{\zeta_{x,k}^{(i)}\colon x \in {\mathcal X}, k= 1, \ldots, r\}$ of complex numbers such that the set $\{x \in {\mathcal X}\colon \zeta_{x,k}^{(i)} \neq 0\}$ is finite for every $k\in \{1, \ldots, r\}$, and $f_i = \sum_{x \in {\mathcal X}}\sum_{k=1}^r \zeta_{x,k}^{(i)} S^k x$. Set $f_{i,\omega} = \sum_{x \in {\mathcal X}}\sum_{k=1}^r \zeta_{x,k}^{(i)} S_{\omega}^k x$ for $i \in \{1, \ldots, m\}$ and $\omega \in \varOmega$. Then $f_{i,\omega} \in {\mathcal F}_{\omega}$ for all $i \in \{1, \ldots, m\}$ and $\omega \in \varOmega$. 
Applying Theorem \[chsub\] to the subnormal operators $S_{\omega}|_{{\mathcal F}_{\omega}}$, we get $$\begin{gathered} \sum_{i,j=1}^m \sum_{p,q=0}^n a_{p,q}^{i,j} {\langleS^p f_i,S^q f_j\rangle} = \sum_{i,j=1}^m \sum_{p,q=0}^n \sum_{x,y \in {\mathcal X}} \sum_{k,l=1}^r a_{p,q}^{i,j} \zeta_{x,k}^{(i)} \overline{\zeta_{y,l}^{(j)}} {\langleS^{p+k} x,S^{q+l} y\rangle} \\ \overset{{\rm (iii)}}= \lim_{\omega \in \varOmega} \sum_{i,j=1}^m \sum_{p,q=0}^n \sum_{x,y \in {\mathcal X}} \sum_{k,l=1}^r a_{p,q}^{i,j} \zeta_{x,k}^{(i)} \overline{\zeta_{y,l}^{(j)}} {\langleS_{\omega}^{p+k} x,S_{\omega}^{q+l} y\rangle} \\ = \lim_{\omega \in \varOmega} \sum_{i,j=1}^m \sum_{p,q=0}^n a_{p,q}^{i,j} {\langleS_{\omega}^p f_{i,\omega},S_{\omega}^q f_{j,\omega}\rangle} {\geqslant}0. \end{gathered}$$ This means that the operator $S|_{{\mathcal F}}$ satisfies condition (ii) of Theorem \[chsub\]. Since $S|_{{\mathcal F}}$ has an invariant domain, we deduce from Theorem \[chsub\] that $S|_{{\mathcal F}}$ is subnormal. Combining the latter with the assumption that ${\mathcal F}$ is a core of $S$, we see that $S$ itself is subnormal. This completes the proof. We say that a densely defined operator $S$ in a complex Hilbert space ${\mathcal H}$ is [*cyclic*]{} with a [*cyclic vector*]{} $e \in {\mathcal H}$ if $e \in {{\EuScript D}^\infty(S)}$ and $\operatorname{\mbox{\sc lin}}\{S^n e\colon n=0,1, \ldots\}$ is a core of $S$. Let $\{S_{\omega}\}_{\omega \in \varOmega}$ be a net of subnormal operators in a complex Hilbert space ${\mathcal H}$ and let $S$ be a cyclic operator in ${\mathcal H}$ with a cyclic vector $e$ such that 1. $e \in \bigcap_{\omega \in \varOmega}{{\EuScript D}^\infty(S_{\omega})}$, 2. ${\langleS^m e,S^n e\rangle} = \lim_{\omega \in \varOmega} {\langleS_{\omega}^m e,S_{\omega}^n e\rangle}$ for all $m,n \in {\mathbb Z}_+$. Then $S$ is subnormal. The following fact can be proved in much the same way as Theorem \[tw1\]. 
\[tw1+1\] Let $S$ be a densely defined operator in a complex Hilbert space ${\mathcal H}$. Suppose that there are a family $\{{\mathcal H}_\omega\}_{\omega \in \varOmega}$ of closed linear subspaces of ${\mathcal H}$ and an upward directed family $\{{\mathcal X}_\omega\}_{\omega \in \varOmega}$ of subsets of ${\mathcal H}$ such that 1. ${\mathcal X}_\omega \subseteq {{\EuScript D}^\infty(S)}$ and $S^n({\mathcal X}_\omega) \subseteq {\mathcal H}_\omega$ for all $n\in {\mathbb Z}_+$ and $\omega \in \varOmega$, 2. ${\mathcal F}_\omega:=\operatorname{\mbox{\sc lin}}\bigcup_{n=0}^\infty S^n({\mathcal X}_\omega)$ is dense in ${\mathcal H}_\omega$ for every $\omega \in \varOmega$, 3. $S|_{{\mathcal F}_\omega}$ is a subnormal operator in ${\mathcal H}_\omega$ for every $\omega \in \varOmega$, 4. ${\mathcal F}:=\operatorname{\mbox{\sc lin}}\bigcup_{n=0}^\infty S^n\big(\bigcup_{\omega \in \varOmega} {\mathcal X}_\omega\big)$ is a core of $S$. Then $S$ is subnormal. Clearly, the families $\{{\mathcal F}_\omega\}_{\omega \in \varOmega}$ and $\{{\mathcal H}_\omega\}_{\omega \in \varOmega}$ are upward directed, $S({\mathcal F}_\omega) \subseteq {\mathcal F}_\omega$ for all $\omega \in \varOmega$, ${\mathcal F}= \bigcup_{\omega \in \varOmega} {\mathcal F}_\omega$ and $S({\mathcal F}) \subseteq {\mathcal F}$. Hence, we can argue as in the proof of Theorem  \[tw1\]. \[subs1\]Necessity ------------------ We begin by recalling a well-known fact that $C^\infty$-vectors of a subnormal operator always generate Stieltjes moment sequences. \[necess-gen\] If $S$ is a subnormal operator in a complex Hilbert space ${\mathcal H}$, then ${{\EuScript D}^\infty(S)} = {\mathscr S(S)}$, where ${\mathscr S(S)}$ stands for the set of all vectors $f \in {{\EuScript D}^\infty(S)}$ such that the sequence $\{\|S^n f\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence. 
Let $N$ be a normal extension of $S$ acting in a complex Hilbert space ${\mathcal K}\supseteq {\mathcal H}$ and let $E$ be the spectral measure of $N$. Define the mapping $\phi \colon {\mathbb C}\to {\mathbb R}_+$ by $\phi(z)=|z|^2$, $z \in {\mathbb C}$. Since evidently ${{\EuScript D}^\infty(S)} \subseteq {{\EuScript D}^\infty(N)}$, we deduce from the measure transport theorem (cf. [@b-s Theorem 5.4.10]) that for every $f \in {{\EuScript D}^\infty(S)}$, $$\begin{aligned} \|S^n f\|^2 = \|N^n f\|^2 &= \Big\|\int_{{\mathbb C}} z^n E(\operatorname{d}z)f\Big\|^2 \\ &= \int_{{\mathbb C}} \phi(z)^n {\langleE(\operatorname{d}z)f,f\rangle} = \int_0^\infty t^n {\langleF(\operatorname{d}t)f,f\rangle}, \quad n \in {\mathbb Z}_+, \end{aligned}$$ where $F$ is the spectral measure on ${\mathbb R}_+$ given by $F(\sigma) = E(\phi^{-1}(\sigma))$ for $\sigma \in {{\mathfrak B}({\mathbb R}_+)}$. This implies that ${{\EuScript D}^\infty(S)} \subseteq {\mathscr S(S)}$. Note that there are closed symmetric operators (that are always subnormal due to [@a-g Theorem 1 in Appendix I.2]) whose squares have trivial domain (cf. [@nai; @cher]). It follows from Proposition \[necess-gen\] that if $S$ is a subnormal operator in a complex Hilbert space ${\mathcal H}$ with an invariant domain, then $S$ is densely defined and ${{\EuScript D}(S)}={\mathscr S(S)}$. One might expect that the reverse implication would hold as well. This is really the case for bounded operators (cf. [@Lam]) and for some unbounded operators that have sufficiently many analytic vectors (cf. [@StSz1 Theorem 7]). In Section \[cfs\] we show that this is also the case for weighted shifts on directed trees that have sufficiently many quasi-analytic vectors (see Theorem \[main-0\]). However, in general, this is not the case. 
Indeed, one can construct a densely defined operator $N$ in a complex Hilbert space ${\mathcal H}$ which is not subnormal and which has the following properties (see [@Cod; @Sch; @sto-ark]): $$\begin{gathered} \label{fn1} N({{\EuScript D}(N)}) \subseteq {{\EuScript D}(N)}, \, {{\EuScript D}(N)} \subseteq {{\EuScript D}(N^*)}, \, N^*({{\EuScript D}(N)}) \subseteq {{\EuScript D}(N)} \\ \text{ and } N^*Nf = NN^*f \text{ for all } f\in {{\EuScript D}(N)}. \label{fn2} \end{gathered}$$ We show that for such $N$, ${{\EuScript D}(N)}={\mathscr S(N)}$. Indeed, by and , we have $$\begin{aligned} \sum_{k,l=0}^n \|N^{k+l}f\|^2 \alpha_k \overline{\alpha_l} = \sum_{k,l=0}^n {\langle(N^*N)^{k+l}f,f\rangle} \alpha_k \overline{\alpha_l} = \Big\|\sum_{k=0}^n \alpha_k (N^*N)^k f\Big\|^2 {\geqslant}0, \end{aligned}$$ for all $f \in {{\EuScript D}(N)}$, $n \in {\mathbb Z}_+$ and $\alpha_0,\ldots, \alpha_n \in {\mathbb C}$, which means that the sequence $\{\|N^{n}f\|^2\}_{n=0}^\infty$ is positive definite for every $f \in {{\EuScript D}(N)}$. Replacing $f$ by $Nf$, we see that the sequence $\{\|N^{n+1}f\|^2\}_{n=0}^\infty$ is positive definite for every $f \in {{\EuScript D}(N)}$. Applying the Stieltjes theorem, we conclude that ${{\EuScript D}(N)}={\mathscr S(N)}$. Towards Subnormality of Weighted Shifts ======================================= Powers of weighted shifts ------------------------- Let ${{\mathscr T}}=(V,E)$ be a directed tree. Given a family $\{\lambda_v\}_{v \in V^\circ}$ of complex numbers, we define the family $\{\lambda_{u\mid v}\}_{u \in V, v \in {{\operatorname{{\mathsf{Des}}}(u)}}}$ by $$\begin{aligned} \label{luv} \lambda_{u\mid v} = \begin{cases} 1 & \text{ if } v=u, \\ \prod_{j=0}^{n-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(v)} & \text{ if } v \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}, \, n {\geqslant}1. 
\end{cases} \end{aligned}$$ Note that, due to \eqref{num3}, the above definition is correct and $$\begin{aligned} \label{num2} \lambda_{u\mid w} & = \lambda_{u\mid v}\lambda_w, \quad w \in {\operatorname{{\mathsf{Chi}}}(v)}, \, v\in {{\operatorname{{\mathsf{Des}}}(u)}}, \, u \in V, \\ \lambda_{{\operatorname{{\mathsf{par}}}(v)}\mid w} & = \lambda_v \lambda_{v\mid w}, \quad v \in V^\circ, \, w\in {{\operatorname{{\mathsf{Des}}}(v)}}. \label{recfor2} \end{aligned}$$ The following lemma is a generalization of [@j-j-s Lemma 6.1.1] to the case of unbounded operators. Below, we maintain our general convention that $\sum_{v\in\varnothing} x_v=0$. \[lem4\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V^\circ}$. Fix $u \in V$ and $n \in {\mathbb Z}_+$. Then the following assertions hold[*:*]{} 1. $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^n)}$ if and only if $\sum_{v \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}} |\lambda_{u\mid v}|^2 < \infty$ for all integers $m$ such that $1 {\leqslant}m {\leqslant}n$, 2. if $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^n)}$, then ${S_{\boldsymbol \lambda}}^n e_u = \sum_{v \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} \lambda_{u\mid v} \, e_v$, 3. if $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^n)}$, then $\|{S_{\boldsymbol \lambda}}^n e_u\|^2 = \sum_{v \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2$. For $k \in {\mathbb Z}_+$, we define the complex function ${{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k \rangle}_{u|\cdot}}$ on $V$ by $$\begin{aligned} \label{deflam} {{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k \rangle}_{u|v}} = \begin{cases} \lambda_{u|v} & \text{ if } v \in {\operatorname{{\mathsf{Chi}}}^{\langlek\rangle}(u)}, \\ 0 & \text{ if } v \in V \setminus {\operatorname{{\mathsf{Chi}}}^{\langlek\rangle}(u)}. 
\end{cases} \end{aligned}$$ We shall prove that for every $k\in {\mathbb Z}_+$, $$\begin{gathered} \label{small} e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^k)} \text{ if and only if } \sum_{v \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}} |\lambda_{u\mid v}|^2 < \infty \text{ for } m = 0,1, \ldots, k, \end{gathered}$$ and $$\begin{gathered} \label{small2} \text{if } e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^k)}, \text{ then } {S_{\boldsymbol \lambda}}^k e_u = {{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k \rangle}_{u|\cdot}}. \end{gathered}$$ We use an induction on $k$. The case of $k=0$ is obvious. Suppose that and hold for all nonnegative integers less than or equal to $k$. Assume that $e_u \in {{\EuScript D}({S_{\boldsymbol \lambda}}^k)}$. Now we compute $\varLambda_{{\mathscr T}}({S_{\boldsymbol \lambda}}^k e_u)$. It follows from the induction hypothesis and that $$\begin{aligned} (\varLambda_{{\mathscr T}}({S_{\boldsymbol \lambda}}^k e_u))(v) &\overset{\eqref{lamtauf}}= \begin{cases} \lambda_v ({S_{\boldsymbol \lambda}}^k e_u)(\operatorname{{\mathsf{par}}}(v)) & \text{ if } v \in V^\circ, \\ 0 & \text{ if } v=\operatorname{{\mathsf{root}}}, \end{cases} \\ &\overset{\eqref{small2}}= \begin{cases} \lambda_v {{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k \rangle}_{u|\operatorname{{\mathsf{par}}}(v)}} & \text{ if } \operatorname{{\mathsf{par}}}(v) \in {\operatorname{{\mathsf{Chi}}}^{\langlek\rangle}(u)}, \\ 0 & \text{ otherwise,} \end{cases} \\ & \overset{\eqref{num4}}= \begin{cases} \lambda_v \lambda_{u|\operatorname{{\mathsf{par}}}(v)} & \text{ if } v \in {\operatorname{{\mathsf{Chi}}}^{\langlek+1\rangle}(u)}, \\ 0 & \text{ otherwise,} \end{cases} \\ & \overset{\eqref{num2}}= \begin{cases} \lambda_{u|v} & \text{ if } v \in {\operatorname{{\mathsf{Chi}}}^{\langlek+1\rangle}(u)}, \\ 0 & \text{ otherwise,} \end{cases} \\ &\hspace{1.7ex} = {{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k+1 \rangle}_{u|v}}, \quad v \in V, 
\end{aligned}$$ which shows that $\varLambda_{{\mathscr T}}({S_{\boldsymbol \lambda}}^k e_u) = {{{\boldsymbol\lambda}}^{\hspace{-.3ex} \langle k+1 \rangle}_{u|\cdot}}$. This in turn implies that and hold for $k+1$ in place of $k$. This proves (i) and (ii). Assertion (iii) is a direct consequence of (ii). In the context of weighted shifts on directed trees, the key assumption (iii) of Theorem \[tw1\] can be verified by using the following relatively simple criterion that may be of independent interest. \[potegi\] If ${{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}=\big\{{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}\big\}_{v \in V^\circ}$, $i=1,2,3, \ldots$, and ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v\in V^\circ}$ are families of complex numbers such that 1. ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})} \cap \bigcap_{i=1}^\infty {{\EuScript D}^\infty(S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}})}$, 2. $\lim_{i \to \infty} {{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}} = \lambda_v$ for all $v \in V^\circ$, 3. $\lim_{i \to \infty} \|S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\| = \|{S_{\boldsymbol \lambda}}^n e_u\|$ for all $n \in {\mathbb Z}_+$ and $u \in V$, then $$\begin{aligned} \label{slim+} {\langle{S_{\boldsymbol \lambda}}^m e_u,{S_{\boldsymbol \lambda}}^n e_v\rangle} = \lim_{i \to \infty} {\langleS_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^m e_u,S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_v\rangle}, \quad u,v \in V, \, m,n \in {\mathbb Z}_+. \end{aligned}$$ We split the proof into two steps. 
[Step 1.]{} If ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v\in V^\circ}$ is a family of complex numbers such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$, then for all $m,n \in {\mathbb Z}_+$ and $u,v \in V$, $$\begin{aligned} \label{smsn} {\langle{S_{\boldsymbol \lambda}}^m e_u,{S_{\boldsymbol \lambda}}^n e_v\rangle} = \begin{cases} 0 & \text{ if } {\mathcal C}^{m,n}(u,v) = \varnothing, \\ \overline{\lambda_{v|u}} \, \|{S_{\boldsymbol \lambda}}^m e_u\|^2 & \text{ if } {\mathcal C}^{m,n}(u,v) \neq \varnothing \text{ and } m{\leqslant}n, \\ \lambda_{u|v} \, \|{S_{\boldsymbol \lambda}}^n e_v\|^2 & \text{ if } {\mathcal C}^{m,n}(u,v) \neq \varnothing \text{ and } m >n, \end{cases} \end{aligned}$$ where ${\mathcal C}^{m,n}(u,v) := {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)} \cap {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}$. Indeed, it follows from Lemma \[lem4\] that $$\begin{aligned} \begin{aligned} \label{slmsln} {\langle{S_{\boldsymbol \lambda}}^m e_u,{S_{\boldsymbol \lambda}}^n e_v\rangle} & = \Big\langle\sum_{u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}} \lambda_{u\mid u^\prime} \, e_{u^\prime}, \sum_{v^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}} \lambda_{v\mid v^\prime} \, e_{v^\prime}\Big\rangle \\ & = \sum_{u^\prime \in {\mathcal C}^{m,n}(u,v)} \lambda_{u\mid u^\prime} \overline{\lambda_{v\mid u^\prime}}. \end{aligned} \end{aligned}$$ Hence, if ${\mathcal C}^{m,n}(u,v) = \varnothing$, then the left-hand side of is equal to $0$ as required. Suppose now that ${\mathcal C}^{m,n}(u,v) \neq \varnothing$ and $m {\leqslant}n$. Then $$\begin{aligned} \label{num8} {\mathcal C}^{m,n}(u,v)={\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}. \end{aligned}$$ To show this, take $w \in {\mathcal C}^{m,n}(u,v)$. 
Then, by , $u=\operatorname{{\mathsf{par}}}^m(w)$ and $$\begin{aligned} v = \operatorname{{\mathsf{par}}}^{n}(w) = \operatorname{{\mathsf{par}}}^{n-m}(\operatorname{{\mathsf{par}}}^m(w)) = \operatorname{{\mathsf{par}}}^{n-m}(u), \end{aligned}$$ which, by again, is equivalent to $$\begin{aligned} \label{num7} u \in {\operatorname{{\mathsf{Chi}}}^{\langlen-m\rangle}(v)}. \end{aligned}$$ This implies that $$\begin{aligned} \label{num6} {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)} \subseteq {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}({\operatorname{{\mathsf{Chi}}}^{\langlen-m\rangle}(v)})} \overset{\eqref{chmn}}{=} {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}. \end{aligned}$$ Thus holds. Next, we show that $$\begin{aligned} \label{num5} \lambda_{v\mid u^\prime} = \lambda_{u\mid u^\prime} \lambda_{v|u}, \quad u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}. \end{aligned}$$ It is enough to consider the case where $m{\geqslant}1$ and $n > m$. Since $u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}$, we infer from that $u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}$. Moreover, by , $u \in {\operatorname{{\mathsf{Chi}}}^{\langlen-m\rangle}(v)}$. All these facts together with imply that $$\begin{gathered} \lambda_{v|u^\prime} = \prod_{j=0}^{n-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(u^\prime)} = \prod_{j=0}^{m-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(u^\prime)} \prod_{j=m}^{n-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(u^\prime)} \\ \overset{\eqref{luv}}= \lambda_{u\mid u^\prime} \prod_{j=0}^{n-m-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(\operatorname{{\mathsf{par}}}^{m}(u^\prime))} \overset{\eqref{num4}}= \lambda_{u\mid u^\prime} \prod_{j=0}^{n-m-1} \lambda_{\operatorname{{\mathsf{par}}}^{j}(u)} \overset{\eqref{luv}}= \lambda_{u\mid u^\prime} \lambda_{v|u}, \end{gathered}$$ which completes the proof of . 
Now applying , , and Lemma \[lem4\](iii), we obtain $$\begin{aligned} {\langle{S_{\boldsymbol \lambda}}^m e_u,{S_{\boldsymbol \lambda}}^n e_v\rangle} & = \sum_{u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}} \lambda_{u\mid u^\prime} \overline{\lambda_{v\mid u^\prime}} \\ & \hspace{-2.2ex} \overset{\eqref{num5}}= \overline{\lambda_{v|u}} \sum_{u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlem\rangle}(u)}} |\lambda_{u\mid u^\prime}|^2 = \overline{\lambda_{v|u}} \, \|{S_{\boldsymbol \lambda}}^m e_u\|^2. \end{aligned}$$ Taking the complex conjugate and making appropriate substitutions, we infer from the above that ${\langle{S_{\boldsymbol \lambda}}^m e_u,{S_{\boldsymbol \lambda}}^n e_v\rangle} = \lambda_{u|v} \, \|{S_{\boldsymbol \lambda}}^n e_v\|^2$ if ${\mathcal C}^{m,n}(u,v) \neq \varnothing$ and $m >n$, which completes the proof of Step 1. [Step 2.]{} Under the assumptions of Proposition \[potegi\], equality holds. Indeed, it follows from (ii) that $$\begin{aligned} \label{wzj+} \lim_{i \to \infty} {{\lambda_{u\mid v}^{\hspace{-.3ex}{\langle} i \rangle}}} = \lambda_{u\mid v}, \quad u \in V, v \in {{\operatorname{{\mathsf{Des}}}(u)}}, \end{aligned}$$ where $\{{{\lambda_{u\mid v}^{\hspace{-.3ex}{\langle} i \rangle}}}\}_{u \in V, v \in {{\operatorname{{\mathsf{Des}}}(u)}}}$ is the family related to $\big\{{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}\big\}_{v \in V^\circ}$ via . Now, applying Step 1 to the operators $S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}$ and ${S_{\boldsymbol \lambda}}$ (which is possible due to (i)) and using and (iii), we obtain . A consistency condition ----------------------- The following is an immediate consequence of Proposition \[necess-gen\]. 
\[necess\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. If ${S_{\boldsymbol \lambda}}$ is subnormal, then for every $u \in V$ the sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence. The converse of the implication in Proposition \[necess\] is valid for bounded weighted shifts on directed trees. \[charsub\] Let ${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V^\circ}$. Then ${S_{\boldsymbol \lambda}}$ is subnormal if and only if $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $u \in V$. The case of unbounded weighted shifts is discussed in Theorem \[main-0\]. If ${S_{\boldsymbol \lambda}}$ is a subnormal weighted shift on a directed tree ${{\mathscr T}}$, then in view of Proposition \[necess\] we can attach to each vertex $u \in V$ a representing measure $\mu_u$ of the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ (of course, since the sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is not determinate in general, we have to choose one of them); note that any such $\mu_u$ is a probability measure. Hence, it is tempting to find relationships between these representing measures. This has been done in the case of bounded weighted shifts in [@j-j-s Lemma 6.1.10]. What is stated below is an adaptation of this lemma (and its proof) to the unbounded case. As opposed to the bounded case, implication $1^\circ \Rightarrow 2^\circ$ of Lemma \[charsub2\] below is not true in general (cf. [@j-j-s4]).
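Theorem \[charsub\] reduces subnormality (in the bounded case) to verifying that certain sequences are Stieltjes moment sequences. Whether a given sequence is a Stieltjes moment sequence can be tested through the classical Hankel conditions: $(s_n)_{n=0}^\infty$ is a Stieltjes moment sequence if and only if both Hankel kernels $(s_{i+j})_{i,j}$ and $(s_{i+j+1})_{i,j}$ are positive semidefinite; a finite prefix of the sequence yields a necessary condition. The following is a minimal numerical sketch; the sample sequences and the tolerance are illustrative choices, not taken from the text.

```python
import numpy as np

def stieltjes_check(s, tol=1e-9):
    """Finite-order Hankel test for a Stieltjes moment sequence
    s = (s_0, ..., s_{2k+1}): both (s_{i+j}) and (s_{i+j+1}) must be
    positive semidefinite.  (Positivity at every order characterizes
    Stieltjes moment sequences; a finite prefix gives a necessary test.)"""
    k = (len(s) - 2) // 2
    H0 = np.array([[s[i + j] for j in range(k + 1)] for i in range(k + 1)])
    H1 = np.array([[s[i + j + 1] for j in range(k + 1)] for i in range(k + 1)])
    return (np.linalg.eigvalsh(H0).min() >= -tol and
            np.linalg.eigvalsh(H1).min() >= -tol)

# Moments of mu = (delta_1 + delta_4)/2, i.e. s_n = (1 + 4^n)/2 -- a genuine
# Stieltjes moment sequence -- versus a perturbed sequence that fails the test.
good = [(1 + 4**n) / 2 for n in range(8)]
bad = list(good); bad[2] -= 5.0    # now s_0 s_2 - s_1^2 = 3.5 - 6.25 < 0
print(stieltjes_check(good), stieltjes_check(bad))  # True False
```

The perturbed sequence fails already on the $2\times 2$ principal minor of the first Hankel matrix, which is how the test detects it.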
\[charsub2\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. Let $u \in V^\prime$. Suppose that for every $v \in {\operatorname{{\mathsf{Chi}}}(u)}$ the sequence $\{\|{S_{\boldsymbol \lambda}}^n e_v\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence with a representing measure $\mu_v$. Consider the following two conditions[^3][*:*]{} 1. $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence, 2. ${S_{\boldsymbol \lambda}}$ satisfies the consistency condition at the vertex $u$, i.e., $$\begin{aligned} \label{alanconsi} \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty \frac 1 s\, \operatorname{d}\mu_v(s) {\leqslant}1. \end{aligned}$$ Then the following assertions are valid[*:*]{} 1. if $2^\circ$ holds, then so does $1^\circ$ and the positive Borel measure $\mu_u$ on ${\mathbb R}_+$ defined by $$\begin{aligned} \label{muu+} \mu_u(\sigma) = \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_\sigma \frac 1 s \operatorname{d}\mu_v(s) + \varepsilon_u \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ with $$\begin{aligned} \label{muu++} \varepsilon_u=1 - \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty \frac 1 s \operatorname{d}\mu_v(s) \end{aligned}$$ is a representing measure of $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$, 2. if $1^\circ$ holds and $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is determinate, then $2^\circ$ holds, the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is determinate and its unique representing measure $\mu_u$ is given by and . 
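Before turning to the proof, assertion (i) can be illustrated numerically in the simplest branching situation: one parent $u$ with two children carrying atomic representing measures. By the recursion established in the proof below, $\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2 = \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \|{S_{\boldsymbol \lambda}}^n e_v\|^2$, so with $\mu_{v_1}=\delta_p$ and $\mu_{v_2}=\delta_q$ the parent's moments are computable in closed form and can be compared with the moments of the measure $\mu_u$ built in assertion (i). All concrete numbers below are hypothetical choices used only for illustration.

```python
import numpy as np

# One parent u with children v1, v2; atomic representing measures
# mu_v1 = delta_p, mu_v2 = delta_q, so ||S^n e_{v1}||^2 = p^n, ||S^n e_{v2}||^2 = q^n.
p, q = 2.0, 5.0
l1, l2 = 1.0, 1.2                      # weights lambda_{v1}, lambda_{v2} (hypothetical)
eps_u = 1 - (l1**2 / p + l2**2 / q)    # the mass that assertion (i) places at 0
assert eps_u >= 0                      # consistency condition at u holds

# mu_u from assertion (i): atoms at p, q and possibly 0, with these masses.
atoms = np.array([p, q, 0.0])
masses = np.array([l1**2 / p, l2**2 / q, eps_u])
assert abs(masses.sum() - 1.0) < 1e-12     # mu_u is a probability measure

# Check that mu_u represents {||S^n e_u||^2}: by the recursion,
# ||S^n e_u||^2 = l1^2 p^(n-1) + l2^2 q^(n-1) for n >= 1 (and = 1 for n = 0).
for n in range(6):
    lhs = (masses * atoms**n).sum()        # int s^n dmu_u(s)
    rhs = 1.0 if n == 0 else l1**2 * p**(n - 1) + l2**2 * q**(n - 1)
    assert abs(lhs - rhs) < 1e-9
print("mu_u from assertion (i) represents the parent's moment sequence")
```

The factor $1/s$ in the defining integral is exactly what shifts each child's atom mass so that the $n$-th moment of $\mu_u$ reproduces $\|{S_{\boldsymbol \lambda}}^n e_u\|^2$.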
Define the positive Borel measure $\mu$ on ${\mathbb R}_+$ by $$\begin{aligned} \mu(\sigma) = \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \mu_v(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{aligned}$$ It is a matter of routine to show that $$\begin{aligned} \label{leb2} \int_0^\infty f \operatorname{d}\mu = \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty f \operatorname{d}\mu_v \end{aligned}$$ for every Borel function $f\colon {[0,\infty)} \to [0,\infty]$. Using the inclusion ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and applying Lemma \[lem4\](iii) twice, we obtain $$\begin{aligned} \|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2 & \hspace{2.2ex}= \sum_{w \in {\operatorname{{\mathsf{Chi}}}^{\langlen+1\rangle}(u)}} |\lambda_{u\mid w}|^2 \\ & \hspace{.4ex} \overset{\eqref{dzinn2}}= \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} \sum_{w \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}} |\lambda_{u\mid w}|^2 \notag \\ & \hspace{.4ex} \overset{\eqref{recfor2}}= \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \sum_{w \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)}} |\lambda_{v\mid w}|^2 \notag \\ & \hspace{2.2ex} =\sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \|{S_{\boldsymbol \lambda}}^n e_v\|^2, \quad n \in {\mathbb Z}_+. \notag \end{aligned}$$ This implies that $$\begin{aligned} \|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2 = \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty s^n \, \operatorname{d}\mu_v(s) \overset{\eqref{leb2}}= \int_0^\infty s^n \operatorname{d}\mu(s), \quad n \in {\mathbb Z}_+. \end{aligned}$$ Hence the sequence $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence with a representing measure $\mu$. Set $t_n = \|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2$ for $n \in {\mathbb Z}_+$, and $t_{-1}=1$. 
Note that $$\begin{aligned} t_{n-1}=\|{S_{\boldsymbol \lambda}}^{n} e_u\|^2, \quad n \in {\mathbb Z}_+. \end{aligned}$$ Suppose that $2^\circ$ holds. Then, by and , we have $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}1$. Applying implication (iii)$\Rightarrow$(i) of Lemma \[bext\], we see that $1^\circ$ holds, and, by , the measure $\mu_u$ defined by and is a representing measure of the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$. Suppose now that $1^\circ$ holds and the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is determinate. It follows from implication (i)$\Rightarrow$(iii) of Lemma \[bext\] that there is a representing measure $\mu^\prime$ of $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ such that $\int_0^\infty \frac 1 s \operatorname{d}\mu^\prime(s) {\leqslant}1$. Since $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is determinate, we get $\mu^\prime=\mu$, which implies $2^\circ$. The remaining part of assertion (ii) follows from the last assertion of Lemma \[bext\]. Now we prove that the determinacy of appropriate Stieltjes moment sequences attached to a weighted shift on a directed tree implies the existence of a consistent system of measures (see also Corollary \[necessdet2\]). As shown in [@j-j-s4], Lemma \[2necess+\] below is no longer true if the assumption on determinacy is dropped (though, by Lemma \[lem3\](iv), the converse of Lemma \[2necess+\] is true without assuming determinacy). \[2necess+\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. 
Assume that for every $u \in V^\prime$, the sequence $\{\|{S_{\boldsymbol \lambda}}^{n} e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence, and that the Stieltjes moment sequence[^4] $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is determinate. Then there exist a system $\{\mu_u\}_{u \in V}$ of Borel probability measures on ${\mathbb R}_+$ and a system $\{\varepsilon_u\}_{u \in V}$ of nonnegative real numbers that satisfy for every $u \in V$. By Lemma \[bext\], the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is determinate for every $u\in V^\prime$. For $u \in V^\prime$, we denote by $\mu_u$ the unique representing measure of $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$. If $u \in V \setminus V^\prime$, then we put $\mu_u=\delta_0$. Using Lemma \[charsub2\](ii), we verify that the system $\{\mu_u\}_{u \in V}$ satisfies with $\{\varepsilon_u\}_{u \in V}$ defined by . This completes the proof. A hereditary property --------------------- Given a weighted shift ${S_{\boldsymbol \lambda}}$ on ${{\mathscr T}}$, we say that a vertex $u \in V$ [*generates*]{} a Stieltjes moment sequence (with respect to ${S_{\boldsymbol \lambda}}$) if $e_u \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and the sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence. We have shown in Lemma \[charsub2\] that in many cases the parent generates a Stieltjes moment sequence whenever its children do so. If the parent generates a Stieltjes moment sequence, then in general its children do not do so (cf. [@j-j-s Example 6.1.6]). However, if the parent has only one child and generates a Stieltjes moment sequence, then its child does so. 
\[charsub-1\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V^\circ}$ and let $u_0, u_1 \in V$ be such that ${\operatorname{{\mathsf{Chi}}}(u_0)} = \{u_1\}$. Suppose that $e_{u_0} \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$, $\{\|{S_{\boldsymbol \lambda}}^n e_{u_0}\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence and $\lambda_{u_1}\neq 0$. Then $e_{u_1} \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and $\{\|{S_{\boldsymbol \lambda}}^n e_{u_1}\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence. Moreover, the following assertions hold[*:*]{} 1. the mapping ${\mathscr M}_{u_1}^{\mathrm b}({{\boldsymbol\lambda}}) \ni \mu \to \rho_{\mu} \in {\mathscr M}_{u_0}({{\boldsymbol\lambda}})$ defined by $$\begin{aligned} \rho_{\mu}(\sigma) = |\lambda_{u_1}|^2 \int_\sigma \frac 1 s \operatorname{d}\mu(s) + \Big(1 - |\lambda_{u_1}|^2 \int_0^\infty \frac 1 s \operatorname{d}\mu(s)\Big) \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ is a bijection with the inverse ${\mathscr M}_{u_0}({{\boldsymbol\lambda}}) \ni \rho \to \mu_{\rho} \in {\mathscr M}_{u_1}^{\mathrm b}({{\boldsymbol\lambda}})$ given by $$\begin{aligned} \mu_{\rho} ( \sigma) = \frac 1 {|\lambda_{u_1}|^2} \int_\sigma s \operatorname{d}\rho (s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ where ${\mathscr M}_{u_1}^{\mathrm b}({{\boldsymbol\lambda}})$ is the set of all representing measures $\mu$ of $\{\|{S_{\boldsymbol \lambda}}^n e_{u_1}\|^2\}_{n=0}^\infty$ such that $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) {\leqslant}\frac{1}{|\lambda_{u_1}|^2}$, and ${\mathscr M}_{u_0}({{\boldsymbol\lambda}})$ is the set of all representing measures $\rho$ of $\{\|{S_{\boldsymbol \lambda}}^n e_{u_0}\|^2\}_{n=0}^\infty$, 2. 
if the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_{u_1}\|^2\}_{n=0}^\infty$ is determinate, then so are $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_{u_0}\|^2\}_{n=0}^\infty$ and $\{\|{S_{\boldsymbol \lambda}}^n e_{u_0}\|^2\}_{n=0}^\infty$. Since $e_{u_0} \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$, ${\operatorname{{\mathsf{Chi}}}(u_0)} = \{u_1\}$ and $\lambda_{u_1} \neq 0$, we infer from that $e_{u_1} = \frac{1}{\lambda_{u_1}} {S_{\boldsymbol \lambda}}e_{u_0} \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and thus $$\begin{aligned} \|{S_{\boldsymbol \lambda}}^n e_{u_1}\|^2 = \frac 1 {|\lambda_{u_1}|^2}\|{S_{\boldsymbol \lambda}}^{n+1} e_{u_0}\|^2, \quad n \in {\mathbb Z}_+. \end{aligned}$$ The last equality and Lemma \[bext\] applied to $\vartheta=1$ and $t_n=\|{S_{\boldsymbol \lambda}}^{n+1} e_{u_0}\|^2$ ($n \in {\mathbb Z}_+$) complete the proof. Criteria for Subnormality of Weighted Shifts ============================================ Consistent systems of measures ------------------------------ In this section we prove some important properties of consistent systems of Borel probability measures on ${\mathbb R}_+$ attached to a directed tree. They will be used in the proof of Theorem \[main\]. \[lem1\] Let ${{\mathscr T}}$ be a directed tree. Suppose that $\{\lambda_v\}_{v \in V^\circ}$ is a system of complex numbers, $\{\varepsilon_v\}_{v \in V}$ is a system of nonnegative real numbers and $\{\mu_v\}_{v \in V}$ is a system of Borel probability measures on ${\mathbb R}_+$ satisfying for every $u \in V$. Then the following assertions hold[*:*]{} 1. for every $u \in V$, $\sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty \frac 1 s \operatorname{d}\mu_v(s) {\leqslant}1$ and $$\begin{aligned} \varepsilon_u = 1 - \sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_0^\infty \frac 1 s \operatorname{d}\mu_v(s), \end{aligned}$$ 2. 
for every $u \in V$, $\mu_u(\{0\})= 0$ if and only if $\varepsilon_u=0$, 3. for every $v \in V^\circ$, if $\lambda_v \neq 0$, then $\mu_v(\{0\})=0$, 4. for every $u \in V$, $$\begin{aligned} \label{wz2} \mu_u(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2 \int_{\sigma} \frac 1 {s^n} \operatorname{d}\mu_v(s) + \varepsilon_u \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, n {\geqslant}1. \end{aligned}$$ \(i) Substitute $\sigma = {\mathbb R}_+$ into and note that $\mu_u({\mathbb R}_+)=1$. \(ii) & (iii) Substitute $\sigma = \{0\}$ into . \(iv) We use induction on $n$. The case of $n=1$ coincides with . Suppose that is valid for a fixed integer $n{\geqslant}1$. Then combining with , we see that $$\begin{gathered} \label{wz3} \mu_u(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2 \sum_{w \in {\operatorname{{\mathsf{Chi}}}(v)}} |\lambda_w|^2\int_{\sigma} \frac 1 {s^{n+1}} \operatorname{d}\mu_w(s) \\ + \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2 \int_{\sigma} \frac 1 {s^n} \operatorname{d}(\varepsilon_v \delta_0)(s) + \varepsilon_u \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{gathered}$$ Since $\mu_u$ is a finite positive measure and $n{\geqslant}1$, we deduce from that $\varepsilon_v=0$ whenever $\lambda_{u\mid v} \neq 0$, and thus $$\begin{aligned} \label{wz4} \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2 \int_{\sigma} \frac 1 {s^n} \operatorname{d}(\varepsilon_v \delta_0)(s)=0. 
\end{aligned}$$ It follows from and that $$\begin{aligned} \mu_u(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} \sum_{w \in {\operatorname{{\mathsf{Chi}}}(v)}} |\lambda_{u\mid v}\lambda_w|^2\int_{\sigma} \frac 1 {s^{n+1}} \operatorname{d}\mu_w(s) + \varepsilon_u \delta_0(\sigma) \\ \overset{\eqref{num1}\&\eqref{num2}}= \sum_{w\in {\operatorname{{\mathsf{Chi}}}^{\langlen+1\rangle}(u)}} |\lambda_{u\mid w}|^2 \int_{\sigma} \frac 1 {s^{n+1}} \operatorname{d}\mu_w(s) + \varepsilon_u \delta_0(\sigma). \end{aligned}$$ This completes the proof. \[lem3\] Let ${{\mathscr T}}$ be a directed tree. Suppose that ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ is a system of complex numbers, $\{\varepsilon_v\}_{v \in V}$ is a system of nonnegative real numbers and $\{\mu_v\}_{v \in V}$ is a system of Borel probability measures on ${\mathbb R}_+$ satisfying for every $u \in V$. Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on the directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}$. Then the following assertions hold[*:*]{} 1. for all $u \in V$ and $n \in {\mathbb N}$, $$\begin{aligned} \label{wz5} \int_0^\infty s^n \operatorname{d}\mu_u(s) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid v}|^2, \end{aligned}$$ 2. if ${\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}=\varnothing$ for some $u\in V$ and $n\in {\mathbb N}$, then $\mu_v=\delta_0$ for all $v \in {{\operatorname{{\mathsf{Des}}}(u)}}$, 3. ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ if and only if $\int_0^\infty s^n \operatorname{d}\mu_u(s) < \infty$ for all $n \in {\mathbb Z}_+$ and $u \in V$, 4. if ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$, then for all $u \in V$ and $n \in {\mathbb Z}_+$, $$\begin{aligned} \label{wz6} \|{S_{\boldsymbol \lambda}}^n e_u\|^2 = \int_0^\infty s^n \operatorname{d}\mu_u(s), \end{aligned}$$ 5. 
${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$ if and only if there exists a real number $M {\geqslant}0$ such that ${\mathrm{supp}\,\mu}_u \subseteq [0,M]$ for every $u \in V$. \(i) Substituting $\sigma=\{0\}$ into , we see that for every $v\in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}$, either $\lambda_{u\mid v} = 0$, or $\lambda_{u\mid v} \neq 0$ and $\mu_v(\{0\})=0$. This and lead to . \(ii) It follows from that $\int_0^\infty s^n \operatorname{d}\mu_u(s) = 0$ (recall the convention that $\sum_{v\in\varnothing} x_v=0$). This and $n {\geqslant}1$ imply that $\mu_u((0,\infty))=0$. Since $\mu_u({\mathbb R}_+) = 1$, we deduce that $\mu_u=\delta_0$. If $v \in {{\operatorname{{\mathsf{Des}}}(u)}}\setminus \{u\}$, then by there exists $k\in {\mathbb N}$ such that $v\in {\operatorname{{\mathsf{Chi}}}^{\langlek\rangle}(u)}$. Since ${\operatorname{{\mathsf{Chi}}}(\cdot)}$ is a monotonically increasing set-function, we infer from that ${\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(v)} \subseteq {\operatorname{{\mathsf{Chi}}}^{\langlen+k\rangle}(u)}=\varnothing$. By the previous argument applied to $v$ in place of $u$, we get $\mu_v=\delta_0$. Assertions (iii) and (iv) follow from (i) and Lemma \[lem4\]. \(v) To prove the “only if” part, note that $$\begin{aligned} \lim_{n\to\infty} \Big(\int_0^\infty s^{n} \operatorname{d}\mu_u(s)\Big)^{1/n} \overset{\eqref{wz6}}= \lim_{n\to\infty} (\|{S_{\boldsymbol \lambda}}^n e_u\|^{1/n})^2 {\leqslant}\|{S_{\boldsymbol \lambda}}\|^2, \end{aligned}$$ which implies that ${\mathrm{supp}\,\mu}_u \subseteq [0,\|{S_{\boldsymbol \lambda}}\|^2]$ (cf. [@Rud page 71]). The proof of the converse implication goes as follows.
By , we have $$\begin{aligned} \sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_{v}|^2 = \int_0^\infty s \operatorname{d}\mu_u(s) {\leqslant}M, \quad u \in V, \end{aligned}$$ which in view of Proposition \[bas\](v) implies that ${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$ and $\|{S_{\boldsymbol \lambda}}\| {\leqslant}\sqrt{M}$. This completes the proof. \[sf-a\]Arbitrary weights ------------------------- After all these preparations we can prove the main criterion for subnormality of unbounded weighted shifts on directed trees. It is written in terms of consistent systems of measures. \[main\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. Suppose that there exist a system $\{\mu_v\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ and a system $\{\varepsilon_v\}_{v \in V}$ of nonnegative real numbers that satisfy for every $u \in V$. Then ${S_{\boldsymbol \lambda}}$ is subnormal. 
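Before entering the proof, the criterion can be illustrated on a toy example: a root splitting into two infinite chains (truncated here at a finite depth), with atomic measures $\delta_p$ and $\delta_q$ attached along the chains. The consistency condition at each chain vertex with $\mu_v = \delta_p$ forces the chain weight $\sqrt{p}$, and at the root it produces $\mu_0 = \tfrac12(\delta_p+\delta_q)$ with $\varepsilon_0 = 0$; the moment identity of Lemma \[lem3\](iv) can then be checked directly. All concrete values below are illustrative choices, not part of the text.

```python
import numpy as np

# Toy directed tree: a root 0 splitting into two chains (truncated at depth D):
#   chain A: 0 -> 1 -> 3 -> 5 -> ...,   chain B: 0 -> 2 -> 4 -> 6 -> ...
# Hypothetical consistent system: mu_v = delta_p on chain A, mu_v = delta_q on
# chain B, mu_0 = (delta_p + delta_q)/2, all eps_v = 0.  The consistency
# condition then forces lambda_1 = sqrt(p/2), lambda_2 = sqrt(q/2) and
# lambda = sqrt(p) (resp. sqrt(q)) along the remainder of each chain.
p, q, D = 2.0, 3.0, 6
children = {0: [1, 2]}
for k in range(1, 2 * D - 1):
    children[k] = [k + 2]            # odd -> odd (chain A), even -> even (chain B)
weight = {1: np.sqrt(p / 2), 2: np.sqrt(q / 2)}
for k in range(3, 2 * D + 1):
    weight[k] = np.sqrt(p) if k % 2 == 1 else np.sqrt(q)

n_vert = 2 * D + 1
S = np.zeros((n_vert, n_vert))
for u, chs in children.items():
    for v in chs:
        S[v, u] = weight[v]          # S e_u = sum_{v in Chi(u)} lambda_v e_v

# The moments of mu_0 are (p^n + q^n)/2; they should match ||S^n e_0||^2.
e0 = np.zeros(n_vert); e0[0] = 1.0
for n in range(D):
    lhs = np.linalg.norm(np.linalg.matrix_power(S, n) @ e0) ** 2
    rhs = (p ** n + q ** n) / 2
    assert abs(lhs - rhs) < 1e-9, (n, lhs, rhs)
print("int s^n dmu_0 = ||S^n e_0||^2 for n = 0, ...,", D - 1)
```

Note that $\mu_0$ is not atomic even though every other $\mu_v$ is; this is the measure-mixing effect of branching that the consistency condition encodes.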
For a fixed positive integer $i$, we define the system ${{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}=\big\{{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}\big\}_{v \in V^\circ}$ of complex numbers, the system $\big\{{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}\big\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ and the system $\big\{{{\varepsilon_{v}^{\hspace{-.1ex}\langle i \rangle}}}\big\}_{v \in V}$ of nonnegative real numbers by $$\begin{aligned} \label{wzd1} {{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}} & = \begin{cases} \lambda_v \sqrt{\cfrac{\mu_v([0,i])}{\mu_{{\operatorname{{\mathsf{par}}}(v)}}([0,i])}} & \text{ if } \mu_{{\operatorname{{\mathsf{par}}}(v)}}([0,i]) > 0, \\[1.5ex] 0 & \text{ if } \mu_{{\operatorname{{\mathsf{par}}}(v)}}([0,i]) = 0, \end{cases} \quad v \in V^\circ, \\ \label{wzd2} {{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}(\sigma) & = \begin{cases} \cfrac{\mu_v(\sigma \cap [0,i])}{\mu_v([0,i])} & \text{ if } \mu_v([0,i]) > 0, \\[1.5ex] \delta_0(\sigma) & \text{ if } \mu_v([0,i]) = 0, \end{cases} \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)},\, v \in V, \\ \label{wzd3} {{\varepsilon_{v}^{\hspace{-.1ex}\langle i \rangle}}} & = \begin{cases} \cfrac{\varepsilon_v}{\mu_v([0,i])} & \text{ if } \mu_v([0,i]) > 0, \\[1.5ex] 1 & \text{ if } \mu_v([0,i])=0, \end{cases} \quad v \in V. \end{aligned}$$ Our first goal is to show that the following equality holds for all $u \in V$ and $i\in {\mathbb N}$, $$\begin{aligned} \label{wz1J} {{\mu_{u}^{\hspace{-.25ex}\langle i \rangle}}}(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}|^2 \int_{\sigma} \frac 1 s \operatorname{d}{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}(s) + {{\varepsilon_{u}^{\hspace{-.1ex}\langle i \rangle}}} \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{aligned}$$ For this fix $u \in V$ and $i\in {\mathbb N}$. 
If $\mu_u([0,i])=0$, then, according to our definitions, we have ${{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}=0$ for all $v \in {\operatorname{{\mathsf{Chi}}}(u)}$, ${{\mu_{u}^{\hspace{-.25ex}\langle i \rangle}}}=\delta_0$ and ${{\varepsilon_{u}^{\hspace{-.1ex}\langle i \rangle}}}=1$, which means that the equality holds. Consider now the case of $\mu_u([0,i])>0$. It follows from that $$\begin{aligned} \label{muu} \mu_u(\sigma \cap [0,i]) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_{\sigma \cap [0,i]} \frac 1 s \operatorname{d}\mu_v(s) + \varepsilon_u \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{aligned}$$ If $v \in {\operatorname{{\mathsf{Chi}}}(u)}$ (equivalently: $u={\operatorname{{\mathsf{par}}}(v)}$), then by and we have $$\begin{aligned} \label{muu2} \begin{aligned} \frac{|\lambda_v|^2}{\mu_u([0,i])} \int_{\sigma \cap [0,i]} \frac 1 s \operatorname{d}\mu_v(s) & = \begin{cases} |{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}|^2 \int_{\sigma} \frac 1 s \operatorname{d}{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}(s) & \text{ if } \mu_v([0,i]) > 0, \\[1ex] 0 & \text{ if } \mu_v([0,i])=0, \end{cases} \\ & = |{{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}|^2 \int_{\sigma} \frac 1 s \operatorname{d}{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}(s), \end{aligned} \end{aligned}$$ where the last equality holds because ${{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}=0$ whenever $\mu_v([0,i])=0$. Dividing both sides of by $\mu_u([0,i])$ and using , we obtain . Let $S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}$ be the weighted shift on ${{\mathscr T}}$ with weights ${{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}$. 
Since, by , ${\mathrm{supp}\,{{\mu_{u}^{\hspace{-.25ex}\langle i \rangle}}}} \subseteq [0,i]$ for every $u \in V$, we infer from and Lemma \[lem3\](v), applied to the triplet $({{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}, \{{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}\}_{v \in V}, \{{{\varepsilon_{v}^{\hspace{-.1ex}\langle i \rangle}}}\}_{v \in V})$, that $S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}\in {\boldsymbol B(\ell^2(V))}$. In turn, and Lemma \[lem3\](iv) (applied to the same triplet) imply that for every $u \in V$, $\{\|S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence (with a representing measure ${{\mu_{u}^{\hspace{-.25ex}\langle i \rangle}}}$). Hence, by Theorem \[charsub\], the operator $S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}$ is subnormal. Since $\mu_u$, $u \in V$, are Borel probability measures on ${\mathbb R}_+$, we have $$\begin{aligned} \label{lim1} \lim_{i \to \infty} \mu_u([0,i]) = 1, \quad u \in V. \end{aligned}$$ Hence, for every $u \in V$ there exists a positive integer $\kappa_{u}$ such that $$\begin{aligned} \label{as1} \mu_{u}([0,i]) > 0,\quad i \in {\mathbb N}, \, i {\geqslant}\kappa_{u}. \end{aligned}$$ Note that $$\begin{aligned} \label{wzj} \lim_{i \to \infty} {{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}} = \lambda_v, \quad v \in V^\circ. \end{aligned}$$ Indeed, if $v \in V^\circ$, then and yield ${{\lambda_{v}^{\hspace{-.3ex}{\langle} i \rangle}}}=\lambda_v \sqrt{\frac{\mu_v([0,i])} {\mu_{{\operatorname{{\mathsf{par}}}(v)}}([0,i])}}$ for all integers $i {\geqslant}\kappa_{{\operatorname{{\mathsf{par}}}(v)}}$. This, combined with , gives .
By , , and Lemma \[lem3\](iv), applied to $S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}$, we have $$\begin{aligned} \|S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\|^2 = \int_0^\infty s^n \operatorname{d}{{\mu_{u}^{\hspace{-.25ex}\langle i \rangle}}}(s) = \frac{1}{\mu_u([0,i])} \int_{[0,i]} s^n \operatorname{d}\mu_u(s), \quad n \in {\mathbb Z}_+,\, i {\geqslant}\kappa_{u}, \, u \in V. \end{aligned}$$ This, together with and Lemma \[lem3\](iv), now applied to ${S_{\boldsymbol \lambda}}$, implies that $$\begin{aligned} \label{limsti} \lim_{i \to \infty} \|S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\|^2 = \int_0^\infty s^n \operatorname{d}\mu_u(s) = \|{S_{\boldsymbol \lambda}}^n e_u\|^2, \quad n \in {\mathbb Z}_+,\, u \in V. \end{aligned}$$ It follows from , and Proposition \[potegi\] that holds. According to Proposition \[bas\](iv), ${{\mathscr{E}_V}}$ is a core of ${S_{\boldsymbol \lambda}}$. Hence $\operatorname{\mbox{\sc lin}}\bigcup_{n=0}^\infty {S_{\boldsymbol \lambda}}^n({{\mathscr{E}_V}})$ is a core of ${S_{\boldsymbol \lambda}}$ as well. Applying and Theorem \[tw1\] to the operators $\{S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}\}_{i=1}^\infty$ and ${S_{\boldsymbol \lambda}}$ with ${\mathcal X}:=\{e_u\colon u \in V\}$ completes the proof of Theorem \[main\]. In the proof of Theorem \[main\] we have used Proposition \[potegi\] which provides a general criterion for the validity of the approximation procedure . 
However, if the approximating triplets $({{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}, \{{{\mu_{v}^{\hspace{-.25ex}\langle i \rangle}}}\}_{v \in V}, \{{{\varepsilon_{v}^{\hspace{-.1ex}\langle i \rangle}}}\}_{v \in V})$, $i=1,2,3, \ldots$, are defined as in , and , then $$\begin{aligned} \label{slim2} \lim_{i\to \infty} S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u = S_{{{\boldsymbol\lambda}}}^n e_u, \quad u \in V, \, n \in {\mathbb Z}_+. \end{aligned}$$ To prove this, we first show that for all $u \in V$ and $i {\geqslant}\kappa_u$ (see ), $$\begin{aligned} \label{liuup} {{\lambda_{u \mid u^\prime}^{\hspace{-.3ex}{\langle} i \rangle}}} = \lambda_{u \mid u^\prime} \; \sqrt{\frac{\mu_{u^\prime} ([0,i])}{\mu_u([0,i])}}, \quad u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}, \, n \in {\mathbb Z}_+. \end{aligned}$$ Indeed, if $n=0$, then holds. Suppose that $n{\geqslant}1$. If $\mu_{\operatorname{{\mathsf{par}}}(u^\prime)}([0,i])=0$, then $n{\geqslant}2$ and, by , ${{\lambda_{u^\prime}^{\hspace{-.3ex}{\langle} i \rangle}}} = 0$, which implies that ${{\lambda_{u \mid u^\prime}^{\hspace{-.3ex}{\langle} i \rangle}}}=0$. Since $\mu_{\operatorname{{\mathsf{par}}}(u^\prime)}([0,i])=0$, we deduce from (applied to $u=\operatorname{{\mathsf{par}}}(u^\prime)$) that either $\lambda_{u^\prime}=0$, or $\mu_{u^\prime} ([0,i]) = 0$. In both cases, the right-hand side of vanishes, and so holds. In turn, if $\mu_{\operatorname{{\mathsf{par}}}(u^\prime)}([0,i]) > 0$, then we can define $$\begin{aligned} j_0 = \min \Big\{j \in \{1, \ldots, n\}\colon \mu_{\operatorname{{\mathsf{par}}}^k(u^\prime)}([0,i]) > 0 \text{ for all } k=1, \ldots, j\Big\}. \end{aligned}$$ Clearly, $1 {\leqslant}j_0 {\leqslant}n$. First, we consider the case where $j_0 < n$. Since, by , $\mu_u([0,i])> 0$, we must have $j_0 {\leqslant}n-2$. 
Thus $\mu_{\operatorname{{\mathsf{par}}}^{j_0+1}(u^\prime)}([0,i]) = 0$, which together with and implies that the left-hand side of vanishes. Since $\mu_{\operatorname{{\mathsf{par}}}^{j_0+1}(u^\prime)}([0,i]) = 0$ and $\mu_{\operatorname{{\mathsf{par}}}^{j_0}(u^\prime)}([0,i]) > 0$, we deduce from (applied to $u=\operatorname{{\mathsf{par}}}^{j_0+1}(u^\prime)$) that $\lambda_{\operatorname{{\mathsf{par}}}^{j_0}(u^\prime)}=0$, and so the right-hand side of vanishes. This means that is again valid. Finally, if $j_0=n$, then by we have $$\begin{aligned} {{\lambda_{u \mid u^\prime}^{\hspace{-.3ex}{\langle} i \rangle}}} = \prod_{j=0}^{n-1} \lambda_{\operatorname{{\mathsf{par}}}^j(u^\prime)} \sqrt{\frac{\mu_{\operatorname{{\mathsf{par}}}^j(u^\prime)}([0,i])} {\mu_{\operatorname{{\mathsf{par}}}^{j+1}(u^\prime)}([0,i])}} = \lambda_{u \mid u^\prime} \; \sqrt{\frac{\mu_{u^\prime} ([0,i])}{\mu_u([0,i])}}, \end{aligned}$$ which completes the proof of . Now we show that $$\begin{aligned} \label{ils} \lim_{i \to \infty} {\langle S_{{{\boldsymbol\lambda}}}^n e_u,S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\rangle} = \|{S_{\boldsymbol \lambda}}^n e_u\|^2, \quad u \in V, \, n \in {\mathbb Z}_+. \end{aligned}$$ Indeed, it follows from Lemma \[lem4\](ii) and that $$\begin{gathered} {\langle S_{{{\boldsymbol\lambda}}}^n e_u,S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\rangle} = \sum_{u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} \lambda_{u\mid u^\prime} \overline{{{\lambda_{u \mid u^\prime}^{\hspace{-.3ex}{\langle} i \rangle}}}} \\ = \frac{1}{\sqrt{\mu_u([0,i])}} \sum_{u^\prime \in {\operatorname{{\mathsf{Chi}}}^{\langlen\rangle}(u)}} |\lambda_{u\mid u^\prime}|^2 \sqrt{\mu_{u^\prime}([0,i])}, \quad u \in V, \, n \in {\mathbb Z}_+, \, i {\geqslant}\kappa_u. \end{gathered}$$ By applying Lebesgue’s monotone convergence theorem for series, and Lemma \[lem4\](iii), we obtain .
Since $$\begin{aligned} \|S_{{{\boldsymbol\lambda}}}^n e_u - S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\|^2 = \|S_{{{\boldsymbol\lambda}}}^n e_u\|^2 + \|S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\|^2 - 2 \, \mathrm{Re}{\langle S_{{{\boldsymbol\lambda}}}^n e_u,S_{{{{{\boldsymbol\lambda}}^{\hspace{-.3ex}\langle i \rangle}}}}^n e_u\rangle} \end{aligned}$$ we infer from and . Clearly implies . We conclude this section with a general criterion for subnormality of weighted shifts on directed trees written in terms of determinacy of Stieltjes moment sequences. \[necessdet2\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. Assume that $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence for every $u \in V$. Then the following conditions are equivalent[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. $\{\|{S_{\boldsymbol \lambda}}^{n} e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $u \in V$, 3. there exist a system $\{\mu_u\}_{u \in V}$ of Borel probability measures on ${\mathbb R}_+$ and a system $\{\varepsilon_u\}_{u \in V}$ of nonnegative real numbers that satisfy for every $u \in V$. (i)$\Rightarrow$(ii) Use Proposition \[necess\]. (ii)$\Rightarrow$(iii) Employ Lemma \[2necess+\]. (iii)$\Rightarrow$(i) Apply Theorem \[main\]. Regarding Corollary \[necessdet2\], note that by Proposition \[necess\], Lemma \[lem3\](iv) and each of the conditions (i), (ii) and (iii) implies that $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $u \in V$. 
Nonzero weights --------------- As pointed out in [@j-j-s Proposition 5.1.1], bounded hyponormal weighted shifts on directed trees with nonzero weights are always injective. It turns out that the same conclusion can be derived in the unbounded case (with almost the same proof). Recall that a densely defined operator $S$ in ${\mathcal H}$ is said to be [*hyponormal*]{} if ${{\EuScript D}(S)} \subseteq {{\EuScript D}(S^*)}$ and $\|S^*f\| {\leqslant}\|Sf\|$ for all $f \in {{\EuScript D}(S)}$. It is well known that subnormal operators are hyponormal (but not conversely) and that hyponormal operators are closable and their closures are hyponormal. We refer the reader to [@ot-sch; @jj1; @jj2; @jj3; @sto] for elements of the theory of unbounded hyponormal operators. \[hypcor\] Let ${{\mathscr T}}$ be a directed tree with $V^\circ \neq \varnothing$. If ${S_{\boldsymbol \lambda}}$ is a hyponormal weighted shift on ${{\mathscr T}}$ all of whose weights are nonzero, then ${{\mathscr T}}$ is leafless. In particular, ${S_{\boldsymbol \lambda}}$ is injective and $V$ is infinite and countable. Suppose that, contrary to our claim, ${\operatorname{{\mathsf{Chi}}}(u)} = \varnothing$ for some $u \in V$. We deduce from Proposition \[przem\] and $V^\circ \neq \varnothing$ that $u \in V^\circ$. Hence, by assertions (ii), (iii) and (vi) of Proposition \[bas\], we have $$\begin{aligned} |\lambda_u|^2 \overset{ \eqref{sl*}}= \|{S_{\boldsymbol \lambda}}^*e_u\|^2 {\leqslant}\|{S_{\boldsymbol \lambda}}e_u\|^2 \overset{\eqref{eu}}= \sum_{v \in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 = 0, \end{aligned}$$ which is a contradiction. Since each leafless directed tree is infinite, we deduce from assertions (vii) and (viii) of Proposition \[bas\] that ${S_{\boldsymbol \lambda}}$ is injective and $V$ is infinite and countable. This completes the proof. 
The sufficient condition for subnormality of weighted shifts on directed trees stated in Theorem \[main\] takes the simplified form for weighted shifts with nonzero weights. Indeed, if a weighted shift ${S_{\boldsymbol \lambda}}$ on ${{\mathscr T}}$ with nonzero weights satisfies the assumptions of Theorem \[main\], then, by assertions (ii) and (iii) of Lemma \[lem1\], $\varepsilon_v=0$ for every $v \in V^\circ$. Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with nonzero weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. Then ${S_{\boldsymbol \lambda}}$ is subnormal provided that one of the following two conditions holds[*:*]{} 1. ${{\mathscr T}}$ is rootless and there exists a system $\{\mu_v\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ which satisfies the following equality for every $u \in V$, $$\begin{aligned} \label{wz1+} \mu_u(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}(u)}} |\lambda_v|^2 \int_{\sigma} \frac 1 s \operatorname{d}\mu_v(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ 2. ${{\mathscr T}}$ has a root and there exist $\varepsilon \in {\mathbb R}_+$ and a system $\{\mu_v\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ which satisfy for every $u \in V^\circ$, and $$\begin{aligned} \mu_{\operatorname{{\mathsf{root}}}}(\sigma) = \sum_{v\in {\operatorname{{\mathsf{Chi}}}(\operatorname{{\mathsf{root}}})}} |\lambda_v|^2 \int_{\sigma} \frac 1 s \operatorname{d}\mu_v(s) + \varepsilon \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{aligned}$$ \[cfs\]Quasi-analytic vectors ----------------------------- Let $S$ be an operator in a complex Hilbert space ${\mathcal H}$. 
We say that a vector $f\in{{\EuScript D}^\infty(S)}$ is a [*quasi-analytic*]{} vector of $S$ if $$\begin{aligned} \sum_{n=1}^\infty \frac{1}{\|S^n f\|^{\nicefrac{1}{n}}} = \infty \quad \text{(convention: $\frac{1}{0}=\infty$)}. \end{aligned}$$ Denote by ${\mathscr Q(S)}$ the set of all quasi-analytic vectors. Note that (cf. [@StSz1 Section 9]) $$\begin{aligned} \label{quasiinv} S({\mathscr Q(S)}) \subseteq {\mathscr Q(S)}. \end{aligned}$$ In general, ${\mathscr Q(S)}$ is not a linear subspace of ${\mathcal H}$ even if $S$ is essentially selfadjoint (see [@ru2]; see also [@ru1] for related matter). We now show that the converse of the implication in Proposition \[necess\] holds for weighted shifts on directed trees having sufficiently many quasi-analytic vectors, and that within this class of operators subnormality is completely characterized by the existence of a consistent system of probability measures. \[main-0\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$ such that ${{\mathscr{E}_V}}\subseteq {\mathscr Q({S_{\boldsymbol \lambda}})}$. Then the following conditions are equivalent[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $u \in V$, 3. there exist a system $\{\mu_v\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ and a system $\{\varepsilon_v\}_{v \in V}$ of nonnegative real numbers that satisfy for every $u \in V$. (i)$\Rightarrow$(ii) Apply Proposition \[necess\]. (ii)$\Rightarrow$(iii) Fix $u \in V$ and set $t_n = \|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2$ for $n \in {\mathbb Z}_+$. By , the sequence $\{t_n\}_{n=0}^\infty$ is a Stieltjes moment sequence. 
Since $e_u \in {\mathscr Q({S_{\boldsymbol \lambda}})}$, we infer from that ${S_{\boldsymbol \lambda}}e_u \in {\mathscr Q({S_{\boldsymbol \lambda}})}$, or equivalently that $\sum_{n=1}^\infty t_n^{-\nicefrac{1}{2n}} = \infty$. Hence, by the Carleman criterion for determinacy of Stieltjes moment sequences[^5] (cf. [@sh-tam Theorem 1.11]), the Stieltjes moment sequence $\{t_n\}_{n=0}^\infty = \{\|{S_{\boldsymbol \lambda}}^{n+1} e_u\|^2\}_{n=0}^\infty$ is determinate. Now applying Lemma \[2necess+\] yields (iii). (iii)$\Rightarrow$(i) Employ Theorem \[main\]. Using [@StSz1 Theorem 7], one can prove a version of Theorem \[main-0\] in which the class of quasi-analytic vectors is replaced by the class of analytic ones. Since the former class is larger[^6] than the latter, we see that the “analytic” version of Theorem \[main-0\] is weaker than Theorem \[main-0\] itself. To the best of our knowledge, Theorem \[main-0\] is the first result of this kind; it shows that the unbounded version of Lambert’s characterization of subnormality happens to be true for operators that have sufficiently many quasi-analytic vectors. The following result, which is an immediate consequence of Theorem \[main-0\], provides a new characterization of subnormality of bounded weighted shifts on directed trees written in terms of consistent systems of probability measures. It may be thought of as a complement to Theorem \[charsub\]. Let ${S_{\boldsymbol \lambda}}\in {\boldsymbol B(\ell^2(V))}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v \in V^\circ}$. Then ${S_{\boldsymbol \lambda}}$ is subnormal if and only if there exist a system $\{\mu_v\}_{v \in V}$ of Borel probability measures on ${\mathbb R}_+$ and a system $\{\varepsilon_v\}_{v \in V}$ of nonnegative real numbers that satisfy for every $u \in V$. 
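For illustration, Carleman's condition can be probed numerically. The following sketch (with the sample moment sequence $t_n=(n+1)!$, i.e. the classical unilateral shift with weights $\lambda_n=\sqrt{n}$; both choices are purely illustrative and not taken from the text) computes partial sums of the Carleman series $\sum_{n{\geqslant}1} t_n^{-1/(2n)}$ on a logarithmic scale to avoid overflow; the terms behave like $\sqrt{e/n}$, so the partial sums grow without bound, consistent with divergence and hence determinacy.

```python
import math

def carleman_partial_sum(log_t, N):
    """Partial sum of the Carleman series sum_{n>=1} t_n^{-1/(2n)},
    computed from log t_n to avoid floating-point overflow.  Divergence
    of the full series is Carleman's sufficient condition for
    determinacy of the Stieltjes moment sequence {t_n}."""
    return sum(math.exp(-log_t(n) / (2 * n)) for n in range(1, N + 1))

# Sample moment sequence t_n = (n+1)! = ||S^{n+1} e_0||^2 for a classical
# unilateral shift with weights lambda_n = sqrt(n) (an illustrative choice);
# log t_n = log((n+1)!) = lgamma(n + 2).
log_t = lambda n: math.lgamma(n + 2)

# The n-th term is roughly sqrt(e/n), so the partial sums keep growing.
print(carleman_partial_sum(log_t, 100), carleman_partial_sum(log_t, 400))
```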
Subnormality via subtrees ------------------------- Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v\in V^\circ}$. Note that if $u \in V$, then the space $\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})$ (which is regarded as a closed linear subspace of $\ell^2(V)$) is invariant for ${S_{\boldsymbol \lambda}}$, i.e., $$\begin{aligned} \label{ilb} {S_{\boldsymbol \lambda}}\big({{\EuScript D}({S_{\boldsymbol \lambda}})} \cap \ell^2({{\operatorname{{\mathsf{Des}}}(u)}})\big) \subseteq \ell^2({{\operatorname{{\mathsf{Des}}}(u)}}). \end{aligned}$$ (For this, apply and the inclusion $\operatorname{{\mathsf{par}}}\big(V\setminus \big({{\operatorname{{\mathsf{Des}}}(u)}} \cup {\operatorname{{\mathsf{Root}}}({{\mathscr T}})}\big)\big) \subseteq V\setminus {{\operatorname{{\mathsf{Des}}}(u)}}$.) Denote by ${S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})}$ the operator in $\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})$ given by ${{\EuScript D}({S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})})} = {{\EuScript D}({S_{\boldsymbol \lambda}})} \cap \ell^2({{\operatorname{{\mathsf{Des}}}(u)}})$ and ${S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})}f = {S_{\boldsymbol \lambda}}f$ for $f \in {{\EuScript D}({S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})})}$. It is easily seen that ${S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})}$ coincides with the weighted shift on the directed tree $({{\operatorname{{\mathsf{Des}}}(u)}}, ({{\operatorname{{\mathsf{Des}}}(u)}}\times {{\operatorname{{\mathsf{Des}}}(u)}}) \cap E)$ with weights $\{\lambda_v\}_{v \in {{\operatorname{{\mathsf{Des}}}(u)}}\setminus \{u\}}$ (see [@j-j-s Proposition 2.1.8] for more details on this and related subtrees). 
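The invariance of $\ell^2({{\operatorname{{\mathsf{Des}}}(u)}})$ can be checked on a concrete finite model. The sketch below (a toy tree with invented vertex names, used only for illustration) computes ${{\operatorname{{\mathsf{Des}}}(u)}}$ by breadth-first search over child edges and verifies that the shift, which maps each $e_v$ into the span of $\{e_w\colon w \in {\operatorname{{\mathsf{Chi}}}(v)}\}$, never moves the support of a vector outside ${{\operatorname{{\mathsf{Des}}}(u)}}$.

```python
from collections import deque

# A toy directed tree given by its child lists ("r" is the root);
# the tree and vertex names are illustrative only.
children = {
    "r": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1"],
    "a1": [], "a2": [], "b1": [],
}

def descendants(u):
    """Des(u): u together with all vertices reachable along child edges."""
    seen, queue = {u}, deque([u])
    while queue:
        for w in children[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def shift_support(support):
    """Support of S_lambda f when f is supported on `support` (nonzero
    weights): each e_v is sent into the span of {e_w : w a child of v}."""
    return {w for v in support for w in children[v]}

# ell^2(Des(u)) is invariant: the shifted support stays inside Des(u).
for u in children:
    assert shift_support(descendants(u)) <= descendants(u)
print(sorted(descendants("a")))
```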
Proposition \[subtree\] below shows that the study of subnormality of weighted shifts on rootless directed trees can be reduced in a sense to the case of directed trees with root. Unfortunately, our criteria for subnormality of weighted shifts on directed trees are not applicable in this context. Fortunately, we can employ the inductive limit approach to subnormality provided by Proposition \[tw1+1\]. \[subtree\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on a rootless directed tree ${{\mathscr T}}$ with weights ${{\boldsymbol\lambda}}=\{\lambda_v\}_{v\in V^\circ}$. Suppose that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. If $\varOmega$ is a subset of $V$ such that $V=\bigcup_{\omega\in \varOmega} {{\operatorname{{\mathsf{Des}}}(\omega)}}$, then the following conditions are equivalent[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. for every $\omega\in \varOmega$, ${S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(\omega)}})}$ is subnormal as an operator acting in $\ell^2({{\operatorname{{\mathsf{Des}}}(\omega)}})$. (ii)$\Rightarrow$(i) Using an induction argument and one can show that ${S_{\boldsymbol \lambda}}^n e_v \in \ell^2({{\operatorname{{\mathsf{Des}}}(v)}}) \subseteq \ell^2({{\operatorname{{\mathsf{Des}}}(u)}})$ for all $n\in {\mathbb Z}_+$, $v \in {{\operatorname{{\mathsf{Des}}}(u)}}$ and $u \in V$. Hence $$\begin{aligned} {\mathcal X}_\omega := \operatorname{\mbox{\sc lin}}\big\{e_v\colon v \in {{\operatorname{{\mathsf{Des}}}(\omega)}}\big\} \subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})} \text{ and } {S_{\boldsymbol \lambda}}^n({\mathcal X}_\omega) \subseteq \ell^2({{\operatorname{{\mathsf{Des}}}(\omega)}}) \end{aligned}$$ for all $\omega \in \varOmega$ and $n\in {\mathbb Z}_+$. 
It follows from [@j-j-s Proposition 2.1.4] and the equality $V=\bigcup_{\omega\in \varOmega} {{\operatorname{{\mathsf{Des}}}(\omega)}}$ that for each pair $(\omega_1,\omega_2) \in \varOmega \times \varOmega$, there exists $\omega \in \varOmega$ such that ${{\operatorname{{\mathsf{Des}}}(\omega_1)}} \cup {{\operatorname{{\mathsf{Des}}}(\omega_2)}} \subseteq {{\operatorname{{\mathsf{Des}}}(\omega)}}$, and thus $\{{\mathcal X}_\omega\}_{\omega \in \varOmega}$ is an upward directed family of subsets of $\ell^2(V)$. By applying Proposition \[bas\](iv) and Proposition \[tw1+1\] to $S={S_{\boldsymbol \lambda}}$ and ${\mathcal H}_\omega= \ell^2({{\operatorname{{\mathsf{Des}}}(\omega)}})$, we get (i). The reverse implication (i)$\Rightarrow$(ii) is obvious because ${\mathcal X}_\omega \subseteq {{\EuScript D}({S_{\boldsymbol \lambda}}|_{\ell^2({{\operatorname{{\mathsf{Des}}}(\omega)}})})}$. It follows from [@j-j-s Proposition 2.1.6] that if ${{\mathscr T}}$ is a rootless directed tree, then $V=\bigcup_{k=1}^\infty {{\operatorname{{\mathsf{Des}}}(\operatorname{{\mathsf{par}}}^k(u))}}$ for every $u \in V$, and so the set $\varOmega$ in Proposition \[subtree\] may always be chosen to be countable and infinite. Subnormality on Assorted Directed Trees ======================================= \[cws\]Classical weighted shifts -------------------------------- By a [*classical weighted shift*]{} we mean either a unilateral weighted shift $S$ in $\ell^2$ or a bilateral weighted shift $S$ in $\ell^2({\mathbb Z})$, i.e., $S=VD$, where, in the unilateral case, $V$ is the unilateral isometric shift on $\ell^2$ of multiplicity $1$ and $D$ is a diagonal operator in $\ell^2$ with diagonal elements $\{\lambda_n\}_{n=0}^\infty$; in the bilateral case, $V$ is the bilateral unitary shift on $\ell^2({\mathbb Z})$ of multiplicity $1$ and $D$ is a diagonal operator in $\ell^2({\mathbb Z})$ with diagonal elements $\{\lambda_n\}_{n=-\infty}^\infty$. 
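The factorization $S=VD$ above can be made concrete by truncating to a finite matrix. The following sketch (unilateral case; the diagonal elements below are a sample choice, not data from the text) builds the isometric shift $V$ and the diagonal operator $D$ on $\mathbb{C}^N$ and checks that $S=VD$ acts by $S e_n = \lambda_n e_{n+1}$.

```python
# Finite truncation of the factorization S = V D (unilateral case),
# with sample diagonal elements lambda_0, lambda_1, ... chosen for
# illustration only.
N = 5
lam = [1.0, 2.0 ** 0.5, 3.0 ** 0.5, 2.0, 5.0 ** 0.5]

# V: isometric shift of multiplicity 1 (e_j -> e_{j+1}); D: diagonal.
V = [[1.0 if i == j + 1 else 0.0 for j in range(N)] for i in range(N)]
D = [[lam[j] if i == j else 0.0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

S = matmul(V, D)

# S e_n = lambda_n e_{n+1}: entry (n+1, n) of S equals lambda_n.
ok = all(abs(S[n + 1][n] - lam[n]) < 1e-12 for n in range(N - 1))
print(ok)
```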
In view of [@ml equality (1.7)], $S$ is a unique closed linear operator in $\ell^2$ (respectively:$\ell^2({\mathbb Z})$) such that the linear span of the standard orthonormal basis $\{e_n\}_{n=0}^\infty$ of $\ell^2$ (respectively: $\{e_n\}_{n=-\infty}^\infty$ of $\ell^2({\mathbb Z})$) is a core of $S$ and $$\begin{aligned} \label{notold} S e_n = \lambda_n e_{n+1}, \quad n\in {\mathbb Z}_+ \;\; (\textrm{respectively:\ } n \in {\mathbb Z}). \end{aligned}$$ This fact, combined with parts (ii), (iii) and (iv) of Proposition \[bas\], implies that a unilateral (respectively: a bilateral) classical weighted shift is a weighted shift on the directed tree $({\mathbb Z}_+, \{(n,n+1)\colon n \in {\mathbb Z}_+\})$ (respectively:$({\mathbb Z}, \{(n,n+1)\colon n \in {\mathbb Z}\})$) with weights $\{\lambda_{n-1}\}_{n=1}^\infty$ (respectively:$\{\lambda_{n-1}\}_{n=-\infty}^\infty$). From now on we enumerate weights of a classical weighted shift in accordance with our notation relevant to these two particular trees. This means that takes now the form $$\begin{aligned} \label{notnew} {S_{\boldsymbol \lambda}}e_n = \lambda_{n+1} e_{n+1}, \quad n\in {\mathbb Z}_+ \;\; (\textrm{respectively:\ } n \in {\mathbb Z}), \end{aligned}$$ where ${{\boldsymbol\lambda}}=\{\lambda_{n}\}_{n=1}^\infty$ (respectively:${{\boldsymbol\lambda}}=\{\lambda_{n}\}_{n=-\infty}^\infty$). Using our approach, we can derive the Berger-Gellar-Wallen criterion for subnormality of injective unilateral classical weighted shifts (see [@g-w; @hal2] for the bounded case and [@StSz1 Theorem 4] for the unbounded one). \[b-g-w\] If ${S_{\boldsymbol \lambda}}$ is a unilateral classical weighted shift with nonzero weights ${{\boldsymbol\lambda}}= \{\lambda_n\}_{n=1}^\infty$ $($with notation as in $)$, then the following three conditions are equivalent[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. 
$\{1, |\lambda_1|^2, |\lambda_1 \lambda_2|^2, |\lambda_1 \lambda_2 \lambda_3|^2, \ldots\}$ is a Stieltjes moment sequence, 3. $\{\|{S_{\boldsymbol \lambda}}^n e_k\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for all $k \in {\mathbb Z}_+$. First note that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. (i)$\Rightarrow$(iii) Employ Proposition \[necess\]. (iii)$\Rightarrow$(ii) This is evident, because the sequence $\{1, |\lambda_1|^2, |\lambda_1 \lambda_2|^2, |\lambda_1 \lambda_2 \lambda_3|^2, \ldots\}$ coincides with $\{\|{S_{\boldsymbol \lambda}}^n e_0\|^2\}_{n=0}^\infty$. (ii)$\Rightarrow$(i) Let $\mu$ be a representing measure of the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^n e_0\|^2\}_{n=0}^\infty$ (which in general may not be determinate, cf. [@sz3]). Define the sequence $\{\mu_n\}_{n=0}^\infty$ of Borel probability measures on ${\mathbb R}_+$ by $$\begin{aligned} \mu_n(\sigma) = \frac{1}{\|{S_{\boldsymbol \lambda}}^n e_0\|^2} \int_{\sigma} s^n \operatorname{d}\mu(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, n \in {\mathbb Z}_+. \end{aligned}$$ It is then clear that $$\begin{aligned} \mu_0(\sigma) &=|\lambda_{1}|^2 \int_\sigma \frac{1}{s} \operatorname{d}\mu_{1}(s) + \mu(\{0\}) \delta_0 (\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \\ \mu_n(\sigma) &= |\lambda_{n+1}|^2 \int_\sigma \frac{1}{s} \operatorname{d}\mu_{n+1}(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)},\, n {\geqslant}1, \end{aligned}$$ which means that the systems $\{\mu_n\}_{n=0}^\infty$ and $\{\varepsilon_n\}_{n=0}^\infty := \{\mu(\{0\}), 0, 0, \ldots\}$ satisfy the assumptions of Theorem \[main\]. This completes the proof. 
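The system $\{\mu_n\}_{n=0}^\infty$ constructed in the proof of (ii)$\Rightarrow$(i) can be verified numerically for a discrete representing measure. In the sketch below the toy measure $\mu = \frac12\delta_1 + \frac12\delta_4$ is an illustrative choice: the weights are recovered from $|\lambda_{n+1}|^2 = t_{n+1}/t_n$ with $t_n = \int s^n \operatorname{d}\mu(s)$, and the consistency condition $\mu_n(\{s\}) = |\lambda_{n+1}|^2 s^{-1} \mu_{n+1}(\{s\})$ is checked on atoms (here $\mu(\{0\})=0$, so no point mass at the origin is needed).

```python
# Discrete representing measure mu = 1/2 delta_1 + 1/2 delta_4 (toy data).
atoms = {1.0: 0.5, 4.0: 0.5}

def moment(n):
    """t_n = ||S^n e_0||^2 = int s^n dmu(s)."""
    return sum(w * s ** n for s, w in atoms.items())

def mu_n(n):
    """mu_n(ds) = s^n dmu(s) / t_n, returned as a dict of atom masses."""
    t = moment(n)
    return {s: w * s ** n / t for s, w in atoms.items()}

# Weights recovered from the moments: |lambda_{n+1}|^2 = t_{n+1} / t_n.
lam2 = lambda n: moment(n + 1) / moment(n)

# Consistency: mu_n({s}) = |lambda_{n+1}|^2 * (1/s) * mu_{n+1}({s}).
for n in range(5):
    left = mu_n(n)
    right = {s: lam2(n) * m / s for s, m in mu_n(n + 1).items()}
    assert all(abs(left[s] - right[s]) < 1e-12 for s in atoms)
print("consistency verified for n = 0..4")
```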
Before formulating the next theorem, we recall that a two-sided sequence $\{t_n\}_{n=-\infty}^\infty$ of real numbers is said to be a [*two-sided Stieltjes moment sequence*]{} if there exists a positive Borel measure $\mu$ on $(0,\infty)$ such that $$\begin{aligned} t_{n}=\int_{(0,\infty)} s^n \operatorname{d}\mu(s),\quad n \in {\mathbb Z}; \end{aligned}$$ $\mu$ is called a [*representing measure*]{} of $\{t_n\}_{n=-\infty}^\infty$. It follows from [@ber page 202] (see also [@j-t-w Theorem 6.3]) that $$\begin{aligned} \label{char2sid} \begin{minipage}{29em} $\{t_n\}_{n=-\infty}^\infty \subseteq {\mathbb R}$ is a two-sided Stieltjes moment sequence if and only if $\{t_{n-k}\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $k \in {\mathbb Z}_+$. \end{minipage} \end{aligned}$$ Now we show how to deduce an analogue of the Berger-Gellar-Wallen criterion for subnormality of injective bilateral classical weighted shifts from our results (see [@con2 Theorem II.6.12] for the bounded case and [@StSz1 Theorem 5] for the unbounded  one). \[b-g-w-2\] If ${S_{\boldsymbol \lambda}}$ is a bilateral classical weighted shift with nonzero weights ${{\boldsymbol\lambda}}=\{\lambda_n\}_{n \in {\mathbb Z}}$ $($with notation as in $)$, then the following four conditions are equivalent[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. the two-sided sequence $\{t_n\}_{n=-\infty}^\infty$ defined by $$\begin{aligned} t_n = \begin{cases} |\lambda_1 \cdots \lambda_{n}|^2 & \text{ for } n {\geqslant}1, \\ 1 & \text{ for } n=0, \\ |\lambda_{n+1} \cdots \lambda_{0}|^{-2} & \text{ for } n {\leqslant}-1, \end{cases} \end{aligned}$$ is a two-sided Stieltjes moment sequence, 3. $\{\|{S_{\boldsymbol \lambda}}^n e_{-k}\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for infinitely many nonnegative integers $k$, 4. $\{\|{S_{\boldsymbol \lambda}}^n e_k\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for all $k \in {\mathbb Z}$. 
First note that ${{\mathscr{E}_V}}\subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. (i)$\Rightarrow$(iv) Employ Proposition \[necess\]. (iv)$\Rightarrow$(iii) Evident. (iii)$\Rightarrow$(iv) Apply Lemma \[charsub-1\]. (iv)$\Rightarrow$(ii) Since $t_{n-k} = t_{-k} \|{S_{\boldsymbol \lambda}}^n e_{-k}\|^2$ for all $n \in {\mathbb Z}$ and $k\in {\mathbb Z}_+$, we can apply the criterion . (ii)$\Rightarrow$(i) Let $\mu$ be a representing measure of $\{t_n\}_{n=-\infty}^\infty$. Define the two-sided sequence $\{\mu_n\}_{n=-\infty}^\infty$ of Borel probability measures on ${\mathbb R}_+$ by (note that $\mu(\{0\})=0$) $$\begin{aligned} \mu_n(\sigma) = \frac{1}{\|{S_{\boldsymbol \lambda}}^n e_0\|^2} \int_{\sigma} s^n \operatorname{d}\mu(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, n \in {\mathbb Z}. \end{aligned}$$ We easily verify that $$\begin{aligned} \mu_n(\sigma) &= |\lambda_{n+1}|^2 \int_\sigma \frac{1}{s} \operatorname{d}\mu_{n+1}(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)},\, n \in {\mathbb Z}, \end{aligned}$$ which means that the systems $\{\mu_n\}_{n=-\infty}^\infty$ and $\{\varepsilon_n\}_{n=-\infty}^\infty$ with $\varepsilon_n\equiv 0$ satisfy the assumptions of Theorem \[main\]. This completes the proof. It is worth mentioning that, in view of Theorems \[b-g-w\] and \[b-g-w-2\], the necessary condition for subnormality of Hilbert space operators that appeared in Proposition \[necess-gen\] (see also Proposition \[necess\]) turns out to be sufficient in the case of injective classical weighted shifts. To the best of our knowledge, the class of injective classical weighted shifts seems to be the only one for which this phenomenon occurs regardless of whether or not the operators in question have sufficiently many quasi-analytic vectors (see [@StSz0] for more details; see also Sections \[subs1\] and \[cfs\]). 
\[obv\]One branching vertex --------------------------- Our next aim is to discuss subnormality of weighted shifts with nonzero weights on leafless directed trees that have only one branching vertex. Such directed trees are one step more complicated than those involved in the definitions of classical weighted shifts (see Section \[cws\]). By Proposition \[hypcor\], there is no loss of generality in assuming that ${\mathrm{card}(V)} = \aleph_0$. Infinite, countable and leafless directed trees with one branching vertex can be modelled as follows (see Figure 1). Given $\eta,\kappa \in {\mathbb Z}_+ \sqcup \{\infty\}$ with $\eta {\geqslant}2$, we define the directed tree ${{\mathscr T}}_{\eta,\kappa} = (V_{\eta,\kappa}, E_{\eta,\kappa})$ by $$\begin{aligned} \begin{aligned} V_{\eta,\kappa} & = \big\{-k\colon k\in J_\kappa\big\} \sqcup \{0\} \sqcup \big\{(i,j)\colon i\in J_\eta,\, j\in {\mathbb N}\big\}, \\ E_{\eta,\kappa} & = E_\kappa \sqcup \big\{(0,(i,1))\colon i \in J_\eta\big\} \sqcup \big\{((i,j),(i,j+1))\colon i\in J_\eta,\, j\in {\mathbb N}\big\}, \\ E_\kappa & = \big\{(-k,-k+1) \colon k\in J_\kappa\big\}, \end{aligned} \end{aligned}$$ where $J_\iota := \{k \in {\mathbb N}\colon k{\leqslant}\iota\}$ for $\iota \in {\mathbb Z}_+ \sqcup \{\infty\}$. ![image](GrafA.eps){width="7cm"}\ If $\kappa < \infty$, then the directed tree ${{\mathscr T}}_{\eta,\kappa}$ has the root $-\kappa$. If $\kappa=\infty$, then the directed tree ${{\mathscr T}}_{\eta,\infty}$ is rootless. In all cases, $0$ is the branching vertex of ${{\mathscr T}}_{\eta,\kappa}$. We begin by proving criteria for subnormality of weighted shifts on ${{\mathscr T}}_{\eta,\kappa}$ with nonzero weights. Below, we adhere to the notation $\lambda_{i,j}$ instead of a more formal expression $\lambda_{(i,j)}$. 
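For finite $\eta$ and $\kappa$ the model tree ${{\mathscr T}}_{\eta,\kappa}$ is straightforward to enumerate. The sketch below (with each infinite branch truncated at a finite depth, a device of the illustration only) builds the vertex set and child map following the definition above and confirms that $0$ is the unique branching vertex, with exactly $\eta$ children.

```python
def tree_eta_kappa(eta, kappa, depth):
    """Vertices and child map of T_{eta,kappa}, with each of the eta
    branches (i, 1), (i, 2), ... truncated at j = depth."""
    V = [-k for k in range(1, kappa + 1)] + [0] + \
        [(i, j) for i in range(1, eta + 1) for j in range(1, depth + 1)]
    children = {v: [] for v in V}
    for k in range(1, kappa + 1):        # the tail -kappa -> ... -> -1 -> 0
        children[-k].append(-k + 1)
    for i in range(1, eta + 1):          # eta branches hanging off vertex 0
        children[0].append((i, 1))
        for j in range(1, depth):
            children[(i, j)].append((i, j + 1))
    return V, children

V, ch = tree_eta_kappa(eta=3, kappa=2, depth=4)
branching = [v for v in V if len(ch[v]) >= 2]
print(branching)  # [0]: the unique branching vertex
```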
\[omega2\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on the directed tree ${{\mathscr T}}_{\eta,\kappa}$ with nonzero weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V_{\eta,\kappa}^\circ}$ such that $e_0 \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. Suppose that there exists a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ such that $$\begin{aligned} \label{zgod0} \int_0^\infty s^n \operatorname{d}\mu_i(s) = \Big|\prod_{j=2}^{n+1}\lambda_{i,j}\Big|^2, \quad n \in {\mathbb N}, \; i \in J_\eta. \end{aligned}$$ Then ${S_{\boldsymbol \lambda}}$ is subnormal provided that one of the following four conditions holds[*:*]{} 1. $\kappa=0$ and $$\begin{aligned} \label{zgod} \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_0^\infty \frac 1 s\, \operatorname{d}\mu_i(s) {\leqslant}1, \end{aligned}$$ 2. $0 < \kappa < \infty$ and $$\begin{aligned} \label{zgod'} \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_0^\infty \frac 1 s\, \operatorname{d}\mu_i(s) &= 1, \\ \Big|\prod_{j=0}^{l-1} \lambda_{-j}\Big|^2 \sum_{i=1}^\eta|\lambda_{i,1}|^2 \int_0^\infty \frac 1 {s^{l+1}} \operatorname{d}\mu_i(s) & = 1, \quad l \in J_{\kappa-1}, \label{widly1} \\ \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2\sum_{i=1}^\eta|\lambda_{i,1}|^2 \int_0^\infty \frac 1 {s^{\kappa+1}} \operatorname{d}\mu_i(s) & {\leqslant}1, \label{widly1'} \end{aligned}$$ 3. $0 < \kappa < \infty$ and there exists a Borel probability measure $\nu$ on ${\mathbb R}_+$ such that $$\begin{aligned} \label{prob} \int_0^\infty s^n \operatorname{d}\nu(s) & = \Big|\prod_{j=\kappa-n}^{\kappa-1}\lambda_{-j}\Big|^2, \quad n \in J_\kappa, \\ \label{prob'} \int_\sigma s^\kappa \operatorname{d}\nu(s) & = \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2 \; \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac{1}{s} \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ 4. $\kappa=\infty$ and equalities and are satisfied. 
Note that the assumption $e_0\in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ implies that $\mathscr{E}_{V_{\eta,\kappa}} \subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$. \(i) Define the system of Borel probability measures $\{\mu_v\}_{v\in V_{\eta,0}}$ on ${\mathbb R}_+$ and the system $\{\varepsilon_v\}_{v\in V_{\eta,0}}$ of nonnegative real numbers by $$\begin{aligned} \mu_{0}(\sigma) & = \sum_{i=1}^{\eta} |\lambda_{i,1}|^2 \int_\sigma \frac 1 s \operatorname{d}\mu_i(s) + \varepsilon_0 \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \\ \varepsilon_0 & = 1 - \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_0^\infty \frac 1 s\, \operatorname{d}\mu_i(s), \end{aligned}$$ and $$\begin{aligned} \label{kap0} \mu_{i,n}(\sigma) & = \frac{1}{\|{S_{\boldsymbol \lambda}}^{n-1} e_{i,1}\|^2} \int_{\sigma} s^{n-1} \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, i \in J_\eta, \, n \in {\mathbb N}, \\ \varepsilon_{i,n} & = 0, \quad i \in J_\eta, \, n \in {\mathbb N}. \notag \end{aligned}$$ (We write $\mu_{i,j}$ and $\varepsilon_{i,j}$ instead of the more formal expressions $\mu_{(i,j)}$ and $\varepsilon_{(i,j)}$.) Clearly $\mu_{i,1}=\mu_i$ for all $i \in J_\eta$. Using and , we verify that the systems $\{\mu_v\}_{v\in V_{\eta,0}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,0}}$ are well-defined and satisfy the assumptions of Theorem \[main\]. Hence ${S_{\boldsymbol \lambda}}$ is subnormal. 
\(ii) Define the systems $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,\kappa}}$ by and $$\begin{aligned} \label{literki1} \mu_{0}(\sigma) & = \sum_{i = 1}^{\eta} |\lambda_{i,1}|^2 \int_\sigma \frac 1 s \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \\ \label{literki2} \mu_{-l} (\sigma) & = \Big|\prod_{j=0}^{l-1} \lambda_{-j}\Big|^2 \sum_{i=1}^{\eta} |\lambda_{i,1}|^2 \int_{\sigma} \frac 1 {s^{l+1}}\, \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, l\in J_{\kappa-1}, \\ \label{literki3} \mu_{-\kappa} (\sigma) & = \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2 \sum_{i=1}^{\eta} |\lambda_{i,1}|^2 \int_{\sigma} \frac 1 {s^{\kappa+1}}\, \operatorname{d}\mu_i(s) + \varepsilon_{-\kappa} \delta_0(\sigma), \hspace{0.8ex} \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \ \\ \label{literki4} \varepsilon_v & = \begin{cases} 0 & \text{ if } v\in V_{\eta,\kappa}^\circ, \\ 1 - \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2\sum_{i=1}^\eta|\lambda_{i,1}|^2 \int_0^\infty \frac 1 {s^{\kappa+1}} \operatorname{d}\mu_i(s) & \text{ if } v = - \kappa. \end{cases} \end{aligned}$$ Applying , , and , we check that the systems $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,\kappa}}$ are well-defined and satisfy the assumptions of Theorem \[main\]. Therefore ${S_{\boldsymbol \lambda}}$ is subnormal. \(iii) First note that $\|{S_{\boldsymbol \lambda}}^n e_{-\kappa}\|^2 = \Big|\prod_{j=\kappa-n}^{\kappa-1}\lambda_{-j}\Big|^2$ for $n \in J_\kappa$. 
Define the systems $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,\kappa}}$ by and $$\begin{aligned} \mu_{-l}(\sigma) & = \frac{1}{\|{S_{\boldsymbol \lambda}}^{-l + \kappa} e_{-\kappa}\|^2}\int_\sigma s^{-l + \kappa} \operatorname{d}\nu(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)},\, l\in J_\kappa \cup \{0\}, \\ \varepsilon_v & = \begin{cases} 0 & \text{ if } v\in V_{\eta,\kappa}^\circ, \\ \nu(\{0\}) & \text{ if } v = - \kappa. \end{cases} \end{aligned}$$ Clearly $\mu_{-\kappa}=\nu$, which together with , and implies that the systems $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,\kappa}}$ satisfy the assumptions of Theorem \[main\]. As a consequence, ${S_{\boldsymbol \lambda}}$ is subnormal. \(iv) Define the system $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ by , and . In view of (ii), the systems $\{\mu_v\}_{v\in V_{\eta,\kappa}}$ and $\{\varepsilon_v\}_{v\in V_{\eta,\kappa}}$ with $\varepsilon_v\equiv 0$ satisfy the assumptions of Theorem \[main\], and so ${S_{\boldsymbol \lambda}}$ is subnormal. It is worth mentioning that conditions (ii) and (iii) of Theorem \[omega2\] are equivalent without assuming that is satisfied. \[IBJ\] Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on the directed tree ${{\mathscr T}}_{\eta,\kappa}$ with nonzero weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V_{\eta,\kappa}^\circ}$ such that $e_0 \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and let $\{\mu_i\}_{i=1}^\eta$ be a sequence of Borel probability measures on ${\mathbb R}_+$. Then conditions [*(ii)*]{} and [*(iii)*]{} of Theorem [*\[omega2\]*]{} $($with the same $\kappa$$)$ are equivalent. (ii)$\Rightarrow$(iii) Let $\{\mu_{-l}\}_{l=0}^\kappa$ be the Borel probability measures on ${\mathbb R}_+$ defined by , and with $\varepsilon_{-\kappa}$ given by . Set $\nu=\mu_{-\kappa}$. 
It follows from that for every $n \in J_\kappa$, $$\begin{aligned} \label{intnu} \int_\sigma s^n \operatorname{d}\nu(s) = \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2 \; \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac{1}{s^{\kappa + 1 - n}} \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. \end{aligned}$$ This immediately implies . By , and , we have $$\begin{aligned} \int_\sigma s^n \operatorname{d}\nu(s) = \begin{cases} \cfrac{|\prod_{j=0}^{\kappa-1} \lambda_{-j}|^2}{|\prod_{j=0}^{\kappa-n-1} \lambda_{-j}|^2} \, \mu_{-(\kappa-n)}(\sigma) & \text{ if } n \in J_{\kappa-1}, \\[3ex] |\prod_{j=0}^{\kappa-1} \lambda_{-j}|^2 \, \mu_{0}(\sigma) & \text{ if } n=\kappa, \end{cases} \end{aligned}$$ for all $\sigma \in {{\mathfrak B}({\mathbb R}_+)}$. Substituting $\sigma={\mathbb R}_+$ and using the fact that $\{\mu_{-l}\}_{l=0}^{\kappa-1}$ are probability measures, we obtain . (iii)$\Rightarrow$(ii) Given $n \in J_\kappa$, we define the positive Borel measure $\rho_n$ on ${\mathbb R}_+$ by $\rho_n(\sigma) = \int_\sigma s^n \operatorname{d}\nu(s)$ for $\sigma \in {{\mathfrak B}({\mathbb R}_+)}$. By , equality holds for $n=\kappa$. If this equality holds for a fixed $n \in J_\kappa \setminus \{1\}$, then $\rho_n(\{0\})=0$ and consequently $$\begin{aligned} \int_\sigma s^{n-1} \operatorname{d}\nu(s) = \int_\sigma \frac{1}{s} \operatorname{d}\rho_n(s) \overset{\eqref{intnu}}= \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2 \; \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac{1}{s^{\kappa + 1 - (n-1)}} \operatorname{d}\mu_i(s) \end{aligned}$$ for all $\sigma \in {{\mathfrak B}({\mathbb R}_+)}$. Hence, by reverse induction on $n$, holds for all $n\in J_\kappa$. Substituting $\sigma={\mathbb R}_+$ into and using , we obtain and . 
It follows from , applied to $n=1$, that for every $\sigma \in {{\mathfrak B}({\mathbb R}_+)}$, $$\begin{gathered} \label{nusig} \nu(\sigma) = \nu(\sigma \setminus \{0\}) + \nu(\{0\}) \delta_0(\sigma) = \int_\sigma \frac{1}{s} \operatorname{d}\rho_1(s) + \nu(\{0\}) \delta_0(\sigma) \\ \overset{\eqref{intnu}}= \Big|\prod_{j=0}^{\kappa-1} \lambda_{-j}\Big|^2 \; \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac{1}{s^{\kappa + 1}} \operatorname{d}\mu_i(s) + \nu(\{0\}) \delta_0(\sigma). \end{gathered}$$ Substituting $\sigma={\mathbb R}_+$ into and using the fact that $\nu({\mathbb R}_+)=1$, we obtain . This completes the proof. Now we show that under some additional requirements imposed on the weighted shift in question the sufficient conditions appearing in Theorem \[omega2\] become necessary (see also Remark \[deterrem\] below). \[deter\] Let ${S_{\boldsymbol \lambda}}$ be a subnormal weighted shift on the directed tree ${{\mathscr T}}_{\eta,\kappa}$ with nonzero weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V_{\eta,\kappa}^\circ}$. If $e_0 \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and $$\begin{aligned} \label{detn+1} \text{$\Big\{\sum_{i=1}^\eta \Big|\prod_{j=1}^{n+1} \lambda_{i,j}\Big|^2\Big\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence,} \end{aligned}$$ then the following four assertions hold[*:*]{} 1. if $\kappa = 0$, then there exists a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ that satisfy and , 2. if $0 < \kappa < \infty$, then there exists a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ that satisfy , , and , 3. if $0 < \kappa < \infty$, then there exist a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ and a Borel probability measure $\nu$ on ${\mathbb R}_+$ that satisfy , and , 4. 
if $\kappa=\infty$, then there exists a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ that satisfy , and . Moreover, if $e_0 \in {\mathscr Q({S_{\boldsymbol \lambda}})}$, i.e., $\sum_{n=1}^\infty \big(\sum_{i=1}^\eta \big|\prod_{j=1}^{n} \lambda_{i,j}\big|^2\big)^{-\nicefrac{1}{2n}} = \infty$, then is satisfied. It is clear that $e_0\in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ implies that $\mathscr{E}_{V_{\eta,\kappa}} \subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ and $$\begin{aligned} \label{detn+2} \|{S_{\boldsymbol \lambda}}^{n+1} e_0\|^2 = \sum_{i=1}^\eta \big|\prod_{j=1}^{n+1} \lambda_{i,j}\big|^2, \quad n \in {\mathbb Z}_+. \end{aligned}$$ By Proposition \[necess\], for every $u \in V_{\eta,\kappa}$ the sequence $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence. For each $i \in J_\eta$, we choose a representing measure $\mu_i$ of $\{\|{S_{\boldsymbol \lambda}}^{n} e_{i,1}\|^2\}_{n=0}^\infty$. It is easily seen that holds. Since, by and , the Stieltjes moment sequence $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_{0}\|^2\}_{n=0}^\infty$ is determinate, we infer from Lemma \[charsub2\], applied to $u=0$, that holds and $\{\|{S_{\boldsymbol \lambda}}^{n} e_{0}\|^2\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence with the representing measure $\mu_0$ given by $$\begin{aligned} \label{muu+2} \mu_0(\sigma) = \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac 1 s \operatorname{d}\mu_i(s) + \varepsilon_0 \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \end{aligned}$$ where $\varepsilon_0$ is a nonnegative real number. In view of the above, assertion (i) is proved. Suppose $0 < \kappa {\leqslant}\infty$. 
Since $\{\|{S_{\boldsymbol \lambda}}^{n} e_{0}\|^2\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence, we deduce from Lemma \[charsub-1\], applied to $u_0=-1$, that $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_{-1}\|^2\}_{n=0}^\infty$ and $\{\|{S_{\boldsymbol \lambda}}^{n} e_{-1}\|^2\}_{n=0}^\infty$ are determinate Stieltjes moment sequences  and $$\begin{aligned} \label{jabko} & \int_0^\infty \frac 1 s \operatorname{d}\mu_0(s) {\leqslant}\frac{1}{|\lambda_{0}|^2}, \\ & \mu_{-1}(\sigma) = |\lambda_{0}|^2 \int_\sigma \frac 1 s \operatorname{d}\mu_0(s) + \varepsilon_{-1} \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \label{jabko2} \end{aligned}$$ where $\mu_{-1}$ is the representing measure of $\{\|{S_{\boldsymbol \lambda}}^{n} e_{-1}\|^2\}_{n=0}^\infty$ and $\varepsilon_{-1}$ is a nonnegative real number. Inequality combined with equality implies that $\varepsilon_0=0$ and therefore that holds for $\kappa=1$. Substituting $\sigma={\mathbb R}_+$ into , we obtain . This completes the proof of assertion (ii) for $\kappa=1$. Note also that equalities and , combined with $\varepsilon_0=0$, yield $$\begin{aligned} \mu_{-1}(\sigma) = |\lambda_{0}|^2 \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac 1 {s^2} \operatorname{d}\mu_i(s) + \varepsilon_{-1} \delta_0(\sigma), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}. 
\end{aligned}$$ If $\kappa > 1$, then arguing by induction, we conclude that for every $k\in J_{\kappa}$ the Stieltjes moment sequences $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_{-k}\|^2\}_{n=0}^\infty$ and $\{\|{S_{\boldsymbol \lambda}}^{n} e_{-k}\|^2\}_{n=0}^\infty$ are determinate and $$\begin{aligned} \label{mu-l} \mu_{-l}(\sigma) = \Big|\prod_{j=0}^{l-1}\lambda_{-j}\Big|^2 \sum_{i=1}^\eta |\lambda_{i,1}|^2 \int_\sigma \frac 1 {s^{l+1}} \operatorname{d}\mu_i(s), \quad \sigma \in {{\mathfrak B}({\mathbb R}_+)}, \, l\in J_{\kappa - 1}, \end{aligned}$$ where $\mu_{-l}$ is the representing measure of $\{\|{S_{\boldsymbol \lambda}}^{n} e_{-l}\|^2\}_{n=0}^\infty$. Substituting $\sigma={\mathbb R}_+$ into , we obtain . This completes the proof of assertion (iv). Finally, if $1 < \kappa < \infty$, then again by Lemma \[charsub-1\], now applied to $u=-\kappa$, we have $\int_0^\infty \frac 1 s \operatorname{d}\mu_{-\kappa+1}(s) {\leqslant}\frac{1}{|\lambda_{-\kappa+1}|^2}$. This inequality together with yields , which completes the proof of assertion (ii). Assertion (iii) can be deduced from assertion (ii) via Lemma \[IBJ\]. Arguing as in the proof of Theorem \[main-0\], we see that if $e_0 \in {\mathscr Q({S_{\boldsymbol \lambda}})}$, then is satisfied. \[deterrem\] A careful look at the proof reveals that Theorem \[deter\] remains valid if instead of assuming that ${S_{\boldsymbol \lambda}}$ is subnormal, we assume that $\{\|{S_{\boldsymbol \lambda}}^n e_u\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $u \in \big\{-k\colon k\in J_\kappa\big\} \sqcup \{0\} \sqcup {\operatorname{{\mathsf{Chi}}}(0)}$. 
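The quasi-analyticity condition $\sum_{n=1}^\infty \big(\sum_{i=1}^\eta \big|\prod_{j=1}^{n} \lambda_{i,j}\big|^2\big)^{-\nicefrac{1}{2n}} = \infty$ appearing above is a Carleman-type criterion for determinacy. As a purely numerical illustration of ours (not part of the argument): for the Stieltjes moment sequence $t_n = n!$, the terms $t_n^{-1/(2n)}$ behave like $\sqrt{e/n}$, so the series diverges and determinacy follows. A short Python check, computing via logarithms to avoid overflow:

```python
import math

def carleman_term(n):
    """t_n^(-1/(2n)) for t_n = n!, via lgamma to avoid float overflow."""
    return math.exp(-math.lgamma(n + 1) / (2 * n))

# Asymptotically the terms approach sqrt(e/n), a divergent p-series (p = 1/2)
ratios = [carleman_term(n) / math.sqrt(math.e / n) for n in (10, 100, 1000)]
assert all(abs(r - 1.0) < 0.2 for r in ratios)

# Partial sums keep growing, roughly like 2*sqrt(e*n)
partial = sum(carleman_term(n) for n in range(1, 2001))
assert partial > 50
```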
Let ${S_{\boldsymbol \lambda}}$ be a weighted shift on the directed tree ${{\mathscr T}}_{\eta,\kappa}$ with nonzero weights ${{\boldsymbol\lambda}}= \{\lambda_v\}_{v \in V_{\eta,\kappa}^\circ}$ such that $e_0 \in {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$ $($or, equivalently, $\mathscr{E}_{V_{\eta,\kappa}} \subseteq {{\EuScript D}^\infty({S_{\boldsymbol \lambda}})}$$)$. Suppose that $\{\|{S_{\boldsymbol \lambda}}^n e_v\|^2\}_{n=0}^\infty$ is a Stieltjes moment sequence for every $v \in \big\{-k\colon k\in J_\kappa\big\} \sqcup \{0\} \sqcup {\operatorname{{\mathsf{Chi}}}(0)}$, and that $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_0\|^2\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence. Then the following assertions hold[*:*]{} 1. ${S_{\boldsymbol \lambda}}$ is subnormal, 2. $\{\|{S_{\boldsymbol \lambda}}^{n+1} e_{-j}\|^2\}_{n=0}^\infty$ is a determinate Stieltjes moment sequence for every integer $j$ such that $0 {\leqslant}j {\leqslant}\kappa$, 3. ${S_{\boldsymbol \lambda}}$ satisfies the consistency condition at the vertex $u=-j$ for every integer $j$ such that $0 {\leqslant}j {\leqslant}\kappa$. \(i) By using Remark \[deterrem\], we find a sequence $\{\mu_i\}_{i=1}^\eta$ of Borel probability measures on ${\mathbb R}_+$ satisfying and exactly one of the conditions (i), (ii) and (iv) of Theorem \[omega2\] (the choice depends on $\kappa$), and then we apply Theorem \[omega2\]. \(ii) See the proof of Theorem \[deter\]. \(iii) Apply (ii) and Lemma \[charsub2\](ii). Acknowledgements {#acknowledgements .unnumbered} ---------------- A substantial part of this paper was written while the second and the fourth authors visited Kyungpook National University during the autumn of 2010 and the spring of 2011. They wish to thank the faculty and the administration of this unit for their warm hospitality. [99]{} N. I. Akhiezer, I. M. Glazman, [*Theory of linear operators in Hilbert space*]{}, Vol. II, Dover Publications, Inc., New York, 1993. E. Albrecht, F.-H. 
Vasilescu, Unbounded extensions and operator moment problems, [*J. Funct. Anal.*]{} [**260**]{} (2011), 2497-2517. A. Athavale, S. Chavan, Sectorial forms and unbounded subnormals, [*Math. Proc. Cambridge Philos. Soc.*]{} [**143**]{} (2007), 685-702. C. Berg, J. P. R. Christensen, P. Ressel, [*Harmonic Analysis on Semigroups*]{}, Springer, Berlin, 1984. M. Sh. Birman, M. Z. Solomjak, [*Spectral theory of selfadjoint operators in Hilbert space*]{}, D. Reidel Publishing Co., Dordrecht, 1987. E. Bishop, Spectral theory for operators on a Banach space, [*Trans. Amer. Math. Soc.*]{} [**86**]{} (1957), 414-445. S. Chavan, A. Athavale, On a Friedrichs extension related to unbounded subnormal operators, [*Glasg. Math. J.*]{} [**48**]{} (2006), 19-28. P. R. Chernoff, A semibounded closed symmetric operator whose square has trivial domain, [*Proc. Amer. Math. Soc.*]{} [**89**]{} (1983), 289-290. D. Cichoń, J. Stochel, F. H. Szafraniec, Extending positive definiteness, [*Trans. Amer. Math. Soc.*]{} [**363**]{} (2011), 545-577. E. A. Coddington, Formally normal operators having no normal extension, [*Canad. J. Math.*]{} [**17**]{} (1965), 1030-1040. J. B. Conway, [*The theory of subnormal operators*]{}, Mathematical Surveys and Monographs, Providence, Rhode Island, 1991. J. B. Conway, N. S. Feldman, The state of subnormal operators, [*A glimpse at Hilbert space operators*]{}, 177-194, [*Oper. Theory Adv. Appl.*]{}, [**207**]{}, Birkhäuser Verlag, Basel, 2010. J. B. Conway, K. H. Jin, S. Kouchekian, On unbounded Bergman operators, [*J. Math. Anal. Appl.*]{} [**279**]{} (2003), 418-429. R. E. Curto, Quadratically hyponormal weighted shifts, [*Integr. Equ. Oper. Theory*]{} [**13**]{} (1990), 49-66. O. Demanze, A subnormality criterion for unbounded tuples of operators, [*Acta Sci. Math. $($Szeged$)$*]{} [**69**]{} (2003), 773-787. O. Demanze, On subnormality and formal subnormality for tuples of unbounded operators, [*Integr. Equ. Oper. Theory*]{} [**46**]{} (2003), 267-284. O. 
Demanze, Sous-normalité jointe non bornée et applications, [*Studia Math.*]{} [**171**]{} (2005), 227-237. J. Eschmeier, F.-H. Vasilescu, On jointly essentially self-adjoint tuples of operators, [*Acta Sci. Math. $($Szeged$)$*]{} [**67**]{} (2001), 373-386. C. Foiaş, Décompositions en opérateurs et vecteurs propres. I., Études de ces dècompositions et leurs rapports avec les prolongements des opérateurs, [*Rev. Roumaine Math. Pures Appl.*]{} [**7**]{} (1962), 241-282. R. Gellar, L. J. Wallen, Subnormal weighted shifts and the Halmos-Bram criterion, [*Proc. Japan Acad.*]{} [**46**]{} (1970), 375-378. P. Halmos, Normal dilations and extensions of operators, [*Summa Bras. Math.*]{} [**2**]{} (1950), 124-134. P. R. Halmos, Ten problems in Hilbert space, [*Bull. Amer. Math. Soc.*]{} [**76**]{} (1970), 887-933. Z. J. Jab[ł]{}oński, I. B. Jung, J. Stochel, Weighted shifts on directed trees, [*Mem. Amer. Math. Soc.*]{}, in press. Z. J. Jab[ł]{}oński, I. B. Jung, J. Stochel, Normal extensions escape from the class of weighted shifts on directed trees, preprint 2011. Z. J. Jab[ł]{}oński, I. B. Jung, J. Stochel, A hyponormal weighted shift on a directed tree whose square has trivial domain, preprint 2011. Z. J. Jab[ł]{}oński, I. B. Jung, J. Stochel, A non-hyponormal operator generating Stieltjes moment sequences, preprint 2011. J. Janas, On unbounded hyponormal operators, [*Ark. Mat.*]{} [**27**]{} (1989), 273-281. J. Janas, On unbounded hyponormal operators. II, [*Integr. Equ. Oper. Theory*]{} [**15**]{} (1992), 470-478. J. Janas, On unbounded hyponormal operators. III, [*Studia Math.*]{} [**112**]{} (1994), 75-82. K. H. Jin, On unbounded subnormal operators, [*Bull. Korean Math. Soc.*]{} [**30**]{} (1993), 65-70. W. B. Jones, W. J. Thron, H. Waadeland, A strong Stieltjes moment problem, [*Trans. Amer. Math. Soc.*]{} [**261**]{} (1980), 503-528. P. E. T. Jorgensen, Commutative algebras of unbounded operators, [*J. Math. Anal. Appl.*]{} [**123**]{} (1987), 508-527. I. B. 
Jung, J. Stochel, Subnormal operators whose adjoints have rich point spectrum, [*J. Funct. Anal.*]{} [**255**]{} (2008), 1797-1816. S. Kouchekian, The density problem for unbounded Bergman operators, [*Integr. Equ. Oper. Theory*]{} [**45**]{} (2003), 319-342. S. Kouchekian, J. E. Thomson, The density problem for self-commutators of unbounded Bergman operators, [*Integr. Equ. Oper. Theory*]{} [**52**]{} (2005), 135-147. S. Kouchekian, J. E. Thomson, On self-commutators of Toeplitz operators with rational symbols, [*Studia Math.*]{} [**179**]{} (2007), 41-47. A. Lambert, Subnormality and weighted shifts, [*J. London Math. Soc.*]{} [**14**]{} (1976), 476-480. G. Lassner, Topological algebras of operators, [*Rep. Mathematical Phys.*]{} [**3**]{} (1972), 279-293. G. McDonald, C. Sundberg, On the spectra of unbounded subnormal operators, [*Canad. J. Math.*]{} [**38**]{} (1986), 1135-1148. W. Mlak, The Schrödinger type couples related to weighted shifts, [*Univ. Iagel. Acta Math.*]{} [**27**]{} (1988), 297-301. M. Naimark, On the square of a closed symmetric operator, [*Dokl. Akad. Nauk SSSR*]{} [**26**]{} (1940), 866-870; ibid. [**28**]{} (1940), 207-208. N. K. Nikol’skiĭ, [*Treatise on the shift operator*]{}, Grundlehren der Mathematischen Wissenschaften, 273, Springer-Verlag, Berlin, 1986. Y. Okazaki, Boundedness of closed linear operator $T$ satisfying $R(T)\subset D(T)$, [*Proc. Japan Acad.*]{} [**62**]{} (1986), no. 8, 294-296. S. Ôta, Closed linear operators with domain containing their range, [*Proc. Edinburgh Math. Soc.*]{} [**27**]{} (1984), 229-233. S. Ôta, K. Schmüdgen, On some classes of unbounded operators, [*Integr. Equ. Oper. Theory*]{} [**12**]{} (1989), 211-226. W. Rudin, [*Real and Complex Analysis*]{}, McGraw-Hill, New York 1987. J. Rusinek, $p$-analytic and $p$-quasi-analytic vectors, [*Studia Math.*]{} [**127**]{} (1998), 233-250. J. Rusinek, Non-linearity of the set of $p$-quasi-analytic vectors for some essentially self-adjoint operators, [*Bull. 
Polish Acad. Sci. Math.*]{} [**48**]{} (2000), 287-292. K. Schmüdgen, A formally normal operator having no normal extension, [*Proc. Amer. Math. Soc.*]{} [**95**]{} (1985), 503-504. A. L. Shields, Weighted shift operators and analytic function theory, [*Topics in operator theory*]{}, pp. 49-128. Math. Surveys, No. 13, Amer. Math. Soc., Providence, R.I., 1974. J. A. Shohat, J. D. Tamarkin, [*The problem of moments*]{}, Math. Surveys [**1**]{}, Amer. Math. Soc., Providence, Rhode Island, 1943. B. Simon, The classical moment problem as a self-adjoint finite difference operator, [*Adv. Math.*]{} [**137**]{} (1998), 82-203. S. P. Slinker, On commuting self-adjoint extensions of unbounded operators, [*Indiana Univ. Math. J.*]{} [**27**]{} (1978), 629-636. J. Stochel, Moment functions on real algebraic sets, [*Ark. Mat.*]{} [**30**]{} (1992), 133-148. J. Stochel, An asymmetric Putnam-Fuglede theorem for unbounded operators, [*Proc. Amer. Math. Soc.*]{} [**129**]{} (2001), 2261-2271. J. B. Stochel, Subnormality and generalized commutation relations, [*Glasgow Math. J.*]{} [**30**]{} (1988), 259-262. J. Stochel, F. H. Szafraniec, On normal extensions of unbounded operators. I, [*J. Operator Theory*]{} [**14**]{} (1985), 31-55. J. Stochel, F. H. Szafraniec, On normal extensions of unbounded operators. II, [*Acta Sci. Math. $($Szeged$)$*]{} [**53**]{} (1989), 153-177. J. Stochel, F. H. Szafraniec, On normal extensions of unbounded operators. III. Spectral properties, [*Publ. RIMS, Kyoto Univ.*]{} [**25**]{} (1989), 105-139. J. Stochel, F. H. Szafraniec, A few assorted questions about unbounded subnormal operators, [*Univ. Iagel. Acta Math.*]{} [**28**]{} (1991), 163-170. J. Stochel, F. H. Szafraniec, The complex moment problem and subnormality: a polar decomposition approach, [*J. Funct. Anal.*]{} [**159**]{} (1998), 432-491. M. H. Stone, [*Linear transformations in Hilbert space and their applications to analysis*]{}, Amer. Math. Soc. Colloq. Publ. 15, Amer. Math. 
Soc., Providence, R.I. [**1932**]{}. F. H. Szafraniec, A RKHS of entire functions and its multiplication operator. An explicit example, Linear operators in function spaces (Timişoara, 1988), 309-312, [*Oper. Theory Adv. Appl.*]{}, [**43**]{}, Birkhäuser, Basel, 1990. F. H. Szafraniec, Sesquilinear selection of elementary spectral measures and subnormality, [*Elementary operators and applications*]{} (Blaubeuren, 1991), 243-248, [*World Sci. Publ., River Edge, NJ*]{}, 1992. F. H. Szafraniec, On extending backwards positive definite sequences, [*Numer. Algorithms*]{} [**3**]{} (1992), 419-426. F. H. Szafraniec, Subnormality in the quantum harmonic oscillator, [*Comm. Math. Phys.*]{} [**210**]{} (2000), 323-334. F. H. Szafraniec, Charlier polynomials and translational invariance in the quantum harmonic oscillator, [*Math. Nachr.*]{} [**241**]{} (2002), 163-169. F. H. Szafraniec, On normal extensions of unbounded operators. IV. A matrix construction, [*Operator theory and indefinite inner product spaces*]{}, 337-350, [*Oper. Theory Adv. Appl.*]{}, [**163**]{}, Birkhäuser, Basel, 2006. F.-H. Vasilescu, Extensions of unbounded symmetric multioperators, [*Operator theory and Banach algebras*]{} (Rabat, 1999), 151-161, Theta, Bucharest, 2003. F.-H. Vasilescu, Unbounded normal algebras and spaces of fractions, [*System theory, the Schur algorithm and multidimensional analysis*]{}, 295-322, [*Oper. Theory Adv. Appl.*]{} [**176**]{}, Birkhäuser, Basel, 2007. F.-H. Vasilescu, Subnormality and moment problems, [*Extracta Math.*]{} [**24**]{} (2009), 167-186. J. von Neumann, Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren, [*Math. Ann.*]{} [**102**]{} (1929), 49-131. J. Weidmann, [*Linear operators in Hilbert spaces*]{}, Springer-Verlag, Berlin, Heidelberg, New York, [**1980**]{}. F. M. Wright, On the backward extension of positive definite Hamburger moment sequences, [*Proc. Amer. Math. Soc.*]{} [**7**]{} (1956), 413-422. 
[^1]: Research of the first, second and fourth authors was supported by the MNiSzW (Ministry of Science and Higher Education) grant NN201 546438 (2010-2013). The third author was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (2009-0093125). [^2]: \[foot\]We adhere to the convention that $\frac 1 0 := \infty$. Hence, $\int_0^\infty \frac 1 s \operatorname{d}\mu(s) < \infty$ implies $\mu(\{0\})=0$. [^3]: We adhere to the standard convention that $0 \cdot \infty = 0$; see also footnote \[foot\]. [^4]: see [^5]: In fact, one can prove that a Stieltjes moment sequence $\{t_n\}_{n=0}^\infty$ for which $\sum_{n=1}^\infty t_n^{-\nicefrac{1}{2n}} = \infty$ is determinate as a Hamburger moment sequence, which means that there exists only one positive Borel measure on ${\mathbb R}$ which represents the sequence $\{t_n\}_{n=0}^\infty$ (cf.[@sim Corollary 4.5]). [^6]: In general, the class of analytic vectors of an operator $S$ is essentially smaller than the class of quasi-analytic vectors of $S$ even for essentially selfadjoint operators $S$ (cf.[@ru1]).
--- abstract: 'We prove an equivariant version of the local splitting theorem for tame Poisson structures and Poisson actions of compact Lie groups. As a consequence, we obtain an equivariant linearization result for Poisson structures whose transverse structure has semisimple linear part of compact type.' address: - 'Laboratoire Emile Picard, UMR 5580 CNRS, Université Toulouse III' - 'Laboratoire Emile Picard, UMR 5580 CNRS, Université Toulouse III' author: - Eva Miranda - Nguyen Tien Zung date: 'Second version, March 16, 2006' title: A note on equivariant normal forms of Poisson structures --- [^1] Introduction ============ The main purpose of this note is to prove an equivariant version of Weinstein’s splitting theorem for Poisson structures [@weinstein]. This theorem asserts that in the neighborhood of any point $p$ in a Poisson manifold $(P^n,\Pi)$ there is a local coordinate system $(x_1,y_1,\dots, x_{k},y_{k}, z_1,\dots, z_{n-2k})$ in which the Poisson structure $\Pi$ can be written as $$\Pi = \sum_{i=1}^k \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j} ,$$ where $2k$ is the rank of $\Pi$ at $p$, and $f_{ij}$ are functions which depend only on the variables $(z_1,\hdots,z_{n-2k})$ and which vanish at the origin. Geometrically speaking, locally the Poisson manifold $(P^n,\Pi)$ can be split into the direct product of a $2k$-dimensional symplectic manifold (with the standard nondegenerate Poisson structure $\Pi_1 = \sum_{i=1}^k \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i}$) and an $(n-2k)$-dimensional Poisson manifold whose Poisson structure $\Pi_2 = \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ vanishes at the origin. 
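As a sanity check on the split normal form (an illustration of ours, not part of the original text), one can verify symbolically that a bivector of this shape satisfies the Jacobi identity precisely because the coefficients $f_{ij}$ depend only on the $z$ variables. A minimal sketch with sympy in dimension 4 ($k=1$, a single transverse coefficient $f$); the helper names are ours:

```python
import sympy as sp

x, y, z1, z2 = sp.symbols('x y z1 z2')
coords = [x, y, z1, z2]

def bracket(F, G, Pi):
    """Poisson bracket {F,G} = sum_{i,j} Pi[i][j] dF/dx_i dG/dx_j."""
    return sp.expand(sum(Pi[i][j] * sp.diff(F, coords[i]) * sp.diff(G, coords[j])
                         for i in range(4) for j in range(4)))

def jacobiator(F, G, H, Pi):
    return sp.simplify(bracket(F, bracket(G, H, Pi), Pi)
                       + bracket(G, bracket(H, F, Pi), Pi)
                       + bracket(H, bracket(F, G, Pi), Pi))

# Split form: Pi = d/dx ^ d/dy + f(z1,z2) d/dz1 ^ d/dz2
f = sp.Function('f')(z1, z2)
Pi_split = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, f], [0, 0, -f, 0]]
assert all(jacobiator(a, b, c, Pi_split) == 0
           for a in coords for b in coords for c in coords)

# If the transverse coefficient were allowed to depend on x,
# the Jacobi identity would fail by the term -dg/dx:
g = sp.Function('g')(x, z1, z2)
Pi_bad = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, g], [0, 0, -g, 0]]
assert sp.simplify(jacobiator(y, z1, z2, Pi_bad) + sp.diff(g, x)) == 0
```

Since the Jacobiator is a derivation in each argument, checking it on coordinate functions suffices for this polynomial example.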
We want to show that if there is a (local) action of a compact Lie group $G$ on $P^n$ which fixes the point $p$ and which preserves $\Pi$, then this splitting can be made equivariantly. In the special case when $\Pi$ is nondegenerate at $p$ (i.e., $2k = n$), one recovers from Weinstein’s theorem the classical Darboux theorem about the local existence of canonical (Darboux) coordinates for symplectic manifolds. We know two methods for proving Darboux theorem: 1) the classical coordinate-by-coordinate construction method; and 2) the path method due to Moser [@moser]. Weinstein’s proof of the splitting theorem [@weinstein] is also based on the first method (coordinate by coordinate construction). However, this classical method does not seem to work in the equivariant situation, while the path method can be used to prove the equivariant Darboux theorem [@wei1]. In the same spirit, we will try to use the path method to prove an equivariant version of the splitting theorem for Poisson structures. In doing so, we encounter a technical condition, which we call the *tameness condition*: a smooth Poisson structure $\Pi$ on a manifold $P^n$ is called *tame* if for any two smooth Poisson vector fields $X, Y$ on $P^n$ (which may depend on some parameters) which are tangent to the symplectic leaves the function $\Pi^{-1}(X,Y)$ is smooth (and depends smoothly on the parameters). We will devote Section 2 of this note to the tameness condition, in order to convince the reader that it is an interesting condition, and many “reasonable” Poisson structures satisfy it. For example, if the linear part of the transverse Poisson structure at a point $p$ has semisimple type, then the Poisson structure is tame near $p$. Now we can formulate the main result of this note: \[thm:main\] Let $(P^n,\Pi)$ be a smooth Poisson manifold, $p$ a point of $P$, $2k = {{\rm rank\ }}\Pi (p)$, and $G$ a compact Lie group which acts on $P$ in such a way that the action preserves $\Pi$ and fixes the point $p$. 
Assume that the Poisson structure $\Pi$ is tame at $p$. Then there is a smooth canonical local coordinate system $(x_1,y_1,\dots, x_{k},y_{k}, z_1,\dots, z_{n-2k})$ near $p$, in which the Poisson structure $\Pi$ can be written as $$\Pi = \sum_{i=1}^k \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j},$$ and in which the action of $G$ is linear and preserves the subspaces $\{x_1 = y_1 = \hdots = x_k = y_k = 0\}$ and $\{z_1 = \hdots = z_{n-2k} = 0\}$. i\) We do not know whether the tameness condition is really necessary, or if it is because our method is not good enough. We notice that this condition is also implicitly present in the papers of Ginzburg and Weinstein [@ginzburgweinstein] and of Alekseev and Meinrenken [@alek], [@alek2], which involve the path method in Poisson geometry.\ ii) The above theorem also holds in the analytic (i.e., real analytic or holomorphic) setting, with basically the same proof. The analytic version of this equivariant theorem is used by Philippe Monnier and the second author in their study of normal forms of vector fields on Poisson manifolds [@zungmonnier]. We hope that our result can be useful in the study of equivariant Hamiltonian systems as well.\ iii) If the action of $G$ on $(P^n,\Pi)$ is Hamiltonian (with an equivariant momentum map), then there is another approach to this equivariant splitting problem, based on the Nash-Moser method, which does not need the tameness condition. We will consider this issue in a separate work. The above theorem will be proved in Section 3 of this note. In Section 4 we will combine this theorem with linearization results of Conn [@conn] and Ginzburg [@ginzburg] to obtain an equivariant linearization theorem (see Theorem \[thm:EquivLin\]). 
[**Acknowledgements.**]{} We are indebted to Michèle Vergne for drawing our attention to the paper of Dixmier [@dixmier] and for pointing out its relation to the division property stated in section 2. We would like to thank Viktor Ginzburg for his useful comments and suggestions on the problem. We would also like to thank David Martínez-Torres for carefully reading a previous version of this preprint and pointing out some misprints. Tame Poisson structures ======================= We will denote by $\Pi^{-1}$ the covariant tensor dual to the Poisson tensor $\Pi$ of a Poisson manifold $(P^n,\Pi)$, i.e. the symplectic form on symplectic leaves. If $X,Y$ are vector fields on $P^n$ which are tangent to the symplectic leaves, then $\Pi^{-1}(X,Y)$ is well-defined. In particular, if $X = X_h$ is the Hamiltonian vector field of a function $h$ on $(P^n,\Pi)$ then $\Pi^{-1}(X,Y) = - Y(h)$. Recall that a Poisson vector field is a vector field which preserves the Poisson structure. Let $(P^n,\Pi)$ be a smooth Poisson manifold and $p$ a point in $P$. We will say that $\Pi$ is *tame* at $p$ if for any pair $X_t,Y_t$ of germs of smooth Poisson vector fields near $p$ which are tangent to the symplectic foliation of $(P^n,\Pi)$ and which may depend smoothly on a (multi-dimensional) parameter $t$, the function $\Pi^{-1}(X_t,Y_t)$ is smooth and depends smoothly on $t$. The tameness condition is a kind of homological condition. In particular, if the parametrized germified first Poisson cohomology group, which we will denote by $H^1_\Pi(P^n,p)$, vanishes, then $\Pi$ is tame at $p$. Indeed, $H^1_\Pi(P^n,p) = 0$ means that if $X_t$ is a germ of Poisson vector field near $p$ which depends smoothly on a parameter $t$, then we can write $X_t = X_{h_t}$ where $h_t$ is a germ of smooth function near $p$ which depends smoothly on the parameter $t$. Hence $\Pi^{-1}(X_t,Y_t) = - Y_t(h_t)$ is smooth. 
In particular, it is known that if ${\mathfrak g}$ is a compact semi-simple Lie algebra, and $({\mathfrak g}^*,\Pi_{lin})$ is the dual of ${\mathfrak g}$ equipped with the corresponding linear Poisson structure then $H^1_{\Pi_{lin}}({\mathfrak g}^*,0) = 0$ (see [@conn]). Hence our first example of tame Poisson structures: Any smooth Poisson structure $\Pi$, which vanishes at a point $p$ and whose linear part at $p$ corresponds to a compact semisimple Lie algebra ${\mathfrak g}$, is tame at $p$. Indeed, in this case, according to Conn’s smooth linearization theorem [@conn], $(P^n, \Pi)$ is locally isomorphic near $p$ to $({\mathfrak g}^*,\Pi_{lin})$, and therefore $H^1_\Pi(P^n,p) = 0$. If $X$ is not Hamiltonian (and maybe not even Poisson) but can be written as $X = \sum_{i=1}^m f_i X_{g_i}$ where $f_i, g_i$ are smooth functions, then $\Pi^{-1}(X,Y) = - \sum_{i=1}^m f_i Y(g_i)$ is still smooth. This leads us to: We say that a smooth (resp real analytic) Poisson structure $\Pi$ satisfies the *smooth division property* (resp *analytic division property*) at a point $p$ if the Hamiltonian vector fields generate the space of vector fields tangent to the associated symplectic foliation near $p$. More precisely, for any germ of smooth (resp. analytic) vector field $Z$ -which may depend smoothly (resp. analytically) on some parameters- which is tangent to the symplectic foliation there exists a finite number of germs of smooth (resp. analytic) functions $f_1,\hdots,f_m,g_1,\hdots,g_m$ -which depend smoothly (resp. analytically) on the same parameters as $Z$- such that $Z = \sum f_i X_{g_i}$. Clearly, if $\Pi$ satisfies the division property at a point $p$, then it is tame at $p$. A natural question is to know which Poisson structures satisfy the division property. In particular, is it true that all linear Poisson structures satisfy the division property at the origin? In the appendix we prove that low-dimensional Lie algebras satisfy the division property at the origin. 
Namely \[prop:DivisionDim3\] Any linear Poisson structure in dimension 2 or 3 has the division property at the origin. In the higher-dimensional case, a result of Dixmier [@dixmier] says (in our language) that if $\Pi$ is a linear Poisson structure which corresponds to a semisimple Lie algebra then it has the analytic division property at the origin (mainly due to the fact that the singular set has codimension $3$ in this case). We would conjecture that Dixmier’s result also holds in the smooth case. On the other hand, one can probably produce linear Poisson (non semisimple) structures which do not satisfy the division property (similar to Dixmier’s counterexample $3.3$ in [@dixmier]). It is not difficult to construct examples of Poisson structures with a trivial 1-jet which are not tame. Consider the Poisson structure $\Pi=x^4 \frac{\partial}{\partial x}\wedge \frac{\partial}{\partial y}$ on ${\mathbb R}^2$. The following vector fields are Poisson and tangent to the symplectic foliation: $$X=x^2 \frac{\partial}{\partial x}+2xy \frac{\partial}{\partial y}, \quad Y=x \frac{\partial}{\partial y},$$ but $\displaystyle \Pi^{-1}(X,Y)=\frac{1}{x}$ is not smooth at the origin. So this Poisson structure is not tame. Recall that if $\Pi = \sum_{i=1}^k \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ in a local canonical coordinate system in the neighborhood of a point $p$, then $\Pi_2 = \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ is called the *transverse* Poisson structure of $\Pi$ at $p$. Up to local Poisson isomorphisms, this transverse Poisson structure is unique, i.e. it does not depend on the choice of local canonical coordinates, see, e.g., [@dufourzung; @weinstein]. 
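The non-tame example above can be checked mechanically. For $\Pi = f\,\partial_x\wedge\partial_y$ and a vector field $V = a\,\partial_x + b\,\partial_y$, the Lie derivative $L_V\Pi$ has coefficient $a f_x + b f_y - f(a_x + b_y)$, and on the open leaves $\Pi^{-1} = (1/f)\, dx\wedge dy$. A short sympy sketch (our helper names) verifying that $X$ and $Y$ are Poisson and that $\Pi^{-1}(X,Y) = 1/x$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4                      # Pi = x^4 d/dx ^ d/dy on R^2

def lie_coeff(a, b):
    """Coefficient of L_V Pi for V = a d/dx + b d/dy and Pi = f d/dx ^ d/dy."""
    return sp.simplify(a * sp.diff(f, x) + b * sp.diff(f, y)
                       - f * (sp.diff(a, x) + sp.diff(b, y)))

aX, bX = x**2, 2*x*y          # X = x^2 d/dx + 2xy d/dy
aY, bY = sp.Integer(0), x     # Y = x d/dy

# Both vector fields preserve Pi (and vanish on the singular line x = 0)
assert lie_coeff(aX, bX) == 0 and lie_coeff(aY, bY) == 0

# On the leaves {x != 0}: Pi^{-1}(X, Y) = (aX*bY - bX*aY) / f
inv = sp.simplify((aX * bY - bX * aY) / f)
assert inv == 1/x             # blows up at x = 0, so Pi is not tame there
```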
The following lemma shows that, to verify the tameness condition, it is sufficient to check it in the transverse direction to the symplectic leaf: \[lemma:transverse\] A smooth Poisson structure $\Pi$ is tame at a point $p$ if and only if the transverse Poisson structure of $\Pi$ at $p$ is tame at $p$. Write $\Pi = \Pi_1 + \Pi_2 = \sum_{i=1}^k \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + \sum_{ij} f_{ij}(z)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ in a local canonical coordinate system near $p$. For each germ of vector field $X$ near $p$ write $X = X_{hor} + X_{vert}$, where $X_{hor}$ is the “horizontal part” of $X$, i.e. is a combination of the vector fields $\frac{\partial}{\partial x_i}, \frac{\partial}{\partial y_i}$, and $X_{vert}$ is the “vertical part” of $X$, i.e. is a combination of the vector fields $\frac{\partial}{\partial z_i}$. If $X$ is a smooth Poisson vector field for $\Pi$, then $X_{hor}$ (resp. $X_{vert}$) may be viewed as a Poisson vector field for $\Pi_1$ (resp., $\Pi_2$) which depends smoothly on parameters $z_i$ (resp., $x_i,y_i$). We have $ \Pi^{-1}(X,Y)= \Pi_1^{-1}(X_{hor},Y_{hor})+\Pi_2^{-1}(X_{vert},Y_{vert})$. The term $\Pi_1^{-1}(X_{hor},Y_{hor})$ is always smooth (provided that $X$ and $Y$ are smooth), and so the smoothness of $ \Pi^{-1}(X,Y)$ is equivalent to the smoothness of $\Pi_2^{-1}(X_{vert},Y_{vert})$. The lemma then follows easily. Proof of the equivariant splitting theorem ========================================== In this section we will give a proof of Theorem \[thm:main\]. It uses coupling tensors for Poisson manifolds, so we will first recall a result of Yu. Vorobiev about coupling tensors (see, e.g., [@dufourzung; @voro]). The proof of the theorem consists of three steps. In the first step we prove that we can assume that the action of our compact Lie group $G$ is linear and that the symplectic foliation is normalized (i.e. is the same as in the splitting theorem). 
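The decomposition $\Pi^{-1}(X,Y)= \Pi_1^{-1}(X_{hor},Y_{hor})+\Pi_2^{-1}(X_{vert},Y_{vert})$ used in the proof can be illustrated by a direct matrix computation: in canonical split coordinates the Poisson matrix is block diagonal, so wherever the transverse part is invertible its matrix inverse (the leafwise symplectic form, up to sign conventions) is block diagonal as well. A hedged sympy sketch in dimension 4:

```python
import sympy as sp

x, y, z1, z2 = sp.symbols('x y z1 z2')
f = sp.Function('f')(z1, z2)

# Split Poisson matrix in canonical coordinates (x, y, z1, z2)
Pi = sp.Matrix([[0, 1, 0, 0],
                [-1, 0, 0, 0],
                [0, 0, 0, f],
                [0, 0, -f, 0]])

# Where f != 0 the inverse splits into a symplectic block and a
# transverse block scaled by 1/f, so pairings of horizontal and
# vertical vector fields decouple.
PiInv = sp.simplify(Pi.inv())
expected = sp.Matrix([[0, -1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, -1/f],
                      [0, 0, 1/f, 0]])
assert sp.simplify(PiInv - expected) == sp.zeros(4, 4)
```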
In the second step we construct a path of $G$-invariant Poisson structures connecting the initial Poisson structure to the split one. Finally, in the last step, we use this path of Poisson structures and the averaging method to construct a flow which intertwines with the action of $G$ and whose time-1 map moves the initial Poisson structure to the split one, thus giving an equivariant splitting of our Poisson structure. Preliminaries: coupling tensors ------------------------------- Let $\pi: E \longrightarrow S$ be a submersion over a manifold $S$ and let $T_V E=\ker d\pi$. An Ehresmann connection on $E$ is a splitting of the tangent bundle of $E$ as $TE=T_V E\oplus T_H E$. We call $T_H E$ the horizontal space. Denote by $\mathcal{V} ^1_V(E)$ the set of vertical vector fields. We can associate to this splitting a $\mathcal{V}^1_V(E)$-valued $1$-form $\Gamma\in\Omega^1(E)\otimes \mathcal{V}^1_V(E)$ such that $\Gamma(Z)=Z$ for any vertical vector field. Then the horizontal space can be written as $T_H E=\{X\in TE,\quad \Gamma(X)=0 \}$. We can define the horizontal lifting of vector fields from $S$ to $E$. In the same way, we may associate to $\Gamma$ a smooth parallel transport, a curvature form and a covariant derivative (for details see for example [@dufourzung]). Consider now the case when $S$ is a symplectic leaf of a Poisson manifold $(P,\Pi)$. We can consider a neighbourhood $E$ of $S$ and a submersion $\pi: E \longrightarrow S$ whose restriction to $S$ is the identity. There is a natural smooth Ehresmann connection where the horizontal subbundle is spanned by the Hamiltonian vector fields $X_{f\circ\pi}$. We can also associate to it a $2$-form $\mathbb F\in \Omega^2(S)\otimes {\mathcal C}^{\infty}(E)$ defined as $$\mathbb F(X_{f\circ\pi},X_{g\circ\pi})=\langle \Pi,\pi^*{df}\wedge\pi^*{dg}\rangle.$$ Recall that we have an induced transverse Poisson structure $\Pi_{Vert}$ on the vertical space. 
The triple $(\Pi_{Vert},\Gamma,\mathbb F)$ is called the geometric data associated to the Poisson manifold $(P,\Pi)$ in a neighbourhood of a symplectic leaf. In [@voro], Vorobjev studies the reconstruction problem from given geometric data. That is, given a triple of smooth geometric data, he gives compatibility conditions that guarantee the existence of a Poisson structure with the given geometric data. Those compatibility conditions come from the Schouten condition $[\Pi,\Pi]=0$ imposed on the bivector field $\Pi$ reconstructed from the geometric data. Assume that we are given $(\Pi_{Vert},\Gamma,\mathbb F)$ on a fibration $\pi: E \longrightarrow S$, where $\Gamma$ is an Ehresmann connection on $E$, $\Pi_{Vert}$ a vertical bivector field, and $\mathbb F \in \Omega^2(S) \otimes \mathcal{C}^\infty(E)$ a $\mathcal{C}^\infty(E)$-valued 2-form on $S$. We will need the following characterization of geometric data which come from a Poisson structure: \[voro\] The triple $(\Pi_{Vert},\Gamma,\mathbb F)$ on a fibration $\pi: E \longrightarrow S$ determines a Poisson structure on $E$ if and only if ${\mathbb F}$ is nondegenerate and the following four compatibility conditions are satisfied: $$\begin{aligned} &&[\Pi_{Vert} ,\Pi_{Vert} ]=0, \\ &&L_{Hor(u)}(\Pi_{Vert}) =0 \ \ \forall\ u \in \ \mathcal{V}^1(S), \\ &&\partial_\Gamma \mathbb F=0, \\ && Curv_\Gamma(u,v)=\nu^{\sharp}(d(\mathbb F(u,v))) ,\end{aligned}$$ where $\partial_\Gamma$ stands for the covariant derivative and $\nu^\sharp$, with $\nu=\Pi_{Vert}$, stands for the map from $T^*E$ to $TE$ defined by $\langle \nu^\sharp(\alpha), \beta \rangle = \langle \nu,\alpha\wedge \beta\rangle. $ We may think of $\Pi$ as the coupling of $\Pi_{Vert}$ with $\mathbb F $ by $\Gamma$. This so-called coupling method is a generalization of the minimal coupling procedure established for symplectic fibrations by Guillemin, Lerman and Sternberg [@guillemin], [@sternberg]. 
First step of the proof: linearization of the group action ---------------------------------------------------------- Consider an action $\rho: G \times P^n \rightarrow P^n$ of a compact Lie group $G$ on a Poisson manifold $(P^n,\Pi)$, which fixes a point $p \in P^n$ and preserves the Poisson structure $\Pi$. Denote by $S$ the local symplectic leaf through $p$. Note that $S$ is invariant under the action of $G$. According to Bochner’s theorem [@bo], the action of $G$ is linearizable near $p$, i.e., there is a local coordinate system in which the action is linear. Moreover, we may assume that $S$ is linear in these coordinates. Since linear representations of compact Lie groups are completely reducible, there is a local submanifold $N$ (which is also linear in these coordinates), which is invariant under the action of $G$ and which is transverse to $S$ at $p$. The following lemma says that we can choose this coordinate system in such a way that the symplectic foliation of $(P^n,\Pi)$ will also be the same as in the splitting theorem. \[samefoliation\] With the above notations, there is a local system of coordinates near $p$ in which the action of $G$ is linear, the submanifolds $S$, $N$ are linear, and the local symplectic leaves near $p$ are direct products of $S$ with symplectic leaves of the transverse Poisson structure on $N$. We can start with a first coordinate system in which the action of $G$ is linear and the submanifolds $S$, $N$ are linear. Denote by $p_1$ the linear projection from a sufficiently small neighborhood $U$ of $p$ in $P^n$ to $S$ which projects $N$ to $p$. Define another (a priori nonlinear) projection $p_2$, from $U$ to $N$, as follows: denote by $\Gamma$ the Ehresmann connection associated to the Poisson structure $\Pi$ and the projection $p_1$. For each $x\in U$, let $\alpha_x(t)$ be the linear path joining $p_1(x)$ to the origin $p$ in $S$, with $\alpha_x(0) = p_1(x)$ and $\alpha_x(1) = p$. 
Denote by $\hat{\alpha}_x$ the horizontal lift of $\alpha_x$ through $x$ with respect to $\Gamma$. Then we take $p_2(x)=\hat{\alpha}_x(1) \in N$. By construction both projections are smooth and $G$-equivariant: the projection $p_1$ is equivariant since $N$ is $G$-invariant, and $p_2$ is equivariant because the action of $G$ preserves $\Pi$ and therefore the parallel transport is equivariant. Now consider the $G$-equivariant local diffeomorphism $$\begin{array}{ccc}\phi: & U\longrightarrow & S\times N \\ & x\longmapsto & (p_1(x),p_2(x)) \end{array}$$ Since the parallel transport preserves the Poisson structure, $\phi$ takes the Poisson structure on $U$ to a Poisson structure on $S\times N$ which has as symplectic leaves the product of the symplectic leaves on $N$ with $S$. This ends the proof of the lemma. Second step: constructing a path of Poisson structures ------------------------------------------------------ After the first step, we can now assume that $P = N \times S$, and the Poisson structure $\Pi$ has the same symplectic leaves as the split Poisson structure $\tilde{\Pi} = \Pi_S + \Pi_N$, where $\Pi_S$ is the standard nondegenerate Poisson structure on $S$ and $\Pi_N$ is the transverse Poisson structure on $N$, and both $\Pi$ and $\tilde\Pi$ are invariant under our linear action of $G$. We will assume that $\Pi$ is tame at $p$, or equivalently, the transverse Poisson structure $\Pi_N$ is tame at the origin. \[path\] With the above notations and assumptions, there is a smooth path of $G$-invariant Poisson structures $\Pi_t$, $t \in [0,1]$, on (a neighborhood of the origin in) $N \times S$, such that $\Pi_0 = \Pi$, $\Pi_1 = \Pi_S + \Pi_N$, and which have the same symplectic foliation for all $t \in [0,1]$. We denote by $\omega_0$ the symplectic structure induced on the symplectic leaves by $\Pi_0 = \Pi$. In the same way we denote by $\omega_1$ the symplectic structure induced by $\Pi_1 = \Pi_S + \Pi_N$ on the same symplectic foliation. 
Consider the linear path of $2$-forms $$\omega_t=t\omega_1+(1-t)\omega_0 .$$ This is a path of smooth closed $2$-forms on each symplectic leaf of the common symplectic foliation. We want to show that, for each $t$, there is a smooth bivector field $\Pi_t$ which corresponds to $\omega_t$. Then, automatically, $\Pi_t$ is a Poisson structure because of the closedness of $\omega_t$, has the same symplectic foliation as $\Pi_0$ and $\Pi_1$, and is $G$-invariant. Denote by $(\Pi_N,\Gamma_0, \mathbb F_0)$ and $(\Pi_N,\Gamma_1, \mathbb F_1)$, the geometric data associated to the Poisson structures $\Pi_0 = \Pi$ and $\Pi_1 = \Pi_N + \Pi_S$ with respect to the projection $p_1: N\times S \rightarrow S$ (remark that, by construction, they have the same vertical component, which is equal to $\Pi_N$). We will use Vorobjev’s Theorem \[voro\] to construct $\Pi_t$ and to prove its smoothness. In other words, we will construct geometric data $(\Pi_N,\Gamma_t, \mathbb F_t)$, which will be shown to be smooth and satisfy the compatibility conditions of Theorem \[voro\], so they will give rise to a smooth Poisson structure $\Pi_t$. In order to construct the connection $\Gamma_t$, it is enough to show how to lift each vector field $X$ on $S$ horizontally with respect to $\Gamma_t$. The horizontal lift $X_t$ of $X$ with respect to $\Gamma_t$ is uniquely characterized by $\omega_t$ (the would-be associated symplectic form on the symplectic leaves) and by the following two conditions: 1. The vector field $X_t$ is tangent to the common symplectic foliation of $\Pi_0$ and $\Pi_1$, and its projection to $S$ by $p_1$ is $X$. 2. $\omega_t(X_t,Z)=0$ for any vertical vector field $Z$. Denote by $X_0$ and $X_1$ the horizontal lift of $X$ with respect to $\Gamma_0$ and $\Gamma_1$ respectively. We will show that $$X_t = (1-t) X_0 + tX_1.$$ (Then the smoothness of $X_t$, and hence of $\Gamma_t$, is automatic). 
It is clear that $(1-t) X_0 + tX_1$ is tangent to the symplectic foliation and projects to $X$ under $p_1$. It remains to show that $$\omega_t ((1-t) X_0 + tX_1, Z) = 0$$ for any vertical vector field $Z$ on $N \times S$. Indeed, denoting $W = X_0 - X_1$, we have $$\begin{array}{ccl} & & \omega_t((1-t) X_0 + tX_1,Z) \\ & = & t\omega_1((1-t) X_0 + tX_1,Z)+(1-t)\omega_0((1-t) X_0 + tX_1,Z) \\ & = & t\omega_1(X_1+(1-t)W,Z)+(1-t)\omega_0(X_0-tW,Z) \\ & = & t\omega_1(X_1,Z) +(1-t)\omega_0(X_0,Z) + t(1-t) [\omega_1(W,Z) - \omega_0(W,Z)] . \end{array}$$ Since $X_1$ and $X_0$ are the horizontal lifts of $X$ with respect to $\Gamma_1$ and $\Gamma_0$, the terms $\omega_1(X_1,Z)$ and $\omega_0(X_0,Z)$ vanish. Since the Poisson structures $\Pi_0$ and $\Pi_1$ have the same transverse component, and $W$ and $Z$ are vertical vector fields, we have $\omega_1(W,Z) = \omega_0(W,Z) = \Pi_N^{-1}(W,Z)$. Hence $\omega_t((1-t) X_0 + tX_1,Z)=0$ as desired. If $X$ is a vector field on $S$ then we will denote by $X_t = (1-t) X_0 + tX_1$ the horizontal lift of $X$ to $N \times S$ via $\Gamma_t$ as above. For any two smooth vector fields $X, Y$ on $S$ and a point $q \in N \times S$, put $${\mathbb F}_t(X,Y)(q)=\omega_t(X_t,Y_t)(q) .$$ The main point here is to check the smoothness of the function ${\mathbb F}_t(X,Y)$ defined by the above formula, in a neighborhood of the origin in $N \times S$. Denote $Z^{X}= X_0 - X_1$ and $Z^{Y}= Y_0 - Y_1$; they are vertical vector fields. Since the Ehresmann connection $\Gamma_i$ ($i=0,1$) preserves the transverse Poisson structure, the vector fields $X_i$ and $Y_i$ preserve the transverse Poisson structure $\Pi_N$. Therefore the vertical vector fields $Z^{X}$ and $Z^{Y}$ also preserve the transverse Poisson structure. (They may be viewed as Poisson fields on $(N,\Pi_N)$ parametrized by $S$). We can write $X_t = X_0-tZ^X= X_1+(1-t)Z^X$ and $Y_t = Y_0-tZ^Y= Y_1+(1-t)Z^Y$. 
Recall that if $X_t$ is horizontal with respect to $\Gamma_t$ and $Z$ is vertical then $\omega_t(X_t,Z) = 0$. We have: $$\begin{array}{ccl} & & {\mathbb F}_t(X,Y) \\ &= & t\omega_1(X_1+(1-t)Z^X,Y_1+(1-t)Z^Y)+ (1-t)\omega_0(X_0-tZ^X,Y_0-tZ^Y) \\ &= & t\omega_1(X_1,Y_1)+ (1-t)\omega_0(X_0,Y_0)+ \\ & & + t(1-t)^2\omega_1(Z^X, Z^Y)+t^2(1-t)\omega_0(Z^X, Z^Y) \\ & = & t\omega_1(X_1,Y_1)+ (1-t)\omega_0(X_0,Y_0)+ t(1-t)\Pi_N^{-1}(Z^X, Z^Y) \end{array}$$ By our tameness hypothesis, $\Pi_N^{-1}(Z^X, Z^Y)$ is smooth, and so ${\mathbb F}_t(X,Y)$ is smooth (and depends smoothly on $t$). Remark that ${\mathbb F}_t$ coincides with ${\mathbb F}_0$ and ${\mathbb F}_1$ at the origin $p$. Since ${\mathbb F}_0$ is nondegenerate, ${\mathbb F}_t$ is also nondegenerate in a neighborhood of $p$ in $N \times S$. Since the form $\omega_t$ used in the construction of $(\Pi_N,\Gamma_t,{\mathbb F}_t)$ is closed on each symplectic leaf, the four compatibility conditions for the triple $(\Pi_N,\Gamma_t,{\mathbb F}_t)$ are automatically satisfied. Hence the triple $(\Pi_N,\Gamma_t,{\mathbb F}_t)$ corresponds to a smooth Poisson structure $\Pi_t$ in a neighborhood of $p$ in $N \times S$. Moreover, by construction, $\Pi_0 = \Pi$, $\Pi_1 = \Pi_N + \Pi_S$, and $\Pi_t$ depends smoothly on $t$. Lemma \[path\] is proved. End of the proof ---------------- According to Lemma \[path\], we now have a smooth path of $G$-invariant Poisson structures $\Pi_t$, where $\Pi_0$ is our initial Poisson structure, and $\Pi_1 = \Pi_N + \Pi_S$ is the split one. (The action of $G$ is already linearized, and by the equivariant Darboux theorem we may assume that $\Pi_S$ is already equivariantly normalized, i.e. has Darboux form). In order to finish the proof of the theorem, it suffices to find a local diffeomorphism of $N \times S$ which commutes with the action of $G$ and which moves $\Pi_0$ to $\Pi_1$. 
According to Weinstein’s splitting theorem (or rather its parametrized version, whose proof is the same), there is a smooth family of local diffeomorphisms $\phi_t, t \in [0,1]$ such that $\phi_t{_*}(\Pi_0)=\Pi_t$ and $\phi_0 = Id$. Note that, a priori, $\phi_t$ does not commute with the action of $G$. Denote by $X_t$ the time-dependent vector field whose flow generates $\phi_t$, i.e., $$X_t(\phi_t(q))= {\partial \phi_t \over \partial t} (q).$$ Differentiating the condition $$\phi_t{_*}(\Pi_0)=\Pi_t \label{eqn:isotopy0}$$ we get the following equation for $X_t$: $$\label{eqn:isotopy} L_{X_t}(\Pi_t)=-\frac{d \Pi_t}{dt}$$ Denote by $X_t^G$ the averaging of $X_t$ with respect to the action of $G$, i.e., $$X_t^G = \int_{G} \rho_g{_*}(X_{t}) d\mu, \label{eqn:averaging}$$ where $d\mu$ is the Haar probability measure on $G$, and $\rho_g$ denotes the action of $g \in G$. Then $X^G_t$ is a $G$-invariant time-dependent vector field. Since $\Pi_t$ is invariant under the action of $G$, it follows from Equation (\[eqn:isotopy\]) that we also have $$L_{X^G_t}\Pi_t=-\frac{d \Pi_t}{dt}. \label{eqn:isotopy3}$$ Denote by $\phi^G_t$ the flow of $X^G_t$. Then $\phi^G_t$ commutes with the action of $G$. Equation (\[eqn:isotopy3\]) implies that $\phi^G_t{_*}(\Pi_0)=\Pi_t$. In particular, $\phi^G_1$ is a $G$-equivariant local diffeomorphism such that $\phi^G_1{_*}(\Pi_0)=\Pi_1 = \Pi_N + \Pi_S$. This concludes the proof of Theorem \[thm:main\]. Equivariant linearization of Poisson structures =============================================== \[thm:EquivLin\] Let $(P^n,\Pi)$ be a smooth Poisson manifold, $p$ a point of $P$, $2r = {{\rm rank\ }}\Pi (p)$, and $G$ a compact Lie group which acts on $P$ in such a way that the action preserves $\Pi$ and fixes the point $p$. Assume that the linear part of the transverse Poisson structure of $\Pi$ at $p$ corresponds to a semisimple compact Lie algebra $\mathfrak k$. 
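The averaging step (\[eqn:averaging\]) can be illustrated on a toy example. The following SymPy sketch (our illustration; the action and the vector field are hypothetical and not from the text) averages a vector field over the group $G=\mathbb Z/2$ acting on $\mathbb R$ by $x\mapsto -x$, and checks that the result is $G$-invariant:

```python
import sympy as sp

x = sp.symbols('x')

# Toy setting: G = Z/2 acts on R by rho_g(x) = -x.  For a vector field
# X = v(x) d/dx, the pushforward is (rho_g)_* X = -v(-x) d/dx, and the
# Haar average over the two group elements is their arithmetic mean.
v = x + x**2                           # a field that is NOT Z/2-equivariant

v_pushed = sp.expand(-v.subs(x, -x))   # (rho_g)_* X for the nontrivial g
v_avg = sp.expand((v + v_pushed) / 2)  # the averaged field X^G

print(v_avg)                           # x -- the odd part of v
# Invariance: pushing X^G forward by g returns X^G itself
assert sp.expand(-v_avg.subs(x, -x)) == v_avg
```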
Then there is a smooth canonical local coordinate system $(x_1,y_1,\dots, x_{r},y_{r}, z_1,\dots, z_{n-2r})$ near $p$, in which the Poisson structure $\Pi$ can be written as $$\Pi = \sum_{i=1}^r \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + {1 \over 2}\sum_{i,j,k} c^{k}_{ij} z_k \frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j},$$ where $c_{ij}^k$ are structure constants of $\mathfrak k$, and in which the action of $G$ is linear and preserves the subspaces $\{x_1 = y_1 = \dots = x_r = y_r = 0\}$ and $\{z_1 = \dots = z_{n-2r} = 0\}$. Invoking Theorem \[thm:main\], we may assume that $\Pi$ is already equivariantly split, i.e. $\Pi = \sum_{i=1}^r \frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial y_i} + \sum_{i,j} f_{ij}(z) \frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$. It remains to linearize the transverse Poisson structure $\Pi_N = \sum_{i,j} f_{ij}(z) \frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ on $N$ in an equivariant way. But this last step is provided by the following results of Conn and Ginzburg: \[thm:conn\] Any smooth Poisson structure, which vanishes at a point and whose linear part at that point is of semisimple compact type, is locally smoothly linearizable. \[thm:ginz\] Assume that a Poisson structure $\Pi$ vanishes at a point $p$ and is smoothly linearizable near $p$. If there is an action of a compact Lie group $G$ which fixes $p$ and preserves $\Pi$, then $\Pi$ and this action of $G$ can be linearized simultaneously. Indeed, by Theorem \[thm:conn\], the transverse Poisson structure $\Pi_N$ is smoothly linearizable because its linear part is compact semisimple. As a consequence, by Theorem \[thm:ginz\], $\Pi_N$ can be linearized in a $G$-equivariant way. Appendix ======== In this appendix we will give a proof of Proposition \[prop:DivisionDim3\]. 
We will assume that our linear Poisson structure corresponds to a 3-dimensional Lie algebra ${\mathfrak g}$ (the case of dimension 2 is similar and simpler and can be deduced from the 3-dimensional case). Recall that any 3-dimensional Lie algebra ${\mathfrak g}$ over ${\mathbb R}$ belongs to one of the following types: 1. Solvable: ${\mathfrak g}= {\mathbb R}\ltimes_A {\mathbb R}^2$ where $A = \left( \begin{matrix}a & b \\ c & d \end{matrix} \right)$ is a 2-by-2 matrix, i.e. with Lie brackets $[x,y ] = ay+bz$, $[x, z ] = cy+dz$, $[y,z] = 0$. 2. Simple: $\mathfrak{so}(3,{\mathbb R})$ or $\mathfrak{sl}(2,\mathbb R)$. We will prove that any vector field $X$ tangent to the symplectic foliation of ${\mathfrak g}^*$ (i.e. the foliation by coadjoint orbits on ${\mathfrak g}^*$) can be expressed as a smooth combination of the Hamiltonian vector fields $X_x$, $X_y$ and $X_z$, where $(x,y,z)$ is a basis of ${\mathfrak g}$. Let us first consider the case when ${\mathfrak g}= {\mathbb R}\ltimes_A {\mathbb R}^2$. In this case, our linear Poisson structure $\Pi$ can be written as: $$\Pi=\frac{\partial}{\partial x}\wedge ((ay+bz)\frac{\partial}{\partial y}+(cy+dz)\frac{\partial}{\partial z}) .$$ We distinguish two subcases. 1\) The matrix $A$ has non-zero determinant. A vector field tangent to the symplectic foliation can be written as $Z=f\frac{\partial}{\partial x}+g ((ay+bz)\frac{\partial}{\partial y}+(cy+dz)\frac{\partial}{\partial z})$ where the function $f$ vanishes for $(ay+bz,cy+dz)=(0,0)$. Since the mapping $(x,y,z)\mapsto (x,ay+bz, cy+dz)$ defines new smooth coordinates, we may write $f=(ay+bz) f_1+(cy+dz) f_2$ for smooth functions $f_1$ and $f_2$. Finally we obtain $Z=f_1 X_y+f_2 X_z-g X_x$ for smooth functions $f_1,f_2$ and $g$ as desired. 2\) The matrix $A$ has determinant zero. In the case $a=b=c=d=0$, the Lie algebra considered is abelian and the Poisson structure is trivial, so there is nothing to prove. 
In the nontrivial subcase we may write $$\Pi=\frac{\partial}{\partial x}\wedge (B\frac{\partial}{\partial y}+\lambda B\frac{\partial}{\partial z}),$$ where $B$ is a linear function of $y$ and $z$. After a linear change of coordinates we may assume that $\Pi=\overline B \frac{\partial}{\partial x}\wedge \frac{\partial}{\partial \overline y}$. A vector field tangent to the symplectic foliation is of the form $Z=f\frac{\partial}{\partial x}+ g\frac{\partial}{\partial \overline y}$ where the functions $f$ and $g$ vanish when $\overline B=0$. Since $\overline B$ is a non-trivial linear function in $\overline y$ and $z$, we may write $f=\overline B f_1$ and $g=\overline B g_1$. Therefore we may write $Z= f_1X_x+g_1 X_{\overline y}$. Consider now the case when ${\mathfrak g}$ is simple. We will use the following lemma, which is a smooth version of de Rham’s division lemma due to Moussu [@moussu]: \[lemma:division\] Let $\alpha$ be a smooth (or analytic) 1-form on a neighbourhood of the origin in $\mathbb R^n$ for which the origin is an algebraically isolated singularity. Then for any $p$-form $\omega$ such that $\omega\wedge\alpha=0$ we can write the decomposition $\omega=\beta\wedge\alpha$ for a smooth (resp. analytic) $(p-1)$-form $\beta$. Denote by $\Pi$ the linear Poisson structure; it can be written as $\Pi=x\frac{\partial}{\partial y}\wedge \frac{\partial}{\partial z}+y\frac{\partial}{\partial z}\wedge \frac{\partial}{\partial x}+z\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial y}$ (in the case of $\mathfrak{so}(3,\mathbb R)$) or as $\Pi=z\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial y}+x\frac{\partial}{\partial z}\wedge \frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\wedge \frac{\partial}{\partial z}$ (in the case of $\mathfrak{sl}(2,\mathbb R)$). 
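For orientation (this computation is ours; the overall sign depends on the convention chosen for $i_{A\wedge B}$), in the $\mathfrak{so}(3,\mathbb R)$ case the contraction with the volume form $\Omega=dx\wedge dy\wedge dz$ gives

```latex
i_{\Pi}\Omega \;=\; \pm\,\bigl(x\,dx + y\,dy + z\,dz\bigr),
```

a 1-form whose only singular point is the origin, and this singularity is algebraically isolated; this is what makes Lemma \[lemma:division\] applicable in the proof that follows.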
Let $\Omega$ be the volume form $\Omega= dx\wedge dy\wedge dz$, then the map $\Omega^{b}: \mathcal{V}^p(\mathfrak{g}^*) \longrightarrow \Omega^{3-p}(\mathfrak{g}^*)$ from the space of multivector fields to the space of forms defined by $\Omega^{b} (A)= i_A\Omega$ is an isomorphism. Let $X$ be a vector field tangent to the symplectic foliation. The condition of tangency to the symplectic foliation implies the relation $X\wedge \Pi=0$. Under the above linear isomorphism this condition becomes $i_X\Omega\wedge i_{\Pi}\Omega=0$. Since $i_{\Pi}\Omega$ has an algebraically isolated singularity at the origin, we can now apply Lemma \[lemma:division\] to write $i_X\Omega=\beta\wedge i_{\Pi}\Omega$ for a smooth one-form $\beta$. Finally, substituting back, we obtain $X = i_X\Omega \lrcorner(\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial y}\wedge \frac{\partial}{\partial z}) = (\beta\wedge i_{\Pi}\Omega)\lrcorner(\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial y}\wedge \frac{\partial}{\partial z})=\beta \lrcorner \Pi$. This concludes the proof of Proposition \[prop:DivisionDim3\]: indeed, if $\beta=fdx+gdy+hdz$, then $X=f X_x+ g X_y +h X_z$, as desired. [10]{} A. Alekseev and E. Meinrenken, *Poisson geometry and the Kashiwara-Vergne conjecture.* C. R. Math. Acad. Sci. Paris 335 (2002), no. 9, 723–728. A. Alekseev and E. Meinrenken, *Ginzburg-Weinstein via Gelfand-Zeitlin*, preprint 2005, math.DG/0506112. S. Bochner, *Compact groups of differentiable transformations.* Ann. of Math. (2) **46** (1945), 372–381. J. Conn, *Normal forms for smooth Poisson structures.* Ann. of Math. (2) 121 (1985), no. 3, 565–593. J. Dixmier, *Champs de vecteurs adjoints sur les groupes et alg[è]{}bres de Lie semi-simples.* J. Reine Angew. Math. 309 (1979), 183–190. J.P. Dufour and N. T. Zung, *Poisson Structures and their normal forms*, Birkhauser, Progress in Mathematics, no. 242, 2005. V. Ginzburg, *Momentum mappings and Poisson cohomology.* Internat. J. 
Math. **7** (1996), no. 3, 329–358. V. Ginzburg and A. Weinstein, *Lie-Poisson structure on some Poisson Lie groups.* J. Amer. Math. Soc. 5 (1992), no. 2, 445–453. V. Guillemin, E. Lerman and S. Sternberg, *Symplectic fibrations and multiplicity diagrams.* Cambridge University Press, Cambridge, 1996. Philippe Monnier and Nguyen Tien Zung, *Normal forms of vector fields on Poisson manifolds*, math.SG/0509144, to appear in Annales Math. Blaise Pascal. J. Moser, *On the volume elements on a manifold.* Trans. Amer. Math. Soc. 120 (1965), 286–294. R. Moussu, *Le th[é]{}or[è]{}me de de Rham sur la division des formes.* C. R. Acad. Sci. Paris S[é]{}r. A-B 280 (1975), no. 6, 329–332. S. Sternberg, *Minimal coupling and the symplectic mechanics of a classical particle in the presence of a Yang-Mills field.* Proc. Nat. Acad. Sci. U.S.A. 74 (1977), no. 12, 5253–5254. Y. Vorobjev, *Coupling tensors and Poisson geometry near a single symplectic leaf.* Lie algebroids and related topics in differential geometry (Warsaw, 2000), 249–274, Banach Center Publ., 54, Polish Acad. Sci., Warsaw, 2001. Y. Vorobjev, *Poisson equivalence over a symplectic leaf*, preprint 2005, math.SG/0503628. A. Weinstein, *Lectures on symplectic manifolds.* Regional Conference Series in Mathematics, No. **29**. American Mathematical Society, Providence, R.I., 1977. A. Weinstein, *The local structure of Poisson manifolds*, J. Differential Geom. 18 (1983), no. 3, 523–557. [^1]: The first author is supported by Marie Curie EIF postdoctoral fellowship contract number EIF2005-024513 and partially supported by the DGICYT project number BFM2003-03458.
--- abstract: | We study Padé interpolation at the node $z=0$ of functions $f(z)=\sum_{m=0}^{\infty} f_m z^m$, analytic in a neighbourhood of this node, by [*amplitude and frequency operators*]{} (*sums*) of the form $$\sum_{k=1}^n \mu_k h(\lambda_k z), \qquad \mu_k,\lambda_k\in \mathbb{C}.$$ Here $h(z)=\sum_{m=0}^{\infty} h_m z^m$, $h_m\ne 0$, is a fixed (*basis*) function, analytic at the origin, and the interpolation is carried out by an appropriate choice of *amplitudes* $\mu_k $ and *frequencies* $\lambda_k$. The solvability of the $2n$-multiple interpolation problem is determined by the solvability of the associated moment problem $$\sum_{k=1}^n\mu_k \lambda_k^m={f_m}/{h_m}, \qquad m=\overline{0,2n-1}.$$ In a number of cases, when the moment problem is consistent, it can be solved by the classical method due to Prony and Sylvester; moreover, one can easily construct the corresponding interpolating sum too. In the case of inconsistent moment problems, we propose a regularization method, which consists in adding a special binomial $c_1z^{n-1}+c_2 z^{2n-1}$ to an amplitude and frequency sum so that the moment problem, associated with the sum obtained, can already be solved by the method of Prony and Sylvester. This approach enables us to obtain interpolation formulas with $n$ nodes $\lambda_k z$, being exact for the polynomials of degree ${\leqslant}2n-1$, whilst traditional formulas with the same number of nodes are usually exact only for the polynomials of degree ${\leqslant}n-1$. The regularization method is applied to numerical differentiation and extrapolation. 
author: - Petr Chunaev and Vladimir Danchenko title: | Approximation by\ amplitude and frequency operators --- Introduction and statement of the problem ========================================= In [@Dan2008; @DanChu2011; @Chu2010; @Chu2012] the so-called *$h$-sums* of the form $$\label{h-sum} \mathcal{H}_n(\{\lambda_k\},h;z)=\sum_{k=1}^n\lambda_k h(\lambda_k z), \qquad z, \lambda_k\in \mathbb{C},\qquad n\in \mathbb{N},$$ are studied. Hereinafter $h(z)=\sum_{m=0}^\infty h_mz^m$ is a function, analytic in a disc $|z|<\rho$, $\rho>0$. We call it *a basis function*. Obviously, $\mathcal{H}_n(z)$ is well-defined and analytic in the disc $|z|<\rho\cdot \min_{k=\overline{1,n}} |\lambda_k|^{-1}$. In [@Dan2008; @DanChu2011; @Chu2010; @Chu2012] operators $\mathcal{H}_n(\{\lambda_k\},h;z)$ are used as a tool for $n$-multiple (Padé) interpolation and approximation of functions $f$, analytic in a neighbourhood of the origin. In particular, it is shown in [@Dan2008] that if $h_m\ne 0$, $m\in \mathbb{N}_0$, then *there always exists a unique set of the numbers* $\lambda_k=\lambda_k(f,h,n)$ such that $$f(z)=\mathcal{H}_n(\{\lambda_k\},h;z) +O(z^n), \qquad z\to 0.$$ On the other hand, in the above-mentioned papers $h$-sums are also used as operators of differentiation, integration, interpolation and extrapolation on certain classes of functions, holomorphic in a fixed neighbourhood of the origin. In this case the numbers $\lambda_k$ are already independent of individual functions $f$ from the class and hence are of a universal kind. 
For instance, the following formulas for numerical differentiation and integration, being exact for the polynomials of degree ${\leqslant}n-1$, are valid [@Dan2008]: $$\label{300} zh'(z)\approx -h(z)+\sum_{k=1}^{n}\lambda_{1,k} h(\lambda_{1,k} z);\quad \int_{0}^{z} h(t)\,dt\approx z \sum_{k=1}^{n}\lambda_{2,k} h(\lambda_{2,k} z).$$ Here the numbers $\lambda_{l,k}$ are absolute constants, being the roots of the polynomials $P_{l,n}$ $(l=1,2)$, which can be defined recursively as follows. Let $P_{l,0}=1$, $v_{l,1}=-1$ $(l=1,2)$, then for $k=1,2,\ldots$ we have $$P_{l,k}=\lambda P_{l,k-1}+v_{l,k},\quad v_{1,k}=-1-\sum_{j=1}^{k-1}\left(1-\frac{j}{k}\right)v_{1,j},\quad v_{2,k}=-\frac{1}{k^2}-\sum_{j=1}^{k-1}\frac{v_{2,j}}{k(k-j)}.$$ In 2013 we proposed [@CD-B; @CD-K] a natural generalization of the $h$-sums, the so-called *amplitude and frequency operators* (*sums*) of the form $$\label{gH} H_n(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z):=\sum_{k=1}^n \mu_k h(\lambda_k z),\qquad \mu_k,\lambda_k\in \mathbb{C},$$ where *amplitudes* $\mu_k$ and *frequencies* $\lambda_k$ are parameters, being independent of each other. In the preprint [@CD-arxiv] and the present paper we give a detailed exposition of the results announced in [@CD-B; @CD-K]. Later on, operators of the form (\[gH\]) were studied in [@YF], but with a fundamentally different approach to constructing them. Namely, instead of an analytic method, as in the present paper, a numerical method with small residuals was proposed (it will be discussed in Section \[Section6\]). The number $n$ in (\[gH\]) is called *the order* of the amplitude and frequency operator, if there are no zeros among the numbers $\mu_k$ and, moreover, the numbers $\lambda_k$ are pairwise distinct (otherwise the order of the operator is less than $n$). 
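The recursion for $P_{1,n}$ and the first formula in (\[300\]) are easy to check numerically. The following Python sketch (ours, not from the text) builds the coefficients of $P_{1,n}$ in exact rational arithmetic, finds the nodes $\lambda_{1,k}$ as its roots, and verifies the differentiation formula on a polynomial of degree $n-1$, where it should be exact:

```python
import numpy as np
from fractions import Fraction

def diff_nodes(n):
    """Roots lambda_{1,k} of P_{1,n}, built by the recursion from the text:
    P_{1,0} = 1,  P_{1,k} = lambda * P_{1,k-1} + v_{1,k},
    v_{1,1} = -1,  v_{1,k} = -1 - sum_{j<k} (1 - j/k) v_{1,j}."""
    v = [Fraction(-1)]
    for k in range(2, n + 1):
        v.append(Fraction(-1) - sum((1 - Fraction(j, k)) * v[j - 1]
                                    for j in range(1, k)))
    # P_{1,n}(lambda) = lambda^n + v_{1,1} lambda^{n-1} + ... + v_{1,n}
    coeffs = [1.0] + [float(vk) for vk in v]
    return np.roots(coeffs)

# Check z h'(z) ~ -h(z) + sum_k l_k h(l_k z) on a polynomial of degree 2
n = 3
lam = diff_nodes(n)
h = np.polynomial.Polynomial([1.0, 2.0, 3.0])   # h(z) = 1 + 2z + 3z^2
z = 0.3
lhs = z * h.deriv()(z)
rhs = -h(z) + sum(l * h(l * z) for l in lam)
print(lhs, rhs.real)   # both 1.14 up to rounding
```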
As in the case of $h$-sums, we regard amplitude and frequency sums both as approximants of individual functions $f$, analytic at the origin (and then $\mu_k=\mu_k(f,h,n)$ and $\lambda_k=\lambda_k(f,h,n)$), and as special operators (of differentiation, extrapolation, etc.), acting on certain classes of functions (and then $\mu_k=\mu_k(n)$ and $\lambda_k=\lambda_k(n)$). Introduction of the additional parameters $\mu_k$ enables us to formulate the problem of $2n$-multiple (Padé) interpolation at $z=0$ by means of the amplitude and frequency sums (in contrast to the $h$-sums, for which only $n$-multiple interpolation is possible). Indeed, given Maclaurin series $$f(z)=\sum_{m=0}^{\infty} f_m z^m, \qquad h(z)=\sum_{m=0}^{\infty} h_m z^m, \qquad \text{where } f_{m} = 0, \text{ if } h_{m} = 0,$$ we introduce the numbers $s_m=s_{m}(h,f)$: $$\label{ssmm} s_{m}(h,f)=0, \hbox{ if } f_{m}=0; \qquad s_{m}(h,f)=f_{m}/h_{m}, \hbox{ if } f_{m}\ne 0,\qquad m \in \mathbb{N}_0.$$ For $|z|<\rho\cdot\min_{k=\overline{1,n}} |\lambda_k|^{-1}$ the operator (\[gH\]) has the form $$\label{gH+} H_n(z)=\sum_{k=1}^n \mu_k \sum_{m=0}^{\infty} h_m (\lambda_k z)^m = \sum_{m=0}^{\infty}h_{m} \left(\sum_{k=1}^n \mu_k \lambda_k^m\right) z^{m},$$ hence to realize the $2n$-multiple interpolation $$f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)+O(z^{2n}), \qquad z\to 0, \label{2n-interpolation}$$ or, which is the same, $$f^{(m)}(z)=H_n(\{\mu_k\lambda_k^m\},\{\lambda_k\},h^{(m)};z) +O(z^{2n-m}),\quad m=\overline{0, 2n-1}, \quad z\to 0, \label{2n-bis-interpol}$$ the following conditions on the so-called *generalized power sums* (*moments*) $S_m$ should be satisfied: $$\label{SRS} S_m:=\sum_{k=1}^n \mu_k \lambda_k^m=s_m,\qquad m=\overline{0,2n-1}.$$ The system (\[SRS\]) with unknown $\mu_k$, $\lambda_k$ and given $s_m$ is well known as *the discrete moment problem*. 
Classical works of Prony, Sylvester, Ramanujan and papers of many contemporary researchers are devoted to the problem of its solvability (see [@Prony; @Sylvester; @Ramanujan; @Lyubich; @Lyubich2; @Kung1]). Note that the system (\[SRS\]) is bound up with Hankel forms, orthogonal polynomials, continued fractions, Gaussian quadratures and Padé approximants (a detailed review of these connections is given in [@Lyubich; @Lyubich2] and also in Section \[par2\]). Suppose that the system (\[SRS\]) is solvable. Then, following [@Lyubich], we call the system and its solution *regular* if all $\lambda_k$ are pairwise distinct and all $\mu_k$ are not vanishing. In the case of regular systems (\[SRS\]) we call the problem of $2n$-multiple interpolation (\[2n-interpolation\]) *regularly solvable*. One of the methods for solving regular systems (\[SRS\]) is due to Prony [@Prony]. Consider the following product of determinants: $$\label{equ1} \left| \begin{array}{ccccc} 1 & 0 & 0 & \ldots & 0\\ 0 & \mu_1 & \mu_2 & \ldots & \mu_n\\ 0 & \mu_1\lambda_1 & \mu_2\lambda_2 & \ldots & \mu_n\lambda_n\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \mu_1\lambda_1^{n-1} & \mu_2\lambda_2^{n-1} & \ldots & \mu_n\lambda_n^{n-1}\\ \end{array} \right| \cdot \left| \begin{array}{ccccc} 1 & \lambda & \lambda^2 & \ldots & \lambda^n\\ 1 & \lambda_1 & \lambda_1^2 & \ldots & \lambda_1^n\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ 1 & \lambda_n & \lambda_n^2 & \ldots & \lambda_n^n\\ \end{array} \right|.$$ By regularity, the former of them does not vanish and the latter does only for $\lambda=\lambda_k$ (as a Vandermonde determinant). 
On the other hand, direct multiplication of the determinants and taking into account (\[SRS\]) give the following determinant, which is a polynomial of $\lambda$: $$\label{G_n} G_n(\lambda):=\sum_{m=0}^n g_{m} \lambda^{m}= \left| \begin{array}{ccccc} 1 & \lambda & \lambda^2 & \ldots & \lambda^n\\ s_0 & s_1 & s_2 & \ldots & s_n\\ s_1 & s_2 & s_3 & \ldots & s_{n+1}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ s_{n-1} & s_n & s_{n+1} & \ldots & s_{2n-1}\\ \end{array} \right|.$$ We call $G_n$ [*a generating polynomial*]{} (for functional properties of such polynomials, including orthogonality, completeness, etc., see [@Lyubich]). Consequently, the numbers $\lambda_k$ are the simple roots of the generating polynomial $G_n$. If the equation $G_n(\lambda)=0$ is solved, we substitute the numbers found into the system (\[SRS\]). Finally, extracting any $n$ rows from (\[SRS\]) leads to a linear system of equations with unknowns $\mu_k$, which has a unique solution with non-vanishing components. We now give known formulas for calculation of the numbers $\mu_k$ (see, for example, [@DanDod2013]). 
Let $\sigma_m$ and $\sigma_m^{(k)}$ denote elementary symmetric polynomials of the form $$\sigma_m=\sigma_m(\lambda_1,\ldots,\lambda_n)= \sum_{1 {\leqslant}j_1<\ldots<j_m {\leqslant}n}{\lambda_{j_1}\ldots \lambda_{j_m}}, \quad m=\overline{1,n},$$ $$\sigma_0=1,\quad \sigma_m^{(k)}=\sigma_m(\lambda_1,\ldots,\lambda_{k-1},0, \lambda_{k+1},\ldots,\lambda_n), \quad k=\overline{1,n}.$$ \[400\] The numbers $\mu_k$ are the scalar products $\mu_k=({\mathcal{L}}_k\cdot {\mathcal{S}})$, where ${\mathcal{S}}= (s_0,\ldots, s_{n-1})$ and $$\label{444} {\mathcal{L}}_k=\frac{{g}_n}{{G}'_n(\lambda_k)} \left((-1)^{n-1}\sigma_{n-1}^{(k)}, \ldots , (-1)^{n-m}\sigma_{n-m}^{(k)}, \ldots , -\sigma_{1}^{(k)},\; 1\right).$$ If $V=V(\lambda_1,\lambda_2,\ldots,\lambda_n)$ is a Vandermonde matrix of the first $n$ equations in the system (\[SRS\]), then the elements of the $k$th row ${\mathcal{L}}_k$ of the matrix $V^{-1}$ have the form (\[444\]); see [@DanDod2013] for more details. We now formulate a known criterion of regularity in terms of roots of the polynomial $G_n$. Originally this criterion was obtained in an algebraic form by Sylvester [@Sylvester] (see also [@Kung1 Ch. 5]); later on Lyubich [@Lyubich; @Lyubich2] stated it in the analytical terms, which we use in the present paper. \[Criterion\_Syl\] The system $(\ref{SRS})$ is regular if and only if the generating polynomial $G_n$ is of degree $n$ and all its roots are pairwise distinct. Moreover, the regular system has a unique solution. This theorem immediately implies the following proposition about regular solvability of the interpolation problem (\[2n-interpolation\]) for a function $f$, being analytic in a neighbourhood of the origin. \[th1\] Suppose that the generating polynomial $G_n$, constructed using the numbers $s_m=s_m(h,f)$, $m=\overline{0,2n-1}$, is of degree $n$ and all its roots are pairwise distinct. 
Then the amplitude and frequency operator $H_n$ is uniquely determined from the system $(\ref{SRS})$ and realizes the $2n$-multiple interpolation $(\ref{2n-interpolation})$ of the function $f$ at the node ${z=0}$. \[remark\_even\_odd\] If the function $f$ is even or odd, then the local precision of the interpolation can be increased. Indeed, if $f$ is even, then $f(z)=\tilde{f}(t)$, $t=z^2$, for some function $\tilde{f}$, analytic at the point $t=0$, and the interpolation (\[2n-interpolation\]) under the assumptions of Theorem \[th1\] with the basic function $h$ gives $$\label{interpolaton_even} \tilde{f}(t)=\sum_{k=1}^n\mu_kh(\lambda_kt)+O(t^{2n})\quad \Leftrightarrow \quad f(z)=\sum_{k=1}^n\mu_kh(\lambda_kz^2)+O(z^{4n}),$$ where it is necessary that $f_{2m}\neq 0 \Rightarrow h_m\neq 0$, see (\[ssmm\]). If $f$ is odd, then $f(z)=z\tilde{f}(t)$, $t=z^2$, and analogously $$f(z)=\sum_{k=1}^n\mu_kzh(\lambda_kz^2)+O(z^{4n+1}).$$ Note that, in contrast to similar discussions in [@YF Corollary 2], here we do not require the functions $f$ and $h$ to be of the same parity. \[remark2\] One can consider the interpolation problem (\[2n-interpolation\]) with $O(z^{M})$, where $M>2n$ or $M<2n$, instead of $O(z^{2n})$. Then we accordingly obtain overdetermined and underdetermined moment systems of the type (\[SRS\]): $$\label{SRS_M} \sum_{k=1}^n \mu_k \lambda_k^m=s_m,\qquad m=\overline{0,M-1}.$$ In some cases solving a consistent system (\[SRS\_M\]) with $M\ne 2n$ can be reduced to solving a standard system (\[SRS\]) (with $M=2n$) by eliminating the superfluous equations or adding the missing ones (see also [@Lyubich §5]). However, in the present paper we do not consider the case $M\ne 2n$, for the following reasons. The overdetermined systems (\[SRS\_M\]) belong to the non-regular problems of the form (\[SRS\]), where $2n$ is replaced by $M$ and $\mu_{k}=0$ for $k{\geqslant}n+1$.
To apply the Prony-Sylvester method or some other analytical approach to the standard subsystem of the system (\[SRS\_M\]), one needs a preliminary analysis of its consistency. As far as we know, there exist no reasonably general methods for this purpose. Moreover, it can be seen from the corresponding overdetermined interpolation problem of the form (\[2n-interpolation\]), where $O(z^{2n})$ is replaced by $O(z^{M})$, that its consistency is quite rigidly connected with individual properties of the functions $f$ and $h$, and thus one has little chance of obtaining more or less general interpolation formulas of the overdetermined type. For example, if a solvable system of the form (\[SRS\]) is supposed to be consistent with the next equation $S_{2n}=s_{2n}$, then the coefficient $f_{2n}$ cannot be chosen arbitrarily, as it depends on the parameters $s_0,\ldots,s_{2n-1},h_{2n}$ in a certain way. Indeed, the following generalized Newton formula, connecting the coefficients $g_m$ of the generating polynomial $G_n$ and the moments $S_{v+m}$, is well known: $$\label{corollary_form_Newton++++} \sum_{m=0}^{n} S_{v+m}{g}_{m}=0, \qquad v=0,1,\ldots.$$ Therefore the values $s_m=s_m(h,f)$ (see (\[ssmm\])) of the moments $S_m$ with ${m>2n-1}$ (simultaneously with the coefficients $f_{m}=s_m h_m$) in a solvable overdetermined system of the form (\[SRS\_M\]) are uniquely determined from the system (\[SRS\]) (we suppose that $h$ is fixed). A similar situation arises when one obtains formulas for numerical differentiation and extrapolation (see the corresponding sections below). In these problems the sequences $\{s_m\}$ have a certain arithmetic structure and do not satisfy (\[corollary\_form\_Newton++++\]) for $v=n$ and $S_{2n}=s_{2n}$, i.e. one gets solvable systems of the form (\[SRS\]) with $M=2n$, although adding one more equation of the required form leads to an inconsistent system (see, for example, (\[r\_n\_diff\])).
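The generalized Newton formula (\[corollary\_form\_Newton++++\]) is easy to confirm numerically: since $G_n(\lambda_k)=0$, multiplying by $\mu_k\lambda_k^v$ and summing over $k$ gives $\sum_{m=0}^n S_{v+m}g_m=0$. A minimal Python sketch with illustrative integer data (not taken from the text):

```python
# Illustrative regular data: pairwise distinct lambda_k, nonzero mu_k
lam = [1, 2, 3]
mu  = [2, -1, 3]
n = len(lam)

# Coefficients g_0..g_n of a polynomial vanishing exactly at lam_k
# (proportional to the generating polynomial G_n):
# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
g = [-6, 11, -6, 1]

def S(m):
    """Generalized power sums S_m = sum_k mu_k * lam_k^m."""
    return sum(m_k * l_k**m for m_k, l_k in zip(mu, lam))

# sum_{m=0}^{n} S_{v+m} g_m = sum_k mu_k lam_k^v * G_n(lam_k) = 0
for v in range(6):
    assert sum(S(v + m) * g[m] for m in range(n + 1)) == 0
```

With integer data the identity holds exactly, for every shift $v$.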
As regards the underdetermined moment systems (when $M<2n$) and the corresponding Padé interpolation, they do not arise in the present paper as we solve the interpolation problem of the order higher than $M$ with the same number of independent parameters $\{\mu_k\}_{k=1}^n$ and $\{\lambda_k\}_{k=1}^n$. Nevertheless, the systems (\[SRS\_M\]) (consistent and even inconsistent) are of independent interest, moreover, they are actively studied in numerical analysis. Various methods for finding an approximate solution (in different senses) to them were developed, for example, in [@Beylkin; @Kung_residual; @Beylkin2; @Potts; @YF] (characteristics of one such method are discussed in Section \[Section6\]). Amplitude and frequency operators in classical problems {#par2} ======================================================= We now consider several classical problems, which are bound up with the class of amplitude and frequency operators. [**.1. Hamburger moment problem.**]{} Theorem \[th1\] raises the question about interpolating amplitude and frequency operators with real $\lambda_k$ and $\mu_k$ (in particular, with $\mu_k>0$). This question is well-studied [@Akhiezer1 Ch. 2] and can be settled by discretization of the following classical Hamburger problem: given a sequence of real numbers $\{s_m\}$, $m\in \mathbb{N}_0$, find a non-negative Borel measure $\mu$ on $\mathbb{R}$ such that $$\label{Hamburger} s_m = \int_{-\infty}^\infty \lambda^m\,d \mu(\lambda),\qquad m\in \mathbb{N}_0.$$ Namely, the following criterion is valid [@Akhiezer1 Ch. 
2, §1]: the problem (\[Hamburger\]), where $m=\overline{0,2n-1}$, has a unique solution with the spectrum, consisting of $n$ pairwise distinct points $\lambda_1,\ldots,\lambda_n$, if and only if the leading principal minors $\Delta_k$ of order $k$ of the infinite Hankel matrix $(s_{i+j})_{i,j=0}^\infty$ satisfy the following conditions: $$\label{Hamb_condition} \Delta_1>0, \qquad \Delta_2>0,\qquad \ldots \qquad \Delta_{n}>0,\qquad \Delta_{n+1}=\Delta_{n+2}=\ldots=0.$$ This implies that the discrete moment problem (\[SRS\]) is regularly solvable in real numbers $\lambda_k$ (this is equivalent to the fact that the polynomial (\[G\_n\]) has $n$ pairwise distinct real roots) and $\mu_k>0$ if and only if the sequence (\[ssmm\]) satisfies the first $n$ inequalities in (\[Hamb\_condition\]). In this case the sequence $\{s_m\}_{m=0}^{2n-1}$ is called *positive*. [**.2. Gauss and Chebyshev quadratures.**]{} Given a function $f$, analytic in a $\rho$-neighbourhood of the origin, suppose that $$F(x):=\frac{1}{x}\int_{-x}^{x}f(t)\,dt,\qquad 0<x <\rho.$$ To construct the amplitude and frequency operator $H_n(\{\mu_k\},\{\lambda_k\},f;x)$ for $F(x)$, we obtain the (positive) moment sequence $s_{m}=\frac{1-(-1)^{m+1}}{m+1}$, $m=\overline{0,2n-1}$, from (\[ssmm\]) and then consider the corresponding discrete moment problem (\[SRS\]). It is well known that this problem is regular for any $n$; moreover, the corresponding generating polynomial (\[G\_n\]) is the Legendre polynomial $P_n$ (we write it in Rodrigues’ form) multiplied by a non-zero constant [@Krylov Ch. 7, §2]: $$G_n(x)=P_n(1) P_n(x), \qquad P_n(x):=\frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2-1)^n.$$ Therefore the frequencies $\lambda_k$ are real, pairwise distinct and belong to the interval $(-1,1)$ (as roots of the Legendre polynomials, which form an orthogonal system on $[-1,1]$). The amplitudes $\mu_k$ are determined via the numbers $\lambda_k$ by the well-known formulas [@Krylov Ch.
10, §3]: $$\mu_k = \frac{2}{\left( 1-\lambda_k^2 \right) [P'_n(\lambda_k)]^2}>0.$$ Thus we obtain the interpolation formula $$\label{GAUSS} \frac{1}{x}\int_{-x}^{x}f(t)\,dt=\sum_{k=1}^n\mu_kf(\lambda_kx)+r_n(x), \qquad r_n(x)=O(x^{2n}),$$ which is a Gaussian quadrature for each fixed $x$. The amplitudes and frequencies depend only on $n$ but not on $f$. It is known [@Krylov Ch. 10] that the Gaussian quadratures are of the highest algebraic degree of accuracy among all the formulas of the form (\[GAUSS\]) and exact (i.e. $r_n(x)\equiv 0$) for the polynomials of degree ${\leqslant}2n-1$. In a similar manner one can obtain interpolation formulas for integrals with classical weights. For example, for $$F(x):=\int_{-x}^{x}\frac{f(t)}{\sqrt{x^2-t^2}}\,dt,\qquad 0< x <\rho,$$ we have $$s_{2m}=\pi\frac{(2m)!}{(2^m m!)^2},\quad s_{2m+1}=0,\quad m=\overline{0,n-1},\qquad G_n(x)=(-2^{1-n}\pi)^n T_n(x),$$ where $T_n(x)=\cos(n\arccos x)$ are the Chebyshev polynomials of the first kind. Calculating the amplitudes $\mu_k$ via the frequencies $\lambda_k$ leads to the following Gauss-Chebyshev quadrature [@Abramovitz §25.4.38] for real $x$, $0<x <\rho$: $$\label{Gauss-Chebyshev} \int_{-x}^{x}\frac{f(t)}{\sqrt{x^2-t^2}}\,dt=\frac{\pi}{n} \sum_{k=1}^n f(\lambda_kx)+r_n(x), \quad \lambda_k=\cos \tfrac{2k-1}{2n}\pi, \quad r_n(x)=O(x^{2n}),$$ whose characteristic property is the equality of the amplitudes $\mu_k=\pi/n$. The remainder can be written more precisely: $$\label{Gauss-Chebyshev-error} r_n(x)=\frac{\pi f^{(2n)}(\xi)}{2^{2n-1}(2n)!}x^{2n},\qquad \xi\in(-x,x).$$ One can deduce it from [@Abramovitz §25.4.38] by a suitable change of variables. [**.3. Padé approximants.**]{} The Padé approximants as well as the Gaussian quadratures are closely related to the classical moment problem (see, for instance, [@Dzyadyk]). 
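Returning briefly to the Gaussian quadrature (\[GAUSS\]): its exactness for polynomials of degree ${\leqslant}2n-1$ can be verified directly. A minimal Python sketch for $n=3$, with the classical Gauss–Legendre nodes and weights hard-coded (the test polynomial is an arbitrary choice, not from the text):

```python
from math import sqrt

# Gauss-Legendre data for n = 3: frequencies = roots of the Legendre
# polynomial P_3, amplitudes = the classical weights
lam = [-sqrt(3/5), 0.0, sqrt(3/5)]
mu  = [5/9, 8/9, 5/9]

def f(t):
    """A test polynomial of degree 5 = 2n - 1."""
    return t**5 + 4*t**4 - t + 2

def F_exact(x):
    """(1/x) * integral of f over [-x, x]; odd powers drop out."""
    return 8*x**4/5 + 4

x = 1.3
quad = sum(m_k * f(l_k * x) for m_k, l_k in zip(mu, lam))
assert abs(quad - F_exact(x)) < 1e-10  # exact up to rounding
```

As expected, $r_n(x)\equiv 0$ here: the quadrature reproduces the integral of a degree-$5$ polynomial exactly, for every $x$.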
Construction of the amplitude and frequency sum $H_n(\{\mu_k\},\{\lambda_k\},h;x)$ with the basis function $h(z)=(z-1)^{-1}$ for some function $f$, analytic in a neighbourhood of the origin, leads to the sequence of moments $s_m=-f_m$, $m=\overline{0,2n-1}$. If the generating polynomial (\[G\_n\]) for this sequence is of degree $n$ and all its roots are pairwise distinct, then by Theorem \[th1\] we get the following interpolation identity: $$\label{2} f(z)=\sum_{k=1}^n\frac{\mu_k}{\lambda_kz-1}+O(z^{2n}).$$ This is a classical Padé approximant of order $[(n-1)/n]$. We recall that classical Padé approximants of order $[m/n]$ are interpolating rational functions of the form $P_m(f;z)/Q_n(f;z)$ (see, for instance, [@Baker §1.1]). Note that the method for solving the problem (\[SRS\]), proposed by Ramanujan [@Ramanujan], is equivalent to the one for construction of the interpolation formula (\[2\]) (see [@Lyubich; @Lyubich2]). [**.4. Exponential sums.**]{} Let $h(z)=\exp(z)$ be a basis function in the amplitude and frequency operator $H_n(\{\mu_k\},\{\lambda_k\},h;z)$ and $f$ be a function, which we are going to interpolate. The corresponding sequence of moments is $s_m=m!f_m$, $m=\overline{0,2n-1}$. Suppose that the problem (\[SRS\]) for this sequence is regular. Then the following formula of $2n$-multiple interpolation at the origin holds: $$f(z)=\sum_{k=1}^n \mu_k e^{\lambda_kz}+O(z^{2n}).$$ (In particular, this result has been already obtained in [@Buchmann] and [@Lyubich].) Interpolation of functions by exponential sums with *simple* equidistant nodes was considered by Prony [@Prony]. At present, many works are devoted to this method and its various modifications and applications (see, for instance, [@Korobov; @Beylkin2; @Beylkin], [@Braess Ch. 6] and references there). A vast investigation of *the exponential series* was conducted in the scientific school of Leont’ev [@Leontiev2]. 
It is worth mentioning here that members of the school also actively studied several *generalizations of the exponential series* (see, for instance, [@Gromov2; @Shevtsov; @Leontiev3]). Namely, they enquired into the problem of completeness of the infinite systems $\{h(\lambda_k z)\}$, where $h$ were entire functions and $\lambda_k$ given numbers. Consequently, they actually considered some *representations* of analytic functions $f$ by amplitude and frequency sums of infinite order, $H_\infty(\{\mu_k\},\{\lambda_k\},h;z)$, and properties of these representations (domains of convergence, admissible classes of the numbers $\lambda_k$ and functions $f$, connections between $\mu_k$ and $\lambda_k$, etc.). In contrast to this approach, we consider *approximations* by amplitude and frequency sums of finite order and respective errors. Moreover, the parameters $\mu_k$ and $\lambda_k$ are not given but uniquely determined by the functions $f$ and $h$. Furthermore, in different applications we regard amplitude and frequency sums as operators with fixed (universal) numbers $\mu_k$ and $\lambda_k$, being determined by the analytic nature of these operators. Examples ======== In this section we give several examples of approximating amplitude and frequency sums for some special functions, in particular, Bessel functions (all arising discrete moment problems are regularly solvable). We will also compare our approximants with similar ones, obtained by other authors. 
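Before the Bessel examples, a small worked instance of the exponential sums of Section 2.4 may be helpful. For $f(z)=1/(1-z)$ we have $f_m=1$ and hence $s_m=m!$; for $n=2$ the generating polynomial (\[G\_n\]) is $G_2(\lambda)=\lambda^2-4\lambda+2$ with roots $\lambda_{1,2}=2\pm\sqrt{2}$, and the first two moment equations determine the amplitudes. The following Python sketch (a hand-computed illustration, not an example from the text) checks that the contact is of order exactly $2n=4$:

```python
from math import sqrt, factorial

# Interpolate f(z) = 1/(1-z) (f_m = 1, so s_m = m! for h = exp) by
# mu_1 e^{lam_1 z} + mu_2 e^{lam_2 z} with 4-fold contact at z = 0.
# Roots of the generating polynomial G_2(lam) = lam^2 - 4 lam + 2:
lam = [2 + sqrt(2), 2 - sqrt(2)]
# Amplitudes from the first two moment equations S_0 = 1, S_1 = 1:
mu = [(1 - 1/sqrt(2)) / 2, (1 + 1/sqrt(2)) / 2]

# All four moment conditions S_m = m!, m = 0..3, are satisfied ...
for m in range(4):
    S_m = sum(m_k * l_k**m for m_k, l_k in zip(mu, lam))
    assert abs(S_m - factorial(m)) < 1e-10

# ... while S_4 != 4!, so the contact is of order exactly 2n = 4:
S_4 = sum(m_k * l_k**4 for m_k, l_k in zip(mu, lam))
assert abs(S_4 - factorial(4)) > 1
```

The same recipe (roots of $G_n$, then a linear system for $\mu_k$) applies verbatim to the Bessel examples below.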
It is known [@Abramovitz §9.1.20-21] that the Bessel function of order zero, $J_0$, has the representation $$J_0(\pm x)=\frac{1}{\pi}\int_{-x}^{x} \frac{\exp(it)}{\sqrt{x^2-t^2}}\,dt= \frac{1}{\pi}\int_{-x}^{x}\frac{\cos(t)}{\sqrt{x^2-t^2}}\,dt, \qquad x>0.$$ From this, (\[Gauss-Chebyshev\]) and (\[Gauss-Chebyshev-error\]) we get the following high-accuracy local approximation of $J_0$ at the point $x=0$ by an amplitude and frequency sum of order $n$: $$\label{J_0} J_0(x)=\frac{1}{n}\sum_{k=1}^n \cos(x\cdot\cos \tfrac{(2k-1)\pi }{2n})+r_n(x), \qquad |r_n(x)|{\leqslant}\frac{|x|^{2n}}{2^{2n-1}(2n)!}.$$ Note that one can obtain the same formula by interpolation of the series [@Abramovitz §9.1.10] $$\label{J_0000} J_0(z)=\sum_{m=0}^\infty\frac{(-1)^m}{(2^mm!)^2}z^{2m}$$ by amplitude and frequency sums with the basis function $h(z)=\exp(z)$. Furthermore, then $$s_{2m}=\frac{(2m)!}{(2^m m!)^2},\qquad s_{2m+1}=0,\qquad m=\overline{0,n-1},$$ cf. $\{s_{2m}\}$ for the Gauss-Chebyshev quadrature in Section 2.2. If in (\[J\_0\]) we use $2n$ instead of $n$ and take into account the symmetry of the frequencies obtained and the parity of cosine, then we get the following amplitude and frequency sum (of the same order $n$): $$\label{J_0_super} H_n(x)=\frac{1}{n}\sum_{k=1}^{n} \cos(x\cdot\cos \tfrac{(2k-1)\pi}{4n}),$$ for which by (\[Gauss-Chebyshev-error\]) we have $$\label{J_0_super_error} J_0(x)\approx H_n(x),\qquad |J_0(x)-H_n(x)|{\leqslant}\frac{|x|^{4n}}{2^{4n-1}(4n)!}.$$ Note that the formula (\[J\_0\_super\]) can also be obtained via (\[interpolaton\_even\]). In [@YF Table 1 and Formula (25)] the following approximant was obtained (for the approach from [@YF] see also Section \[Section6\]): $$\label{YF_J_0_cos} J_0(x)\approx \omega_{11}(x):=\sum_{k=1}^{11}\alpha_k \cos(\beta_k x), \quad \alpha_k\in(0.086,0.096),\quad \beta_k\in(0,1).$$ In comparison with this approximant, the sum (\[J\_0\_super\]) gives much more precise results for $n=11$ in the segment $[0,30]$.
Calculations in Maple show that $$\label{YF_J_0_cos++} {\rm log}_{10}\frac{|J_0(x)-\omega_{11}(x)|}{|J_0(x)-H_{11}(x)|}> M(x):=\frac{38}{x+1}, \qquad x\in [0,30].$$ Moreover, in the segment $[0,10]$ the minorant $M(x)$ can be replaced by the more precise $\tilde M(x)=44\log_{10} \frac{1}{x}+50\to\infty$, $x\to 0$. Note that the absolute error of the formula (\[YF\_J\_0\_cos\]) in the segment $[0,10]$ is quite small and close to $\varepsilon=10^{-12}$. For $30{\leqslant}x{\leqslant}40$ the absolute error of the formula (\[J\_0\_super\]) is also less than that of (\[YF\_J\_0\_cos\]), but for $x{\geqslant}40$ the errors of both approximants can reach $10^{-1}$, thus their use does not seem reasonable for such $x$. From (\[2n-bis-interpol\]) and (\[J\_0\_super\]) we get the following interpolation formula for the derivative of the Bessel function, $J'_0$ (see [@Abramovitz §9.1.28]): $$\label{derivative_J_0++} J'_0(x)\approx H'_n(x)= -\frac{1}{n}\sum_{k=1}^{n} \cos \tfrac{(2k-1)\pi}{4n}\, \sin(x\cdot\cos \tfrac{(2k-1)\pi}{4n})$$ with the error (see (\[Gauss-Chebyshev-error\])) $$\label{derivative_J_0_super_error} |J'_0(x)-H'_n(x)|{\leqslant}\frac{|x|^{4n-1}}{2^{4n-1}(4n-1)!}.$$ For comparison, look at the approximant from [@YF Table 2 and Formula (30)]: $$\label{YF_J_1} J'_0(x)\approx \Omega_{13}(x):=-\sum_{k=1}^{13}\alpha_k \sin(\beta_k x),\quad \alpha_k\in(0.001,0.08),\quad \beta_k\in(0.12,1).$$ Calculations show that the formula (\[derivative\_J\_0++\]) with the same $n=13$ has a noticeably smaller error in the segment $[0,40]$: $$\label{YF_J_0_cos++++} {\rm log}_{10}\frac{|J'_0(x)-\Omega_{13}(x)|}{|J'_0(x)-H'_{13}(x)|}> M_1(x):=\frac{40}{x+1}, \qquad x\in [0,40].$$ Moreover, in the segment $[0,10]$ the minorant $M_1(x)$ can be replaced by the more precise $ \tilde M_1(x)=43\log_{10} \frac{1}{x}+48\to\infty$, $x\to 0$. Note that the absolute error of (\[YF\_J\_1\]) in $[0,10]$ is close to $\varepsilon=10^{-12}$.
For $40{\leqslant}x{\leqslant}45$ the absolute error of the formula (\[derivative\_J\_0++\]) is also less than the one of (\[YF\_J\_1\]). For $x{\geqslant}45$ the absolute error of both formulas can be $10^{-2}$ and thus use of them may be unreasonable. Let us obtain one more representation for the function $J_0$ by taking $$\label{sinc} \textrm{sinc}\; x=\frac{\sin x}{x}=\sum_{m=0}^\infty \frac{(-1)^m}{(2m+1)!}x^{2m}$$ as a basis function and using the approach for interpolation of even functions from Remark \[remark\_even\_odd\]. Namely, we take into account that $J_0$ and $\textrm{sinc}$ are even and interpolate the function $\tilde{f}(t):=J_0(x)$, $t=x^2$, by amplitude and frequency sums with the basis function ${h}(t)=\textrm{sinc}\; x$. As it can be easily checked (see (\[J\_0000\]) and (\[sinc\])), then $s_m=\frac{(2m+1)!}{(2^mm!)^2}$, $m=\overline{0,2n-1}$. Solving the corresponding problem (\[SRS\]) (in all examples considered we obtained non-negative $\lambda_k$ and real $\mu_k$) yields $$\tilde{f}(t)=\sum_{k=1}^n\mu_k h(\lambda_k t)+r_n(t),\qquad r_n(t)=O(t^{2n}),$$ or, which is the same, $$\label{J_0_sinc} J_0(x)=\sum_{k=1}^n\mu_k \, \textrm{sinc}(\sqrt{\lambda_k}x)+r_n(x),\qquad r_n(x)=O(x^{4n}).$$ We now compare (\[J\_0\_sinc\]) with the approximate equality $J_0(x)\approx \sum_{k=1}^{11}\alpha_k\,\textrm{sinc}(\beta_k z)$, obtained in [@YF Table 3 and Formula (32)] (the amplitudes and frequencies were found there by a numerical method). It turns out that the amplitude and frequency sum in (\[J\_0\_sinc\]) with the same order $n=11$ gives more precise results for $x\in[0;42]$ (especially in a small neighbourhood of $x=0$). For $x{\geqslant}40$ the absolute errors of both formulas can already exceed $10^{-3}$. 
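The cosine sums (\[J\_0\_super\]) are straightforward to implement. The Python sketch below evaluates $H_5$ and compares it with $J_0$ computed from the truncated series (\[J\_0000\]); by (\[J\_0\_super\_error\]) the theoretical error at $x=2$ is below $2/20!\approx 8\cdot 10^{-19}$, so the observed difference is at rounding level:

```python
from math import cos, pi, factorial

def H(n, x):
    """Amplitude and frequency sum (J_0_super) of order n."""
    return sum(cos(x * cos((2*k - 1) * pi / (4*n)))
               for k in range(1, n + 1)) / n

def J0_series(x, terms=40):
    """J_0 via its Maclaurin series (J_0000), truncated."""
    return sum((-1)**m * (x / 2)**(2*m) / factorial(m)**2
               for m in range(terms))

x = 2.0
assert abs(H(5, x) - J0_series(x)) < 1e-13
```

For small $|x|$ even very modest orders $n$ give machine precision, in line with the local character of the estimates discussed above.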
The authors of [@Beylkin; @Potts] consider numerical methods for interpolation of the function $J_0$ in equidistant nodes by amplitude and frequency sums of the form $\sum_{k=1}^{n}\alpha_k \exp(\beta_k x)$ with complex (but not purely imaginary) $\alpha_k$ and $\beta_k$ and quite fast-decreasing moduli of exponents. This enables them to obtain good approximants of $J_0$ on large intervals. For example, in [@Beylkin] such an approximant with $n=28$ has the error $\varepsilon{\leqslant}10^{-10}$ for $x\in[0,100\pi]$. The approximant with $n=20$, obtained in [@Potts Example 4.5] by slightly different methods, has the error $\varepsilon{\leqslant}10^{-4}$ for $x\in [0,1000]$. Note that the sums (\[J\_0\_super\]) and (\[derivative\_J\_0++\]) are not recommended for use on such large intervals because of their local character. To guarantee a reasonable rate of approximation, they must have an order comparable with the length of the intervals considered (as can be seen from the order-precise estimates (\[J\_0\_super\_error\]) and (\[derivative\_J\_0\_super\_error\])). For example, for $n=20$ they do not give reasonable quality of approximation in the segment $[0,1000]$, but in the subsegments $[0,40]$, $[0,30]$ and $[0,20]$ the approximation errors are less than $10^{-16}$, $10^{-25}$ and $10^{-38}$, respectively. Analytic regularization of the interpolation by amplitude and frequency operators {#regularization_section} ================================================================================= [**.1. Variation of the moments.**]{} The $2n$-multiple interpolation problem becomes substantially more difficult when the regularity conditions (see Theorem \[th1\]) are not satisfied; in particular, it may then be inconsistent. In order to avoid this difficulty, we propose a method for analytic regularization of the discrete moment problem.
It consists in a certain variation of the right-hand sides $s_m$ of the system (\[SRS\]), namely, in adding the generalized power sums ${\sigma}\sum_{k=1}^{\nu}\alpha_k (r\beta_k)^m$ to them (another approach is described in Remark \[REG+++\]). The parameters $\alpha_k$, $\beta_k$ are independent of $s_m$, and $\sigma$ and $r$ depend only on $\max \{|s_m|\}$. From the point of view of the interpolation problem, this corresponds to the fact that we get a new regularly solvable problem of the form (\[2n-interpolation\]) (and (\[SRS\])), where $f$ is exchanged for a new varied function $\tilde f$ such that $\tilde f(z)-f(z)$ is the amplitude and frequency sum $\sigma H_{\nu}(\{\alpha_k\},\{r \beta_k\},h;z)$. We emphasize that the above-mentioned variation of $s_m=s_{m}(h,f)$ in the framework of the interpolation problem is universal, as it depends only on $\max \{|s_m|\}$ but not on the functional properties of $f$ and $h$. Moreover, as we will see below, if $\alpha_k$, $\beta_k$, $\sigma$, $r$ are chosen appropriately, then the difference $\tilde f-f$ is just a certain binomial. Let us describe the regularization method in detail. Instead of the function $f$, whose corresponding problem (\[SRS\]) is not regular, we introduce the varied function $$\label{F(z)} {\tilde f}(z):=f(z)+\sigma H_{\nu}(\{\alpha_k\},\{r \beta_k\},h;z),\qquad \sigma\in \mathbb{C},\qquad {\nu}\in \mathbb{N},$$ where $\alpha_k$, $\beta_k$, $\sigma$, $r$ are constants.
Since $${\tilde f}(z)= \sum_{m=0}^\infty f_m z^m +\sigma\sum_{k=1}^{\nu}\alpha_k \sum_{m=0}^\infty h_m (r \beta_kz)^m= \sum_{m=0}^\infty \left(s_m+\tau_m \right)h_m z^m,$$ where $\tau_m:=\sigma\sum_{k=1}^{\nu}\alpha_k (r\beta_k)^m$ and $s_m=s_{m}(h,f)$ as above (see (\[ssmm\])), instead of $(\ref{SRS})$ we obtain the system $$\label{Var_disc_mom_syst} \sum_{k=1}^n \mu_k \lambda_k^m=s_m+\tau_m, \qquad m=\overline{0,2n-1},$$ which differs from the system (\[SRS\]) in the regularizing summands $\tau_m$ (if the initial system is regular, then it is natural to set $\sigma=\tau_m\equiv 0$). The task is now to find the numbers $\tau_m$ such that the conditions of Theorem \[th1\] are satisfied. Assume that we have done this, then by Theorem \[th1\] we obtain the interpolation identity $${\tilde f}(z)=H_{n}(\{\mu_k\},\{\lambda_k\},h;z)+O(z^{2n}).$$ Returning to the initial interpolation problem and taking into account (\[F(z)\]) yield $$\label{f_reg} f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)-\sigma H_{\nu}(\{\alpha_k\},\{r\beta_k\},h;z) +O(z^{2n}).$$ At the same time it is reasonable to choose ${\nu}$ as small as possible. However, there is a natural restriction on $\nu$, which is seen from the following statement (cf. §4 in [@Lyubich]). \[ranks\] It is necessary for the regularity of the varied system $(\ref{Var_disc_mom_syst})$ that $${\nu}{\geqslant}n-{\operatorname{rank}}\left(s_{i+j-2}\right)_{i,j=1}^n.$$ Consider the Hankel matrices $$\mathbf{H}:=\left(s_{i+j-2}\right)_{i,j=1}^n, \qquad \mathbf{R}:=\sigma\left(\sum_{k=1}^{\nu} \alpha_k (r \beta_k)^{i+j-2}\right)_{i,j=1}^n.$$ For the regularity of the system $(\ref{Var_disc_mom_syst})$ it is necessary that the coefficient before $\lambda^n$ of the corresponding generating polynomial $G_n$ does not vanish, i.e. $\det (\mathbf{H}+\mathbf{R})\ne 0$, ${\operatorname{rank}}(\mathbf{H}+\mathbf{R})= n$. 
It is well known [@Horn §0.4.5] that for any $n\times n$-matrices $A$ and $B$ $$\label{rank_inequality} {\operatorname{rank}}(A+B){\leqslant}\min\{{\operatorname{rank}}A + {\operatorname{rank}}B;n\}.$$ Consequently, if the system $(\ref{Var_disc_mom_syst})$ is regular, then necessarily $n {\leqslant}{\operatorname{rank}}\mathbf{H} + {\operatorname{rank}}\mathbf{R}$. It remains to note that ${\operatorname{rank}}\mathbf{R}{\leqslant}{\nu}$. Indeed, the following representation is valid: $$\mathbf{R}=\sigma\sum_{k=1}^{\nu} \alpha_k \mathbf{C}(k), \qquad \mathbf{C}(k):=\left((r\beta_k)^{i+j-2}\right)_{i,j=1}^n,$$ where ${\operatorname{rank}}\mathbf{C}(k)=1$ (each next row is the previous one multiplied by $r\beta_k$). From this by the property (\[rank\_inequality\]) we obtain the required bound for ${\operatorname{rank}}\mathbf{R}$. [**.2. Parameters of the regularization.**]{} In the problems under consideration the ranks of matrices $\left(s_{i+j-2}\right)_{i,j=1}^n$ are small. Consequently, taking into account Lemma \[ranks\], we will consider only the overall case ${\nu}=n$ to solve the regularization problem. Then the formula (\[f\_reg\]) has $2n$ summands (if $\sigma\ne 0$, $r\ne 0$), and in this sense the amplitude and frequency sums obtained have no advantage over the $h$-sums of order $2n$. However, an appropriate choice of $\alpha_k$, $\beta_k$, $\sigma$ and $r$ can essentially simplify the latter sum in (\[f\_reg\]). Indeed, let $p$ and $q$ be fixed non-zero complex numbers and $$\label{alpha-beta} \alpha_k=\beta_k=\exp \left(\frac{2\pi (k-1) i}{n}\right),\qquad r=\left(\frac{q}{p}\right)^{1/n},\qquad \sigma=\frac{p^2}{nq} r,\qquad k=\overline{1,n},$$ where the number $r$ is any of the $n$ values of the root. 
Then, as it can be easily seen, in (\[Var\_disc\_mom\_syst\]) we obtain $$\label{TAU} \tau_{n-1}=p,\quad \tau_{2n-1}=q;\qquad \tau_{m}=0 \quad \hbox{ for other }\quad m.$$ Indeed, $$\tau_m=\frac{p^2}{nq} r^{m+1}\sum_{k=1}^n \exp\left({\frac{2\pi (k-1) i}{n}(m+1)}\right),$$ where the sum of the exponents equals $n$ if $m+1$ is divisible by $n$ and zero if not. Consequently, $$\label{SVORACH} \sigma H_n(\{\alpha_k\},\{r\beta_k\},h;z)= p\, h_{n-1}z^{n-1}+q\,h_{2n-1}z^{2n-1}+O(z^{2n}).$$ Thus, assuming the regularity of the varied problem (\[Var\_disc\_mom\_syst\]), we get the formula $$\label{f_reg_expansion} f(z)=H_n(\{\mu_k\},\{\lambda_k\},h;z)- p\, h_{n-1}z^{n-1}-q\,h_{2n-1}z^{2n-1}+O(z^{2n}).$$ In order to obtain the main result of this section, we now show that the above-mentioned problem is indeed regular for a certain choice of the parameters $p$ and $q$. Below we give a possible way of such a choice. The generating polynomial of the system (\[Var\_disc\_mom\_syst\]) for $\alpha_k$ and $\beta_k$ from (\[alpha-beta\]) has the form $$\label{modif_equ} {G}_n(\lambda)={G}_n(p,q;\lambda):= \left| \begin{array}{ccccc} 1 & \lambda & \cdots & \lambda^{n-1} & \lambda^n \\ s_0 & s_1 & \cdots & s_{n-1}+p & s_{n} \\ s_1 & s_2 & \cdots & s_n & s_{n+1} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ s_{n-1}+p & s_{n} & \cdots & s_{2n-2} & s_{2n-1}+q \\ \end{array} \right|.$$ Obviously, for the parameters $p$ and $q$, being sufficiently large in modulus (comparing with the moments $s_k$ and independently of each other), the roots of this polynomial are arbitrarily close to those of the polynomial of the form (\[modif\_equ\]) with $s_k=0$, $k=\overline{0,2n-1}$. The latter polynomial, as it can be easily checked by expanding the determinant along the first row, has the form $$(-1)^{n(n+1)/2}p^{n}(\lambda^n-q/p),$$ and all its $n$ roots are pairwise distinct. From here, the formula (\[f\_reg\_expansion\]) and Theorem \[th1\] we obtain the following result. 
\[th2\] Given $p$ and $q$ sufficiently large in modulus, the varied problem $(\ref{Var_disc_mom_syst})$ with the parameters $(\ref{TAU})$ has a regular solution $\{\mu_k\}$, $\{\lambda_k\}$. Moreover, for the constants $c_1=-ph_{n-1}$ and $c_2=-qh_{2n-1}$ the following interpolation formula holds: $$f(z)=c_1z^{n-1}+c_2 z^{2n-1}+\sum_{k=1}^n\mu_k h(\lambda_k z)+O(z^{2n}).$$ \[REG+++\] The above-mentioned regularization with the parameters from (\[alpha-beta\]) and (\[TAU\]) is actually equivalent to adding the binomial $c_1z^{n-1}+c_2 z^{2n-1}$ with non-vanishing coefficients $c_{1}$ and $c_{2}$ to the function $f$. In what follows we will expand the class of regularizable problems by showing that $c_1$ and $c_2$ can be chosen in a different way and not necessarily non-vanishing. In particular, in the extrapolation problem from Section \[par-extrap\] it will be reasonable to set $c_2=0$. \[rem2\] The conditions on $p$ and $q$, mentioned in Theorem \[th2\], are quite qualitative and need additional specification in practice. Several methods for this will be proposed below in particular applications. In the general case one can use the following observations. The leading coefficient $g_n=g_n(p)$ of the polynomial ${G}_n$ is obviously a polynomial of $p$ of degree $n$, hence $\deg {G}_n(p,q;\lambda)=n$ for all $p$ except those from the set $$\label{Pi-general} \Pi:=\{p: g_n(p)=0\},$$ containing no more than $n$ points. It is possible to obtain some estimates for the boundaries of the set $\Pi$ using the fact that a matrix with strict diagonal dominance is non-singular (see the Levy–Desplanque theorem in [@Horn Th. 6.1.10]). Namely, if we choose $p$ satisfying the inequality $$|s_{n-1}+p|>\sum_{j=1, j\neq n-i+1}^{n}|s_{i+j-2}|, \qquad i=\overline{1,n},$$ then the matrix defining $g_n$ is strictly diagonally dominant and hence $g_n\ne 0$.
For this it is sufficient to take, for example, $$p> n \max_{k=\overline{0,2n-1}} |s_k|.$$ We now suppose that the generating polynomial (\[modif\_equ\]) is of degree $n$. Then the question about “separation” of its multiple roots arises. As it is easily seen, $${G}_n(p,q;\lambda)={\mathcal S}(p;\lambda) +q\mathcal{T}(p;\lambda),$$ where the polynomial ${\mathcal S}$ is of degree $n$ and the polynomial ${\mathcal T}$ is of degree ${\leqslant}n-1$; both polynomials depend only on $p$. The following statement is valid. \[RAZDEL\] Suppose that in each multiple root $($if any$)$ of the polynomial ${G}_n$ the polynomial ${\mathcal T}$ either does not vanish or has a simple root. Then there exists an arbitrarily small variation $\delta\ne 0$ of the parameter $q$ such that the polynomial ${G}_n(p,q+\delta;\lambda)$ has $n$ simple roots. Assume that ${\lambda}_0$ is an $s$-multiple ($s{\geqslant}2$) root of the polynomial ${G}_n(p,q;\lambda)$. Then in a sufficiently small neighbourhood of the root the polynomial $${G}_n(p,q+\delta;\lambda)={G}_n(p,q;\lambda) +\delta{\mathcal T}(p;\lambda)$$ has the form $${G}_n(p,q+\delta;\lambda)=({\lambda}-{\lambda}_0)^s({\alpha}+ O({\lambda}-{\lambda}_0))+\delta (t_0+t_1(\lambda-{\lambda}_0)+O(({\lambda}-{\lambda}_0)^2)),$$ where ${\alpha}\ne 0$, $|t_0|+|t_1|\ne 0$ and values $O({\lambda}-{\lambda}_0)$, $O(({\lambda}-{\lambda}_0)^2)$, $\lambda\to \lambda_0$, are independent of $\delta$. Choose small $\varepsilon>0$ and $\delta=\delta(\varepsilon)$ so that in the disc ${|\lambda-\lambda_0|{\leqslant}2\varepsilon}$ the polynomial ${G}_n$ has no roots, distinct from $\lambda_0$ (we take into account that the roots depend on $\delta$ continuously), and $$|{G}_n(p,q;\lambda)|>|\delta{\mathcal T}(p;\lambda)|,\qquad |\lambda-\lambda_0|=\varepsilon.$$ By Rouché’s theorem, the polynomial ${G}_n(p,q+\delta;\lambda)$ has exactly $s$ roots in the disc ${|{\lambda}-{\lambda}_0|<\varepsilon}$; we will use $\tilde \lambda_k$ to denote them. 
If $t_0\ne 0$, then these roots satisfy the equation $$({\lambda}-{\lambda}_0)^s=-\frac{\delta t_0}{\alpha}\; (1+O(\varepsilon)),\qquad \varepsilon\to 0.$$ If $t_0=0$, $t_1\ne 0$, then $\tilde \lambda_1=\lambda_0$ and other roots satisfy the equation $$({\lambda}-{\lambda}_0)^{s-1}=-\frac{\delta t_1}{\alpha}\; (1+O(\varepsilon)),\qquad \varepsilon\to 0.$$ In any case we get $s$ simple roots. Suppose that $\varepsilon$ and $|\delta|$ are so small that the above-mentioned method works simultaneously for all the multiple roots but all the simple ones remain simple (it is possible as the roots depend on $\delta$ continuously). Then we get the polynomial ${G}_n(p,q+\delta;\lambda)$ with $n$ simple roots. It follows from the aforesaid that the following conjecture is very likely: the set of the parameters $(p,q)$, for which the interpolation problem considered in this section is regularly solvable, is everywhere dense in $\mathbb{C}^2$. But now we have only the following statement, which is a supplement to Theorem \[th2\]. \[th22\] Suppose that $p\notin \Pi$ $($see $(\ref{Pi-general})$$)$ and the conditions of Lemma $\ref{RAZDEL}$ are satisfied. Then there exists an arbitrarily small variation $\delta\ne 0$ of the parameter $q$ such that the varied problem $(\ref{Var_disc_mom_syst})$ with $\tau_{n-1}=p$, $\tau_{2n-1}=q+\delta$ $($all other $\tau_{m}=0)$ has a regular solution $\{\mu_k\}$, $\{\lambda_k\}$, and for the constants $c_1=-ph_{n-1}$ and $c_2=-(q+\delta)h_{2n-1}$ the following interpolation formula holds: $$f(z)=c_1z^{n-1}+c_2 z^{2n-1}+\sum_{k=1}^n\mu_k h(\lambda_k z)+O(z^{2n}).$$ Numerical differentiation by amplitude and frequency operators {#par-diff} ============================================================== [**.1. Statement of the problem.**]{} As an application of the regularization method we consider the problem of $2n$-multiple interpolation of the function $zf'(z)$ by amplitude and frequency operators $H_n$ with the basis function $f$. 
(As above we suppose that $f$ is defined and holomorphic in a neighbourhood of the origin.) The solution to this problem would allow us to obtain a high-accuracy formula for numerical differentiation with local precision $O(z^{2n})$. However, the discrete moment problem (\[SRS\]) with $s_m=m$, $m=\overline{0,2n-1}$, which we get in this case, is non-regular (the generating polynomial (\[G\_n\]) is of the degree less than $n$ for $n=1$ and $n{\geqslant}3$ as the algebraic adjunct to $\lambda^n$ obviously vanishes, and has the double root $\lambda=1$ for $n=2$; both cases do not satisfy Theorem \[th1\]). Here we apply the regularization method mentioned in Remark \[REG+++\]. More precisely, given some complex parameters $p$ and $q$, we consider the varied function $$\label{TILDE-f} \tilde{f}(z):=zf'(z)+ p\, f_{n-1}z^{n-1}+q\,f_{2n-1}z^{2n-1}, \qquad zf'(z)=\sum_{m=0}^{\infty} mf_m z^m$$ and the interpolating sum $H_n(\{\mu_k\},\{\lambda_k\},{f};z)$. From here by (\[ssmm\]) we get the set of the varied moments $$\label{diff_moments} s_m=m, \quad m\neq n-1,2n-1; \qquad s_{n-1}=n-1+p,\qquad s_{2n-1}=2n-1+q,$$ which are independent of $f$. Consequently, $$\label{G_n_diff_2_diag} \hat{G}_n(\lambda):=\sum_{m=0}^n \hat{g}_m\lambda^m= \left| \begin{array}{cccccc} 1 & \lambda & \ldots & \lambda^{n-1} & \lambda^n\\ 0 & 1 & \ldots & n-1+p & n\\ 1 & 2 & \ldots & {n} & {n+1}\\ \ldots & \ldots & \ldots & \ldots & \ldots\\ n-1+p & n & \ldots & {2n-2} & {2n-1+q}\\ \end{array} \right|.$$ If for some $p$ and $q$ the generating polynomial $\hat{G}_n(\lambda)$ has exactly $n$ pairwise distinct roots $\lambda_1,\ldots,\lambda_n$, then by Theorem \[th1\] the varied interpolation problem becomes regular and $$\label{REG PROIZ} zf'(z)=H_n(\{\mu_k\},\{\lambda_k\}, f;z)- p\, f_{n-1}z^{n-1}-q\,f_{2n-1}z^{2n-1}+O(z^{2n}),$$ where $\mu_k$ can be calculated using (\[diff\_moments\]), (\[SRS\]) and Lemma \[400\]. [**.2. 
Coefficients of the generating polynomial.**]{} In the case under consideration the coefficients $\hat{g}_m$ can be written explicitly. \[lemma\_dif\_1\] Let $\kappa:=(-1)^{n(n+1)/2}p^{n-3}$. Then for $n{\geqslant}1$ the coefficients of the polynomial $(\ref{G_n_diff_2_diag})$ have the form $$\begin{aligned} \hat{g}_n&=\kappa p\left(p^2+n(n-1)p +\tfrac{n^2(n^2-1)}{12}\right),\\ \hat{g}_0&=-\kappa \left(p^2q+(2n-1)p^2+(n-1)^2p\,q-\tfrac{n(n^2-1)}{6}p+\tfrac{(n-2)n(n-1)^2}{12}q\right),\quad\quad\quad\;\\ \hat{g}_m&=-\kappa \left((2n-(m+1))p^2-(n-(m+1))p\,q -\tfrac{n(n+1)}{2}\left(\tfrac{n+2}{3}-(m+1)\right)p+\phantom{\tfrac{1}{1}}\right. \\ &\qquad\qquad\qquad\qquad\qquad\left.\phantom{\tfrac{1}{1}} +\tfrac{n(n-1)}{2}\left(\tfrac{2(n+1)}{3}-(m+1)\right)q\right),\qquad m=\overline{1,n-1}.\end{aligned}$$ One can verify the identities for $n=1,2$ directly. From now on, $n{\geqslant}3$. Let us first prove the identity for $\hat{g}_n$ by direct calculation of the algebraic adjunct $(-1)^n D$ to $\lambda^n$ in the determinant (\[G\_n\_diff\_2\_diag\]). We now show that the characteristic polynomial $P_n(\lambda)=\det (A-\lambda I)$ of the matrix $$A:=\left( \begin{array}{ccccc} n-1 & n-2 & n-3 & \ldots & 0 \\ n & n-1 & n-2 & \ldots & 1 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 2n-2 & 2n-3 & 2n-4 & \ldots & n-1 \\ \end{array} \right)$$ has the form $$\label{charac_poly} P_n(\lambda)=(-1)^n\lambda^{n-2}\left(\lambda^2-n(n-1) \lambda+\tfrac{n^2(n^2-1)}{12}\right).$$ It is known that for any matrix $B$ $$\label{characht_property_1} \det (B-\lambda I)=(-1)^n(\lambda^n-b_1\lambda^{n-1}+b_2\lambda^{n-2}+\ldots+(-1)^nb_n),$$ where $b_j$ is the sum of all $j$-rowed diagonal minors of the matrix $B$ (see, for instance, [@Jacobson §3.10]). 
In particular, in terms of the traces of the matrices $B$ and $B^2$ we have $$\label{characht_property_2} b_1=\mathrm{Tr}\,B, \qquad b_2=\tfrac{1}{2}\left((\mathrm{Tr}\,B)^2-\mathrm{Tr}\,B^2\right).$$ For our matrix $A$ all minors of the size greater than two are zero (as subtracting a row from any other one gives a constant row) therefore ${\operatorname{rank}}A=2$. Consequently, the coefficients before the terms with the powers less than $n-2$ in $\det (A-\lambda I)$ are zero. Furthermore, it is clear that $\mathrm{Tr}\,A=n(n-1)$. We now consider the coefficient before $\lambda^{n-2}$. It is easily seen (by direct multiplication of the $k$th row by the $k$th column of the matrix $A$) that $$\mathrm{Tr}\,A^2= \sum_{k=1}^n\sum_{m=0}^{n-1}\left((n-1)^2-(m-k+1)^2\right)= \tfrac{1}{6}n^2(n-1)(5n-7).$$ It follows that $$\tfrac{1}{2}\left((\mathrm{Tr}\,A)^2-\mathrm{Tr}\,A^2\right) =\tfrac{1}{2}\left(n^2(n-1)^2-\tfrac{1}{6}n^2(n-1)(5n-7)\right) =\tfrac{n^2(n^2-1)}{12}.$$ This completes the proof of the formula (\[charac\_poly\]). Let us return to the determinant $D$. Its matrix is mirror symmetric with respect to $A$ (i.e. its columns are placed in a reversed order) and can be obtained by right multiplication of $A$ by the anti-diagonal identity matrix. As is known, the determinant of the $n\times n$ anti-diagonal identity matrix is equal to $(-1)^{n(n-1)/2}$. Hence $$\label{DETER-D} D=(-1)^{\tfrac{n(n-1)}{2}}\det (A+p I)=(-1)^{\tfrac{n(n-1)}{2}}P_n(-p).$$ This and (\[charac\_poly\]) yield the desired formula for $\hat{g}_n=(-1)^n D$, $n{\geqslant}3$. For convenience, we introduce the set of three elements $$\label{Pi} \hat{\Pi}:=\left\{0; \tfrac{n}{2}\left(1-n+d_n\right); \tfrac{n}{2}\left(1-n-d_n\right)\right\},\qquad d_n:=\sqrt{\tfrac{2}{3}(n-1)(n-3)}.$$ Note that if $p\notin \hat{\Pi}$, then $\hat{g}_n\neq 0$. Assume that $p\notin \hat{\Pi}$ and $\hat{g}_n$ are known. 
Then we can determine the desired identities for the other $n$ coefficients from the following system of $n$ linear equations: $$\label{Newton_formula} \sum_{m=0}^n s_{v-m}\hat g_{n-m}=0,\qquad v=\overline{n,2n-1}.$$ The equation for each $v$ can be obtained by summarizing the products of the algebraic adjuncts to the elements of the first row of the determinant (\[G\_n\_diff\_2\_diag\]) (generally, of the determinant (\[G\_n\])) and the corresponding elements of the $(v+2)$th row. The linear system (\[Newton\_formula\]) with unknowns $\hat{g}_0,\ldots,\hat{g}_{n-1}$ has a non-singular matrix (its determinant is equal to the coefficient $\hat{g}_n\ne 0$, $p\notin \hat{\Pi}$), hence it has a unique solution. Consequently, in order to complete the proof of Lemma \[lemma\_dif\_1\] it is sufficient to verify (\[Newton\_formula\]) by direct substitution of the moments (\[diff\_moments\]) and coefficients given in Lemma \[lemma\_dif\_1\]. This verification is quite simple and can be reduced to calculation of the sums $\sum_{m=0}^{n}m^\nu$ for $\nu=1,2$, so we do not dwell on it. Finally, let $p\in \hat{\Pi}$. The case $p=0$ is not interesting as then we have $\hat{G}_n(\lambda)\equiv 0$ (see (\[G\_n\_diff\_2\_diag\]) for $n{\geqslant}3$). In the case $p\in \hat{\Pi}\setminus \{0\}$ the system (\[Newton\_formula\]) is homogeneous and has a singular matrix hence it has infinitely many non-zero solutions, one of those is given in Lemma \[lemma\_dif\_1\] (we use that the coefficients $\hat{g}_m$ depend on $p$ continuously). [**.3. Factorization of the coefficients of the generating polynomial.**]{} The following lemma about factorization of the coefficients of $\hat{G}_n$ is fundamental since it enables us to use Theorem \[th1\] in the problem under consideration. 
\[lemma\_factor\] Let $n{\geqslant}3$, $p\notin \hat{\Pi}$ $($see $(\ref{Pi})$$)$ and $$\label{PARAM p q} q=q_0(p):=-2\,{\frac {p\left (3\,p+{n}^{2}-1\right )}{\left ({n}-1 \right )\left (n-2\right )}}.$$ Then the ratios $\hat{g}_m/\hat{g}_n$ for $m=\overline{1,n}$ are independent of $p$ and $\hat{g}_0/\hat{g}_n$ depend on $p$ linearly. More precisely, the generating polynomial has the form $$\label{FACTOR G} \hat{G}_n(\lambda)=\hat{g}_n \left({\lambda}^{n}- \frac{6\lambda\left(\lambda^{n-1}-(n-1)\lambda+n-2\right)}{\left (n- 1\right)\left(n-2\right)\left (\lambda-1\right)^{2}} +2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right)}} \right).$$ There exists an arbitrarily small variation of the parameter $p$ such that all the roots of the polynomial $\hat{G}_n$ are pairwise distinct. Taking into account Lemma \[lemma\_dif\_1\], if we solve the equation $\hat{g}_{n-1}=0$, being linear with respect to $q$, then we get (\[PARAM p q\]). Substitution of the expression for $q$ into the other coefficients gives $$\hat{g}_0=\hat{g}_n\left(2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}}\right),\qquad \hat{g}_m=-6\,\hat{g}_n{\frac {n-1-m}{\left (n-1\right )\left ({n}-2 \right )}},\qquad m=\overline{1,n-1},$$ where $\hat{g}_n=(-1)^{\tfrac{n(n+1)}{2}}p^{n-2}\left(p^2+n(n-1)p +\tfrac{n^2(n^2-1)}{12}\right)\neq 0$ as $p\notin \Pi$, therefore $$\hat{G}_n(\lambda)=\hat{g}_n\, \left ({\lambda}^{n}-\frac{6}{\left (n- 1\right )\left (n-2\right )}\sum _{m=1}^{n-1}\,{\left (n-m-1\right ){\lambda}^{m}}+2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}} \right ),$$ which yields (\[FACTOR G\]) after calculation of the sum. The conclusion about the simplicity of the roots follows immediately from Lemma \[RAZDEL\]. [**.4. 
Main theorem about numerical differentiation by amplitude and frequency operators.**]{} From (\[corollary\_form\_Newton++++\]) we get $$\label{corollary_form_Newton} S_{2n} =-\frac{1}{\hat{g}_n}\sum_{m=0}^{n-1} s_{n+m}\hat{g}_{m}.$$ After substituting the moments (\[diff\_moments\]) and the coefficients from Lemma \[lemma\_factor\] into (\[corollary\_form\_Newton\]), direct calculation gives the expression for the $2n$th moment: $$S_{2n} = 2n-C_n(p),\qquad C_n(p):=\frac{6np}{(n-1)(n-2)}.$$ Therefore the remainder $r_n$, which was denoted in (\[REG PROIZ\]) just as $O(z^{2n})$, has the following more precise form: $$\label{r_n_diff} \tilde{f}(z)-\sum_{k=1}^n\mu_k f(\lambda_k z)= \sum_{m=2n}^{\infty}(m-S_m)f_mz^m= C_n(p)f_{2n}z^{2n}+O(z^{2n+1}).$$ From the foregoing, we get the following statement. \[th3\] Given $n{\geqslant}3$, any $p_0\in \mathbb C$ and arbitrarily small $\varepsilon>0$, there exists a value of the parameter $p$, $|p-p_0|{\leqslant}\varepsilon$, $p\notin \hat{\Pi}$ $($see $(\ref{Pi}))$, such that $$\label{diff_formula_common} zf'(z)=\sum_{k=1}^n\mu_k f(\lambda_k z)-pf_{n-1}z^{n-1}- qf_{2n-1}z^{2n-1}+r_n(z),$$ where $r_n(z)=O(z^{2n})$ is the form $(\ref{r_n_diff})$ and $q=q_0(p)$ $($see $(\ref{PARAM p q})$$)$. Moreover, the frequencies $\lambda_k$ are the pairwise distinct roots of the polynomial $(\ref{FACTOR G})$ and the amplitudes $\mu_k$ are determined uniquely by Lemma \[400\]. Furthermore, $\mu_k=\mu_k(p,n)$ and $\lambda_k=\lambda_k(p,n)$, so they are independent of the function $f$ and universal in this sense. The interpolation formula is exact for the polynomials of degree ${\leqslant}2n-1$, i.e. ${r_n(z)\equiv 0}$ in $(\ref{diff_formula_common})$ for the polynomials $f$ such that $\deg f{\leqslant}2n-1$. Note that the method, which we consider in this section, can be easily extended to interpolation of the functions $z^\nu f^{(\nu)}(z)$, $\nu{\geqslant}2$, hence one can obtain the formulas for numerical differentiation of higher order. 
**.5. Remarks and examples.** We now make several remarks about practical applications of Theorem \[th3\]. The remainder in the formula (\[diff\_formula\_common\]) is of quite high infinitesimal order, $O(z^{2n})$, and this is achieved by knowing only $n$ values of $f$ and two fixed values of its derivatives at $z=0$. Traditional interpolation approaches with such a number of known values usually have remainders of order $O(z^{n+2})$. In other words, the formula $(\ref{diff_formula_common} )$ is exact for the polynomials of degree ${\leqslant}2n-1$, whereas usual $(n+2)$-point interpolation formulas are exact only for the polynomials of degree ${\leqslant}n+1$. Another important feature of the formula (\[diff\_formula\_common\]) is that the variable interpolation nodes $\lambda_k z$ depend only on the point $z$, where we calculate $zf'(z)$, and are independent of $f$ (in this sense the amplitudes $\mu_k=\mu_k(p,n)$ and frequencies $\lambda_k=\lambda_k(p,n)$ are universal for the whole class of analytic functions). It is seen from the formula (\[diff\_formula\_common\]) that its precision strongly depends on the precision of the values $f_{\nu}=f^{(\nu)}(0)/\nu!$, $\nu=n-1,2n-1$ (of course we assume that the values of the function $f$ are known). In several particular cases this difficulty can be overcome. For instance, if it is known a priori that the function $f$ is even (odd), then there is no necessity in calculation of $f_{n-1}$ and $f_{2n-1}$ for even (odd) $n$ since then $pf_{n-1}z^{n-1}+ qf_{2n-1}z^{2n-1}\equiv 0$ ($pf_{n-1}z^{n-1}\equiv 0$) and the local precision is $O(z^{2n})$ ($O(z^{2n-1})$). In the more general case, when $f$ is even or odd, the formula (\[diff\_formula\_common\]) can be applied to the even auxiliary function $\omega(z)=f^2(z)$ for even $n$. 
Then the corresponding coefficients $\omega_{n-1}=\omega_{2n-1}\equiv 0$ and $$2zf(z)f'(z)=\sum_{k=1}^n\mu_k f^2(\lambda_k z)+O(z^{2n}).$$ In the most general case, if we want to use (\[diff\_formula\_common\]) systematically, it is necessary to calculate the regularizing binomial $pf_{n-1}z^{n-1}+qf_{2n-1}z^{2n-1}$ for each [*fixed*]{} function $f$. For this purpose some known formulas for numerical differentiation of analytic functions at $z=0$ can be used. For example, one can use several high-accuracy formulas for $f_{\nu}=f^{(\nu)}(0)/\nu!$, obtained in [@Schmeisser]. Note that, in contrast to the formulas for calculation of Taylor coefficients as, for instance, in [@Schmeisser], the formula (\[diff\_formula\_common\]) works well only in a deleted neighbourhood of the point $z=0$. Below we cite several known interpolation formulas for numerical differentiation, being close in form to (\[diff\_formula\_common\]). In [@Ash_Janson_Jones] the following $n$-point formulas for numerical differentiation of real functions were obtained: $$\label{1} f_\nu x^{\nu}= \sum_{k=1}^n\mu_k f(\lambda_k x)+O(x^n),\qquad {\nu}=1,2,\qquad n{\geqslant}{\nu}+1,$$ where $\lambda_k x$, $|\lambda_k-\lambda_j|>1$, are real nodes, minimizing the generalized power sums $S_v=\sum_{k=1}^n\mu_k\lambda_k^v$ for $v{\geqslant}n+1$ (this corresponds to minimization of the remainder). In [@Salzer] interpolation formulas of the Lagrange type for numerical differentiation were constructed on basis of special non-uniformly distributed nodes, also minimizing the remainder. Formulas of the form (\[1\]) for analytic functions were obtained in [@Lyness] via contour integrals. Moreover, it was shown there that for $\lambda_k=\exp (2\pi i (k-1)/n)$ and appropriate $\mu_k$ their formulas were exact for the polynomials of degree ${\leqslant}n+{\nu}-1$. This result can be extended. 
Indeed, by direct substitution (see the discussion near the formulas (\[alpha-beta\]) and (\[TAU\])) one can check that for any non-vanishing parameters $p$ and $q$ and any integer $0{\leqslant}\nu{\leqslant}n-1$ we have $$pf_\nu z^\nu+qf_{\nu+n}z^{\nu+n}=\sum_{k=1}^n\mu_k f\left(\lambda_k z\right)+ O(z^{2n+\nu}),\quad \lambda_k=\left(\frac{q}{p}\right)^{1/n}\exp\left(\frac{2\pi (k-1)i}{n}\right),$$ where $\mu_k=(-1)^{n-\nu-1}\lambda_k \sigma_{n-\nu-1}^{(k)}\,p^2 \,(qn)^{-1}$ $($see Lemma $\ref{400})$ and in $\lambda_k$ one can take any value of the root. The following formula for analytic functions $f$ is contained in [@Dan2008]: $$f_\nu z^{\nu}= \sum_{k=1}^{(\nu+1) N}\lambda_kf(\lambda_kz)+O(z^{n}), \qquad N=\left[\tfrac{n}{\nu+1}\right], \qquad n>\nu+6.$$ Here the numbers $\lambda_k$ do not depend on $f$ and are non-zero roots of the polynomial $P_n(\lambda)=\sum_{k=0}^N (-1)^k \lambda^{n-k-\nu k}/((\nu+1)^k k!)$ (see estimates for the remainder in [@Dan2008]). Other results of this type were also obtained in [@Chu2010; @Fryantsev]. Let $n=4$ and $p=-1$. Then by (\[PARAM p q\]) and (\[FACTOR G\]) we get $q=4$ and $\hat{G}_4(\lambda)= 9\left(\lambda^4-\lambda^2-2\,\lambda+1\right)$. The roots of the generating polynomial $\hat{G}_4$ are $$\lambda_1\approx 1.38647,\quad \lambda_2\approx0.42578,\quad \lambda_{3,4}\approx-0.90612\pm 0.93427\,i.$$ Lemma \[400\] gives $$\mu_1\approx0.967276,\quad \mu_2\approx-0.79945,\quad \mu_{3,4}\approx-0.08390\pm 0.08175\, i,$$ and the formula (\[diff\_formula\_common\]) takes the form $$\label{diff_example_n=4} zf'(z)=\sum_{k=1}^4\mu_k f(\lambda_k z)+f_{3}z^{3}-4f_{7}z^{7}+r_4(z),\quad r_4(z)=-4 f_8 z^8-9 f_9 z^9+\cdots.$$ For instance, set $f(z)=(z+2)^{-1}$ (then $f_3=-1/16$, $f_7=-1/256$). Calculations in Maple show that the error of  (\[diff\_example\_n=4\]) does not exceed $10^{-4}$ for $z\in [-0.5,0.5]$. For $n=7$ and $n=10$ the corresponding errors are less than $10^{-8}$ and $10^{-12}$, correspondingly. 
Now consider the Bessel function $f=J_0$ from (\[J\_0000\]). This function is even and consequently the regularizing binomial $pf_{n-1}z^{n-1}+ qf_{2n-1}z^{2n-1}$ vanishes for even $n$. Therefore from (\[diff\_example\_n=4\]) we get $$zJ'_0(z)\approx \sum_{k=1}^4 \mu_kJ_0(\lambda_kz).$$ The error of the approximant does not exceed $10^{-4}$ for $z\in[-1,1]$. For $n=6$ and $n=8$ the corresponding errors are less than $10^{-9}$ and $10^{-14}$. **.6. Some estimates.** Absolute values of the amplitudes $\mu_k$ and frequencies $\lambda_k$ play an important role in calculations by the formula (\[diff\_formula\_common\]). We now estimate the frequencies. \[500\] For the roots $\lambda_k$ of the polynomial $(\ref{FACTOR G})$ we have $$|\lambda_k|{\leqslant}1+\frac{O(1)}{\sqrt{n}}, \qquad O(1)>0, \qquad n\to \infty.$$ More precisely, given $n{\geqslant}3$, $$|\lambda_k|{\leqslant}\Lambda:=\left(2\delta\right)^{\frac{3}{\sqrt{n-2}}},\qquad \delta:=1+\frac{3|p|}{(n-1)(n-2)}, \qquad p\notin \Pi.$$ First, we estimate the absolute value of the sum of the last three terms in the brackets in $(\ref{FACTOR G})$: $$\begin{aligned} V:&=\left|-\frac{6\lambda\left({\lambda}^{n-1}-(n-1){\lambda}+n-2\right)}{\left(n-1\right)\left(n-2\right)\left (\lambda-1\right)^{2}} +2+{\frac {6\,p}{\left (n-1\right )\left (n-2\right )}}\right|\\ &{\leqslant}|\lambda|^n \left(\frac{6\left(1+(n-1)|\lambda|^{2-n}+(n-2)|\lambda|^{1-n}\right)}{(n-1)(n-2)\left(|\lambda|-1\right)^{2}} +\frac{2\delta}{|\lambda|^n}\right).\end{aligned}$$ It is easily seen that $\left(2\delta\right)^{\frac{3}{\sqrt{n-2}}}-1{\geqslant}\tfrac{3\ln 2}{\sqrt{n-2}}$ for $\delta>1$. 
Therefore substituting $|\lambda|=\Lambda$ into the latter expression yields $$V{\leqslant}|\lambda|^n \left(\frac{6 \cdot\left(1+(2n-3)/(2\delta)^{3\sqrt{n-2}}\right)}{(3\ln 2)^2 \cdot(n-1)} +\frac{1}{(2\delta)^{\frac{3n}{\sqrt{n-2}}-1}}\right).$$ It is also easy to check that for $n{\geqslant}3$ and $\delta>1$ the expression in the brackets is less than one, so $V<\Lambda^n$ for the above-mentioned $|\lambda|=\Lambda$. This and Rouché’s theorem imply that all the roots of the polynomial (\[FACTOR G\]) lie in the disc $|\lambda|< \Lambda$. One can use Lemmas \[400\] and \[500\] to obtain estimates for the amplitudes $\mu_k$ but this problem needs more delicate analysis and we do not dwell on it in the present paper. Numerical extrapolation by amplitude and frequency operators {#par-extrap} ============================================================ [**.1. Statement of the problem.**]{} Let us briefly describe the idea of the extrapolation. Let $a>0$, $p,q\in \mathbb{R}$ and $f$ be a function analytic in a disc $|z|<\rho$, $\rho>0$. Consider the problem of multiple interpolation of the function $f(az)$ in a neighbourhood of $z=0$ by the amplitude and frequency operator ${H}_n(\{\mu_k\},\{\lambda_k\},f;z)$, where $f$ is chosen as a basis function. As in the case of differentiation, we get a non-regular discrete moment problem with $s_m=a^m$. To regularize it, we introduce the varied function $$\label{ZADACHA extrapol} \tilde{f}(z):=f(az)+pf_{n-1}z^{n-1}+qf_{2n-1}z^{2n-1}$$ with some parameters $p$ and $q$, being non-zero simultaneously. 
By the same approach that we used at the beginning of Section \[par-diff\], in order to construct the interpolating sum ${H}_n(\{\mu_k\},\{\lambda_k\},f;z)$, we find the sequence of varied moments $$\label{moments_extropal} s_k=a^{k}, \;\; k\neq n-1,2n-1, \;\; k\in\mathbb{N}_0; \quad s_{n-1}=a^{n-1}+p,\quad s_{2n-1}=a^{2n-1}+q,$$ and construct the generating polynomial $$\label{G_n_extrapol_p_q} \check{G}_n(\lambda):=\sum_{m=0}^n \check{g}_m\lambda^m= \left| \begin{array}{cccccc} 1 & \lambda & \ldots & \lambda^{n-1} & \lambda^n\\ 1 & a & \ldots & a^{n-1}+p & a^n\\ a & a^2 & \ldots & a^{n} & a^{n+1}\\ \cdots & \cdots & \cdots & \cdots & \cdots\\ a^{n-1}+p & a^n & \ldots & a^{2n-2} & a^{2n-1}+q\\ \end{array} \right|.$$ If for some $a> 0$, $p$ and $q$ the polynomial $\check{G}_n$ is of degree $n$ and all its roots $\lambda_1,\ldots,\lambda_n$ are pairwise distinct, then by Theorem \[th1\] the varied problem for the function (\[ZADACHA extrapol\]) is regularly solvable, so the following interpolation formula holds: $$\label{100} f(az)=H_n(\{\mu_k\},\{\lambda_k\}, f;z)- p\, f_{n-1}z^{n-1}-q\,f_{2n-1}z^{2n-1}+O(z^{2n}).$$ (Of course, we assume that all the arguments of the function $f$ lie in the disc ${|z|<\rho}$, where it is analytic.) Suppose also that the inequalities ${|\lambda_k|<\delta a}$ with some ${\delta\in (0,1)}$ are valid for all $k=\overline{1,n}$. Then it is natural to call the formula (\[100\]) [*extrapolational*]{} as the values of the function $f$ at the points $\zeta=az$ are approximated by the values of this function at the points $\lambda_kz$, belonging to the disc $\{\xi: |\xi|<\delta |\zeta|\}$, $\delta <1$. In the present section we will obtain such an extrapolation formula and a quantitative estimate for its remainder. We start with a formal description. As in Section \[par-diff\], we first analyse the coefficients and roots of the polynomial $\check{G}_n$ of the form (\[G\_n\_extrapol\_p\_q\]). [**.2. 
Coefficients of the generating polynomial.**]{} The following statement gives an explicit form of the coefficients $\check{g}_m$. \[lemma\_extrapol\_1\] Let $\kappa:=(-1)^{n(n+1)/2}p^{n-2}$, $p\neq 0,\;-na^{n-1}$. The polynomial $\check{G}_n$ has the following coefficients: $$\label{koef_extrapol} \begin{array}{c} \check{g}_n=\kappa p\left(na^{n-1}+p\right),\qquad \check{g}_0=-\kappa\left(a^{2n-1}p+(n-1)a^{n-1}q+p\,q\right), \\ \check{g}_m=-\kappa a^{n-1-m}\left(a^np-q\right),\qquad m=\overline{1,n-1}. \end{array}$$ The method of proof is the same as in Lemma \[lemma\_dif\_1\]. We first prove the identity for ${\check{g}}_n$ by direct computation of the algebraic adjunct $(-1)^n D$ to the element $\lambda^n$ in the determinant (\[G\_n\_extrapol\_p\_q\]). For this we show that given the matrix $$A:=\left( \begin{array}{ccccc} a^{n-1} & a^{n-2} & a^{n-3} & \ldots & 1 \\ a^n & a^{n-1} & a^{n-2} & \ldots & a \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ a^{2n-2} & a^{2n-3} & a^{2n-4} & \ldots & a^{n-1} \\ \end{array} \right),$$ the characteristic polynomial $P_n(\lambda)=\det (A-\lambda I)$ has the form $$\label{charac_poly_2} P_n(\lambda)=(-1)^n\lambda^{n-1}\left(\lambda-na^{n-1}\right).$$ Indeed, in this case ${\operatorname{rank}}A=1$ as any two rows of $A$ are proportional. Therefore, by (\[characht\_property\_1\]) and (\[characht\_property\_2\]) the coefficients before the terms with the powers less than ${n-1}$ are zero in $\det (A-\lambda I)$. To prove (\[charac\_poly\_2\]), it remains to notice that $\mathrm{Tr} A=na^{n-1}$. Now we return to the determinant $D$. In the same way as in Lemma \[lemma\_dif\_1\], (\[DETER-D\]) and (\[charac\_poly\_2\]) yield the desired formula for ${\check{g}}_n$. 
If we suppose that ${\check{g}}_n$ are known, then the other $n$ coefficients as in the above-mentioned case of differentiation can be found from the system (\[Newton\_formula\]) of $n$ linear equations (with the exchange of $\hat{\phantom{o}}$ by $\check{\phantom{o}}$). This system with respect to unknowns ${\check{g}}_0,\ldots,{\check{g}}_{n-1}$ has a unique solution for $p\neq 0,\;-na^{n-1}$ as its determinant is equal to ${\check{g}}_n\ne 0$. Thus it suffices to check (\[Newton\_formula\]) by direct substitution of the values (\[moments\_extropal\]) and (\[koef\_extrapol\]). This is quite easy and thus we do not dwell on it. [**.3. Roots of the generating polynomial.**]{} From now on we suppose that $q=0$; then, as we will see below, $\check{G}_n$ has exactly $n$ pairwise distinct roots and its coefficients can be calculated by quite simple formulas. Let $p>0$, $q=0$ and $a>0$. Then $$\label{COEFF G_n_extrapol_p_q_special} {\check{g}}_n=\kappa p\left(na^{n-1}+p\right)\neq 0,\qquad {\check{g}}_m=-\kappa p \,a^{2n-1-m},\qquad m=\overline{0,n-1},$$ and the generating polynomial has the form $$\label{G_n_extrapol_p_q_special} \check{G}_n(\lambda)= {\check{g}}_n\left(\lambda^n-\frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1} \frac{\lambda^m}{a^m}\right)={\check{g}}_n \left(\lambda^n-\frac{a^{n}}{na^{n-1}+p} \frac{\lambda^n-a^n}{\lambda-a}\right).$$ Moreover, $\check{G}_n$ has exactly $n$ pairwise distinct roots. The representation (\[G\_n\_extrapol\_p\_q\_special\]) can be obtained by direct substituting $q=0$ into the formulas (\[koef\_extrapol\]) from Lemma \[lemma\_extrapol\_1\] for $p>0$. It remains to show that the polynomial (\[G\_n\_extrapol\_p\_q\_special\]) has no multiple roots. 
We rewrite $\check{G}_n$ in the form $$\check{G}_n(\lambda)={\check{g}}_n \frac{P_{n+1}(\lambda)}{\lambda-a},\qquad P_{n+1}(\lambda):=\lambda^{n+1}-a\ \left(1+\frac{a^{n-1}}{na^{n-1}+p}\right)\lambda^n+\frac{a^{2n}}{na^{n-1}+p}.$$ The set of roots of the polynomial $P_{n+1}$ contains all the roots of the polynomial $\check{G}_n$ and one more root $\lambda=a$. If the polynomial $P_{n+1}$ had a multiple root, then this root would be also a root of its derivative $P'_{n+1}$. However, $$P'_{n+1}(\lambda)=(n+1)\lambda^{n-1}\left(\lambda-\lambda^{*}\right),\qquad \lambda^{*}:=\frac{a n}{n+1}\left(1+\frac{a^{n-1}}{na^{n-1}+p}\right),$$ and, as it can be easily seen, in both roots of the derivative $P'_{n+1}$, namely, $0$ and $\lambda^{*}$, the polynomial $P_{n+1}$ does not vanish (for $p>0$), more precisely, $P_{n+1}(0)>0$, $P_{n+1}(\lambda^{*})<0$. Consequently, both $P_{n+1}$ and $\check{G}_n$ have no multiple roots. [**.4. Estimates of the roots of the generating polynomial.**]{} We now estimate the roots of the polynomial (\[G\_n\_extrapol\_p\_q\_special\]) assuming $q=0$ as above. \[lemma\_extrapol\_2\] Given $p>0$ and $a>0$, for the roots $\lambda_k$ of the polynomial $\check{G}_n$ we have $$\label{COR-G_n} |\lambda_k|<\delta a<a,\qquad \delta=\delta(n,a,p):=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}, \qquad k=\overline{1,n}.$$ For $|\lambda|=\delta a$ we get $$\frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1}\left|\frac{\lambda}{a}\right|^m= \frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1}\left|\delta\right|^m<\frac{na^{n-1}}{na^{n-1}+p}\,a^n= (\delta a)^n=|\lambda|^n.$$ From this, taking into account the first identity in $(\ref{G_n_extrapol_p_q_special})$ and Rouché’s theorem, we conclude that all $n$ roots of the polynomial $\check{G}_n$ belong to the disc $|\lambda|<\delta a$. \[primech\_ext\] We now mention several properties of $\delta=\delta(n,a,p)$, which plays a key role in the process of extrapolation. 
For a fixed $n$ we obviously have $$\label{COR-G_n-asymp} \delta\in (0,1),\qquad \delta=\left(\frac{n}{p}\right)^{1/n}a^{1-\tfrac{1}{n}} \left(1-O\left(\frac{a^{n-1}}{p}\right)\right),\qquad \frac{a^{n-1}}{p}\to 0,$$ where $O\left(a^{n-1}/p\right)$ is a positive real value. From this it follows, in particular, that if the fraction $a^{n-1}/p$ decreases, then all the roots $\lambda_k$ come closer to the origin. For example, for a fixed $p$ and $a\to 0$ the largest absolute value of $\lambda_k$ is bounded by a value of order $a^{2-1/n}$. [**.5. Main theorem about numerical extrapolation by amplitude and frequency operators. Remarks and examples.**]{} We now aim to estimate the extrapolation remainder (\[100\]) and then formulate the main result of the section. To do so, we need estimates for the generalized power sums $S_v$, $v{\geqslant}2n$, taking into account the sums (\[moments\_extropal\]) with indexes ${\leqslant}2n-1$ and $q=0$ as before. \[lemma\_extrapol\_3\] For $p>0$ and $a>0$ the following inequalities hold: $$\label{200} 0{\leqslant}S_v{\leqslant}a^v, \qquad v{\geqslant}2n.$$ We prove this by induction, based on (\[COEFF G\_n\_extrapol\_p\_q\_special\]) and the identities  (\[corollary\_form\_Newton++++\]) for $v{\geqslant}2n$. For $v=2n$ from (\[moments\_extropal\]) and (\[COEFF G\_n\_extrapol\_p\_q\_special\]) we get $$S_{2n} = \frac{na^{3n-1}}{na^{n-1}+p}{\leqslant}a^{2n}.$$ Furthermore, suppose that the inequality $S_v{\leqslant}a^v$ is valid for all $v=\overline{2n,N}$ (hence for all $v=\overline{0,N}$). Under this assumption, we obtain $$S_{N+1} = \frac{a^{2n-1}}{na^{n-1}+p}\sum_{m=0}^{n-1} \frac{S_{N-n+m+1}}{a^m}{\leqslant}\frac{na^{N+n}}{na^{n-1}+p}{\leqslant}a^{N+1},$$ which completes the proof. Now, using (\[200\]), we can estimate the extrapolation remainder (\[100\]), where $q=0$: $$|r_n(z)|=\left|\sum_{m=2n}^{\infty}(a^m-S_m)f_mz^m\right| {\leqslant}\sum_{m=2n}^{\infty}|f_m||az|^m.$$ Note that this estimate is independent of $p$. 
Summarizing, we formulate the main result of this section. \[th4\] Let $f$ be analytic in the disc $|z|<\rho$, $a>0$, $p>0$. Then for $|z|<\rho/a$ the following extrapolation formula holds: $$\label{extrapol_formula} f(az)=\sum_{k=1}^n\mu_k f(\lambda_k z)-pf_{n-1}z^{n-1}+r_n(z),\qquad |r_n(z)|{\leqslant}\sum_{m=2n}^{\infty}|f_m||az|^m,$$ where the frequencies $\lambda_k$ are the pairwise distinct roots of the polynomial $(\ref{G_n_extrapol_p_q_special})$, and $($see $(\ref{COR-G_n}))$ $$|\lambda_k z|<\delta a|z|<a|z|,\qquad \delta=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}.$$ The amplitudes $\mu_k$ are uniquely determined by Lemma \[400\]. Moreover, $\lambda_k=\lambda_k(a,p,n)$ and $\mu_k=\mu_k(a,p,n)$, so they are independent of the function $f$ and universal in this sense. The extrapolation formula $(\ref{extrapol_formula})$ is exact for polynomials of degree ${\leqslant}2n-1$, i.e. $r_n(z)\equiv 0$ for the polynomials $f$ such that $\deg f{\leqslant}2n-1$. If $f_{n-1}=0$, then the extrapolation formula has a particularly simple form and a high degree of accuracy. For instance, for even (odd) functions and even (odd) $n$ the identity from (\[extrapol\_formula\]) has the form $$f(az)=\sum_{k=1}^n\mu_k f(\lambda_k z)+r_n(z).$$ Calculation of $f_{n-1}$ in the general case was discussed at the end of Section \[par-diff\]. We emphasize that the extrapolation character of the formula (\[extrapol\_formula\]) is specified by a proper choice of the parameters $p$ and $q$ in the problem (\[moments\_extropal\]). For instance, for $p=q=0$ this problem is also solvable (but non-regular): one can take $\mu_1=1$ and $\lambda_1=a$, with the remaining $\mu_k$ equal to zero and the remaining $\lambda_k$ arbitrary. However, in this case (\[extrapol\_formula\]) becomes a trivial identity.
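The claims of the theorem can be checked numerically. In the sketch below (Python/NumPy; the values $n=2$, $a=1/2$, $p=2$ are our illustrative choice), the frequencies are computed as the roots of $P_{n+1}$ other than $\lambda=a$; the amplitudes are then recovered from the conditions $\sum_k\mu_k\lambda_k^m=a^m+p\,[m=n-1]$, $m=\overline{0,2n-1}$, which is what exactness of (\[extrapol\_formula\]) on the monomials $z^m$, $m{\leqslant}2n-1$, amounts to. The overdetermined linear system for the $\mu_k$ is solved by least squares and checked for consistency, and the remainder bound is then verified for $f=\exp$.

```python
import numpy as np
from math import factorial

n, a, p = 2, 0.5, 2.0                     # illustrative parameters (q = 0)

# Frequencies: roots of P_{n+1}, with the known root lambda = a removed.
D = n * a ** (n - 1) + p
coeffs = np.zeros(n + 2)
coeffs[0] = 1.0
coeffs[1] = -a * (1.0 + a ** (n - 1) / D)
coeffs[-1] = a ** (2 * n) / D
lam = np.roots(coeffs)
lam = np.real_if_close(np.delete(lam, np.argmin(np.abs(lam - a))))

# Amplitudes: exactness on the monomials z^m, m = 0..2n-1, forces
#   sum_k mu_k lam_k^m = a^m + p * [m == n-1].
m = np.arange(2 * n)
V = lam[None, :] ** m[:, None]            # (2n x n) Vandermonde-type matrix
rhs = a ** m + p * (m == n - 1)
mu = np.linalg.lstsq(V, rhs, rcond=None)[0]
assert np.allclose(V @ mu, rhs)           # all 2n conditions are consistent

# Remainder bound for f = exp: |r_n(z)| <= sum_{m >= 2n} |a z|^m / m!.
z = np.linspace(-1.0, 1.0, 201)
approx = mu @ np.exp(np.outer(lam, z)) - p * z ** (n - 1) / factorial(n - 1)
err = np.abs(np.exp(a * z) - approx)
tail = np.exp(np.abs(a * z)) - sum((np.abs(a * z)) ** k / factorial(k)
                                   for k in range(2 * n))
assert np.all(err <= tail + 1e-12)
```

For these parameters the computation recovers $\lambda_k\in\{-1/6,\,1/4\}$ and $\mu_k\in\{-27/5,\,32/5\}$, and the observed error for $f=\exp$ stays inside the theoretical tail bound on the whole grid.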
If one takes, for example, $p>0$ and $q=a^n p-\check{g}_n/\kappa$, then the problem (\[moments\_extropal\]) turns out to be regular and, moreover, the coefficients of the generating polynomial $\check{G}_n$ can also be calculated easily: $$\check{g}_0=-\check{g}_np_0, \qquad p_0:=\left(na^{n-1}+p\right)\left(\kappa p \,a^n/\check{g}_n-1\right)+a^{n-1},$$ $$\check{g}_m=-a^{n-1-m}\check{g}_n,\qquad m=\overline{1,n-1}.$$ However, in this case there is no extrapolation since the inequalities $|\lambda_k|<a$ are not valid anymore and we just have an interpolation formula of the form (\[extrapol\_formula\]). Note that for different $a$ one has different extrapolation formulas of the form (\[extrapol\_formula\]), which do not arise from one another. In particular, they cannot be reduced to the case $a=1$ by linear substitution. Indeed, substituting $t=az$ into (\[extrapol\_formula\]) gives $$\label{extrapol_formula_substitute} f(t)=\sum_{k=1}^n\mu_k(a) f\left(\tilde\lambda_k(a) t\right)-pf_{n-1}(t/a)^{n-1}+r_n(t/a),\qquad \tilde\lambda_k(a)=\lambda_k(a)/a,$$ where by Lemma \[lemma\_extrapol\_2\] for any $p>0$ and $a>0$ $$|\tilde\lambda_k(a)|{\leqslant}\delta(n,a,p)=\left(1+\frac{p}{na^{n-1}}\right)^{-1/n}<1,$$ i.e. $a$ does not disappear and is still a controlling parameter. If $a<1$, then for any fixed $p>0$ we get $$|\tilde{\lambda}_k(a)|<\delta(n,a,p)\to a,\qquad n\to\infty.$$ Thus, for large $n$ all the arguments $\tilde{\lambda}_k(a)\,t$ lie almost $a$ times closer to the origin than $t$ (see also Remark \[primech\_ext\]). Realizing $n$-point simple or multiple extrapolation (interpolation) on the basis of Lagrange polynomials or other similar approaches, one usually obtains extrapolation (interpolation) formulas that are exact for polynomials of degree $n-1$ (see, for instance, [@Salzer2; @DanChu2011; @Chu2012]). However, our extrapolation formula is exact for polynomials of degree ${\leqslant}2n-1$.
It is interesting that the doubling of precision is gained by adding just one regularizing power term $pf_{n-1}z^{n-1}$. We also emphasize that, due to Remark \[primech\_ext\], if $p\to\infty$ and all other parameters are fixed, then the extrapolation nodes tend to the point $z=0$, but at the same time the theoretical error of extrapolation does not increase (see (\[extrapol\_formula\])), as it is independent of $p$. The same phenomenon of the convergence of nodes to the origin was noticed in similar extrapolation problems in [@DanChu2011; @Chu2012]. Let $n=2$, $a=1/2$ and $p=2$ ($q=0$ as above). Then the generating polynomial (\[G\_n\_extrapol\_p\_q\_special\]) has the form $$\check{G}_2(\lambda)=-6\,{{\lambda}}^{2}+\frac{1}{2}\,{\lambda}+\frac{1}{4}.$$ We find its (pairwise distinct) roots and then determine the amplitudes by Lemma \[400\]: $$\label{example1-extrapolation-lambda+} \lambda_{1}=-\frac{1}{6},\qquad \lambda_{2}=\frac{1}{4},\qquad \mu_{1}=-\frac{27}{5},\qquad \mu_{2}=\frac{32}{5}.$$ Thus we get the following extrapolation formula (written in the form (\[extrapol\_formula\_substitute\])): $$\label{example1-extrapolation++} f(z)\approx -\frac{27}{5} f\left(-\frac{1}{3}z\right)+\frac{32}{5}f\left(\frac{1}{2}z\right)-4f_1z.$$ For example, for $f(z)=e^z$ the absolute error of this formula does not exceed $0.002$ on the real segment $z\in [-0.5,0.5]$. For $n=4$ and $n=8$ and the same parameters $a=1/2$ and $p=2$ the error of the extrapolation formula (\[extrapol\_formula\]) for $e^z$ in $[-1,1]$ does not exceed $10^{-7}$ and $10^{-18}$, respectively. Moreover, in both cases the moduli of the extrapolation nodes are less than $0.58|z|$. On the numerical method for constructing amplitude and frequency operators {#Section6} ========================================================================== As we have already mentioned in Remark \[remark2\], some authors have studied numerical methods for solving the systems (\[SRS\_M\]).
In [@YF] one such approach, the method of small residuals in overdetermined moment systems, was used for approximation by amplitude and frequency sums (\[gH\]) in a neighbourhood of the point ${z=0}$. Some discussions from this paper (see Theorems 3 and 5 and Remark 1 there) raise the following important question: can one use the method of small residuals in the context of the Padé interpolation (\[2n-interpolation\]) and approximation by amplitude and frequency operators? The observations below show that this method can work well only for the rather narrow class of consistent systems, whereas in the general case one has to give a negative answer to the question. For the discussion it suffices to consider $M=2n$, since the case $M>2n$ follows from the counterexamples below by adding equations with arbitrary right-hand sides. Following [@YF], instead of the system (\[SRS\]) we consider the one with small residuals $\delta:=\{\delta_m\}_{m=0}^{2n-1}$: $$\label{NVZ} s_m-\sum_{k=1}^n\mu_k\lambda_k^m=\delta_m,\qquad m=\overline{0,2n-1}, \qquad |\delta|:=\max_m|\delta_m|{\leqslant}\varepsilon.$$ It is not difficult to show that for an arbitrarily small $\varepsilon>0$ one can choose the residuals $\delta$, $|\delta|{\leqslant}\varepsilon$, such that the system (\[NVZ\]) is solvable (both for consistent and inconsistent systems (\[SRS\])). Furthermore, using this solution, one can construct the corresponding amplitude and frequency sum $H_n(\delta;z)$ of the form (\[gH\]) such that $$\label{NVZ+} f(z)-H_n(\delta;z)=\sum_{m=0}^{2n-1} h_m\delta_m z^m+B_n(\delta;z),\qquad B_n(\delta;z)=O(z^{2n}).$$ If one can take $|\delta|=0$, then we deal with a consistent moment problem (\[SRS\]). As we have already mentioned above, this case is of little interest for numerical analysis because analytical methods work effectively. If $|\delta|\ne 0$, then obviously one cannot get (\[2n-interpolation\]) from (\[NVZ+\]) even for regular problems.
Indeed, given fixed residuals $\delta$, $|\delta|\ne 0$, the right-hand side of (\[NVZ+\]) is just of order $h_k\delta_k z^k=O(|\delta|)z^k$, $z\to 0$, where $k$ is the index of the first non-zero residual. There also exist other serious obstacles to realizing the method of small residuals in the interpolation and approximation problems. The point is that for inconsistent moment problems, decreasing $|\delta|$ to zero always causes at least one component of the solution to the problem (\[NVZ\]), i.e. the amplitudes $\mu_k(\delta)$ or frequencies $\lambda_k(\delta)$, to grow to infinity. This happens because the solution to the initial, non-regularized inconsistent problem (\[SRS\]) does not exist. But a similar situation can occur even for some consistent systems (see Example \[ex6-3\]). In such cases the approximation is impossible: either the computational errors considerably exceed the residuals or, which is even worse, the arguments $\lambda_k(\delta) z\to\infty$ leave the domain of the function $h(z)$ (see Example \[ex6-2\]). Moreover, the corresponding interpolation formulas are then usually unstable not only with respect to the norms of the residuals but also with respect to their individual components, which makes error estimation via the norm $|\delta|$ impossible (see Examples \[ex6-1\]-\[ex6-3\]). We now give several examples of such a divergence for consistent and inconsistent moment problems. For simplicity, let $n=2$. \[ex6-1\] Set $s_0=0$, $s_1=1$, $s_2=0$, $s_3=0$. The system (\[SRS\]) is inconsistent.
Solving the system (\[NVZ\]) by the Prony-Sylvester formulas and Lemma \[400\], we get $$\mu_{1}+\mu_2=\delta_0,\qquad \mu_{1}=\frac {(1+\delta_1)^2}{\sqrt {4\,\delta_3 \delta_1-3\,{\delta_2}^{2}+4\,\delta_3}}+O(|\delta|)\to\infty, \qquad |\delta|\to 0.$$ Thus, passage to the limit in $H_n(\delta;z)$ as $|\delta|\to 0$ predictably does not determine any Padé amplitude and frequency sum; moreover, the parameters of the resulting sum depend unstably on the components of the residuals. \[ex6-2\] Set $s_0=1$, $s_1=0$, $s_2=0$, $s_3=1$. The system (\[SRS\]) is inconsistent. By the same formulas for (\[NVZ\]) we get $$\lambda_{1}=\frac{1+O\left(|\delta|\right)}{\delta_0\delta_2+\delta_2-\delta_1^2}\; \to\infty, \qquad |\delta|\to 0,$$ i.e. the argument $\lambda_{1} z$ of the amplitude and frequency sum $H_n(\delta;z)$ in (\[NVZ+\]) tends to infinity and can leave the domain of the basis function $h$. The following example (which arises in the extrapolation problem considered in Section \[par-extrap\]) shows that the method of small residuals can be unsuitable even for consistent systems. \[ex6-3\] Set $s_0=1$, $s_1=1$, $s_2=1$, $s_3=1$. The system (\[SRS\]) is consistent (but not regular); one of its solutions is obvious: $\lambda_1=\mu_1=1$, $\lambda_2=0$, $\mu_2$ is arbitrary. However, the method of small residuals leads to the indeterminacy $$\lambda_1\cdot \lambda_2=\frac {\delta_3+\delta_1+\delta_1\delta_3 -2\,\delta_2-\delta_2^2}{\delta_2-2\,\delta_1+\delta_0+\delta_0\delta_2- \delta_1^{2}}.$$ It is not clear how to choose the residuals so that the process converges.
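Returning to Example \[ex6-1\], the blow-up there is easy to reproduce numerically. The sketch below (Python/NumPy; an illustrative implementation of the classical two-term Prony scheme, not the exact symbolic computation of the text) solves (\[NVZ\]) for $s_0=0$, $s_1=1$, $s_2=0$, $s_3=0$ with all residuals equal to a common value $\varepsilon$, and shows that the largest amplitude grows like $\varepsilon^{-1/2}$ as $\varepsilon\to 0$, in agreement with the order given by the formula above.

```python
import numpy as np

def prony2(t):
    """Two-term Prony: mu_1*lam_1^m + mu_2*lam_2^m = t_m for m = 0..3.
    The frequencies are the roots of l^2 - c_1*l - c_0, where the c's
    satisfy t_{m+2} = c_1*t_{m+1} + c_0*t_m for m = 0, 1."""
    A = np.array([[t[1], t[0]], [t[2], t[1]]], dtype=complex)
    c = np.linalg.solve(A, np.array([t[2], t[3]], dtype=complex))
    lam = np.roots([1.0, -c[0], -c[1]])
    mu = np.linalg.solve(np.vander(lam, 2, increasing=True).T,
                         np.array(t[:2], dtype=complex))
    return mu, lam

s = np.array([0.0, 1.0, 0.0, 0.0])        # Example ex6-1: inconsistent system
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    mu, lam = prony2(s - eps)             # residuals delta_m = eps for all m
    print(eps, np.max(np.abs(mu)))        # grows roughly like 1/(2*sqrt(eps))
```

For this residual choice the frequencies come out as a complex pair of modulus of order $\sqrt{\varepsilon}$, so the amplitudes must blow up to reproduce the moment $s_1=1$.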
If one takes, for instance, $\delta_0=\delta_1=0$, $\delta_2=\delta_3^2$, then again the argument in the amplitude and frequency sum tends to infinity: $$\lambda_1\cdot \lambda_2= \frac{1-2\delta_3-\delta_3^3}{\delta_3}\; \to\infty, \qquad |\delta|\to 0.$$ \[ex6-4\] In [@YF §4.4.2] the following system arises in the approximation of the derivative of the Bessel function, $J'_0$, by amplitude and frequency sums with the basis function $h(x)=(1-J_0(x))/x$ and $n=2$: $$\sum_{k=1}^2\mu_k\lambda_k^{2m+1}=2(m+1),\qquad m=\overline{0,M-1}.$$ It is easily seen that it is inconsistent for $M=4$ (cf. the non-regularized system in the problem of numerical differentiation in Section \[par-diff\]). The authors of [@YF] solve it numerically for $M>4$ and $\varepsilon=2\cdot10^{-16}$ and obtain the following results: $$\mu_1=\overline{\mu_2}=\tfrac{1}{2}+\mu i,\qquad \lambda_1=\overline{\lambda_2}=1+\lambda i,\qquad \mu\approx -9.8\cdot 10^7,\qquad \lambda\approx 5.1\cdot 10^{-9}.$$ It is natural to expect that further decreasing the residual will cause the moduli of the amplitudes $\mu_k$ to grow to infinity and the moduli of the frequencies $\lambda_k$ to tend to one. Again one gets a divergent process in the context of the Padé interpolation under consideration. Thus, for the reasons mentioned above, the method of small residuals has to be used for constructing amplitude and frequency sums with great circumspection, as it can lead to unacceptable results. Note that the regularization method that we propose also uses residuals; generally speaking, there are two of them: $p$ and $q$. The important difference between our method and the one with small residuals is that $p$ and $q$ are fixed, not necessarily small, and can be calculated by special formulas depending on the specificity of the problems considered in Sections \[regularization\_section\]-\[par-extrap\].
For example, in the problems of numerical differentiation and extrapolation we obtained explicit expressions for $p$ and $q$ which are independent of $z$, $f$ and $h$. Moreover, the corresponding residuals in the interpolation disappear not because of decreasing $|\delta|$ but due to adding a fixed regularizing binomial $c_1z^{n-1}+c_2z^{2n-1}$ to an amplitude and frequency sum. For instance, the following regularized interpolation formula corresponds to Example \[ex6-2\]: $$f(z)=-h_{1}z+\sum_{k=1}^2\mu_kh(\lambda_kz)+O(z^4),\quad \mu_{1,2}=\tfrac{1}{2}\left(1\mp\tfrac{3\sqrt{5}}{5}\right),\quad \lambda_{1,2}=-\tfrac{1}{2}\left(1\pm\sqrt{5}\right),$$ where the remainder depends only on $z$, $f$ and $h$. In conclusion, it is worth mentioning that to increase the rate of approximation one can use the regularization method even for some regular systems, for example, for those which can be obtained from non-regular ones by a small variation of the moments. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank the referees for their useful suggestions, which helped to improve the paper. [99]{} M. Abramowitz and I.A. Stegun, *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables*, Dover, New York (1972). N.I. Akhiezer, *The Classical Moment Problem and Some Related Questions in Analysis*, Oliver & Boyd, Edinburgh (1965). J.M. Ash, S. Janson and R.L. Jones, Optimal numerical differentiation using $N$ function evaluations, *Calcolo* **21**(2) (1984), 151-169. G.A. Baker Jr. and P. Graves-Morris, *Padé Approximants. Part I: Basic Theory. Part II: Extensions and Applications*, Addison-Wesley, Reading, Mass. (1981). G. Beylkin and L. Monzón, On approximation of functions by exponential sums, *Appl. Comput. Harmon. Anal.* **19** (2005), 17–48. G. Beylkin and L. Monzón, Approximation by exponential sums revisited, *Appl. Comput. Harmon. Anal.* **28** (2010), 131–149. D.
Braess, Nonlinear Approximation Theory, Springer Ser. Comput. Math., vol. 7, Springer-Verlag, Berlin, 1986. M. Buchmann and A. Pinkus, On a recovery problem, *Ann. of Num. Math.* **4** (1997), 129–142. P.V. Chunaev, On a nontraditional method of approximation, *Proc. of the Steklov Inst. Math.* **270**(1) (2010), 278-284. P.V. Chunaev, On the extrapolation of analytic functions by sums of the form $\sum_k\lambda_k h(\lambda_k z)$, *Math. Notes* **92**(5-6) (2012), 727-730. P.V. Chunaev and V.I. Danchenko, Approximation by the amplitude and frequency operators, arXiv:1409.4188v1, September 15, 2014. V.I. Danchenko, Approximation properties of sums of the form $\sum_k\lambda_kh(\lambda_k z)$, *Math. Notes* **83**(5) (2008), 587–593. V.I. Danchenko and P.V. Chunaev, Approximation by simple partial fractions and their generalizations, *J. of Math. Sci.* **176**(6) (2011), 844-859. V.I. Danchenko and P.V. Chunaev, On approximation by amplitude and frequency sums, Intl. Summer Kazan School-Conference on Function theory, its applications and similar problems: Proceedings, Vol. 46, Kazan University, August 22–28, 2013, 174-175. V. Danchenko and P. Chunaev, Approximation by amplitude and frequency sums, Joint CRM-ISAAC Conference on Fourier Analysis and Approximation Theory: Abstracts, CRM, November 4–8, 2013, 12. V.I. Danchenko and A.E. Dodonov, Estimates for exponential sums. Applications, *J. of Math. Sci.* **188**(3) (2013), 197-206. V.K. Dzyadyk, Generalized problem of moments and the Padé approximation, *Ukrainian Math. J.* **35**(3) (1983), 254-259. A.V. Fryantsev, On numerical approximation of differential polynomials, *Izv. Saratov. Univ. Mat. Mekh. Inform.* **7**(2) (2007), 39–43 \[in Russian\]. V.P. Gromov, On the growth of functions defined by series of the form $\sum_1^\infty d_nf(\lambda_nz)$, *Mat. Sb.* **67**(109:2) (1965), 190–209 \[in Russian\]. R.A. Horn and Ch.R. Johnson, *Matrix Analysis*, Cambridge University Press, Cambridge (2013). N.
Jacobson, *Basic algebra I*, W.H. Freeman and Company, New York (1985). N.M. Korobov, Exponential sums and their applications, *Mathematics and Its Applications (Soviet Series)* **80**, Kluwer Academic Publishers (1992). V.I. Krylov, *Approximate Calculation of Integrals*, The Macmillan Co., New York-London (1962). J.P.S. Kung, Canonical forms of binary forms: Variations on a theme of Sylvester. Invariant theory and tableaux (Minneapolis, MN, 1988), *IMA Vol. Math. Appl.* **19** (1990) 46–58, Springer, New York. S.-Y. Kung, A new identification and model reduction algorithm via singular value decomposition, Proc. 12th Asilomar Conf. Circuits, Syst. Comput., Pacific Grove, CA, 1978, 705–714. A.F. Leont’ev, *Sequences of Exponentials*, Nauka, Moscow (1980) \[in Russian\]. A. F. Leont’ev, Representation of functions by generalized exponential series, *Math. Sb.* **62**(2) (1989), 491–505. J.N. Lyness, Differentiation formulas for analytic functions, *Math. Comp.* **22** (1968), 352-362. Y.I. Lyubich, Gauss type complex quadrature formulae, power moment problem and elliptic curves, *Mat. Fiz. Anal. Geom.* **9**(2) (2002), 128–145 \[in Russian\]. Y.I. Lyubich, The Sylvester-Ramanujan system of equations and the complex power moment problem, *Ramanujan J.* **8** (2004), 23–45. R. Prony, Sur les lois de la Dilatabilité des fluides élastiques et sur celles de la Force expansive de la vapeur de l’eau et de la vapeur de l’alkool, à différentes températures, *J. de l’Ecole Polytech.* **2**(4) (1795), 28-35 \[in French\]. D. Potts and M. Tasche, Parameter estimation for nonincreasing exponential sums by Prony-like methods, *Linear Algebra Appl.* **439**(4) (2013),1024–1039. S. Ramanujan, Note on a set of simultaneous equations, *J. Indian Math. Soc.* **4** (1912), 94–96. H.E. Salzer, Optimal points for numerical differentiation, *Num. Math.* **2**(1) (1960), 214-227. H.E. Salzer, Formulas for best extrapolation, *Num. Math.* **18** (1971), 144-153. G. 
Schmeisser, Numerical differentiation inspired by a formula of R.P. Boas, *J. Approx. Theory*, **160**(1-2) (2009), 202-222. V.I. Shevtsov, The representation of integral functions by series of the form $\sum_{n=1}^\infty \alpha_n f(\lambda_n z)$, *Mat. Zametki* **4**(5) (1968), 579–588 \[in Russian\]. J.J. Sylvester, On a remarkable discovery in the theory of canonical forms and of hyperdeterminants, *Phil. Magazine* **2** (1851), 391–410. C.E. Yarman and G.M. Flagg, Generalization of Padé approximation from rational functions to arbitrary analytic functions — Theory. *Math. Comp.* **84** (2015), no. 294, 1835–1860. **Petr Chunaev** Departament de Matemàtiques, Universitat Autònoma de Barcelona Edifici C, Facultat de Ciències, 08193 Bellaterra (Barcelona), Spain e-mail: **Vladimir Danchenko** Functional Analysis and Its Applications Department, Vladimir State University Belokonskoy str. 3/7, Building 3, 600000 Vladimir, Russia e-mail:
--- abstract: 'In mammals, female germ cells are sheltered within somatic structures called ovarian follicles, which remain in a quiescent state until they get activated, throughout reproductive life. We investigate the sequence of somatic cell events occurring just after follicle activation. We introduce a nonlinear stochastic model accounting for the joint dynamics of two cell types, either precursor or proliferative cells. The initial precursor cell population transitions progressively to a proliferative cell population, by both spontaneous and self-amplified processes. In the meantime, the proliferative cell population may start either a linear or exponential growing phase. A key issue is to determine whether cell proliferation is concomitant with or posterior to cell transition, and to assess both the time needed for all precursor cells to complete transition and the corresponding increase in the cell number with respect to the initial cell number. Using the probabilistic theory of first passage times, we design a numerical scheme based on a rigorous Finite State Projection and coupling techniques to assess the mean extinction time and the cell number at extinction time. We also obtain analytical formulas for an approximating branching process. We calibrate the model parameters using an exact likelihood approach, based on both experimental and in-silico datasets. We carry out a comprehensive comparison between the initial model and a series of submodels, which helps to select the critical cell events taking place during activation. We finally interpret these results from a biological viewpoint.' author: - Frédérique Clément - Frédérique Robin - Romain Yvinec bibliography: - 'Condition\_Initiale.bib' date: 'Received: date / Accepted: date' title: Stochastic nonlinear model for somatic cell population dynamics during ovarian follicle activation --- The authors wish to thank Ken McNatty for providing the experimental dataset and Danielle Monniaux for helpful discussions.
--- abstract: | The ATLAS BPTX stations are comprised of electrostatic button pick-up detectors, located 175 m away along the beam pipe on both sides of ATLAS. The pick-ups are installed as a part of the LHC beam instrumentation and used by ATLAS for timing purposes. The usage of the BPTX signals in ATLAS is twofold: they are used both in the trigger system and for LHC beam monitoring. The BPTX signals are discriminated with a constant-fraction discriminator to provide a Level-1 trigger when a bunch passes through ATLAS. Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX monitoring system measures the phase between collisions and clock with a precision better than 100 ps in order to guarantee a stable phase relationship for optimal signal sampling in the subdetector front-end electronics. In addition to monitoring this phase, the properties of the individual bunches are measured and the structure of the beams is determined. On September 10, 2008, the first LHC beams reached the ATLAS experiment. During this period with beam, the ATLAS BPTX system was used extensively to time in the read-out of the sub-detectors. In this paper, we present the performance of the BPTX system and its measurements of the first LHC beams. address: - 'Stockholm University, Department of Physics, 106 91 Stockholm, Sweden' - 'European Organization for Nuclear Research, 1211 Genève, Switzerland' author: - 'C. Ohm' - 'T.Pauly' title: 'The ATLAS beam pick-up based timing system' --- ATLAS ,Beam monitoring ,Level-1 trigger ,BPTX ,LHC ,LHC timing signals Introduction ============ The ATLAS experiment [@detectorpaper] at the Large Hadron Collider (LHC) [@lhcmachinepaper] must be synchronized to the collisions to ensure the quality of the event data recorded by its sub-detectors. 
In order to facilitate this, the LHC provides beam-related timing signals to the experiments via optical fibers that are several kilometers long [@ttc]. The phase of these clock signals can drift, e.g. due to temperature fluctuations, causing front-end electronics to sample at a non-optimal working point. On both sides of ATLAS, 175m upstream from the interaction point, beam pick-up detectors are installed along the LHC beam pipe. This paper describes how these detectors are used - to monitor the phase between the collisions and the LHC clock signals that drive the ATLAS electronics - to monitor the structure and uniformity of the LHC beams - as input to the trigger system The BPTX detectors ================== ![A photograph of one of the two ATLAS BPTX stations.](bptx_station.png){width="40.00000%"} \[fig:bptxstation\] The BPTX stations are beam position monitors provided by the LHC machine, but operated by experiments for timing purposes. They comprise four electrostatic button pick-up detectors, arranged symmetrically in the transverse plane around the LHC beam pipe. Since the signal from a passing charge distribution is linearly proportional to distance to first order, the signals from all four pick-ups are summed to cancel out potential beam position deviations. The resulting signal is then transmitted to the underground counting room *USA15* via a 220m low-loss cable. Figure \[fig:bptxstation\] shows the installed BPTX station for beam 2 on the C-side of ATLAS. At the bottom of the photograph, the cables from the four button pick-ups are visible. Usage of the beam pick-up signals ================================= The BPTX signals are used for two separate purposes within ATLAS, by the trigger system and by a monitoring system for the LHC beams and timing signals. Figure \[fig:bptxcontextdiagram\] shows the BPTX system and how it interacts with the related systems [@timing-in; @masterohm].
The optical timing signals from the LHC arrive in the underground counting room at a receiver module, the `RF2TTC`. This module converts the optical signals to *TTC*[^1] signals and can also manipulate their phase, duration etc. if needed. The electrical signals are then transmitted to the ATLAS sub-detectors via the *Central Trigger Processor* (CTP) of the Level-1 trigger system and to the BPTX monitoring system. ![Diagram showing the BPTX system and how it interacts with the related systems in ATLAS.[]{data-label="fig:bptxcontextdiagram"}](bptx_context_diagram2.pdf){width="40.00000%"} Level-1 Trigger --------------- The ATLAS trigger system is designed in three levels, each level sequentially refining the selection of events to be saved for further offline analysis. The Level-1 trigger is implemented in custom electronics and performs a first selection of events within 2.5 $\mu$s, based primarily on reduced-granularity data from the calorimeters and the muon spectrometer. The selected events are processed further by the *High Level Trigger* system which is implemented in software. The signals from the BPTX stations are discriminated with a constant-fraction discriminator to provide ATLAS with an accurate and reliable timing reference in the form of a standard NIM pulse. This pulse is fed into the Level-1 *Central Trigger Processor* where it serves as a trigger condition indicating a bunch passing through ATLAS. Monitoring of the LHC beams and timing signals ---------------------------------------------- Furthermore, the BPTX detectors are used by a stand-alone monitoring system for the LHC bunches and timing signals. The BPTX and LHC timing signals are digitized by a deep-memory, high sampling rate (5GHz) oscilloscope and transferred to a computer running Linux for analysis.
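At a 5GHz sampling rate the samples are 200 ps apart, so a timing precision well below the sampling period requires interpolating between samples. The actual fitting procedure of the monitoring software is not reproduced here; the following Python sketch only illustrates the principle on a synthetic bipolar pulse (the pulse shape and all numerical values are invented for illustration), estimating the zero-crossing time of the falling edge by linear interpolation between the two bracketing samples.

```python
import numpy as np

def falling_edge_time(t, v):
    """Zero-crossing time of the first falling edge in the sampled waveform
    (t, v), by linear interpolation between the two bracketing samples."""
    i = np.where((v[:-1] >= 0) & (v[1:] < 0))[0][0]
    return t[i] + v[i] / (v[i] - v[i + 1]) * (t[i + 1] - t[i])

# Synthetic bipolar pulse sampled at 5 GHz (200 ps spacing); invented shape.
dt = 0.2e-9
t = np.arange(0.0, 20e-9, dt)
t0 = 10.3717e-9                           # true crossing, off the sampling grid
v = -(t - t0) * np.exp(-(((t - t0) / 1e-9) ** 2))

t_est = falling_edge_time(t, v)
print(abs(t_est - t0))                    # error far below the 200 ps spacing
```

Even this simple linear interpolation recovers the crossing time to within a few picoseconds on a noise-free synthetic pulse; a real implementation must additionally contend with noise, pulse-shape variations and cable dispersion, and would therefore fit the full pulse shape.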
The features of the scope enable capturing a full LHC turn in one acquisition while retaining enough detail to get about 5 measurement points on the sharp falling edge of each BPTX pulse (see e.g. Figure \[fig:firstbunch\]). Since most of the high-frequency content of the BPTX signals is attenuated by the long transmission line, the frequency spectrum of the signals arriving in ATLAS peaks around 400 MHz, making an analog bandwidth of 600 MHz sufficient for the oscilloscope used for digitization. By making fits to the identified bunch pulses and clock edges, the BPTX monitoring system measures the phase between each bunch and the clock signal with high accuracy. Monitoring these quantities is crucial to guarantee a stable phase relationship for optimal signal sampling in the subdetector front-end electronics. In addition to monitoring this phase, the intensity and longitudinal length of the individual bunches are measured and the structure of the beams is determined. Using the BPTX monitoring applications, the shifter in the control room can verify that the timing signals are synchronized to the collisions, and also look for so-called *satellite bunches*, out-of-time bunches that would cause off-center collisions in ATLAS. The monitoring system runs independently of the ATLAS online data acquisition infrastructure, enabling monitoring of the LHC machine in the control room even when ATLAS is not taking data. Summary data from the BPTX monitoring system, e.g. mean bunch intensity and phase, is published to the ATLAS *Detector Control System*[@dcs] and ultimately saved to the conditions database. Results from the first LHC beams ================================ The first proton bunches in ATLAS --------------------------------- On September 10, 2008, the first LHC proton bunch reached ATLAS. Figure \[fig:firstbunch\] shows the pulse recorded by the BPTX monitoring system.
![The first LHC bunch on its way to ATLAS.[]{data-label="fig:firstbunch"}](bptx_first_bunch.pdf){width="\linewidth"} A few hours later, a bunch was successfully circulated 8 turns around the accelerator and seen by ATLAS as depicted in Figure \[fig:8turns\]. The pulses are separated by 89$\mu$s, corresponding to the time it takes for an LHC bunch to circulate around the 27km long ring. The pulse amplitude, which is proportional to the bunch intensity, is degrading from turn to turn, which is consistent with the beam loss and debunching expected for a beam not yet captured by the LHC RF system. ![A bunch passing ATLAS in 8 consecutive turns.[]{data-label="fig:8turns"}](bptx_8_bunches.pdf){width="\linewidth"} Monitoring of a longer LHC run ------------------------------ Around 1 AM on September 12, 2008, a single bunch was circulated around the LHC for about 20 minutes after being captured by the RF system. The BPTX monitoring system measured the intensity during this period, and the resulting plot is shown in Figure \[fig:longfill\]. It should be noted that this is a relative but not yet normalized intensity measurement. The scattering of the data points suggests that the precision is around 10%. ![Intensity measured by the BPTX monitoring system during 20 minutes of circulating beam.[]{data-label="fig:longfill"}](long_fill_080912_t.png){width="40.00000%"} Figure \[fig:persistency\] shows an oscilloscope picture recorded in persistency mode during the same time period. The falling edge of the analog BPTX signal for beam 2 (the scope channel with bipolar pulses to the left) is used as scope trigger and can be seen together with the discriminated BPTX signal used as Level-1 trigger input (with longer NIM pulse to the right). The clock related to beam 2 (bottom) is stable within an RMS of 40 picoseconds with respect to the beam, indicating RF capture. The reference clock signal (top), corresponding to the bunch frequency at a higher energy, has a different frequency. 
![Oscilloscope traces from 20 minutes of circulating beam with persistency.[]{data-label="fig:persistency"}](bptx_long_coast.jpg){width="40.00000%"} Conclusions =========== In the first period of beam in the LHC, the BPTX system was used extensively as a trigger to time in the read-out windows of the sub-detectors of the ATLAS experiment. The BPTX monitoring system was able to record the very first LHC bunch approaching ATLAS, and provided detailed information about the beams during these first days of data taking. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank the ATLAS Collaboration and its Level-1 Central Trigger group in which most of this work was carried out. We would also like to express our deepest gratitude to the LHC community, both for the support and for providing the BPTX detectors. [00]{} The ATLAS Collaboration 2008 “The ATLAS Experiment at the CERN Large Hadron Collider” (2008) S08003. L. Evans and P. Bryant (editors) 2008 “LHC Machine” (2008) S08001. S. Baron 2005 “TTC challenges and upgrade for the LHC” pp 125-129. T. Pauly et al. 2005 “ATLAS Level-1 Trigger Timing-In Strategies” pp 274-278. C. Ohm 2007 M.Sc. thesis, Linköping University, Sweden, LITH-IFM-EX–07/1808–SE. A. Barriuso Poy, H. Boterenbrood, H. J. Burckhart, J. Cook, V. Filimonov, S. Franz, O. Gutzwiller, B. Hallgren, V. Khomutnikov, S. Schlenker and F. Varela 2008 “The detector control system of the ATLAS experiment”. (2008) P05006. [^1]: TTC is the standard hardware system used across the LHC experiments for distribution of fast Timing, Trigger and Control signals
--- abstract: 'Computers have been used to analyze and create music since they were first introduced in the 1950s and 1960s. Beginning in the late 1990s, the rise of the Internet and large scale platforms for music recommendation and retrieval has made music an increasingly prevalent domain of machine learning and artificial intelligence research. While still nascent, several different approaches have been employed to tackle what may broadly be referred to as “musical intelligence.” This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit, with a particular emphasis on machine learning methods.' address: - 'Computer Science Department, The University of Texas at Austin' - 'SparkCognition Research, Austin, TX' author: - Elad Liebman - Peter Stone bibliography: - 'refs\_prop.bib' title: 'Artificial Musical Intelligence: A Survey' --- \[chap:taxonomy\] Artificial Musical Intelligence; Music and Artificial Intelligence; Music Informatics; Introduction {#intro} ============ Since its emergence in the 1950s, artificial intelligence has become an ever more prevalent field of scientific research. Technology to which we may assign varying degrees of intelligence is virtually all around us – from sophisticated navigation systems[@duckham2003simplest] and anti-collision sensors placed on cars [@wolterman2008infrastructure] to recommender systems meant to help us pick a book or movie[@adomavicius2005toward]. However, while great emphasis has been placed on improving the performance of such systems, other meaningful facets have not been as thoroughly explored. Such additional facets cover a wide array of complex mental tasks which humans carry out easily, yet are hard for computers to mimic. 
These include the human ability to understand social and cultural cues, to interpret and infer hidden meanings, to perceive the mental state of one's counterparts, and to tailor one's responses accordingly. A prime example of a domain where human intelligence thrives, but machine understanding is limited, is music. In recent years, the application of algorithmic tools in cultural domains has become increasingly frequent. An interesting example is Sisley the Abstract Painter, a project that aims to algorithmically emulate modern paintings of varying abstraction levels, given an input photograph[@zhao2010sisley]. Another example uses visual recognition tools to study what makes the architectural styles in different cities distinctive[@doersch2012makes]. A more mainstream example of the application of machine learning in a cultural domain can be seen in a recent paper in which 16 episodes from the famous TV series Lost are automatically tagged for character presence using weakly supervised data [@cour2009learning]. In the domain of natural language processing, many works have used literary texts as input data, and some works have cultural goals such as document style classification [@argamon2003gender], authorship attribution [@stamatatos2009survey], and literature analysis [@kirschenbaum2007remaking]. A theme that surfaces from examining this type of research is that tools from the AI and machine learning communities often reveal new insights and shed new light on fundamental cultural questions – what characterizes an author (or an architect); which geometric properties best separate Kandinsky from Pollock (or Steven Spielberg from Stanley Kubrick); is it possible to chart the evolution of Latin dance styles; etc. Another important observation is that such cultural domains may often prove excellent testbeds for new algorithmic tools and approaches. 
There are many ways in which artificial intelligence and music intersect, ranging from analysis of large bodies of existing music to the creation of music itself. Computers have accompanied both the analysis and creation of music almost since they first came into existence. In 1957, Lejaren Hiller and Leonard Isaacson developed software that applied Markov chains and rule-based logic to compose a string quartet [@hiller1959experimental]. Iannis Xenakis used computers in the early 1960s to generate numerical patterns, which he later transcribed into sheet music [@xenakis1992formalized], and later led the development of the first music programming language, the Stochastic Music Programme (SMP) [@xenakis1965free]. A decade later, Pierre Boulez founded IRCAM (Institut de Recherche et Coordination Acoustique/Musique), where composers, computer scientists, and engineers study music and sound and invent new tools for creating and analyzing music [@born1995rationalizing]. Only a few years after its establishment, IRCAM already served as the home of the Spectralist movement, where composers such as Gerard Grisey and Tristan Murail used computers and spectral analysis to compose new music [@anderson2000provisional]. Since then, the notion of applying artificial intelligence to create music has remained of interest to many, and there are many other examples of this type of composition, ranging from stochastic generation tools and elaborate mathematical models to grammar-based generation and evolutionary approaches [@jackendoff1972semantic; @munoz2016memetic; @quick2010generating]. Another recent body of work lying at the intersection between artificial intelligence and music analysis is that of the music information retrieval (or MIR) community. 
Over the last decade, many researchers have applied computational tools to carry out tasks such as genre identification [@doraisamy2008study], music summarization [@mardirossian2006music], music database querying [@eric2003name], melodic segmentation [@pearce2008comparison], harmonic analysis [@chen2011music], and so on. Additional research questions with implications for preference learning and computational musicology include (but are not limited to) performance analysis and comparison [@liebman2012phylogenetic], music information modeling[@conklin1995multiple], music cognition[@krumhansl2001cognitive], and surprise[@abdallah2009information]. Indeed, the study of music perception within the cognitive science community has also served as a bridgehead between computational learning research and music analysis. Considerable effort has been put into using algorithmic tools to model patterns of psycho-physical responses to music stimuli [@juslin2008emotional], and the interaction between musical concepts and their interpretations in the brain[@krumhansl2001cognitive]. Another related field of study is that of human-computer interaction and human-robot interaction. Previous work has been carried out in order to provide AI with the ability to interact with humans in one form of social setting or another[@dautenhahn1995getting]. These works, however, usually do not capture the complexity of human interaction, and more importantly, rarely take into account the complex array of pre-existing cultural knowledge that people “bring to the table” when they interact socially, or the cultural information they accrue through interaction. The separate fields and perspectives on music informatics, spanning music information retrieval, cognitive science, machine learning and musicology, have largely proceeded independently. 
However, they are all concerned with overlapping facets of what we define in this survey as “musical intelligence”, specifically in the context of artificial intelligence. To define something as complex and as abstract as “musical intelligence” is at least as difficult as defining what intelligence is in general - a notoriously slippery and tenuous endeavor. However, for the purposes of this article, we adopt the following working definition: > \[music\_intelligence\_def\] “Musical Intelligence”, or “music understanding”, describes a system capable of reasoning end-to-end about music. For this purpose, it needs to be able to reason at different levels of abstraction with respect to music; from perceiving low-level musical properties, to intermediate levels of abstraction involving the organizational structure of the music, to high-level abstractions involving theme, intent and emotional content. The breakdown of musical abstractions into “low-level”, “intermediate” and “high-level” is rather murky. Nonetheless, we can consider basic auditory properties regarding the overall spectrum, tempo, instrumentation, etc. as the lowest levels of music understanding. Intermediate levels of abstraction include concepts such as identifying melody vs. accompaniment, identifying the functional harmonic structure of musical segments, identifying recurring motifs, or placing the music in broad genre terms. High-level abstractions include more principled motivic and thematic analysis, understanding the intended emotional valence of various pieces of music, the interplay between different structural and motivic choices, drawing connections between different pieces of music, recognizing the style of individual musicians, being able to successfully characterize the musical tastes of others, and ultimately, being able to generate musical sequences that people would consider meaningful. 
While somewhat analogous to the notion of scene understanding in machine vision [@li2009towards], musical intelligence is a much more elusive term, given that the “objects” music deals with are considerably less concrete or functionally defined than those usually considered in computer vision. Nonetheless, the definition above is useful in providing a high-level motivation and goal for connecting disparate aspects of music informatics research. The purpose of this survey article is threefold. First, it is meant to serve as an updated primer for the extremely large and interdisciplinary body of work relating to artificial musical intelligence. Second, it introduces a detailed taxonomy of music-related AI tasks that is meant to provide a better perspective on the different achievements made in the intersection of both worlds in the past two decades. Third, this survey analyzes different evaluation methods for various music-related AI tasks. Due to the enormous literature that is relevant to this survey, we limit its scope in several ways. We focus on works that involve a significant machine learning or artificial intelligence component. We focus on several potential representations of music, either symbolic or at the audio level, and consider tasks primarily involving this input. While we acknowledge the large body of work which focuses on the signal-processing and audio-extractive aspects of automated music analysis, we do not consider it a core part of this survey, and only reference it to the extent that such work lies at the heart of many of the feature extraction procedures used in machine learning frameworks for music-related tasks. Another large body of work, which focuses on natural language processing of song lyrics, music reviews, user-associated tags, etc., is also considered outside the scope of this article. We also consider automated sheet music recognition (traditionally through image processing techniques) as outside the scope of this survey. 
The structure of this article is as follows: in Section \[chap8:back\] we discuss the target audience of this survey, and provide initial background for reading the article. We proceed to discuss the motivation behind music-related AI research and its potential uses. In Section \[chap8:tax\] we focus on the proposed taxonomy and break down the extensive body of literature into different categories, utilizing different perspectives. Subsequently, in Section \[chap8:tasks\] we review the literature from the perspective of the tasks that have been studied. In Section \[chap8:repr\] we discuss the different types of representations that have been used in the literature. In Section \[chap8:technique\] we break down the extensive list of AI techniques that have been applied in music AI research. Section \[chap8:eval\] discusses the different evaluation methods used in the literature to assess the effectiveness of proposed approaches. Lastly, in Section \[chap8:musint\] we summarize the contributions of this survey, consider the idea of artificial musical intelligence from a broader perspective, and discuss potential gaps existing in the literature. Background and Motivation {#chap8:back} ========================= This survey article is aimed at computer scientists working in AI who are interested in music as a potential research domain. Since both the study of music and the artificial intelligence and machine learning literature are too extensive to be covered by any single survey paper, we assume the reader has at least some knowledge about the basic machine learning paradigm (e.g. supervised vs. unsupervised learning, partitioning into training and testing data, evaluative metrics for learning algorithms such as area under the ROC curve, etc.). We also assume some familiarity with various learning architectures and algorithms, such as regression, support vector machines, decision trees, artificial neural networks, probabilistic and generative models, etc. 
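As a concrete reminder of the supervised workflow assumed above, the following toy sketch runs the basic pipeline end to end: synthetic feature vectors for two invented "genres", a train/test partition, a simple classifier, and accuracy as the evaluation metric. The genre labels, feature distributions, and 1-nearest-neighbor classifier are all illustrative assumptions, not drawn from any of the surveyed systems.

```python
import random

# Toy supervised-learning pipeline on synthetic data: two invented
# "genres" with Gaussian feature vectors, an 80/20 train/test split,
# a 1-nearest-neighbor classifier, and accuracy as the metric.
rng = random.Random(0)
data = [([rng.gauss(mean, 0.5) for _ in range(4)], label)
        for mean, label in [(0.0, "ambient"), (2.0, "metal")]
        for _ in range(50)]
rng.shuffle(data)
train, test = data[:80], data[80:]  # partition into training and testing data

def predict(x):
    # 1-NN: return the label of the closest training example
    # (squared Euclidean distance).
    nearest = min(train, key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return nearest[1]

accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

With the two classes this well separated, the toy classifier performs near-perfectly; real music features are of course far noisier.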
From a more classical AI viewpoint, some understanding of knowledge representation, search and planning approaches is assumed, but is not directly relevant to a large component of this paper. Good introductory sources for machine learning and AI concepts can be found in various textbooks (such as [@russell1995modern]). Regarding music terminology, we assume familiarity with a few basic terms. These terms include pitch, note, scale, key, tempo, beat, chord, harmony, cadenzas and dynamics. We provide brief explanations for these terms and more in Appendix \[app:glossary\]. Further details can be found in sources such as The Oxford dictionary of musical terms [@latham2004oxford], among many others. Throughout the article we will assume the general meaning of these terms is known. This survey makes the case that work at the intersection of artificial intelligence and music understanding is beneficial to both communities on multiple levels. As a rich, complex research domain, we expect that the study of artificial musical intelligence will potentially produce fundamental scientific discoveries, as well as engineering insights and advances which could be applicable in other domains. These expectations are supported by the following lines of reasoning: - [**Music is a quintessential form of intelligence:**]{} Music, being intrinsically complex and multifaceted, involves some of the most sophisticated mental faculties humans have. Musical skills such as playing, analyzing or composing music involve advanced data analysis, knowledge representation and problem solving skills. The challenge of developing such skills in artificial agents gives rise to interesting research problems, many of which are transferable to other application domains (such as analyzing video or interactive gameplay). 
Furthermore, some abstract issues such as modeling a “utility function” that captures a person’s or a group’s enjoyment of different types of musical information are in fact inherent to any attempt to quantify aesthetic value, mass appeal or creative content. Advances in the modeling of such a function would have immediate applications in any case where understanding “what people want” is key to good performance but no easily quantifiable objective functions exist. - [**Music is inherent to the human experience, and therefore to social interaction:**]{} If we envision a future where intelligent artificial agents interact with humans, we would like to make this interaction as natural as possible. We would therefore like to give AI the ability to understand and communicate within cultural settings. This capability has practical benefits, as software capable of tailoring its behavior to the tastes and preferences of specific people would do better both at understanding the behavior of its human counterpart and at influencing it, leading to a much more successful interaction. - [**Deeper music AI will lead to better performance of real world systems:**]{} Let us consider a recommendation system for music. Such a system would greatly benefit from the ability to model the intrinsic properties of the music it deals with, rather than solely rely on statistical correlations or simplistic measures. This capacity would also enable recommendation models to learn with less input data, thus ameliorating the infamous cold start problem in recommender systems. The work of Liebman et al. [@DJMC] is an example of this approach. The architecture presented in that work is able to learn some basic signal of what a person likes based on very little experience by directly mapping musical properties of songs and transitions to predicted human preferences. 
- [**AI can lead to new cultural insights:**]{} The intersection of artificial intelligence and music often leads to insights regarding music, how it is perceived by humans, and what makes it unique. These observations have significant cultural value, and are of interest to many researchers in a wide range of disciplines. While admittedly the study of musical intelligence can be seen as somewhat more esoteric than other core academic disciplines and application areas, and the assessment of musical quality is inherently subjective, to those concerned about such issues we offer the following observations: - [**Widespread commercial interest:**]{} The market for music recommendation, for instance, is large[^1], and growing. Video games such as Rocksmith[^2], which automatically analyzes actual instrument playing to provide feedback and accompaniment, are also growing in popularity. The commercial success of such applications reflects a strong industrial interest in research that enables better autonomous music understanding. - [**Widespread academic interest:**]{} In the past two decades, there have been hundreds of papers at the intersection of AI and music published in top-tier conferences and journals (including those which we discuss in this survey), with thousands of citations, cumulatively. This fact in itself serves as evidence of the existing interest in such work across varied research communities. - [**Realizable goals exist:**]{} While the subjectivity inherent to music may pose difficulties in evaluating the performance of various music AI systems, many inter-subjective goals (such as increasing user satisfaction and engagement, or better matching people’s perceptions and expectations) can be effectively evaluated using lab experiments and crowd-sourcing. A Taxonomy of Music AI Problems {#chap8:tax} =============================== Consider a song by the Beatles, or a piano trio by Beethoven. 
What kinds of computational research questions can we ask about these cultural artifacts? What kinds of tasks might we expect intelligent software to perform with respect to them? Due to the complexity and richness of the music domain, countless different perspectives can be assumed when studying the intersection of music and artificial intelligence. Different perspectives give rise to different research questions and different approaches. In order to compare and contrast the literature using a consistent and unified language, we introduce the following dimensions along which each contribution can be placed: - The target task - The input type - The algorithmic technique(s) In this section we broadly outline these three perspectives, which together span the taxonomy introduced in this survey. A visual representation of the proposed taxonomy is shown in Figure \[chap8:fig\_tax\]. ![Visual high-level illustration of the proposed taxonomy.[]{data-label="chap8:fig_tax"}](overview_taxonomy_survey.png){width=".8\linewidth"} Partition by the Nature of the Task ----------------------------------- There is a wide variety of potential research tasks we might concretely try to accomplish in the music domain. We use the term “task” to describe a small, concrete and well-defined problem. For instance, in the Beatles song example above, we may wish to discern the verse from the chorus, or identify the beat and the key of the song, or identify whether it is an early vs. a late Beatles song. While these are all small and concrete tasks, they are not atomic or independent; knowing the key and the beat of a song is relevant to determining its structure (verse vs. chorus), to identifying which sub-genre it belongs to, etc. To better understand shared themes across tasks and facilitate a more helpful overview of the current literature, we break tasks down into several categories: 1. 
Classification and Identification - any tasks which associate musical segments with one or more out of a closed set of labels. For example, classifying pieces by composer and/or genre. 2. Retrieval - as in the broader case of information retrieval, these tasks involve obtaining relevant items from a music dataset, often ranked by relevance. A typical example is a recommender system that suggests specific songs to a specific user given their listening history. 3. Musical Skill Acquisition - this category encompasses the varied set of basic analysis skills required for music processing, from pitch and tempo extraction to chord recognition. 4. Generation - these tasks involve some facet of creating new musical expression, ranging from generating expressive performance from audio, to generating meaningful playlists by sequencing existing songs, to, probably the most elusive of all, generating new music. These categories are not mutually exclusive, as many actual tasks might share more than one aspect, or contain components that belong in other categories. Still, we believe this is a natural grouping that sheds light on recurring themes and ideas. Partition by Input Type ----------------------- It is almost always the case that the type of input dramatically affects the range and complexity of tasks which can be performed on that input. Generally, there are three input categories – 1. Symbolic Music Representations - these are the simplest and easiest to analyze, as they capture pitched event information over time. Variants of symbolic representation range from the ubiquitous MIDI protocol [@loy1985musicians] to complex digital representation of common practice music notation. 2. Audio (and audio-derived) Representations - this category of representations ranges from raw unprocessed audio to compressed audio to concise spectral features, depending on the level of reduction and abstraction. 3. 
Related Musical Metadata - all non-audio information regarding a musical piece, ranging from artist and song names to associated mood tags, social media information, lyrics, occurrence history etc. In this survey we will focus on the first two representations, but due to its ubiquity, we will occasionally refer to the third type. Partition by Algorithmic Technique ---------------------------------- A wide variety of machine learning and artificial intelligence paradigms and techniques have been applied in the context of music domains. From a machine learning and artificial intelligence research perspective, it is of interest then to examine this range of techniques and the specific musical domains where they were applied. Due to the extensive nature of the related literature and the wide range of musical tasks where the following methods have been used, this list is not intended to be entirely comprehensive. To the best of our knowledge, however, it is representative of the full array of methods employed. Broadly speaking, we consider the following general technical approaches: 1. Machine Learning Approaches - a wide range of machine learning paradigms has been employed for various music informatics tasks. The list of techniques used is as varied as the machine learning literature itself, but some examples include methods such as support vector machines (SVM) [@hearst1998support], generative statistical models such as Hidden Markov Models (HMM) [@rabiner1989tutorial], Markov Logic Networks (MLN) [@richardson2006markov], Conditional Random Fields (CRF) [@lafferty2001conditional], and Latent Dirichlet Allocation (LDA) [@blei2003latent]. In recent years, deep neural network architectures such as Convolutional Neural Networks (CNN) [@lecun1995convolutional], Recurrent Neural Networks (RNN) [@gurney1997introduction], and Long Short Term Memory networks (LSTMs) [@hochreiter1997long] have become increasingly popular and ubiquitous in the music informatics literature. 2. 
Formal methods - multiple attempts have been made to employ formal techniques, similar in spirit to the formal methods subfield of computer science, to handle music informatics tasks using formal specification methods that describe and generate musical sequences. Under this umbrella one may find approaches inspired by generative grammars [@jackendoff1972semantic], formal specification of tonal and chordal spaces with production rules [@davis1977production], probabilistic logic [@von1952probabilistic], and fuzzy logic [@zadeh1975fuzzy]. 3. Agent-based techniques - multiple papers in the music AI literature have studied complex approaches that go beyond the scope of a learning algorithm or a formal specification, but rather fall within the subfield of intelligent agent research. That is to say, this category deals with AI systems that combine perception and decision-making in a nontrivial manner to handle musical tasks. In this category one may find examples such as person-agent accompaniment and improvisation systems [@thom2000bob], robotic systems for music performance [@shimon], multiagent music generation architectures [@blackwell2003swarm], and reinforcement learning agents for music generation [@cont2006anticipatory]. Having outlined the general structure of the taxonomy proposed in this survey, we can now delve more deeply into each category and provide examples of the varied types of questions and approaches studied in the past 15 years, following the rise of online music platforms and medium-to-large-scale music datasets. In the next sections we consider each dimension of the taxonomy separately and overview key examples in each partition category. Overview of Musical Tasks {#chap8:tasks} ========================= The first aspect through which we examine the literature is the functional one - which musical tasks have been addressed via machine learning and artificial intelligence approaches? 
Following our taxonomy from Section \[chap8:tax\], we partition tasks into four main groups - classification and identification, retrieval, skill acquisition, and generation. A visual summary of the content surveyed in this section is provided in Figure \[chap8:fig\_tasks\]. ![Visual high-level illustration of music AI tasks.[]{data-label="chap8:fig_tasks"}](overview_of_tasks_survey.png){width=".8\linewidth"} Classification and Identification Tasks {#chap8:classification} --------------------------------------- Suppose we are presented with a newly unearthed score for a classical piece. This score, it is claimed, is a lost cantata by Johann Sebastian Bach, one of the many assumed to have been lost to posterity. Is this really lost music by the great Baroque master? Or perhaps the work of a talented imposter? Was it written in Bach’s time? Or is it a recent work of forgery? These questions may seem hypothetical, but they are actually quite real, for instance in the case of several organ fugues by J.S. Bach [@van2008assessing]. An even more famous example involving J.S. Bach, one that most music students should be familiar with, is that of Bach’s famous liturgical chorales. Of these 327 chorales, which have been widely used to teach traditional harmony and counterpoint for more than two centuries, only a third have definite sources in known Bach cantatas. The others are without a known source, and many have speculated that at least some of them were written by Bach’s students (indeed, many have disputed the authorship of entire Bach cantatas, for instance [@owen1960authorship]). If we had a reliable way to computationally predict the likelihood that a previously unknown piece was actually written by Bach (vs., say, any other of Bach’s many contemporaries), it would help greatly not only in shedding light on such musicological mysteries, but also in revealing what it is that makes Bach unique. Music domains offer a wide array of potential *classification tasks*. 
Partly due to their ease of evaluation (as we discuss further in Section \[chap8:eval\]), such tasks have been a mainstay of the music informatics field for several decades. Indeed, surveying the literature from the past 15 years, a varied list of classification tasks emerges. Early examples of modern approaches include the work of Scheirer and Slaney, who compared various machine learning techniques, including maximum a posteriori (MAP) estimators, Gaussian Mixture Models, feature space partitioning and nearest-neighbor search, in order to discriminate speech from music based on acoustic features [@scheirer1997construction]. Another such early example is the work of Marques and Moreno, who tackled the issue of instrument classification using Gaussian mixture models and SVMs [@marques1999study]. *Instrument classification* has been a common thread in the music information retrieval literature. In a survey from 2000, Herrera et al. [@herrera2000towards] point out several machine learning techniques already employed to identify which instrument is playing in solo recordings. Examples of such works include K-nearest neighbors (KNN), employed for example by Martin and Kim [@Martin98musicalinstrument], Naive Bayes classifiers (see Martin [@martin1999sound]), and support vector machines (SVMs) (see Marques [@marques1999automatic]). Eichner et al. introduced the use of Hidden Markov Models for this purpose in a more realistic and diversified setting with multiple instruments of the same kind [@eichner2006instrument]. In their experiments, they inferred HMMs whose states are Gaussian probability density functions for each individual instrument and each individual note, in a data-driven manner, and were able to show that for their particular dataset of real-world recordings, this approach outperformed the baselines. Benetos et al. 
[@benetos2006musical] applied nonnegative matrix factorization and subset selection, resulting in improved classification accuracy compared to results obtained without these modifications. Joder et al. [@joder2009temporal] introduced the notion of [*temporal integration*]{} to instrument recognition. Simply put, temporal integration involves the combination of features across multiple time frames to construct more context-aware, higher-level features (the notion was first introduced in a music domain by Meng et al. [@meng2007temporal]). By combining temporally aggregated features they were able to surpass the state of the art at the time. Considering the harder problem of multi-instrument classification, Garcia et al. were able to classify multiple instruments as long as they were in separate recording channels (with some tolerance to leakage) by training statistical models for individual partials per instrument class [@garcia2011simple]. In more recent work, Fourer et al. [@fourer2014automatic] took a hierarchical approach to classifying timbre in ethnomusicological audio recordings. Their method introduces a hierarchical taxonomy from instruments to sound production categories, which bifurcate further (aerophones $\rightarrow$ blowing; chordophones $\rightarrow$ bowing, plucking or striking; etc.), and embeds each timbre class in a projection space that captures weights over these descriptors (the training is done via Linear Discriminant Analysis [@mika1999fisher]). The issue of instrument classification ties in organically to another prevalent line of research, that of *genre classification*. Tzanetakis et al. introduced a hierarchy of audio classification to speech vs. music, genres, and subgenres [@george2001automatic]. Using timbral audio features, they were able to reach an accuracy of about 60% using Gaussian mixture models. Dubnov et al. 
[@dubnov2003using] trained statistical models to describe musical styles in a way that could also be harnessed towards music generation (a topic we expand on in subsection \[chap8:gen\]). Their approach employs dictionary-based prediction methods to estimate the likelihood of future sequences based on the current context (in the paper they compare incremental parsing to the more sophisticated predictive suffix trees). In a comparative study from the same year as the Dubnov et al. work, Li et al. compared multiple audio feature sets and classifier variations (based on SVMs, KNN, GMM and discriminant analysis) across several different datasets [@li2003comparative]. In 2007, Meng et al. studied the application of temporal integration (a method we mentioned in the paragraph above) to genre classification [@meng2007temporal], leading to improvements in performance and robustness. Different researchers have taken different perspectives on the issue of *finding useful representations* for genre classification (an issue we also discuss in Section \[chap8:repr\]). For instance, Panagakis et al. applied nonnegative multilinear PCA to construct better features for genre classification [@panagakis2010non], while Salamon et al. used melody extraction for genre classification in polyphonic settings, reaching an accuracy above 90% on a small sample of distinct genres (instrumental jazz, vocal jazz, opera, pop, and flamenco) [@salamon2012musical]. Anan et al. used a theoretically grounded approach for learning similarity functions for the purpose of genre recognition [@anan2012polyphonic]. In order to train these similarity functions, they converted their MIDI input to binary chroma sequences (a binary chroma vector has length 12, one entry per pitch class, with the indices of present pitch classes set to $1$ and the rest to $0$). Marques et al.
applied optimum path forests (OPF), a graph-partitioning ensemble approach, for genre classification in potentially large and dynamic music datasets [@marques2011new]. Rump et al. separated harmonic and percussive features in recordings with autoregressive spectral features and SVMs to improve performance over a non-separated baseline [@rump2010autoregressive], while Panagakis et al. used locality-preserving nonnegative tensor factorization as another means of constructing better features for genre classification [@Panagakis2010sparse]. In contrast, West and Cox studied the issue of optimizing frame segmentation for genre classification [@west2004features] (we discuss the issue of segmentation more in-depth in Section \[chap8:skill\_acq\]). Arjannikov et al. tackled the issue of genre classification from a different perspective by training an associative classifier [@arjannikov2014association] (conversely, association analysis in this context can be perceived as KNN in multiple learned similarity spaces). Hillewaere et al. applied string methods for genre classification in multiple dance recordings, transforming the melodic input into a symbolic contour sequence and using techniques such as sequence alignment and compression-based distance for classification [@hillewaere2012string]. Somewhat apart from these works, Mayer and Rauber combined ensembles of not only audio but also lyric (i.e. textual) features for genre classification [@mayer2011musical]. In more recent work, Herlands et al. tackled the tricky issue of homogeneous genre classification (separating works by relatively similar composers such as Haydn and Mozart), reaching an accuracy of 80% using specifically tailored melodic and polyphonic features generated from a MIDI representation [@herlands2014machine]. Interestingly, Hamel et al.
also studied the issue of transfer learning in genre classification, demonstrating how classifiers learned from one dataset can be leveraged to train a genre classifier for a very different dataset [@hamel2013transfer]. Another common classification task in the music domain is that of *mood and emotion recognition* in music, a task which is interesting both practically, for the purpose of content recommendation, and from a musicological perspective. Yang and Lee used decision trees to mine emotional categorizations (using the Tellegen-Watson-Clark mood model [@tellegen1999dimensional]) for music, based on lyrics and tags, and then applied support vector machines to predict the correspondence of audio features to these categories [@yang2004disambiguating]. Han et al. applied support vector regression to categorize songs based on Thayer’s model of mood [@thayer1990biopsychology], placing songs on the Thayer arousal-valence scale [@han2009smers]. Trohidis et al. also used both the Tellegen-Watson-Clark model and Thayer’s model, and reframed the emotion classification problem as one of multilabel prediction, treating emotional tags as labels [@trohidis2008multi]. Lu et al. applied boosting for multi-modal music emotion recognition [@lu2010boosting]. In their work, they combined MIDI, audio and lyric features to obtain a multi-modal representation, and used SVMs as the weak learners in the boosting process. Mann et al. classified television theme songs on a 6-dimensional emotion space (dramatic-calm, masculine-feminine, playful-serious, happy-sad, light-heavy, relaxing-exciting) using crowd-sourced complementary information for labels, reaching an accuracy of 80-94% depending on the emotional dimension [@mann2011music]. Focusing on audio information alone, Song et al. studied how different auditory features contribute to emotion prediction from tags extracted from last.fm [@song2012evaluation]. Recently, Delbouys et al.
proposed a bimodal deep neural architecture for music mood detection based on both audio and lyrics information [@delbouys2018music]. It is worth noting that obtaining ground truth for concepts as elusive as mood and emotion is tricky, but labels are often obtained by mining social media or through crowdsourcing, under the assumption that people are the ultimate arbiters of what mood and emotion music evokes. We discuss this matter in greater detail in Section \[chap8:eval\]. The works described above are a representative, but not comprehensive, sample of the work on music classification that has taken place in the last 15 years. Various other classification tasks have been studied. To name a few, Su et al. recently applied sparse cepstral and phase codes to identify guitar playing technique in electric guitar recordings [@su2014sparse]; Toiviainen and Eerola used autocorrelation and discriminant functions for a classification-based approach to meter extraction [@toiviainen2005classification]; several works, including that of Lagrange et al., tackled the issue of singer identification [@lagrange2012robust]; and Abdoli applied a fuzzy logic approach to classify modes in traditional Iranian music recordings [@abdoli2011iranian].

Retrieval Tasks
---------------

Consider now that you are in charge of picking music for a specific person. The only guidance you have is that previously, that person listed some of the songs and the artists they like. Provided with this knowledge, your task is to find additional music that they will enjoy. You can accomplish this goal by finding music that is *similar* to the music they listed, but different. For this purpose, you must also define what makes different pieces of music similar to one another.
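To make the similarity question concrete, here is a minimal, hypothetical sketch (not the method of any system cited here): each track is collapsed into a single vector by temporally aggregating its frame-level features (a crude form of the temporal integration idea mentioned earlier), and candidate tracks are ranked by cosine similarity to the query. All names and the synthetic features are illustrative assumptions.

```python
import numpy as np

def aggregate(frames):
    """Collapse a (n_frames, n_features) matrix into one track-level
    vector by stacking the per-feature mean and standard deviation
    (a crude form of temporal integration)."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_similarity(query_frames, library):
    """Return library keys sorted from most to least similar to the query."""
    q = aggregate(query_frames)
    scores = {name: cosine_similarity(q, aggregate(f))
              for name, f in library.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Real systems replace the synthetic frames with MFCCs or chroma and learn the metric rather than fixing it to cosine similarity, but the overall shape of the computation is the same.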
Alternatively, you may be faced with a recognition task not far removed from the classification tasks we listed in the previous subsection: given a piece of music, find a subset of other musical pieces from a given corpus which are most likely to have originated from the same artist. These are just a couple of examples of music retrieval tasks, which combine music databases, queries, and lower-level understanding of how music is structured. In this subsection we consider different types of retrieval tasks in a musical context. These tasks usually require a system to provide examples from a large set given a certain query. Selecting the examples that best suit the query is the main challenge in this type of task. The most straightforward context for such retrieval tasks is that of *music recommendation*. Given some context, the system is expected to suggest songs from a set best suited for the listener. This type of task has been widely studied at the intersection of music and AI. Yoshii et al. combined collaborative and content-based probabilistic models to predict latent listener preferences [@yoshii2006hybrid; @yoshii2007improving]. Their key insights were that collaborative filtering recommendation could be improved, first by combining user ratings with structural information about the music (based on acoustic data), revealing latent preference models; and second, by introducing an incremental training scheme, thus improving scalability. Similarly, Tiemann et al. also combined social and content-based recommenders to obtain a more robust hybrid system [@tiemann2007ensemble]. Their approach is ensemble-based, with separate classifiers trained for social data and for music similarity, later combined via a learned decision rule. A different thread in the music recommendation literature explores the aspect of associating tags with songs.
Roughly speaking, tags are a broad set of user-defined labels describing properties of the music, ranging from genre description (“indie”, “pop”, “classic rock” and so forth), to mood description (“uplifting”, “sad” etc.), to auditory properties (“female vocalist”, “guitar solo” etc.). Along these lines, Eck et al. trained boosting classifiers to automatically associate unobserved tags with songs for the purpose of improving music recommendation [@eck2007autotagging]. Similarly, Horsburgh et al. learned artificial “pseudo-tags” in latent spaces to augment recommendation in sparsely annotated datasets [@horsburgh2015learning]. More recently, Pons et al. compared raw waveform (unprocessed audio) vs. domain-knowledge based inputs with variable dataset sizes for end-to-end deep learning of audio tags at a large scale [@pons2017end]. From a temporal perspective, Hu and Ogihara tracked listener behavior over time to generate better models of listener song preferences [@hu2011nextone]. Specifically, they use time-series analysis to see how different aspects of listener preference (genre, production year, novelty, playing frequency etc.) are trending in order to shape the recommendation weighting. In a related paper, Hu et al. also comparatively evaluated how different features contribute to favorite song selection over time [@hu2013evaluation]. From the somewhat related perspective of balancing novelty with listener familiarity and preferences, Xing et al. enhanced a standard collaborative filtering approach by introducing notions from the multi-armed bandit literature, in order to balance exploration and exploitation in the process of song recommendation, utilizing a Bayesian approach and Gibbs sampling for arm utility inference [@xing2014enhancing]. A full discussion of the components and intricacies of music recommender systems is beyond the scope of this paper, but can be found in Schedl et al. [@knees2013survey] and Song et al. [@song2012survey].
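Xing et al.'s actual system is Bayesian, with Gibbs sampling for arm utility inference; as a much simpler stand-in, the exploration/exploitation trade-off they target can be illustrated with an epsilon-greedy bandit over a hypothetical catalog, where each "arm" is a song and the reward is whether a simulated listener enjoyed it (all probabilities below are invented for illustration).

```python
import random

def epsilon_greedy_recommender(enjoy_prob, steps=5000, epsilon=0.1, seed=42):
    """Simulate recommending one of several songs per step.
    enjoy_prob: hypothetical per-song probability the listener enjoys it.
    With probability epsilon a random song is explored; otherwise the
    song with the highest running empirical reward is exploited."""
    rng = random.Random(seed)
    counts = [0] * len(enjoy_prob)
    values = [0.0] * len(enjoy_prob)   # running mean reward per song
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(enjoy_prob))
        else:
            arm = max(range(len(enjoy_prob)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < enjoy_prob[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values
```

After enough steps, the song with the highest (hidden) enjoyment probability dominates the play counts while the other songs still receive occasional exploratory plays, which is exactly the novelty-versus-familiarity balance discussed above.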
Another example of a common retrieval task is *melody recognition*, either from examples or via a query-by-humming system. Betser et al. introduced a sinusoidal-modeling-based fingerprinting system and used it to identify jingles in radio recordings [@betser2007audio]. Skalak et al. applied vantage point trees to speed up the search of sung queries against a large music database [@skalak2008speeding]. A vantage point tree partitions a metric space hierarchically into intersecting spheres. By embedding songs in a metric space and using vantage point trees, query time can be significantly reduced. Miotto and Orio applied a chroma indexing scheme and statistical modeling to identify music snippets against a database [@miotto2008music]. Similar to the representation discussed in Anan et al. [@anan2012polyphonic], a chroma index is a length-$12$ vector which assigns weights to each pitch class based on the Fourier transform of a music fragment. A statistical model representing chroma frequencies over time is then used with an HMM for song identification. Another paper that considers identification in a time-series context, but from a different perspective, is that of Wang et al., who iteratively segmented live concert recordings into sections and identified each song separately to recover complete set lists [@wang2014automatic]. Also in the context of considering structural properties of music over time, Grosche et al. recovered structure fingerprints, which capture longer structural properties of the music compared to standard fingerprints, to improve the retrieval of matching songs from a database given a query [@grosche2012structure]. These similarity fingerprints are constructed via self-similarity matrices [@foote1999visualizing] on CENS features [@muller2005audio]. Recently, Bellet et al.
introduced a theoretically grounded, learned discriminative tree edit similarity model to identify songs based on samples, using information about the music semantics [@bellet2016learning]. The previously mentioned tasks of music recommendation and melody recognition are strongly connected to the key notion of *similarity in music information retrieval*. Given a query, instead of being asked to retrieve the exact same songs, the system may be expected to retrieve songs which are similar to the query. This sort of problem leads to an extensive branch of research on similarity search in music. Platt considered sparse multidimensional scaling of large music similarity graphs to recover latent similarity spaces [@platt2004fast]. Similarly inspired, Slaney et al. studied various metric learning approaches for music similarity learning [@slaney2008learning], while McFee and Lanckriet proposed a heterogeneous embedding model for social, acoustic and semantic features to recover latent similarities [@mcfee2009heterogeneous]. McFee et al. also employed collaborative filtering for this purpose [@mcfee2010learning]. In a later paper, McFee and Lanckriet expanded the scale of their similarity search approach using spatial trees [@mcfee2011large]. Similarly to McFee et al., Stenzel and Kamps were able to show that employing collaborative filtering can generate more robust content-based similarity measures [@stenzel2005improving]. From an entirely different perspective, Hofmann-Engl proposed a cognitive model of music similarity to tackle the complicated and multi-dimensional issue of how we define two pieces of music to be similar, applying general melotonic (pitch distribution) transformations [@hofmann2001towards]. Flexer et al. studied the modeling of spectral similarity in order to improve novelty detection in music recommendation [@flexer2005novelty].
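Many of the similarity-search systems above ultimately reduce to nearest-neighbor queries in some learned metric space. A minimal vantage point tree, the structure Skalak et al. used to speed up sung-query search, can be sketched from its textbook definition (this is an illustration, not their implementation):

```python
import random

class VPNode:
    def __init__(self, point, radius, inside, outside):
        self.point, self.radius = point, radius
        self.inside, self.outside = inside, outside

def build_vp_tree(points, dist, rng=random.Random(0)):
    """Pick a vantage point and split the remaining points by the median
    distance to it: an 'inside' ball and an 'outside' shell, recursively."""
    if not points:
        return None
    points = list(points)
    vp = points.pop(rng.randrange(len(points)))
    if not points:
        return VPNode(vp, 0.0, None, None)
    by_dist = sorted(points, key=lambda p: dist(vp, p))
    median = dist(vp, by_dist[len(by_dist) // 2])
    inside = [p for p in by_dist if dist(vp, p) < median]
    outside = [p for p in by_dist if dist(vp, p) >= median]
    return VPNode(vp, median,
                  build_vp_tree(inside, dist, rng),
                  build_vp_tree(outside, dist, rng))

def nearest(node, query, dist, best=None):
    """Branch-and-bound nearest-neighbor search: descend into the more
    promising child first; visit the other only if the triangle-inequality
    bound |d - radius| could still beat the best distance found so far."""
    if node is None:
        return best
    d = dist(query, node.point)
    if best is None or d < best[0]:
        best = (d, node.point)
    near, far = ((node.inside, node.outside) if d < node.radius
                 else (node.outside, node.inside))
    best = nearest(near, query, dist, best)
    if abs(d - node.radius) < best[0]:
        best = nearest(far, query, dist, best)
    return best
```

In a music retrieval setting the points would be song embeddings and `dist` the learned metric; the pruning bound holds for any true metric, which is why these systems care about their similarity function satisfying the triangle inequality.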
Mueller and Clausen studied transposition-invariant self-similarity matrices (which we mentioned in the context of Grosche et al. [@grosche2012structure]) for music similarity in general [@muller2007transposition]. Hoffman et al. studied the application of hierarchical Dirichlet processes to recover latent similarity measures [@hoffman2008content]. In that work, each song is represented as a mixture model of multivariate Gaussians, similar to a Gaussian Mixture Model (GMM). However, unlike in GMMs, in the hierarchical Dirichlet process the number of mixture components is not predefined but determined as part of the posterior inference process. The hierarchical aspect derives from the fact that each song is defined by a group of MFCC features. Similarity between songs can then be defined according to the similarity between their corresponding distributions over components. In a somewhat conceptually related paper, Schnitzer et al. employed ensembles of multivariate Gaussians and self-organizing maps to learn a similarity metric for music based on audio features [@schnitzer2010islands]. Wang et al. used bag-of-frames representations to compare whole pieces to one another [@wang2011learning]. Other approaches include that of Ahonen et al., who used a compression-based metric for music similarity in symbolic polyphonic music [@ahonen2011compression], and that of Garcia-Diez et al., who learned a harmonic structure graph kernel model for similarity search [@garcia2011simple]. In that specific work, binary chroma vectors (dubbed “chromagrams” in the paper) are transformed to tonal centroid vectors to reduce the chromagram space from $2^{12}$ to $2^6$. Subsequently, the similarity between query and dataset inputs is measured via the Normalized Compression Distance (NCD) [@cebrian2007normalized].
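The two representations just described are easy to make concrete. The sketch below (illustrative, not the cited systems' code) builds binary chroma vectors from MIDI pitch numbers and compares serialized chroma sequences with the Normalized Compression Distance, using zlib as a stand-in for whichever compressor a real system would choose:

```python
import zlib

def binary_chroma(pitches):
    """Map a set of MIDI pitch numbers to a 12-element binary vector:
    index i is 1 if pitch class i (C=0 ... B=11) is present."""
    vec = [0] * 12
    for p in pitches:
        vec[p % 12] = 1
    return vec

def chroma_sequence_bytes(note_sets):
    """Serialize a sequence of binary chroma vectors for compression."""
    return bytes(bit for notes in note_sets for bit in binary_chroma(notes))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance with zlib as the compressor:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

For example, a C major triad (MIDI pitches 60, 64, 67) yields ones at pitch classes 0, 4 and 7, and two structurally similar progressions compress well together, giving a small NCD.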
For a specific review of audio-based methods for music classification (specifically, genre and mood classification, artist identification and instrument recognition) and annotation (or auto-tagging, to be more exact), see [@fu2011survey].

Musical Skill Acquisition Tasks {#chap8:skill_acq}
-------------------------------

The tasks we described above tend to rely on the ability to effectively represent music in a meaningful way which reflects its properties and structure. Such a representation is often obtained through manually designed features (see [@berenzweig2001locating] for example). However, a large and varied body of work focuses on the ability to automate the construction of such representations. We consider the spectrum of tasks that lead to useful representations of musical features and structure as *musical skill acquisition*. In the music recommendation example we discussed in the previous subsection, we raised the question of what makes two pieces of music similar to one another, and what makes them distinct. Similarity can lie in basic things like tempo and amplitude, or the overall spectral signature of the piece (what frequencies are heard most of the time). It can lie in subtler things, like how the spectrum changes over time. It can also lie in more abstract musicological properties, such as the rhythmic, harmonic and melodic patterns the music exhibits. Capturing such higher-level musical properties is the common thread tying together the different tasks we consider as musical skill acquisition tasks. While the separation between classification or retrieval tasks and “musical skill acquisition” is somewhat nuanced, the key distinction is the following. Classification and retrieval tasks reduce music-related problems to a “simple” computational question that can be studied directly with its musical aspect abstracted away, as the problem has been reframed as a pure classification or information retrieval problem.
On the other hand, in the case of musical skill acquisition, we are interested in training a system to learn some fundamental nontrivial property that involves music. Such a task can be in service of a classification or retrieval task further down the line (for instance, identifying harmonic structure for similarity search), or rely on a lower-level classification or retrieval task (for instance, harmonic progression analysis by first classifying individual pitches in each frame), but learning the musical property is in itself the goal, and therefore the nature of these tasks is different. Ever since the 18th century, Western scholars have studied the different structural and auditory patterns and properties that characterize music, in what eventually became the modern field of musicology [@tomlinson2012musicology]. Musicologists study the structure of melody (how sequences of pitches are combined over time), harmony (how multiple pitches are combined simultaneously over time), rhythm and dynamics. Since the 1960s, musicologists have been using computers to aid in their analyses, when studying large corpora of music or previously unfamiliar music [@bel1993computational], and when focusing on aspects of music that were previously harder to study quantitatively, such as nuances in vibrato or articulation for violin performers [@liebman2012phylogenetic]. The automation of these tasks is often more closely related to signal processing than to artificial intelligence, but it nonetheless often involves a large component of machine intelligence, such as analyzing the internal structure of music [@paulus2010state], recovering shared influences among performers [@liebman2012phylogenetic], or identifying performers by nuances in their performance [@lagrange2012robust]. A good example of a musical skill task, or music understanding task, is *music segmentation*.
Music segmentation strives to understand the structure of music by partitioning it into functionally separate and semantically meaningful segments. This partitioning can happen on multiple levels - a song could be partitioned into an intro, verse, chorus, bridge, and outro, for instance, and musical segments can be further broken down into independent musical phrases. The notion of recovering the rules of musical temporal structure is as old as musicology itself, and computational approaches to it date back to the work of Jackendoff and Lerdahl, who proposed a generative theory of tonal music in the early 1980s [@lerdahl1985generative]. In the modern computational research literature, early examples include the work of Batlle and Cano, who used Hidden Markov Models to identify boundaries in music sequences [@batlle2000automatic], and Harford, who used self-organizing maps for the same purpose [@harford2003automatic]. Similarly to Batlle and Cano, Sheh et al. also applied HMMs to segment chord sequences [@sheh2003chord]. Unlike Batlle and Cano, their approach is unsupervised - the most likely segmentation is extracted using the expectation-maximization (EM) method. Parry and Essa studied feature weighting for automatic segmentation, combining both local and global contour patterns to recover boundaries between musical phrases [@parry2004feature]. Liang et al. used Gaussian models to hierarchically segment musical sequences as a preprocessing step for classification [@liang2005hierarchical]. Pearce et al. compared statistical and rule-based models for melodic segmentation, achieving an accuracy of nearly 87% with a hybrid approach [@pearce2008comparison]. This work was interesting because it revealed (at the time) that data-driven approaches alone underperformed compared to a method that combined both statistical boundary prediction and rule-based heuristics that incorporated preexisting knowledge of music theory.
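A standard baseline underlying many of these segmenters is novelty detection on a self-similarity matrix: slide a checkerboard kernel along the matrix diagonal and treat peaks of the resulting curve as candidate segment boundaries (an idea commonly attributed to Foote's visualization work; the sketch below is illustrative, not any one cited method):

```python
import numpy as np

def novelty_curve(features, kernel_size=8):
    """Slide a checkerboard kernel along the diagonal of the cosine
    self-similarity matrix of (n_frames, n_features) data; the curve
    peaks where within-segment similarity is high on both sides of a
    frame but cross-segment similarity is low, i.e. at boundaries."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    ssm = f @ f.T                              # cosine self-similarity matrix
    k = kernel_size
    sign = np.sign(np.arange(k) - (k - 1) / 2.0)
    kernel = np.outer(sign, sign)              # +1 same-segment quadrants, -1 across
    n = ssm.shape[0]
    curve = np.zeros(n)
    for i in range(k // 2, n - k // 2):
        patch = ssm[i - k // 2:i + k // 2, i - k // 2:i + k // 2]
        curve[i] = float((patch * kernel).sum())
    return curve
```

On a homogeneous stretch the positive and negative kernel halves cancel, so the curve stays near zero; only at a change of musical material does it spike.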
Considering the harder problem of segmenting non-professional (and therefore messier and harder to process) recordings, Mueller et al. employed heuristic rules to segment raw recordings of folk tunes into individual notes in order to align them with MIDI versions [@muller2009robust]. To achieve this alignment, the audio was segmented in reference to the much neater MIDI input using a distance function that measures the distance between the chroma expected from the MIDI and those observed in the recording, thus accounting for potential deviations in the non-professional performance. In strongly related work, Praetzlich and Mueller applied dynamic time warping to segment real opera recordings by aligning them with a symbolic representation [@pratzlich2013freischutz]. In a different work, the same authors used techniques from the string matching literature to identify segments in recordings on a frame-level similarity basis [@pratzlich2014frame]. From a probabilistic perspective, Marolt studied a similar type of recordings made by amateur folk musicians, and trained a probabilistic model to segment them into phrases [@Marolt2009ProbabilisticSA]. In Marolt’s approach, the signal is first partitioned into fragments that are classified into one of the following categories: speech, solo singing, choir singing, and instrumental music. Then, candidate segment boundaries are obtained by observing how the energy of the signal and its content change. Lastly, maximum a posteriori inference is applied to find the most likely set of boundaries (training and evaluation were supervised, done against a set of 30 hand-annotated folk music recordings). In more recent work, Rodriguez-Lopez et al. combined cue models with probabilistic approaches for melodic segmentation [@rodriguez2014multi].
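The dynamic time warping used by Praetzlich and Mueller is, at its core, a short dynamic program; a textbook version (a sketch of the general algorithm, not their alignment system) looks like this:

```python
import math

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming DTW: cost[i][j] holds the minimal total
    cost of aligning a[:i] with b[:j], allowing either sequence to be
    locally stretched so that tempo differences do not inflate the cost."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]
```

In an audio-to-score setting the scalar sequences become frame-wise chroma vectors and `dist` a vector distance, but the recurrence is unchanged; backtracking through `cost` then yields the alignment path used for segmentation.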
Interestingly, in a paper from recent years, Lukashevich compared multiple possible metrics for song segmentation accuracy (a work also related to structure analysis, which we discuss in greater detail later in this subsection) [@lukashevichtowards]. In this work she showed that the performance of different approaches can vary significantly when altering the accuracy metric. The somewhat subjective character of this task is also evident in the work of Pearce et al. Along the same lines, much work has been invested in the tasks of *chord extraction and harmonic modeling*, the practice of extracting the harmonic properties of a musical sequence and reducing it to a more abstract representation of typical patterns. This task is of interest both from a general music understanding perspective and for practical applications such as music recommendation and preference modeling. The literature in this subfield has evolved in an interesting manner. Initial modern approaches, such as that of Paiement et al., were based on graphical models. Paiement et al. trained a graphical probabilistic model of chord progressions and showed it was able to capture meaningful harmonic information based on a small sample of recordings [@paiement2005probabilistic]. Burgoyne et al. compared multiple approaches to sequence modeling for automatic chord recognition, mainly comparing Dirichlet-based HMMs and conditional random fields (CRFs) over pitch class profiles [@burgoyne2005learning]. In something of a departure from the earlier problem setting, Mauch and Dixon used structural information about the music to better inform chord extraction, and utilized a discrete probabilistic mixture model for chord recognition, reaching an average accuracy of approximately 65% [@mauch2009using].
Cho and Bello introduced recurrence plots (essentially a derivative of the previously discussed self-similarity matrices) as a noise reduction method in order to smooth features and facilitate more accurate chord recognition, improving performance over a non-smoothed baseline. Unlike the probabilistic graphical model approaches, Ogihara and Li trained N-gram chord models for the ultimate purpose of composer style classification (essentially treating chords as words) [@ogihara2008n]. Combining the N-gram and probabilistic perspectives, Yoshii and Goto introduced a vocabulary-free, infinity-gram composite generative model for nonparametric chord progression analysis, which was able to recover complex chord progressions with high probability [@yoshii2010infinite]. Chen et al. expanded the standard HMM approach to chord recognition using duration-explicit HMM models [@chen2012chord]. Among their innovations is the utilization of a transformation matrix for chroma (learned via regression) that yields a richer spectral representation than that of the traditional chroma vector. On top of this learned representation, a generalized, duration-aware HMM is used to predict the most likely chord sequence (using the Viterbi algorithm [@rabiner1989tutorial]). Papadopoulos and Tzanetakis chose to combine graphical models with a rule-based approach directly by utilizing Markov logic networks to simultaneously model chord and key structure in musical pieces. More recently, deep neural networks have become increasingly prevalent for the purpose of chord recognition. Boulanger-Lewandowski et al. studied the application of recurrent neural networks (RNNs) combined with Restricted Boltzmann Machines (RBMs) for audio chord recognition [@boulanger2013audio], and Humphrey and Bello applied convolutional neural networks (CNNs) for the same purpose [@humphrey2012rethinking].
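Returning to the "chords as words" idea behind the N-gram models mentioned above, the core machinery is simple to sketch. The toy bigram model below, with add-one smoothing, is an illustration of the general technique, not Ogihara and Li's actual setup:

```python
import math
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count chord-to-chord transitions over training progressions,
    treating each chord symbol exactly like a word in a language model."""
    bigrams = defaultdict(Counter)
    vocab = set()
    for seq in sequences:
        vocab.update(seq)
        for prev, cur in zip(seq, seq[1:]):
            bigrams[prev][cur] += 1
    return bigrams, vocab

def log_likelihood(seq, bigrams, vocab):
    """Add-one smoothed log-probability of a progression under the model;
    per-composer models scored this way could drive style classification."""
    v = len(vocab)
    ll = 0.0
    for prev, cur in zip(seq, seq[1:]):
        counts = bigrams[prev]
        ll += math.log((counts[cur] + 1) / (sum(counts.values()) + v))
    return ll
```

A progression full of transitions seen in training (say, G resolving to C) scores higher than one built from transitions the corpus never exhibits, which is exactly the signal a style classifier would exploit.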
In a strongly related paper, Zhou and Lerch trained a deconvolutional neural network (DNN) for feature construction, and combined SVM and HMM classifiers on a bottleneck layer of the DNN for final chord classification [@zhou2015chord]. The problem of chord extraction and harmonic modeling is closely linked to that of *note transcription and melody extraction*. Note transcription involves the translation of audio information into a sequential symbolic representation. Melody extraction is the related task of identifying a melodic sequence in a larger musical context and isolating it. Abdallah and Plumbley applied non-negative sparse coding [@hoyer2002non] on audio power spectra for polyphonic music transcription [@abdallah2004polyphonic]. Similarly, Ben Yakar et al. applied unsupervised bilevel sparse models for the same purpose [@yakar2013bilevel]. Madsen and Widmer introduced a formal computational model for melody recognition using a sliding window approach [@madsen2007towards]. In their work, they compared entropy measures with a compression-based approach to predict melody notes. Poliner and Ellis framed the melody transcription task as a classification problem, identifying notes in each frame based on the spectral properties of the audio [@poliner2006discriminative]. From a more statistical perspective, Duan and Temperley applied maximum likelihood sampling to reach note-level music transcription in polyphonic music [@duan2014note]. Alternatively, taking a Bayesian filtering approach, Jo and Yoo employed particle filters to track melodic lines in polyphonic audio recordings [@jo2010melody]. Kapanci and Pfeffer treated the melody extraction problem from an audio-to-score matching perspective, and trained a graphical model to align an audio recording to a score, recovering melodic lines in the process [@kapanci2005signal].
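The transcription methods above all build on some form of frame-level pitch evidence. As a generic monophonic baseline (a textbook technique, not the method of any paper cited here), the fundamental frequency of a single frame can be estimated from its autocorrelation peak:

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic audio frame by
    locating the autocorrelation peak within a plausible lag range
    (lags correspond to candidate periods between 1/fmax and 1/fmin)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)
    hi = int(sample_rate / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag
```

Polyphony is exactly where this breaks down: overlapping periodicities smear the autocorrelation peaks, which is why the works surveyed here resort to sparse coding, particle filters, and probabilistic graphical models instead.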
A different graphical approach to the problem was introduced by Raczynski et al., who trained a dynamic Bayes network (DBN) for multiple pitch transcription [@raczynski2010multiple]. In their study they were able to show this choice significantly improved performance compared to a reference model that assumed uniformly and independently distributed notes. Grindlay and Ellis proposed a general probabilistic model suitable for transcribing single-channel audio recordings containing multiple polyphonic sources [@grindlay2010probabilistic]. As in other related problems, in the last few years multiple researchers have applied deep neural network architectures to this task. Boulanger-Lewandowski et al. applied RNNs to recover multiple temporal dependencies in polyphonic music for the purpose of transcription [@boulanger2012modeling]. Connecting the graphical model literature with the deep architectures thread, Nam et al. applied deep belief networks for unsupervised learning of features later used in piano transcription, showing an improvement over hand-designed features [@nam2011classification]. In another recent work on piano transcription, Bock and Schedl applied bidirectional Long Short-Term Memory RNNs (LSTMs), reporting improved performance compared to their respective baselines [@bock2012polyphonic]. Berg-Kirkpatrick et al. achieved the same goal of piano note transcription in a fully unsupervised manner, using a graphical model that reflects the process by which musical events trigger perceived acoustic signals [@berg2014unsupervised]. In another recent example, Sigtia et al. presented an RNN-based music sequence model [@sigtia2014rnn]. In the transcription process, prior information from the music sequence model is incorporated as a Dirichlet prior, leading to a hybrid architecture that yields improved transcription accuracy.
Chord analysis, melody extraction and music similarity are all strongly connected to *cover song identification* - another field of music analysis where AI has been applied. Cover song identification is the challenging task of identifying an alternative version of a previous musical piece, even though it may differ substantially in timbre, tempo, structure, and even fundamental aspects relating to the harmony and melody of the song. The term “cover” is so broad that it ranges from acoustic renditions of a previous song, to Jimi Hendrix’s famous (and radical) reinterpretation of Bob Dylan’s “All Along the Watchtower”, to Rage Against the Machine essentially rewriting Bob Dylan’s “Maggie’s Farm”. Beyond its value for computational musicology and for enhancing music recommendation, cover song identification is of interest because of its potential for benchmarking other music similarity and retrieval algorithms. Ellis proposed an approach based on cross-correlation of chroma vector sequences, while accounting for various transpositions [@ellis2006identifying]. As a critical preprocessing step, chroma vectors were beat-aligned via beat tracking, a separate music information retrieval problem that we discuss further in this section. Serra et al. studied the application of Harmonic Pitch Class Profiles (HPCP) [@gomezHPCP; @lee2006automatic] and local alignment via the Smith-Waterman algorithm, commonly used for local sequence alignment in computational biology [@smith1981comparison], for this purpose [@serra2008chroma]. HPCP is an enhancement of chroma vectors which utilizes the typical overtone properties of most instruments and the human voice to obtain a less noisy representation of the pitch class profile of a musical segment. Serra et al. later proposed extracting recurrence measures from the cross recurrence plot, a cross-similarity matrix of beat-aligned HPCP sequences, for more accurate cover song identification.
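The transposition handling in these chroma-based approaches rests on a simple fact: transposing music circularly shifts its chroma vectors along the pitch-class axis. The sketch below illustrates two ways to exploit this (simplified to equal-length, beat-synchronous chroma matrices; neither function is the cited authors' code): searching explicitly over the 12 shifts, as in the spirit of Ellis's cross-correlation approach, and taking the magnitude of a 2-D Fourier transform, which is invariant to circular shifts in both pitch and time, the core trick behind the Bertin-Mahieux and Ellis speed-up discussed below.

```python
import numpy as np

def transposition_aware_similarity(chroma_a, chroma_b):
    """Best normalized correlation between two equal-length chroma
    matrices (12 x n_beats) over all 12 circular pitch shifts, so a
    transposed cover still scores highly against the original."""
    a = chroma_a / (np.linalg.norm(chroma_a) + 1e-9)
    best = -np.inf
    for shift in range(12):
        b = np.roll(chroma_b, shift, axis=0)
        b = b / (np.linalg.norm(b) + 1e-9)
        best = max(best, float((a * b).sum()))
    return best

def patch_fingerprint(chroma_patch):
    """Magnitude of the 2-D Fourier transform of a chroma patch; by the
    shift theorem it is invariant to circular shifts in both pitch
    (transposition) and time (beat offset), so patches can be compared
    with a plain Euclidean distance, no shift search needed."""
    return np.abs(np.fft.fft2(chroma_patch)).ravel()
```

The second representation is what makes large-scale, all-pairs comparison feasible: each patch is reduced once to a shift-invariant vector, and retrieval becomes ordinary nearest-neighbor search.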
Since complicated pairwise comparisons for the purpose of en masse cover song identification in large scale datasets are prohibitively computationally expensive, Bertin-Mahieux and Ellis proposed a significant speed-up over previous approaches by extracting the magnitude of the two-dimensional Fourier transform of beat-aligned chroma patches (chroma patches are windowed subsequences of chroma vectors) and then computing the pairwise Euclidean distance between these representations (PCA was also applied for dimensionality reduction) [@bertin2012large]. Humphrey et al. further improved on this result by introducing various data-driven modifications to the original framework. These modifications included the application of non-linear scaling and normalization to the raw input and learning a sparse representation, or a dictionary (essentially a set of approximate basis functions that can be used to describe spectral patterns efficiently), in order to further reduce the complexity of the input data [@humphrey2013data]. More recently, Tralie and Bendich cast the cover song identification problem as matching similar yet potentially offset, scaled and rotated patterns in high-dimensional spaces, treating MFCC representations as point-cloud embeddings representing songs [@tralie2015cover]. Another important aspect of computational music analysis where machine intelligence has been applied is that of *onset detection*. Onset detection refers to the problem of identifying the beginning of notes in audio representations, and it has been widely studied given its fundamental role in music information analysis. You and Dannenberg proposed a semi-supervised scheme for onset detection in massively polyphonic music, in which more straightforward signal processing techniques, such as thresholding, are likely to fail due to the difficulty of disambiguating multiple adjacent notes with overlapping spectral profiles [@you2007polyphonic].
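The simple energy-based detection that such thresholding builds on is often implemented as spectral flux - the half-wave-rectified increase in magnitude between consecutive short-time spectra. The following is an illustrative sketch (the function names and the naive local-maximum peak picking are ours, not any cited system's exact rule):

```python
import numpy as np

def spectral_flux(frames):
    """Half-wave-rectified spectral flux: the summed positive magnitude
    increase between consecutive Hann-windowed short-time spectra.
    `frames` has shape (n_frames, frame_length)."""
    mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    diff = np.diff(mags, axis=0)            # change into each next frame
    return np.sum(np.maximum(diff, 0.0), axis=1)

def pick_onsets(flux, threshold):
    """A frame boundary is an onset candidate if its flux is a local
    maximum above the threshold."""
    onsets = []
    for i in range(1, len(flux) - 1):
        if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]:
            onsets.append(i)
    return onsets
```

On a toy signal of silence followed by a sine burst, the flux spikes exactly at the transition into the first sounding frame; on real polyphonic audio, as the text notes, overlapping partials make this far less clean.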
In their scheme, to avoid the necessity of hand-labeling the countless onsets, audio-to-score alignment is used to estimate note onsets automatically. Because score alignment is done via chroma vectors, which only provide crude temporal estimates (on the order of 50 to 250ms), a trained support vector machine classifier is used to refine these results. Later, Benetos et al. showed that using the auditory spectrum representation can significantly improve onset detection [@benetos2009pitched]. Inspired by both computational and psycho-acoustical studies of the human auditory cortex, the auditory spectrum model consists of two stages: a spectral estimation model (designed to mimic the cochlea in the auditory system) and a spectral analysis model. Extracting the group delay (the derivative of phase over frequency) [@holzapfel2008beat] and spectral flux (the detection of sudden positive energy changes in the signal) [@bello2005tutorial], the authors achieved substantial performance improvements over more straightforward Fourier-based onset detection [@benetos2009pitched]. More recently, Schluter and Bock were able to significantly improve on previous results by training a convolutional neural network for the purpose of beat onset detection [@schluter2014improved]. The notion of onset detection naturally leads to another core property of music that has been studied computationally - *beat perception*. The beat of a musical piece is its basic unit of time. More concretely, by “beat perception” we refer to the detection of sequences of temporal emphases that induce the perceived rhythm of a musical piece. We have touched on the issue of beat detection explicitly when we discussed cover song identification (when discussing the works of Ellis et al. [@ellis2006identifying] and Serra et al.
[@serra2008chroma]), but in truth the issue of beat tracking is present in almost any task that involves the comparative analysis of audio sequences (in symbolic representations the issue of beat tracking is significantly less challenging for obvious reasons). Raphael introduced a generative model that captures the simultaneous nature of rhythm, tempo and observable beat processes and utilized it for automatic beat transcription. Given a sequence of onset times, a sequence of measure positions, and a Gaussian tempo process, a graphical model is used to describe the process by which these sequences are connected. Using maximum a posteriori inference, the sequence of beats is produced [@raphael2001automated]. Alonso et al. defined the notion of spectral energy flux (which we mentioned previously in the context of onset detection) to approximate the derivative of the energy per frequency over time, and used it for efficient beat detection [@alonso2004tempo]. Paulus and Klapuri combined temporal and spectral features in an HMM-based system for drum transcription [@paulus2007combining]. Temporal patterns are modeled as a Gaussian Mixture Model, and are combined with a hidden Markov Model that considers the different drum combinations, and the drum sequence is inferred via maximum likelihood. Gillet and Richard also tackled drum transcription specifically, but took a different approach, training a supervised N-gram model for interval sequences [@gillet2007supervised]. In their method, after extracting initial predictions based on the N-gram model, a pruning stage takes place in an unsupervised fashion, by reducing the approximate Kolmogorov complexity of the drum sequence. Le Coz et al. proposed a different approach altogether to beat extraction, which does not rely on onset detection, but rather on segmentation [@le2010segmentation].
In their paper, they divided each note into quasi-stationary segments reflecting (approximately) the attack, decay, sustain and release of the notes via forward-backward divergence [@andre1988new], and reconstructed the beat sequence directly from the resulting impulse train via Fourier analysis. Beat extraction is closely related to *audio-to-score alignment* and score following - the task of matching audio to a score in an online fashion (we have already touched on this subject in the context of melody extraction and onset detection). Dixon proposed an application of the Dynamic Time Warping algorithm for this purpose [@dixon2005line]. Dynamic Time Warping is a well-known dynamic programming algorithm for finding patterns in time series data by aligning two time-dependent sequences [@berndt1994using], and its application in the context of aligning scores to audio data is self-evident (it has also been used in contexts such as cover song identification, which we have discussed previously). Pardo and Birmingham tackled score following from a probabilistic perspective [@pardo2005modeling]. In their paper, they treated the score as a hidden Markov model, with the audio as the observation sequence, reducing score following to the problem of finding the most likely state at a given point, which can be done via Viterbi-style dynamic programming. In a recent paper, Coca and Zhao employed network analysis tools to recover rhythmic motifs (represented as highly connected graph sub-components) from MIDI representations of popular songs [@coca2016musical]. Melody, harmony and rhythm modeling, and score alignment, all naturally lead to the task of overall *musical structure analysis*. This problem has been studied as well, from multiple directions. Kameoka et al. employed expectation-maximization to recover the overall harmonic-temporal structure of a given piece. Abdallah et al. proposed a Bayesian approach to clustering segments based on harmony, rhythm, pitch and timbre.
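The dynamic time warping recurrence at the heart of several of the alignment systems above is compact enough to sketch in full. The following is a textbook version with unit steps and no path constraints (real score followers add windowing and online variants):

```python
import numpy as np

def dtw_cost(x, y):
    """Dynamic-time-warping alignment cost between two feature sequences
    (rows are frames).  D[i, j] is the cheapest cost of aligning the
    first i frames of x with the first j frames of y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])   # local frame distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

The key property for alignment is that a time-stretched rendition of the same contour (frames repeated, as in a slower performance) still aligns at zero cost, while genuinely different material does not. The naive table fill is quadratic in sequence length, which is why the large-scale systems discussed earlier avoid pairwise DTW.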
Peeters applied spectral analysis to the signal envelope to recover the beat properties of recorded music [@peeters2007sequence]. His approach was to utilize MFCC and pitch class profile features, construct higher-order similarity matrices, and infer the structure via maximum likelihood inference. Mueller and Ewert jointly analyzed the structure of multiple aligned versions of the same piece to improve both efficiency and accuracy [@muller2008joint]. This type of analysis is done by finding paths in the pairwise similarity matrix of chroma vector sequences and using them to partially synchronize subsequences in both pieces. Bergeron and Conklin designed a framework for encoding and recovering polyphonic patterns in order to analyze the temporal relations in polyphonic music [@bergeron2008structured]. To achieve this sort of encoding, they proposed a polyphonic pattern language inspired by algebraic representations of music, which can be seen as a formal logic derivation system for harmonic progressions. From a more utilitarian perspective, as an example of structure analysis as a preprocessing step for other purposes, Mauch et al. used patterns recovered from music structure to enhance chord transcription. Harmonic progressions in Western music tend to obey contextual and structural properties (consider, for instance, the cadence, a typical harmonic progression signifying the end of a musical phrase). Specifically, in their work, Mauch et al. leveraged repetitions in sequences to improve chord extraction by segmenting the raw sequence and identifying those repetitions. From a different perspective, Kaiser and Sikora used nonnegative matrix factorization to recover structure in audio signals [@kaiser2010music]. The nonnegative matrix factorization is applied to the timbre self-similarity matrix, and regions of acoustically similar frames in the sequence are segmented.
Another unsupervised approach for overall structure analysis is described in more recent work by McFee and Ellis, who employed spectral clustering to analyze song structure. They constructed a binary version of the self-similarity matrix, which is subsequently interpreted as an unweighted, undirected graph whose vertices correspond to samples. Then, spectral clustering (through Laplacian decomposition) is applied, with the resulting eigenvectors inducing a hierarchy of self-similar segments. In a somewhat related recent paper, Madsen et al. learned a pairwise distance metric between segments to predict temporally-dependent emotional content in music [@madsen2014modeling]. A research topic that is related to structure analysis, beat perception, melody, and chord extraction is that of *motive identification* - the extraction of key thematic subject matter from a musical piece. To mention a few papers from the past 15 years, Juhasz studied the application of self-organizing maps and dynamic time warping for the purpose of identifying motives in a corpus of 22 folk songs [@juhasz2009motive]. Dynamic time warping is used to search for repeated subsequences in melodies (in a way conceptually related to how self-similarity matrices work), and then these sequences are fed to a self-organizing map, extracting the most prominent abstracted representations of the core motifs and their correspondence relationships. Lartillot framed the motive extraction problem as combinatorially identifying repeated subsequences in a computationally efficient manner [@lartillot2005efficient]. The subsequences are multidimensional, as they comprise both melodic and rhythmic properties. Lartillot later revisited and refined this approach, tested it on the Johannes Kepler University Patterns Development Database [@collins2013discovery], and was able to show it recovers meaningful motivic patterns.
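The spectral-clustering view of structure described above can be illustrated on a toy two-section sequence. The sketch below is a deliberately simplified two-cluster version (names are ours): it binarizes the self-similarity matrix, takes the bottom eigenvectors of the graph Laplacian, and separates frames by their embedding; real systems run k-means on the embedding and use richer affinities.

```python
import numpy as np

def two_segment_labels(features, threshold=0.5):
    """Split a feature sequence (frames x dims) into two groups by
    spectral clustering of its binary self-similarity graph."""
    # binary affinity: frames closer than `threshold` are connected
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    A = (dists < threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    # unnormalized graph Laplacian and its bottom-2 eigenvectors
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    embedding = vecs[:, :2]
    # frames whose embedding row differs from frame 0's belong to the
    # other cluster (adequate for two well-separated blocks)
    return (np.linalg.norm(embedding - embedding[0], axis=1) > 1e-3).astype(int)
```

On a sequence whose first half sits near one feature value and whose second half sits near another, the two connected components of the affinity graph fall out directly from the Laplacian's null space.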
Lastly, it is worth mentioning another example of the application of AI towards musicological problems - *performance analysis*. The rise in corpora of recorded music has both facilitated and necessitated the application of algorithmic approaches to comparatively analyze multiple recordings of the same pieces. Several good examples of such computational methods include the work of Madsen and Widmer, who applied string matching techniques to compare pianist styles [@madsen2006exploring]. In a related work, Sapp used rank similarity matrices for the purpose of grouping different performances by similarity [@sapp2007comparative]. Molina-Solana et al. introduced a computational expressiveness model in order to improve individual violinist identification [@molina2008using]. In past work, Liebman et al. applied an approach inspired by computational bioinformatics to analyze the evolution and interrelations between different performance schools by constructing an evolutionary tree of influence between performances [@liebman2012phylogenetic]. Other related works include that of Okumura et al., who employed stochastic modeling of performances to produce an “expressive representation” [@okumura2011stochastic]. More recently, van Herwaarden et al. trained multiple Restricted Boltzmann Machines (RBMs) to predict expressive dynamics in piano recordings [@van2014predicting].

Generation Tasks {#chap8:gen}
----------------

Thus far we have considered tasks in which intelligent software operates on existing pieces of music as input. However, there is also a wide array of work on employing artificial agents for the purpose of creating music. The autonomous aspect of algorithmic composition has been routinely explored in various artistic contexts [@nierhaus2009algorithmic].
However, while considered by some as the “holy grail” in computer music and the application of AI to music, less scientific attention has been placed on AI for musical content generation compared to other music AI problems.[^3] This gap owes at least in part to the fact that evaluating the quality of computer-generated content is very difficult, for reasons discussed in Section \[chap8:eval\]. In many ways, the task of *playlist generation*, or recommending music in a sequential and context-dependent manner, can be perceived as lying at the intersection of recommendation and generation. In the past 15 years, multiple works have studied machine learning approaches to creating meaningful song sequences. Maillet et al. [@maillet2009steerable] treated the playlist prediction problem as a supervised binary classification task, with pairs of songs in sequence as positive examples and random pairs as negative ones. McFee and Lanckriet [@mcfee2011natural] examined playlists as a natural language model induced over songs, and trained a bigram model for transitions. Chen et al. [@chen2012playlist] took a similar Markov approach, treating playlists as Markov chains in some latent space, and learned a metric representation for each song without reliance on audio data. Zheleva et al. [@zheleva2010statistical] adapted a Latent Dirichlet Allocation model to capture music taste from listening activities across users and songs. Liebman et al. [@DJMC] borrowed from the reinforcement learning literature, learning a model of both song and transition preferences and then employing a Monte Carlo search approach to generate song sequences. Wang et al. [@wang2013exploration] considered the problem of song recommendations as a bandit problem, attempting to efficiently balance exploration and exploitation to identify novel songs in the playlist generation process, and very similar work has been done by Xing et al. [@xing2014enhancing] towards this purpose as well.
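The “playlists as sentences” view mentioned above can be sketched with a smoothed bigram transition model. This is an illustrative toy version (function and variable names are ours; real systems train over large playlist corpora and richer features):

```python
from collections import Counter, defaultdict

def train_bigram(playlists, alpha=1.0):
    """Bigram transition model over songs with add-alpha smoothing,
    treating each playlist as a 'sentence' over a song vocabulary."""
    vocab = sorted({s for p in playlists for s in p})
    counts = defaultdict(Counter)
    for p in playlists:
        for prev, nxt in zip(p, p[1:]):
            counts[prev][nxt] += 1

    def prob(prev, nxt):
        # smoothed conditional probability P(nxt | prev)
        total = sum(counts[prev].values()) + alpha * len(vocab)
        return (counts[prev][nxt] + alpha) / total

    return prob
```

Add-alpha smoothing keeps unseen transitions from receiving zero probability, which matters when sampling or scoring novel song sequences.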
Novelty and diversity in themselves have also been studied as objectives of playlist generation. Logan and Salomon [@logan2001music; @logan2002content] considered novelty in song trajectories via a measure which captures how similar songs are to one another in a spectral sense. Lehtiniemi [@lehtiniemi2008evaluating] used context-aware cues to better tailor a mobile music streaming service to user needs, and showed that using such cues increases the novelty experienced by users. More recently, Taramigkou et al. [@taramigkou2013escape] used a combination of Latent Dirichlet Allocation with graph search to produce more diversified playlists that are not pigeonholed to overly specific tastes, which can lead to user fatigue and disinterest. Another task of a generative nature is that of *expressive performance*. It is naturally closely related to music performance analysis, but rather than analyzing how humans perform music expressively, the emphasis in this task is on imparting computational entities with the ability to generate music that would seem expressive to a human ear. Early modern approaches to this problem include the work of de Mantaras et al., who applied case-based reasoning for the purpose of algorithmic music performance [@de2002ai], and that of Ramirez and Hazan, who used a combination of k-means clustering and classification trees to generate expressive performances of Jazz standards [@ramirez2006tool]. Ramirez et al. later proposed a sequential covering evolutionary algorithm to train a model of performance expressiveness based on Jazz recordings [@ramirez2007inducing]. Diakopoulos et al. proposed an approach for classifying and modeling expressiveness in electronic music, which could also be harnessed for generating automatic performances [@diakopoulos200921st]. The challenge of expressive performance has been of particular interest in robotic platforms. Murata et al.
studied the creation of a robotic singer which was able to follow real-time accompaniment [@murata2008robot]. In a somewhat related paper, Xia et al. presented a robotic dancer which tracked music in real time and was trained to match the expressiveness of the music with corresponding dance movement [@xia2012autonomous]. Another example is the work of Hoffman and Weinberg, who presented Shimon, a robotic marimba player, and borrowed ideas from the world of animation to make Shimon expressive not just musically, but also visually [@shimon]. Shimon was geared towards *live improvisation*, and indeed improvisation is yet another music generation goal for artificial systems. Eck and Schmidhuber used long short-term memory recurrent neural networks to train a generative model of Jazz improvisation [@eck2002finding]. In a different contemporary work, Thom employed a learned probabilistic model for interactive solo improvisation with an artificial agent [@thom2001machine; @thom2000unsupervised]. Assayag and Dubnov trained Markov models for music sequences, and then employed a type of string-matching structure called a factor oracle to facilitate algorithmic improvisation [@assayag2004using]. Lastly, there has been some attention from an AI perspective on automatic music generation, though the study of this problem has been relatively limited, particularly due to the difficulty of evaluation (see Section \[chap8:eval\]). In a technical report, Quick borrowed ideas from Schenkerian analysis and chord spaces to create an algorithmic composition framework [@quick2010generating]. Kosta et al. proposed an unsupervised multi-stage framework for chord sequence generation based on observed examples [@kosta2012unsupervised]. From a very different perspective, Blackwell has applied multi-swarms to create an improvisational musical system [@blackwell2003swarm]. Very recently, Colombo et al. proposed deep RNN architectures for the purpose of melody composition [@colombo2016algorithmic].
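The Markov-model approach to sequence generation used by several of the systems above can be sketched as a toy first-order melody model (function names are ours; real improvisation systems layer richer structures, such as the factor oracles mentioned above, on top of such counts):

```python
import random
from collections import Counter, defaultdict

def train_markov(melodies):
    """First-order Markov model over note sequences: for each pitch,
    count which pitch follows it in the training melodies."""
    trans = defaultdict(Counter)
    for mel in melodies:
        for prev, nxt in zip(mel, mel[1:]):
            trans[prev][nxt] += 1
    return trans

def sample_melody(trans, start, length, seed=0):
    """Random walk through the learned transitions, weighted by count."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = trans[out[-1]]
        if not choices:          # dead end: no observed continuation
            break
        pitches, weights = zip(*choices.items())
        out.append(rng.choices(pitches, weights=weights)[0])
    return out
```

By construction, every consecutive pair in a sampled melody is a transition observed in the training data - which is exactly the strength and the limitation of low-order Markov generation.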
Most recently, Dieleman et al. compared different deep architectures for generating music in raw audio format at scale [@dieleman2018challenge], and Huang et al. applied deep sequential generative models with self-attention to generate structured compositions, achieving state-of-the-art performance in synthesizing keyboard music [@huang_transformer]. Similarly, quite recently, Payne proposed MuseNet, a deep neural network model that can generate compositions several minutes in length for ensembles of up to ten different instruments, reasoning about musical styles in the process [@musenet]. For an interesting overview of AI methods in algorithmic composition in particular, see [@fernandez2013ai].

Overview of Common Representations {#chap8:repr}
==================================

Thus far, we have focused on breaking down the wide range of musical tasks from a purpose-oriented perspective. However, an equally important perspective involves the types of input used for these tasks. As noted by Dannenberg [@dannenberg1993music], representation of the music itself can be viewed as a continuum “ranging from the highly symbolic and abstract level denoted by printed music to the non-symbolic and concrete level of an audio signal”. Additionally, one may consider all the additional related information, such as lyrics, tags, artist’s identity, etc. as part of the representation. As briefly mentioned in Section \[chap8:tax\], we consider three main types of information categories for music:

- Symbolic representations - logical data structures representing musical events in time, which may vary in level of abstraction. Examples of different levels of abstraction include but are not limited to the extent of encoded detail regarding pitch, registration, timbre, and performance instructions (accents, slurs, etc).

- Audio representations - this sort of representation lies at the other end of the continuum mentioned above, capturing the audio signal itself.
Despite its seeming simplicity, here too there is a level of nuance, encompassing the fidelity of the recording (levels of compression, amplitude discretization and so forth), or the level of finesse in representations which perform signal processing on the original audio (such as the ubiquitous chroma and MFCC audio representations we have already mentioned in Section \[chap8:tasks\] and discuss in further detail later in this section).

- Meta-musical information - all the complementary information that can still be legitimately considered part of the musical piece (genre classification, composer identity, structural annotations, social media tags, lyrics etc).

Of these three broad categories, only the first two are within the scope of this survey, since we explicitly focus on aspects of music analysis relating to the music itself, rather than applying machine learning directly and/or exclusively on the complementary information such as lyrics, social media context, or general artist profiles. A visual summary of the contents of this section is presented in Figure \[chap8:fig\_repr\].

![Visual high-level overview of music representations used in music AI research. For reasons described in the text, we only consider the first two categories in this article.[]{data-label="chap8:fig_repr"}](overview_of_repr_survey.png){width=".8\linewidth"}

We now expand on the first two types of input.

Symbolic Representations for Music
----------------------------------

One of the earliest and most common approaches to representing music inputs is via symbolic formats. In essence, a symbolic representation of music is the conceptual abstraction, or the blueprint, of that music. Musical scores using Western notation, for instance, serve exactly as such blueprints. In its most basic form, it includes information on pitches, their length, and when they are played.
Additional relevant information includes when each note is released, the amplitude of each note, and the attack (simply put, how rapid the initial rise in amplitude is and how the amplitude decays over time). Classical scores also include a wide range of additional data regarding performance, such as performance instructions, sound effects, slurs, accents, and so forth, all of which can often be represented in symbolic formats as well. Additional information such as timbre can be represented, typically by using a preexisting bank of instrument representations. While this representation isn’t as rich as an audio recording, for certain genres, such as classical music or musical theater, which already rely on scores, it is an incredibly informative and useful resource that eliminates multiple levels of difficulty in dealing with complex auditory data, enabling an artificial agent to know at each moment the core information about pitch, dynamics, rhythm and instrumentation. One of the most common “blueprint” formats is the MIDI protocol. Since its initial introduction in the early ’80s, the MIDI (Musical Instrument Digital Interface) format has served as part of the control protocol and interface between computers and musical instruments [@loy1985musicians]. The MIDI format specifies individual notes as “events” represented as tuples of numbers describing varied properties of the note including pitch, velocity (amplitude), vibrato and panning. These note events are sequenced to construct a complete piece, containing up to 16 separate channels of information. These channels typically represent instruments, since each channel can be associated with a separate sound profile, but sometimes the same instrument can be partitioned into multiple channels.
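The event-tuple view of MIDI described above can be made concrete with a schematic data structure. This is illustrative only - a conceptual stand-in, not the byte-level MIDI wire format:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """A minimal MIDI-style note event: onset/offset in ticks, pitch as
    a MIDI note number (60 = middle C), velocity 0-127, and one of the
    16 channels."""
    onset: int
    offset: int
    pitch: int
    velocity: int
    channel: int = 0

def channel_pitches(events, channel):
    """Pitches sounding on one channel, in onset order."""
    return [e.pitch for e in sorted(events, key=lambda e: e.onset)
            if e.channel == channel]
```

In the usage below, channel 9 stands in for the percussion channel (channel 10 in the 1-based numbering of the General MIDI convention).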
Due to its long history and ubiquity, much of the literature utilized this file format as an input source (See [@rizo2006pattern; @yeshurun2006midi; @yang2017midinet; @grosche2010makes; @rauber2002using; @madsen2007towards; @hillewaere2010string; @tsai2005query; @anan2012polyphonic; @mardirossian2006music] for a very partial list of examples). A different approach to symbolic representation aims to digitally represent musical scores, similarly to how traditional music engraving generates scores for mass printing. In the past two decades, several such formats have emerged, including LilyPond [@nienhuys2003lilypond], Humdrum “kern” [@huron2002music; @sapp2005online] and MusicXML [@good2001musicxml], among others. While this list is not comprehensive, in terms of symbolic music analysis these formats are largely equivalent and can be converted from one to another with some loss of nuance, but preserving most key features. Examples of research utilizing data in these formats are plentiful and varied (see [@sapp2005online; @sinclair2006lilypond; @cuthbert2011feature; @antila2014vis], for, once again, a very partial list of examples). The advantage of using such music engraving representations, particularly from a musicology perspective, is that they are designed to capture the subtleties of Western notation, including concepts such as notes, rests, key and time signatures, articulation, ornaments, codas and repetitions, etc. This richness of representation is in contrast to the MIDI format, which is conceptually closer to raw audio in terms of abstractions and is designed to describe specific pitched events in time, and is thus less suited to capture the full complexity of more sophisticated music scoring. On the flipside, that is also the relative strength of MIDI compared to these other formats - it is much simpler to parse and process.
Furthermore, from a practical standpoint, MIDI largely predates these other formats and is designed as an interface protocol rather than a music engraving language, and is thus far more commonly supported by electronic musical instruments, devices, and software.

Audio Representations and Derived Features
------------------------------------------

A more intuitive way to represent music is through digital sampling of the raw audio, as is done on audio CDs and in the canonical WAV and AIFF file formats. In its crudest form, digitizing music audio simply captures amplitude over time in either a single (mono) or dual (stereo) output channel. The quality of recording is dependent on two main aspects:

- The number of bits used to represent amplitudes, which determines quantization noise.

- The sampling frequency, which determines the range of frequencies captured in the digitization process. The standard sampling frequency of 44100Hz ensures that no human-audible frequencies are lost.

To these considerations one may also add the possibility of using compression, typically at some cost to frequency resolution [@pye2000content]. Historically, working directly on raw audio has proven impractical. First, it has traditionally been prohibitively expensive in terms of data storage and processing cost. Second, and more importantly, it has been impractical in terms of the ability of AI software to extract meaningful information from such a low-level representation. For reference, this pattern is somewhat analogous to the historical difficulty in using raw pixel data in visual processing. For this reason, similar to how visual processing resorted to more expressive, condensed representations such as SIFT [@lowe1999object] and HOG [@dalal2005histograms] features, different features constructed from raw audio have been commonly used.
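The first of the two fidelity factors listed above is easy to quantify: uniform quantization to b bits yields roughly 6dB of signal-to-noise ratio per bit, a standard rule of thumb that the following illustrative sketch reproduces (function names are ours):

```python
import numpy as np

def quantize(signal, bits):
    """Uniform quantization of a signal in [-1, 1) to 2**bits levels."""
    half_levels = 2 ** bits / 2
    return np.round(signal * half_levels) / half_levels

def snr_db(signal, quantized):
    """Signal-to-quantization-noise ratio in decibels."""
    noise = signal - quantized
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))
```

Quantizing a sine at 8 and at 16 bits and comparing the two SNRs shows the expected gap of roughly 8 bits x 6dB, i.e. about 48dB.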
The most common are the Mel-frequency cepstral coefficients (MFCC) [@logan2000mel], a derivative of the Fourier transform which captures the short-term power spectrum of a sound. The MFCC is typically constructed using successive temporal windows, thus representing auditory information over time. These coefficients were first used in speech recognition [@hasan2004speaker], and over the past two decades were shown to be extremely useful in music analysis, serving as a condensed but expressive representation of spectrum over time (see [@proutskova2009you; @schuller2010vocalist; @tomasik2009using; @han2014hierarchical; @marolt2009probabilistic] for a few examples). To reiterate, the symbolic and the auditory aspects of music representation aren’t separate categories but rather the two ends of a continuum. A good example of a commonly used representation that lies somewhere in between these two ends is that of chroma features [@ellis2007identifyingcover]. As we’ve briefly mentioned in Section \[chap8:tasks\], chroma features record the intensity associated with each of the 12 semitones in an octave and thus, when windowed, capture both melodic and harmonic information over time. Since this representation is typically extracted by analyzing the spectrum of the music, and since it strives to achieve a succinct representation of the notes physically heard throughout a recording, it has something of the auditory representation. At the same time, it also reduces raw audio to a series of pitch information over time, thus also retaining something of the symbolic. There is an inherent trade-off in choosing a music representation. Audio information is ubiquitous and more immediately useful for large-scale common applications. At the same time, raw recordings are harder to analyze, store and query.
Symbolic representations are elegantly concise ways of storing and relaying a great deal of the audio information Western music traditionally cares about (which is in part why reading sheet music is still considered a fundamental skill for musicians), and such representations can be used efficiently for many analysis and retrieval tasks. They are, however, generally less common, less valuable for mass use, and inherently partial in the sense that ultimately crucial auditory information is nonetheless lost. In practice, the choice of representation in the literature is more often than not dictated by availability, ease of use and the nature of the studied task. In the past few years, as part of the rising popularity and success of deep learning [@lecun2015deep], multiple papers have explored the prospects of using deep artificial neural networks to autonomously learn representations - i.e., learn meaningful features - from raw audio. Lee et al. [@lee2009unsupervised] have shown that generic audio classification features learned using convolutional deep belief networks were also useful in 5-way genre classification. Hamel and Eck also explored deep belief nets for both genre classification and automatic tagging, and showed that their learned features outperform the standard MFCC features [@hamel2010learning]. Henaff et al. used sparse coding to learn audio features and showed this approach to be competitive with the state of the art in genre classification on a commonly used dataset [@henaff2011unsupervised]. Humphrey et al. surveyed various aspects of deep feature learning, and analyzed how the proposed architectures can be seen as powerful extensions of previously existing approaches [@humphrey2012moving].
While these new approaches are certainly promising, such architectures have not fully supplanted the previously designed representations discussed in this section, and are not a replacement for existing music interface protocols such as MIDI and music-engraving languages such as LilyPond.

Overview of Technique {#chap8:technique}
=====================

A wide variety of machine learning and artificial intelligence paradigms and techniques have been applied in the context of music domains. From a machine learning and artificial intelligence research perspective, it is of interest to examine this range of techniques and the specific musical domains where they were applied. Due to the extensive nature of the related literature and the wide range of musical tasks where the following methods have been used, this list cannot be entirely comprehensive. To the best of our knowledge, however, it is representative of the full array of methods employed. A visual summary of the contents of this section is presented in Figure \[chap8:fig\_tech\].

![Visual high-level overview of algorithmic techniques used in music AI research.[]{data-label="chap8:fig_tech"}](overview_of_tech_survey.png){width=".8\linewidth"}

Machine Learning Approaches
---------------------------

Considering the long list of music informatics tasks described in Section \[chap8:tasks\], it is clear that many of them can be viewed as machine learning problems. Indeed, a broad spectrum of machine learning techniques has been used to tackle them. Perhaps one of the oldest documented machine learning approaches for musical tasks is *support vector machines* (SVM) and kernel methods. As mentioned in Section \[chap8:classification\], in an early example of computational approaches to music in general, Marques and Moreno utilized SVM for instrument classification [@marques1999study]. Xu et al. used a multi-layer SVM approach for genre classification [@xu2003musical].
Their approach was to use different features representing the spectrum of the audio and hierarchically partition the input, first into Pop/Classic or Rock/Jazz, and then within each category (all in all training three SVM models). A similar task was also pursued by Mandel and Ellis, who studied the application of SVM on song-level features for music classification [@mandel2008multiple]. Meng and Shawe-Taylor studied other types of feature models, namely multivariate Gaussian models and multivariate autoregressive models, for short time window feature representation, with the ultimate goal of improved classification results over 11 genre categories [@shawe2005investigation]. Han et al. used the strongly related technique of support vector regression for emotion classification in music [@han2009smers]. Their proposed SMERS system extracts features from the raw audio, maps a given audio fragment from its feature representation to Thayer’s two-dimensional emotion model, and trains a support vector regressor for future prediction. Helén and Virtanen used support vector machines to classify audio components as drums vs. pitched instruments [@helen2005separation]. Ness et al. applied a stacked SVM approach for automatic music tagging, using the key insight that the probabilistic output of one SVM can be used as input for a second layer SVM in order to exploit possible correlations between tags [@ness2009improving]. Maddage et al. trained an SVM classifier to distinguish purely instrumental music sections from ones mixing instruments and vocals, for the purpose of song structure analysis [@maddage2003svm]. Gruhne et al. used SVM classifiers for phoneme identification in sung lyrics in order to synchronize audio with text [@gruhne2007detecting].
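As an illustration of this kind of hierarchical partitioning, here is a minimal sketch that first splits inputs into a Pop/Classic or Rock/Jazz super-class and then classifies within each group, training three models in total. A simple nearest-centroid classifier stands in for the SVMs, and the two-dimensional "spectral features" are synthetic; all names and data here are invented for illustration.

```python
import numpy as np

class CentroidClassifier:
    """Minimal stand-in for an SVM: classify by nearest class centroid."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: X[np.array(y) == c].mean(axis=0) for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels, key=lambda c: np.linalg.norm(x - self.centroids[c]))

def train_hierarchy(X, genres):
    """Three classifiers: one for the coarse split, one per coarse group."""
    coarse = ['pop_classic' if g in ('pop', 'classic') else 'rock_jazz' for g in genres]
    top = CentroidClassifier().fit(X, coarse)
    sub = {}
    for group in ('pop_classic', 'rock_jazz'):
        idx = [i for i, c in enumerate(coarse) if c == group]
        sub[group] = CentroidClassifier().fit(X[idx], [genres[i] for i in idx])
    return top, sub

def predict_genre(x, top, sub):
    # Route through the coarse classifier, then the matching fine classifier.
    return sub[top.predict(x)].predict(x)

# Synthetic 2-D "spectral features", one tight cluster per genre.
rng = np.random.default_rng(0)
centers = {'pop': (0, 0), 'classic': (0, 5), 'rock': (5, 0), 'jazz': (5, 5)}
X, genres = [], []
for g, c in centers.items():
    X.extend(rng.normal(c, 0.3, size=(25, 2)))
    genres.extend([g] * 25)
X = np.array(X)

top, sub = train_hierarchy(X, genres)
print(predict_genre(np.array([0.1, 4.9]), top, sub))  # a point near the 'classic' cluster
```

The appeal of the hierarchical design is that each classifier only has to solve an easier two-way problem on a narrower slice of the data.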
While useful, the overall popularity of SVM approaches for music informatics seems to have somewhat faded in the past few years, perhaps reflecting their diminishing popularity in the machine learning community in general. Another well-established and frequently used machine learning approach for musical tasks is that of *probabilistic methods*. Standard examples include Hidden Markov Models (HMM), which are of obvious use given the sequential and partially observable nature of music. In early examples, Batlle and Cano used competitive HMMs (or Co-HMMs), a variation on the standard HMM paradigm, for automatic music segmentation [@batlle2000automatic]. In their study, Co-HMMs were better suited for music partitioning since they required far less a priori domain knowledge to perform well. Durey et al. used HMMs for the purpose of spotting melodies in music [@durey2001melody], extracting notes from raw audio and treating them as observations in a graphical music language model. Eichner et al. were able to use HMMs for instrument classification. In their paper, they manually collected fragments of solo recordings of four instruments: classical guitar, violin, trumpet and clarinet, and trained separate HMMs for each instrument, leveraging the fact that different instruments induce different note transition mechanics [@eichner2006instrument]. Sheh and Ellis used HMMs for the more complicated task of chord recognition and segmentation [@sheh2003chord], while Noland and Sandler trained an HMM for key estimation [@noland2006key]. Extending these directions, Burgoyne and Saul applied a hidden Markov model to train Dirichlet distributions for major and minor keys on normalized pitch class profile vectors, for the eventual purpose of tracking chords and keys over time [@burgoyne2005learning]. Chen et al. used a duration-explicit HMM (or DHMM) for better chord recognition [@chen2012chord].
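The model-per-instrument scheme of Eichner et al. - one model per class, classification by maximum likelihood - can be sketched in a drastically simplified form. The sketch below uses plain observable Markov chains over pitch classes in place of full HMMs, and the toy training sequences are invented for illustration; the point is only the classification mechanics.

```python
import numpy as np

def transition_matrix(sequences, n_states, alpha=1.0):
    """Estimate a note-transition matrix with add-alpha smoothing."""
    counts = np.full((n_states, n_states), alpha)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Log-probability of a note sequence under a transition matrix."""
    return sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:]))

def classify(seq, models):
    """Pick the instrument whose model makes the sequence most likely."""
    return max(models, key=lambda name: log_likelihood(seq, models[name]))

# Toy training data: pitch-class sequences (0-11) per instrument, with
# deliberately different transition habits.
train = {
    'violin':  [[0, 2, 4, 5, 7, 9, 11, 0] * 4],   # stepwise, scale-like motion
    'trumpet': [[0, 4, 7, 0, 4, 7, 4, 0] * 4],    # arpeggiated triads
}
models = {name: transition_matrix(seqs, 12) for name, seqs in train.items()}

print(classify([0, 2, 4, 5, 7], models))  # stepwise fragment
```

A real HMM adds hidden states and an emission model on top of this, but the decision rule - train one generative model per class, score a new fragment under each - is the same.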
DHMMs work in different time resolutions to estimate the chord sequence by simultaneously estimating chord labels and positions. In their paper, Chen et al. were able to show that explicitly modeling the duration of chords improved recognition accuracy. Considering a different approach, Papadopoulos and Tzanetakis applied Markov Logic Networks (MLNs) for modeling chord and key structure, connecting the probabilistic approach with logic-based reasoning [@papadopoulos2012modeling]. In practice, their approach is to take Markov networks that encode the transitional chord dynamics of particular scales and combine them with a first-order knowledge base that encodes rules such as “A major chord implies a happy mood”. Leveraging the generative capabilities of HMMs, Morris et al. proposed a system that uses a Hidden Markov Model to generate chords to accompany a vocal melody [@morris2008exposing]. More recently, Nakamura et al. studied the application of autoregressive Hidden Semi-Markov Models for score following [@nakamura2015autoregressive], as well as for recovering piano fingering [@nakamura2014merged]. In the context of ethnomusicology, Jancovic et al. applied HMMs for automatic transcription of traditional Irish flute music [@jancovic2015automatic]. *Graphical models* in general have been used in various ways in music domains. Raphael designed a graphical model for recognizing sung melodies [@raphael2005graphical] and for aligning polyphonic audio with musical scores [@raphael2004hybrid]. Kapanci and Pfeffer explored the related notion of graphical models for signal-to-score music transcription, modeling different aspects of the music such as rhythm and pitch as first-order Gaussian processes [@kapanci2005signal]. 
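The generative use of Markov models for accompaniment, as in the system of Morris et al., can be illustrated at its simplest by sampling from a chord-transition chain. The transition probabilities below are invented for illustration (the actual system additionally conditions the chords on the vocal melody):

```python
import random

# Hypothetical first-order chord-transition probabilities over Roman-numeral
# chords (I, IV, V, vi), loosely inspired by common pop progressions;
# these numbers are NOT taken from the cited paper.
TRANSITIONS = {
    'I':  [('IV', 0.4), ('V', 0.3), ('vi', 0.2), ('I', 0.1)],
    'IV': [('V', 0.5), ('I', 0.3), ('vi', 0.2)],
    'V':  [('I', 0.6), ('vi', 0.3), ('IV', 0.1)],
    'vi': [('IV', 0.5), ('V', 0.3), ('I', 0.2)],
}

def generate_progression(start='I', length=8, seed=None):
    """Sample a chord progression by walking the transition chain."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        options, weights = zip(*TRANSITIONS[chords[-1]])
        chords.append(rng.choices(options, weights=weights)[0])
    return chords

print(generate_progression(seed=42))
```

Conditioning each transition on the current melody note (as in the actual accompaniment system) turns this free-running chain into an HMM decoding problem.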
Pickens and Iliopoulos proposed a Markov Random Field (MRF) approach for general music information retrieval tasks [@pickens2005markov], citing the power of MRFs in handling non-independent features as their key strength, one that makes them inherently suitable for music tasks, in which various aspects of the features (pitch, timbre, tempo, etc.) are deeply interdependent. Hoffman et al. used a hierarchical Dirichlet process to estimate music similarity [@hoffman2008content]. Hu and Saul proposed a key profiling modeling technique that utilizes a latent Dirichlet allocation (LDA) topic model [@hu2009probabilistic]. The core insight in their paper was that by looking for commonly co-occurring notes in songs, it is possible to learn distributions over pitches for each musical key individually. Yoshii and Goto proposed a novel model for spectral representation called infinite latent harmonic allocation (iLHA) [@yoshii2010infinite]. Their model represents a Bayesian nonparametric approach in which each spectral basis is parameterized by means of a Gaussian mixture model (GMM), with both the number of bases and the number of partials being potentially infinite (in practice the least informative elements are zeroed out quickly and a finite approximation remains). In their paper they show this model is useful for multipitch analysis. More recently, Berg-Kirkpatrick et al. proposed a graphical model for unsupervised transcription of piano music, designing a complicated probabilistic activation model for individual keystrokes and inferring the most plausible sequence of key activations to produce a given spectrogram [@berg2014unsupervised]. Schmidt and Kim proposed a conditional random field (CRF) approach for tracking the emotional content of musical pieces over time [@schmidt2011modeling]. Later, the same authors would study the application of deep belief networks to learn better music representations, to be used later on in supervised learning tasks [@schmidt2013learning].
Another very current example of the application of deep generative models for musical tasks is the work of Manzelli et al., who applied a Long Short Term Memory network (commonly referred to as an LSTM) to learn the melodic structure of different styles of music, and then used the unique symbolic generations from this model as a conditioning input for an audio generation model [@manzelli2018conditioning]. In a different recent work, Korzeniowski and Widmer proposed an RNN-based probabilistic model that allows for the integration of chord-level language models with frame-level acoustic models, by connecting the two using chord duration models [@korzeniowski2018improved]. As illustrated by these last few examples, the concept of deep belief networks and deep generative models in general is a natural bridge between graphical models and artificial neural network architectures, which indeed constitute the next learning paradigm we will discuss. *Artificial Neural Networks* (ANN) are among the oldest paradigms of machine learning. As such, they are also among the oldest to have been used by computational researchers studying musical tasks. To mention several early modern examples, as early as 1997, Dannenberg et al. used ANNs, among other techniques, for musical style recognition [@dannenberg1997machine]. Kiernan proposed ANNs for score-based style recognition [@kiernan2000score], and Rauber et al. applied a self-organizing map (SOM) on psycho-acoustic features to learn a visualization of music datasets [@rauber2002using]. For some additional details on the prehistory of this approach, it is worth reviewing Griffith and Todd’s 1999 short survey on using ANNs for music tasks [@griffith1999musical]. In recent years, after an extended lapse in popularity, there has been a resurgence of ANNs via *deep architectures* (commonly dubbed “deep learning”). Naturally, these learning architectures have also been firmly embraced by researchers at the intersection of AI and music.
Boulanger-Lewandowski et al. studied audio chord recognition using Recurrent Neural Networks (RNNs) [@boulanger2013audio]. Herwaarden et al. applied Restricted Boltzmann Machines (RBMs) for predicting expressive dynamics in piano performances [@van2014predicting]. Bock and Schedl applied RNNs for automatic piano transcription [@bock2016joint] and for joint beat and downbeat tracking [@bock2016joint]. In the latter work, an RNN operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature for a Dynamic Bayes Network which models the bars, thus making this work another example of the fusion of deep architectures and graphical models. Krebs et al. also utilized RNNs for the purpose of downbeat tracking [@krebs2016downbeat], using a very similar RNN + Dynamic Bayes Network learning framework, but in that work they used beat-synchronous audio features rather than the spectrogram information. Humphrey et al. applied Convolutional Neural Networks (CNNs) for automatic chord recognition [@humphrey2012rethinking]. Humphrey has also been able to show the utility of deep architectures to learn better music representations [@humphrey2012moving]. CNNs were also recently used by Choi et al. for automatic tagging [@choi2016automatic]. In that paper, they use the raw mel-spectrogram as two-dimensional input, compare the performance of different network architectures, and study their prediction accuracy over the MagnaTagATune dataset. Vogl et al. applied RNNs for automatic drum transcription, training their model to identify the onsets of percussive instruments based on general properties of their sound [@vogl2016recurrent]. Liu and Randall applied bidirectional Long Short Term Memory networks (LSTMs), a form of RNNs, for predicting missing parts in music [@liu2016predicting].
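The CNN front-ends used in works like that of Choi et al. boil down to convolution and pooling over a time-frequency representation. The numpy-only sketch below (a toy 8x8 "mel-spectrogram" and a single hand-crafted kernel, all invented for illustration) shows one convolution + ReLU + max-pooling stage; real systems learn many such kernels from data.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Naive 2-D valid cross-correlation, as in a CNN layer."""
    kh, kw = kernel.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy "mel-spectrogram": a horizontal energy band, i.e. a sustained tone.
spec = np.zeros((8, 8))
spec[3, :] = 1.0

# A kernel that responds to horizontal energy; conv + ReLU + pool.
kernel = np.array([[-1.0, -1.0, -1.0],
                   [ 2.0,  2.0,  2.0],
                   [-1.0, -1.0, -1.0]])
features = max_pool(np.maximum(conv2d_valid(spec, kernel), 0))
print(features.shape)
```

The pooled feature map responds strongly only where the band is present, which is exactly the kind of local, translation-tolerant pattern detection that makes CNNs effective on spectrograms.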
Pretrained neural networks have also been shown useful for music recommendation and auto-tagging, for instance by Liang et al. [@liang2015content] and Van den Oord et al. [@van2013deep]. Recently, Durand and Essid proposed a conditional random fields approach for downbeat detection, with features learned via deep architectures, in yet another example of combining graphical models with deep learning models [@durand2016downbeat]. Another deep generative approach that has been rising in prominence in recent years is that of Generative Adversarial Networks, or GANs, and indeed those too have been used in music AI tasks. As a recent example, Dong et al. proposed MuseGAN, a symbolic-domain multi-track music synthesis framework trained on the Lakh dataset [@dong2018musegan]. Though somewhat beyond the scope of this paper, one of the most commonplace approaches for decomposing spectral data into individual components is that of *matrix factorization methods*, which can be viewed as an unsupervised learning technique, and were mentioned when discussing music AI tasks, for instance in the works of Panagakis et al., who presented a sparse multi-label linear embedding approach based on nonnegative tensor factorization and demonstrated its application to automatic tagging [@panagakisGDI], or Kaiser et al., who used these factorization techniques to recover musical structure [@kaiser2010music]. To name a few more examples, Masuda et al. applied semi-supervised nonnegative matrix factorization for query phrase identification in polyphonic music [@masuda2014spotting], while Sakaue et al. proposed a Bayesian nonnegative factorization approach for multipitch analysis [@sakaue2012bayesian]. Liang et al. proposed a Beta process nonnegative factorization and showed its potential usefulness in several tasks including blind source separation [@liang2013beta], and subsequently Poisson matrix factorization for codebook-based music tagging [@liang2014codebook].
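As a concrete illustration of nonnegative matrix factorization on spectral data, the sketch below factors a toy rank-2 "spectrogram" into spectral templates and their time activations using the classic Lee-Seung multiplicative updates. The data and dimensions are invented for illustration; real systems operate on STFT magnitude spectrograms.

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor V (freq x time, nonnegative) into W (freq x k) @ H (k x time)
    using Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral templates
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
template_a = np.array([1.0, 0.0, 0.5, 0.0])   # e.g. a pitched component
template_b = np.array([0.0, 1.0, 0.0, 0.8])   # e.g. a percussive component
activations = np.array([[1, 1, 0, 0, 1],
                        [0, 0, 1, 1, 1]], dtype=float)
V = np.outer(template_a, activations[0]) + np.outer(template_b, activations[1])

W, H = nmf(V, k=2)
print(np.abs(V - W @ H).max())  # reconstruction error; should be small
```

The nonnegativity constraint is what makes the learned columns of `W` interpretable as additive spectral parts, which is why NMF is so common for source separation and multipitch analysis.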
Another large family of machine learning models that have seen frequent use in musical domains are *decision trees*. To mention a few examples, Basili et al. applied decision trees for genre classification [@basili2004classification]. Lavner and Ruinskiy proposed a decision-tree based approach for fast segmentation of audio into music vs. speech [@lavner2009decision]. Herrera-Boyer and Peeters utilized a decision tree approach for instrument recognition [@herrera2003automatic]. West and Cox proposed a tree-based approach for learning optimal segmentations for genre classification [@west2005finding]. As in other domains, the benefits of applying *ensembles of classifiers* have not escaped the music informatics community. To mention a few examples, Tiemann et al. proposed an ensemble learning approach for music recommendation, generating many weak recommendations and combining them via learned decision templates [@tiemann2007ensemble]. Dupont and Ravet proposed a novel approach for instrument family classification using ensembles of t-SNE embeddings [@dupont2013improved]. Two particularly common ensemble approaches - boosting and random forests - have both been applied in music-related domains. Casagrande et al. used AdaBoost for frame-level audio feature extraction [@casagrande2005frame]. Turnbull et al. applied boosting for automatic boundary detection [@turnbull2007supervised]. Parker applied AdaBoost to improve a query-by-humming system [@parker2005applications]. Foucard et al. applied boosting for multiscale temporal fusion, later utilized for audio classification [@foucard2011multi]. In that paper, data from different timescales is merged through decision trees (serving as another example of the usage of this type of model in music tasks), which are then used as weak learners in an AdaBoost framework.
The performance of their proposed system was tested on both instrument classification and song tag prediction, showing that their model was able to improve on prediction using features from only one timescale. Anglade et al. applied random forests to learn harmony rules, which were subsequently applied to improve genre classification [@anglade2009genre]. Lastly, it is worth mentioning that though it has not been applied as extensively as other techniques, evolutionary computation has also been used for various music tasks. For instance, Tokui and Iba proposed a system for interactive composition via evolutionary optimization (with human feedback serving as a fitness function) [@tokui2000music]. Biles adapted genetic algorithms for music improvisation [@biles2007improvizing], and as in Section \[chap8:gen\], Ramirez and Hazan employed genetic computation for expressive music performance [@ramirez2007inducing]. While machine learning approaches may indeed be prevalent and ubiquitous in music (as in artificial intelligence research in general), other techniques have been applied as well. In the next subsection we will present two families of such methods: formal (or logic-based) approaches, and agent-based architectures. Formal Methods -------------- While the learning-based approaches listed above are primarily data driven, many approaches have been employed for music tasks that are inherently rule-based and rely on formal reasoning. We consider this set of techniques as formal methods. Historically, one of the earliest approaches to the computational understanding of music involved linguistic analysis of music structure. Lerdahl and Jackendoff’s seminal work on the generative theory of tonal music [@lerdahl1985generative] is one of the earliest examples of such an approach.
Since then, many musicians and researchers have attempted to both analyze and generate music using the derivational structure of *generative grammars* for music and other linguistic constructs [@rohrmeier2007generative; @de2009modeling; @mccormack1996grammar]. In a somewhat related work, Quick introduced the notion of chord spaces and applied concepts from Schenkerian analysis to define “production rules” for music generation [@quick2010generating]. As previously mentioned, Papadopoulos and Tzanetakis applied *Markov Logic Networks* for modeling chord and key structure [@papadopoulos2012modeling]. Bergeron and Conklin proposed a structured pattern representation for polyphonic music that defined construction rules for hierarchical patterns, and utilized pattern matching techniques to extract such descriptions from symbolic data [@bergeron2008structured]. In another relevant example, Abdoli applied fuzzy logic to classify traditional Iranian music [@abdoli2011iranian]. Lastly, though it has declined in fashion over the past 15 years, it is worth mentioning a sizable body of work on music generation through constraint satisfaction techniques. This approach is typified by formulating music rules as constraints and using constraint solving techniques for music generation. For further details and examples, see Pachet and Roy’s survey on harmonization with constraints [@pachet2001musical]. Agent-Based Techniques ---------------------- The definition of what exactly makes an “agent” is complicated and open for discussion, and it is outside the scope of this survey [@franklin1996agent]. For our purposes, we define an agent as an artificial system (either physical or, more commonly, implemented in software) that operates in an environment with which it interacts, and makes autonomous decisions. The vast majority of music-oriented robotics falls under this category.
Robotic agents are autonomous systems which need to sense their environments, make decisions, and perform complex continuous control in order to achieve their goals. They may either need to play music alone, as in the work of Solis et al. on a robotic saxophone player [@solis2010development], or with humans, as in the work of Hoffman et al. on a robotic marimba player [@hoffman2011interactive] and that of Petersen et al. on a robotic flute player [@petersen2010musical], but their tasks still involve complex sensing and continuous control. Of course, not only physical robots serve as agents - autonomous accompaniment frameworks such as those proposed by Thom [@thom2001machine] and Raphael [@raphael2006demonstration] which we mentioned previously may certainly be considered autonomous agents. For a fairly recent survey of the state of the art in robotic musicianship, see [@bretan2016survey]. Another family of approaches which we define as agent based are multiagent systems, where multiple autonomous, reactive components cooperate in order to perform a musical task. These approaches have been primarily utilized for music generation tasks. Examples include the swarm approach of Blackwell, previously mentioned in the context of music tasks. Blackwell modeled music through particle swarms which generate music through forces of attraction and repulsion [@blackwell2003swarm]. A somewhat similar approach can be seen in the more recent work of Albin et al., who utilized local properties in planar multi-robot configurations for decentralized real time algorithmic music generation [@albin2012musical]. Lastly, it is worth noting that some approaches have directly applied reinforcement learning, which is an agent-based learning paradigm, for various musical tasks. Cont et al. applied a reinforcement learning model for anticipatory musical style imitation [@cont2006anticipatory]. Wang et al.
considered music recommendation as a multi-armed bandit problem, a concept closely related to the RL literature, with the explicit purpose of efficiently balancing exploration and exploitation when suggesting songs to listeners [@wang2014exploration]. And quite recently, Dorfer et al. framed score-following as a reinforcement learning task, a sensible approach given that changes in an agent’s estimation of its position in the score affect its expectation over future score positions [@dorfer2018learning]. In that paper the authors also had the interesting insight that once the agent is trained, it does not need a reward function in order to generate predictions, an observation that would pave the way for other applications of reinforcement learning in similar situations. To summarize, in this section we have reviewed the wide and varied range of artificial intelligence disciplines utilized in the context of music-related tasks. It is indeed apparent that nearly all major developments in artificial intelligence research have found their way to music applications and domains. In the next section we will address one of the primary challenges of music AI research - how do we evaluate algorithmic performance in music-related tasks? Evaluation Methods for Musical Intelligence Tasks {#chap8:eval} ================================================= Having delved into the vicissitudes of the music and AI literature, one should also consider the various evaluation metrics used in assessing success and failure in tackling the varied research questions previously mentioned. In this section we discuss the various approaches observed in the literature for evaluating performance on various musical tasks. Evaluation is often a challenge when it comes to the application of AI for music.
Many musical tasks are inherently fuzzy and subjective, and on the face of it, any tasks that are aimed towards humans, be they music recommendation or affective performance, ultimately rely on human feedback as the most reliable (and perhaps the only) measure for success. An additional source of complication stems from the inherently sequential nature of music. In the case of image scene understanding, for instance, a person is able to perceive, recognize and annotate relatively quickly. Unlike visual data, music is experienced and processed sequentially in time, and often without being afforded the luxury of skipping information or “speed auditing”. For these reasons, data from human participants is expensive to obtain, and various other methods have been employed in addition to it, depending on the task. We now briefly discuss such methods. A visual illustration of the breakdown of evaluation methods can be seen in Figure \[chap8:fig\_eval\]. ![Visual high-level overview of evaluation methods used in music AI research.[]{data-label="chap8:fig_eval"}](tax_evaluation_survey.png){width=".8\linewidth"} Evaluation of Classification Tasks ---------------------------------- One of the primary reasons why classification tasks have been popular in music informatics is their relative ease of evaluation. Given that a labeled dataset exists, evaluation can rely on the traditional evaluation metrics used in supervised learning, such as overall accuracy, AUC, F-scores etc. [@murphy2012machine]. Some challenges may still lie in obtaining labeled examples. For certain tasks, such as classification by genre or composer, labels can easily be assigned automatically. For other tasks, such as guitar playing technique classification, getting label information is more difficult. In such cases, collecting hand-annotated data is a common solution [@reboursiere2012left; @su2014sparse]. Alternatively, speculative labels may be inferred in some cases [@fu2011survey].
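The standard supervised metrics mentioned above are straightforward to compute once labeled data exists. A minimal sketch for binary labels follows, framed around a hypothetical "is this track jazz?" task with invented labels:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return np.mean(np.array(y_true) == np.array(y_pred))

def f1_binary(y_true, y_pred):
    """F-score: harmonic mean of precision and recall for the positive class."""
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels over eight tracks: 1 = jazz, 0 = not jazz.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred), f1_binary(y_true, y_pred))
```

For imbalanced tag distributions, which are common in music tagging, the F-score (or AUC over ranked predictions) is typically preferred over raw accuracy.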
Another example of this kind of approach has been proposed recently by Sears et al., who described a data-driven method for the construction of harmonic corpora using chord onsets derived from the musical surface [@sears2018evaluating]. Overall, for multiple tasks ranging from sentiment analysis and tagging in music to structure inference, preexisting hand-annotated datasets, such as the Mazurka project for performance analysis [@cook2007performance] or the various existing MIREX datasets [@downie2008music], serve as necessary benchmarks. Evaluation of Skill Acquisition Tasks ------------------------------------- Skill acquisition (or music understanding) tasks, per our definition from Section \[chap8:tasks\], are generally more difficult than traditional classification, and as such tend to be more difficult to evaluate. For tasks such as music segmentation, structural analysis and motif identification, for instance, no trivial way to obtain ground truth information exists, and therefore researchers have most commonly relied on hand-annotated datasets for evaluation (as previously discussed in the context of classification tasks). In certain contexts, in which the underlying skill is learned to facilitate a more complicated task, such as better genre classification, evaluation can be done directly on the final task. This observation holds for many of the aforementioned MIREX tasks, such as key detection and audio downbeat estimation; see the MIREX website for a current list of tasks and benchmarks.[^4] In certain contexts, such as expressive music performance, direct human evaluation has been applied, commonly in a comparative format (of the two performances, which one was more expressive?) [@lee2010crowdsourcing]. One of the sources of difficulty in evaluating skill acquisition tasks is the potential complexity of the ground truth data and metrics required in order to evaluate performance reliably.
For instance, McLeod and Steedman note in a recent paper, in the context of evaluating polyphonic music transcription, that “(i)t is less common to annotate this output with musical features such as voicing information, metrical structure, and harmonic information, though these are important aspects of a complete transcription”. In that paper they also propose a novel evaluation metric that combines different aspects of transcription that are typically evaluated separately, such as voice separation, pitch detection and metrical alignment. Despite such progress, the challenge of finding efficient and non-labor-intensive ways of evaluating musical skill acquisition tasks is not yet resolved. Evaluation of Retrieval Tasks ----------------------------- Like skill acquisition tasks, retrieval tasks are nontrivial to evaluate. For example, they often rely on some notion of similarity among musical pieces, which is often a subjective and relative concept. Even when ground truth exists (for instance, in the form of playlists designed by humans [@mcfee2011natural]), deducing similarity or commonalities in taste is not immediate. For music recommendation systems, for instance, the best and most reliable evaluation method is through human experimentation, which is a difficult and time consuming process. Some researchers have circumvented this by leveraging preexisting datasets as a surrogate for human preference [@mcfee2011natural]. Various methods have been suggested to use limited existing data to impute speculative information regarding success or failure in the underlying task. For instance, in the context of playlist recommendation, it has been suggested that if a given person likes both artists A and B, then having songs by these two artists in the same playlist is considered a success [@chen2012playlist; @weston2011multi].
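The co-occurrence heuristic just described can be turned into a simple surrogate score. The sketch below (with invented listener data) computes the fraction of artist pairs in a playlist that a listener is known to co-like:

```python
from itertools import combinations

def playlist_success_score(playlist_artists, liked_artists):
    """Fraction of artist pairs in a playlist that the listener co-likes;
    a crude surrogate for human evaluation, per the heuristic above."""
    pairs = list(combinations(set(playlist_artists), 2))
    if not pairs:
        return 0.0
    hits = sum(1 for a, b in pairs if a in liked_artists and b in liked_artists)
    return hits / len(pairs)

# Invented preference data for illustration.
liked = {'Miles Davis', 'John Coltrane', 'Bill Evans'}
playlist = ['Miles Davis', 'John Coltrane', 'Metallica']
print(playlist_success_score(playlist, liked))
```

Scores like this are cheap to compute over large log datasets, but inherit exactly the noise and limitations discussed next.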
In other tasks, such as mood analysis, particularly for retrieval purposes, given that certain songs by an artist have been labeled as “moody”, assigning this label to other songs by that artist could be considered a success. These methods can be noisy and have obvious limitations (for instance, song style can vary considerably even for songs by the same artist). However, in a recent paper, Craw et al. have shown that preexisting datasets in combination with information extracted from social media can serve as a reasonable approximation for evaluating the effectiveness of different music recommenders, validating their approach via a human study [@craw2015music]. Qualitative Evaluation ---------------------- Some music tasks, such as music generation, are very difficult to evaluate even with human feedback. For instance, the fact that 20 out of 100 human subjects liked or disliked a song is not in itself sufficient evidence for the quality of that song. For this reason, some researchers in the past have relied on qualitative evaluation by experts as a benchmark for performance. While such evaluation is foreign to the world of machine learning and artificial intelligence, it is in line with how culture in general is often evaluated. Another common approach aims for verisimilitude. In the case of style imitation, this approach has some legitimacy, though to the best of our knowledge very few if any recent algorithmic composition algorithms have been put to the test rigorously (i.e. having a large group of people try to differentiate between algorithmic compositions in the style of a given composer and pieces by that composer himself). If we were to speculate, we would cautiously suggest that in most cases, even in light of recent, truly impressive advances in the field of generative music models (such as the work of Huang et al.
[@huang_transformer]), the differentiation between an actual composition by a renowned composer and an algorithmic one is either trivially easy (for experts in particular) or meaningless (for laymen, who would not be able to tell even a less professional-sounding algorithmic approximation from an actual human composition). To conclude, despite much progress both in research and in analysis, the question of how to evaluate algorithmic composition in general remains an open problem. Summary & Discussion: Open Problems {#chap8:musint} =================================== In this survey article we have reviewed an extremely large body of work involving both AI research and music-related tasks. We have proposed an overall taxonomy of music AI problems, breaking them down by the core nature of the task, by the input type, and by the algorithmic technique employed. We have then proceeded to map out the current state of the art, focusing on research from the past 20 years, relating a wide array of concrete exemplars to the proposed taxonomy. This panoramic overview of music AI research reveals a dizzyingly complex picture, spanning disciplines and paradigms. On the one hand it feels as though almost any conceivable task has been attempted and any plausible technique has been employed. For some tasks, like key identification [@noland2006key] or beat detection [@durand2016downbeat], the current levels of performance are high enough to allow for other tasks to rely on them as lower-level skills (for instance, key identification or beat and note extraction in the service of algorithmic accompaniment [@ramona2015capturing], or score following [@otsuka2011incremental]).
On the other hand, while the research community has been able to make significant strides on many music-related tasks spanning the gamut from extracting chords and notes to structure analysis to playlist recommendation to music synthesis, the more elusive goal of “music understanding” - as we proposed in Section \[chap:taxonomy\] - is still largely unsolved. We have been able to impart AI with the ability to identify many of the building blocks necessary for music understanding, such as recognizing notes, chords, beats, motifs, sentiment (to some extent) and how these relate to more abstract things like listener preferences. But we have yet to teach AI to make sense of all these disparate sources of information; to ground their cultural and semiotic significance; to understand the core characteristics of an individual’s taste in music (and how it relates to one’s background, sense of identity etc.); to know what a given chord means to a listener in a given setting; to understand what makes a piece by Telemann banal to modern ears and a piece by Bach a work of timeless genius; or to understand what people listen for when they listen to rock music vs. when they listen to a piano sonata by Beethoven. In the next subsection we review the state of the art both with respect to specific tasks and from a higher-level perspective. In the subsequent subsection, we discuss the current gaps and limitations in the literature and what these gaps are indicative of, conceptually. Lastly, we consider possible ways forward in expanding the literature and bridging these gaps in the pursuit of more complete artificial musical intelligence. The State of the Art -------------------- Examining the literature surveyed in this article reveals several insights regarding the current state of the art in applying machine learning approaches and tools in the context of music informatics.
In this section we review the state of the art with respect to musical tasks, breaking it down along lines similar to those elucidated in Section \[chap8:tasks\]. - Over the past ten years, thanks to sustained research efforts and general advances in supervised machine learning techniques, performance on classification tasks such as instrument, genre and composer classification has been steadily improving. In a recent study, Oramas et al. reported AUC-ROC scores of up to 0.88-0.89 using audio information alone in a task of classifying music albums by genre[@oramas2018multimodal], and Gomez et al. reported an F-score of 0.8 for Jazz solo instrument classification[@gomez2018jazz]. While this thread of research remains active and is expected to continue pushing the boundaries, it seems the community as a whole has gravitated towards more complex tasks which better fit the other categories of the task taxonomy - retrieval, skill acquisition and generation. - The dramatic increase in recommendation systems research and available online music consumption data has led to a boom in studies exploring music retrieval, recommendation, mood prediction and other user-facing tasks, as discussed at length in Section \[chap8:tasks\]. Only recently, Schedl presented the LFM-1b Dataset, which contains $10^9$ listening events created by $12\cdot10^5$ users[@schedl2016lfm], pushing the envelope even further with respect to the amount of data academic researchers can work with towards such tasks. Meanwhile, in industry, companies such as Spotify have over 200 million active users and 50 million tracks.[^5] Despite this growth, the impression given by the literature is that progress in the quality of prediction for tasks such as music sentiment analysis and preference modeling is far from plateauing. 
- While improvements can always be made, existing approaches for fundamental music understanding tasks such as key and chord identification, beat extraction, audio transcription, score following, and even to some extent mood recognition, work well enough to provide serviceable performance as underlying components of more advanced tasks such as music recommendation and live accompaniment. This observation is supported by the increase in publications proposing such systems and their improved performance, requiring less direct human control or tuning. - In the past few years, harnessing the emergence of several discipline-altering advances in AI research such as deep neural network architectures, generative adversarial models, and attention mechanisms, huge strides have been made with respect to AI-driven autonomous music generation, including Music Transformer[@huang_transformer], MuseGan[@dong2018musegan] and MuseNet [@musenet]. While these advances are highly impressive, researchers [@chen2019effect] and musicians[^6] alike have commented on their limitations, highlighting the fact that AI-generated human-level music composition is still a challenge. Major Gaps in the Literature ---------------------------- Examining the rich and varied work that has been carried out in pursuit of artificial musical intelligence, one may observe there has been an over-emphasis in the literature on isolating small, encapsulated tasks and focusing on them, without enough consideration of how different tasks connect to some end-goal vision of artificial intelligence for music. Despite their existence (as surveyed in this article, particularly under the category of agent-based techniques), there is a relative dearth of music AI *systems*, entities that perform multiple music-related tasks over time, and connect music sensing, understanding and interaction. As a consequence of this gap, there has not been much work on music AI systems operating *over time*. 
The challenge of end-to-end complex music reasoning systems is that they involve multiple facets of perception, abstraction and decision-making, not dissimilar from those of physical robotic or visual systems. While some progress has been made towards more robust and adaptive music AI capabilities, the conceptualization of music understanding as a process of sequential perception and decision-making over time is under-explored in the current literature. Furthermore, there has not been much work on how such systems would practically interact with other agents and with humans and explicitly reason about their perceptions and intentions (for instance, in the context of joint human-agent music generation). More prosaically, there is a relative shortage of works which explicitly reason about people’s *perception* of music. These gaps reflect not only a lack in music AI “system engineering research”, i.e. the piecing together of different components towards an end-to-end functional architecture which is capable of sensing and acting in a closed loop fashion (though that is definitely part of the gap). They also indicate a conceptual lacuna with respect to modeling the *implicit semantics* of music, understanding music hierarchically in a musicology-inspired fashion, to characterize, in ways that go beyond statistical patterns and spectral subsequences, what, on an abstract level, really makes two songs alike, or what characterizes one composer vs. another. Above all these challenges looms the fact that for many critical musical intelligence tasks, evaluation at scale is still an unresolved issue. As discussed in Section \[chap8:eval\], for any task complex enough such that labels cannot be automatically derived from the input, the curation of manually-annotated datasets is difficult and labor intensive. 
The difficulty of evaluation is substantially greater when it comes to music generation tasks, as no agreed upon metrics exist for ascertaining the quality of synthesized music, or for comparing pieces of synthesized music generated using different algorithms. In the next section, we propose a vision for music AI research which, in our opinion, would help put the community on a path forward towards meeting the challenges listed above. Directions Forward ================== All in all, dramatic leaps forward have been made over the past decades in music informatics and the application of artificial intelligence techniques in musical tasks. However, as discussed in this section, the remaining challenges are substantial, both technical and conceptual. We believe that the conceptual challenges should be addressed irrespective of the many technical advances that are still being made by many researchers around the world. Here we propose a short, non-comprehensive list of concrete directions we believe offer the greatest opportunity for substantial progress: - While isolated, well-scoped tasks are the building blocks with which progress is pursued, we believe it would be highly beneficial to the community to actively consider challenges of a bigger scale. Such a challenge would introduce the need for end-to-end systems as well as a deeper conceptual understanding of what it means for AI to be musically intelligent. A good example for such a challenge would be a physical system that creates music while interacting with other musicians. Such a system should be required to actively sense what its collaborators are playing, reason about it abstractly, and generate audible sound in a closed-loop sense. 
Such a system would tie together challenges in music perception, music abstraction and modeling, prediction and decision-making, and would require anyone working on such a system to consciously consider how these various aspects of the problem really connect and inform one another. It is our hope that aiming towards such a goal would lead to substantial progress on each subtask individually, but more importantly, on our overall understanding of what synthetic music competency means. - While there has been huge progress in the creation of large-scale, meaningfully annotated music datasets for AI research, there is still no “ImageNet[@deng2009imagenet] equivalent” for music. We believe a benchmark of this nature - a rich, audio-level dataset with complex annotations on a massive scale - would not only push the field forward but also serve as a consistent shared baseline across algorithms and platforms, even beyond music informatics. More importantly, if the goal of algorithmic, AI-driven music synthesis is truly a tent-pole for music AI research, we must strive for some shared notion of a metric or evaluative procedure for comparing the outputs of such synthesized pieces of music, a measure which goes beyond collective impressions. A possible approach towards addressing the issue of evaluating AI-generated music could be a formal competition, with some credentialed experts as referees and predefined criteria. Such an expert panel approach could be complemented by a more traditional crowdsourced approach. Together, these two formats of evaluation could provide us with a clearer picture of how the music establishment as well as the general public view these generated pieces comparatively. - Lastly, we believe there is a great deal to be gained in bridging the gap between music AI and cognitive research. Music is an innate form of human communication. 
How we perceive music and reason about it should be made a more integral aspect of music AI research. First, because ultimately any music AI tool would need to interact with human perception in some way. Second, because leveraging a better understanding of human music cognition could inform better music AI algorithms. And lastly, because in the process we might also learn something profound about our own music cognition, and how it is related to other facets of our perception and reasoning. Concluding Remarks ------------------ If we envision a future where intelligent artificial agents interact with humans, we would like to make this interaction as natural as possible. We would therefore like to give AI the ability to understand and communicate within cultural settings, by correctly modeling and interpreting human perception and responses. Such progress would have many real world practical benefits, from recommender systems and business intelligence to negotiations and personalized human-computer interaction. Beyond its practical usefulness, having AI tackle complex cultural domains, which require advanced cognitive skills, would signify a meaningful breakthrough for AI research in general. The dissertation research of the first author of this survey was largely motivated by the desire to address the gaps discussed in the previous section, particularly on work towards the goal of learning social agents in the music domain[@Liebman2020]. However, the progress made in one dissertation only highlights how much challenging work is left to be pursued. We believe this work presents incredible opportunities for musical intelligence, and for artificial intelligence as a whole. 
Musical Terms {#app:glossary .unnumbered} ============= -------------- -------------------------------------------------------- [**term**]{} [**meaning**]{} beat basic unit of time chord concurrent set of notes interval a step (either sequential or concurrent) between notes loudness amplitude of audible sound major chord a chord based on a major third interval minor chord a chord based on a minor third interval note sustained sound with a specific pitch pitch the perceived base frequency of a note playlist ordered sequence of songs tempo speed or pace of a given music timbre perceived sound quality of a given note -------------- -------------------------------------------------------- Bibliography {#bibliography .unnumbered} ============ [^1]: <http://techcrunch.com/2015/01/21/apple-musicmetric> [^2]: <http://en.wikipedia.org/wiki/Rocksmith> [^3]: By “scientific” we primarily mean principled, measurable and reproducible research in appropriate publication venues. [^4]: <https://www.music-ir.org/mirex/wiki/MIREX_HOME> [^5]: <https://newsroom.spotify.com/company-info/> [^6]: <https://www.youtube.com/watch?v=xDqx14lZ_ls>
--- abstract: 'The discovery of a transient kilonova following the gravitational-wave event GW170817 highlighted the critical need for coordinated rapid and wide-field observations, inference, and follow-up across the electromagnetic spectrum. In the Southern hemisphere, the Dark Energy Camera (DECam) on the Blanco 4-m telescope is well-suited to this task, as it is able to cover wide fields quickly while still achieving the depths required to find kilonovae like the one accompanying GW170817 to $\sim$500 Mpc, the binary neutron star horizon distance for the current generation of LIGO/Virgo collaboration (LVC) interferometers. Here, as part of the multi-facility followup by the Global Relay of Observatories Watching Transients Happen (GROWTH) collaboration, we describe the observations and automated data movement, data reduction, candidate discovery, and vetting pipeline of our target-of-opportunity DECam observations of S190426c, the first possible neutron star–black hole merger detected via gravitational waves. Starting 7.5hr after S190426c, over 11.28hr of observations, we imaged an area of 525deg$^2$ ($r$-band) and 437deg$^2$ ($z$-band); this was 16.3% of the total original localization probability and nearly all of the probability density visible from the Southern hemisphere. The machine-learning based pipeline was optimized for fast turnaround, delivering transient candidates for human vetting within 17 minutes, on average, of shutter closure. We reported nine promising counterpart candidates 2.5 hours before the end of our observations. Our observations yielded no detection of a bona fide counterpart to $m_z = 22.5$ and $m_r = 22.9$ at the 5$\sigma$ level of significance, consistent with the refined LVC positioning. We view these observations and rapid inference as an important real-world test for this novel end-to-end wide-field pipeline.' author: - 'Daniel A. Goldstein' - Igor Andreoni - 'Peter E. Nugent' - 'Mansi M. Kasliwal' - 'Michael W. 
Coughlin' - Shreya Anand - 'Joshua S. Bloom' - 'Jorge Martínez-Palomera' - Keming Zhang - 'Tom[á]{}s Ahumada' - Ashot Bagdasaryan - Jeff Cooke - Kishalay De - 'Dmitry A. Duev' - 'U. Christoffer Fremling' - Pradip Gatkine - Matthew Graham - 'Eran O. Ofek' - 'Leo P. Singer' - Lin Yan bibliography: - 'ref.bib' title: ' GROWTH on S190426c. II. Real-Time Search for a Counterpart to the Probable Neutron Star-Black Hole Merger using an Automated Difference Imaging Pipeline for DECam ' --- Introduction {#sec:intro} ============ Joint detections of electromagnetic (EM) and gravitational waves (GWs) from compact binary mergers involving neutron stars (NSs) are a promising new way to address a number of open questions in astrophysics and cosmology [see, e.g., @2009arXiv0902.1527B; @2019arXiv190402718C for reviews]. The combined EM/GW dataset from the binary neutron star (BNS) merger GW170817 [@mma] provided a high-precision measurement of the speed of gravity [@speedgrav], gave new insight into the origin of the heavy elements [e.g., @2017Natur.551...80K; @2017Natur.551...67P; @2017ApJ...848L..19C; @2017Sci...358.1570D; @2017Natur.551...75S; @2017Sci...358.1565E; @2017Sci...358.1556C; @2018ApJ...855...99C; @2018arXiv181000098S; @2019PhRvL.122f2701W; @2019MNRAS.tmpL..14K; @2019arXiv190501814J; @2019ApJ...875..106C], demonstrated a novel technique for measuring cosmological parameters [@2017Natur.551...85A], and provided unparalleled insight into the radiation hydrodynamics of compact binary mergers [e.g., @2017ApJ...848L..20M; @2017Sci...358.1559K; @2017Sci...358.1579H; @2017ApJ...848L..21A; @2018Natur.554..207M; @2019Sci...363..968G; @2018PhRvL.120x1103L]. To date, GW170817 remains the only astrophysical event that has been detected in both the EM and GW messengers. To realize the full scientific potential of BNS and NS-black hole (BH) mergers with joint EM/GW detections, many more must be discovered and followed up. 
The current working procedure for joint EM/GW astronomy begins when a network of GW observatories [presently LIGO, the Laser Interferometer Gravitational-Wave Observatory, and the Virgo Gravitational-Wave Observatory; @2015CQGra..32g4001L; @2015CQGra..32b4001A] detects a GW source, and, by analyzing its waveform, localizes it to a region of the sky that is typically between 100 and 1000 deg$^2$. Nearly contemporaneous $\gamma$-rays and X-rays may be detected and localized if the merger also produces a short gamma-ray burst (GRB) at a favorable viewing angle [see, e.g., @1989Natur.340..126E; @2006ApJ...638..354B]. It then falls to the optical and near-infrared observational communities to search for transient events in the large localization region that are consistent with theoretical expectations for spectrum synthesis in compact binary mergers, enabling the GW sources to be localized precisely (i.e., associated with a host galaxy). Such transients, often referred to as “kilonovae” because they are roughly $10^3$ times brighter than novae, are powered by the rapid decay of $r$-process material synthesized in the mergers [@2010MNRAS.406.2650M], and they are distinguished from other transients by their rapidly evolving light curves, which fade and redden in just a few days [e.g., @2013ApJ...775..113T; @2013ApJ...775...18B]. In order to search large areas of sky for such faint and rapidly evolving transients, telescopes with large apertures, imagers with large fields of view, and pipelines that can rapidly process images to efficiently identify transient candidates are required. In the Southern Hemisphere, the Dark Energy Camera [DECam; @2015AJ....150..150F] on the Victor M. Blanco 4-meter Telescope at Cerro Tololo Inter-American Observatory (CTIO) is a powerful instrument for detecting kilonovae associated with gravitational wave triggers. 
The wide field of view ($\sim$3 deg$^2$) of the instrument, combined with its red sensitivity and the substantial aperture of its telescope, make it well suited to follow up even the most distant BNS and NS-BH mergers in the LIGO/Virgo horizon. The power of DECam for EM/GW follow-up was illustrated by its significant role in the study of AT2017gfo, the kilonova associated with GW170817 [@2017ApJ...848L..16S; @2017ApJ...848L..17C], and by its important role in the follow-up of several other GW events from LIGO and Virgo [@2016ApJ...823L..33S; @2016ApJ...826L..29C; @2016ApJ...823L..34A; @2019ApJ...873L..24D]. In preparation for the third LIGO/Virgo GW observing run (O3), we developed a high-performance image subtraction pipeline to rapidly identify transients on DECam images. The National Optical Astronomy Observatory (NOAO), which allocates time on DECam, granted our team the opportunity to trigger the instrument to follow up neutron star mergers detected in gravitational waves by LIGO and Virgo during the first half of O3 (NOAO Proposal ID 2019A-0205; PIs Goldstein and Andreoni). We activated our first trigger on the unusual GW source S190426c [@gcn2], potentially the first neutron star-black hole (NS-BH) merger to be detected by LIGO and Virgo. In this Letter, we describe our follow-up observations of this event, with a focus on the software infrastructure we have developed to rapidly conduct wide-field optical follow-up observations of neutron star mergers using DECam. S190426c: A Probable NS-BH Merger {#sec:event} ================================= On 2019 April 26 at 15:21:55 UTC, the LIGO Scientific Collaboration and Virgo Collaboration (LVC) identified a compact binary merger candidate, dubbed “S190426c,” during real-time processing of data from LIGO Hanford Observatory, LIGO Livingston Observatory, and Virgo Observatory. 
The candidate was detected by four separate analysis pipelines: GstLAL [@2017PhRvD..95d2001M], MBTAOnline [@2016CQGra..33q5012A], PyCBC Live [@2017ApJ...849..118N], and SPIIR, with a false alarm rate of 1 in 1.7 years. Roughly twenty minutes after detecting the event, LVC issued a circular on the NASA Gamma-Ray Coordinates Network (GCN)[^1] reporting the discovery [@gcn1]. The initial GCN included a preliminary skymap giving a probabilistic localization of the event from the [<span style="font-variant:small-caps;">BAYESTAR</span>]{} rapid GW localization code [@2016PhRvD..93b4013S; see Figure \[fig:skymap\]]. The total area of sky covered by the 90% confidence region was 1262 deg$^2$, with an estimated luminosity distance of 375 $\pm$ 108 Mpc. As Figure \[fig:skymap\] shows, the probability was concentrated in two distinct regions on the sky, one largely north of the celestial equator at $\mathrm{RA}\approx20.5$h, and another region south of the equator roughly centered at $\mathrm{RA}\approx13.5$h. The initial classification of the event was consistent with several possible progenitor scenarios. The initial GCN circular classified the event as a BNS merger with a probability of 49%, a compact binary merger with at least one object with a mass in the hypothetical “mass gap” between neutron stars and black holes (3–5 solar masses) with a probability of 24%, a terrestrial event (i.e., not astrophysical) with a probability of 14%, and a neutron star-black hole (NS-BH) merger with a probability of 13% [@gcn2]. These probabilities were later updated in favor of the NS-BH interpretation, which was assigned a revised probability of 73.1% (including the mass gap probability), with no change to the probability of being a terrestrial event [@gcn3]. Given the significant probability of the event originating from a NS merger, we decided to trigger our DECam program to search for an optical counterpart. 
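Credible areas such as the 1262 deg$^2$ quoted above follow directly from a HEALPix probability map: pixels are accumulated in decreasing order of probability until the target probability is enclosed. The following is an illustrative sketch of that construction, not pipeline code; it uses a flat `numpy` array of equal-area pixel probabilities summing to 1 (real [<span style="font-variant:small-caps;">BAYESTAR</span>]{} maps would be read with `healpy` or `ligo.skymap`):

```python
import numpy as np

FULL_SKY_DEG2 = 4 * np.pi * (180.0 / np.pi) ** 2   # whole sky, ~41,253 deg^2

def credible_area(prob, level=0.9):
    """Area (deg^2) of the smallest sky region containing `level` of the
    total localization probability, for an all-sky map of equal-area
    (HEALPix-style) pixels whose values sum to 1."""
    prob = np.asarray(prob)
    order = np.argsort(prob)[::-1]             # most probable pixels first
    cum = np.cumsum(prob[order])
    npix_in = np.searchsorted(cum, level) + 1  # pixels needed to enclose `level`
    return npix_in * FULL_SKY_DEG2 / prob.size
```

For a map concentrated in a few pixels the credible area shrinks to a handful of pixel areas; for a nearly uniform map it approaches `level` times the full sky.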
![image](decam.pdf){width="100.00000%"} Observations ============ We triggered DECam follow-up of S190426c under NOAO proposal 2019A-0205 (PIs Goldstein & Andreoni), publishing a GCN circular describing our plan for the observations [@gcn4] and our intentions to make the data public immediately. We adopted an integrated observing strategy using the $r$ and $z$ filters with 30s and 50s exposures, respectively. The visits in $r$ and $z$ were spaced in time by at least 30 minutes to facilitate the rejection of moving objects. We observed from 2019-04-26 22:57:35 until 2019-04-27 10:25:54 UT, for a total of 11.28hr. We acquired 196 exposures in $r$ and 163 in $z$, covering an area of 525deg$^2$ and 437deg$^2$ respectively, assuming an effective 60-CCD 2.68deg$^2$ field of view for DECam that excludes the chip gaps. Our observations resulted in empirical limiting magnitudes of $m_z = 22.5$ and $m_r = 22.9$ at the 5$\sigma$ level of significance. The information provided by the GW skymap (large localization area, large distance, possible BH companion) compelled us to modify the observing strategy that we originally designed for this program, which was based on 3 visits in $g$-$z$-$g$ bands on the first night and a $g$-$z$ pair on the second night after the trigger. Exposure times were planned to be 15s in $g$ and $25$s in $z$ band. Such a strategy was designed to follow-up primarily BNS mergers enclosed in an error region $\lesssim 150$deg$^2$ in extension and $<200$Mpc away. The $g-z$ filter combination is optimal to capture and recognize the rapidly-evolving blue component that BNS mergers such as GW170817 are expected to show [see e.g., @2017Sci...358.1565E; @2017Sci...358.1574S; @2019PASP..131f8004A; @2019ApJ...874...88C]. The large distance to S190426c, along with the theoretical expectation that NS-BH mergers may not show any bright blue component at early times [@2017Natur.551...80K], advocated in favor of deeper exposures and redder filters. 
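The pull towards deeper exposures can be quantified with the usual sky-noise-limited scaling, in which S/N grows as $\sqrt{t}$, so the limiting magnitude deepens by $\Delta m = 1.25\log_{10}(t_{\rm new}/t_{\rm old})$. This back-of-the-envelope estimate is ours, for illustration, and is not a calculation from the observing plan:

```python
import math

def depth_gain(t_new, t_old):
    """Approximate limiting-magnitude gain from a longer exposure, assuming
    sky-noise-limited imaging (S/N ~ sqrt(t)):
    dm = 2.5*log10(sqrt(t_new/t_old)) = 1.25*log10(t_new/t_old)."""
    return 1.25 * math.log10(t_new / t_old)

# Doubling the z-band exposure from 25 s to 50 s buys roughly 0.38 mag.
```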
The third visit planned for the first night was dropped in favor of a broader sky coverage with longer $z$-band exposures. For further details on schedule optimization for our DECam program, see Andreoni, Goldstein, et al. (in preparation). Our observations of S190426c were scheduled automatically by the GROWTH target-of-opportunity (ToO) marshal system[^2] described in [@2019PASP..131d8001C] and [@2019PASP..131c8003K]. For this event, we instructed the ToO marshal to employ a “greedy” algorithm to generate a schedule of observations that tiled as much of the 90% credible position region of the initial [<span style="font-variant:small-caps;">BAYESTAR</span>]{} skymap as possible. The schedule was generated before sunset in Chile on 2019 April 26 and exported as a [`json`]{} file. The initial [<span style="font-variant:small-caps;">BAYESTAR</span>]{} skymap and our series of observations are shown in Figure \[fig:skymap\]. The [`json`]{} file was ingested into the DECam Survey Image System Process Integration [SISPI; @2012SPIE.8451E..12H] readout and control system, which executed the observations. As soon as each exposure was completed, SISPI transferred each raw exposure to NOAO in Tucson, AZ via the Data Transport System [DTS; @2010SPIE.7737E..1TF] for archiving. A second epoch was planned for the following night using the same filters, but the refined skymap that LVC released after our observations [@gcn7] using the more precise LALInference localization pipeline [@2015PhRvD..91d2003V] completely eliminated the localization probability in any sky region with DECam survey template coverage (see Section \[sec:templates\]), which is necessary to discover transients with our pipeline. Moreover, the visible region of sky that we could have observed resides on the Galactic plane, where several magnitudes of extinction and crowded stellar fields make the detection of faint, extragalactic transients a particularly difficult task. 
Therefore we decided against more disruptive ToO observations, ending our DECam observing campaign for S190426c after a single night of data-taking. We describe three additional discovery engines and several follow-up facilities that undertook the search for the electromagnetic counterpart to S190426c as part of the GROWTH network in a suite of companion papers (Kasliwal et al. in prep, Bhalerao et al. in prep). A synopsis of the worldwide community observations reported in GCNs can be found in [@2019arXiv190502186H]. Real-Time Automated Difference Imaging Pipeline {#sec:pipeline} =============================================== As soon as observations commenced on the first night of our trigger, we programmatically checked the NOAO archive each second for new images from the DTS. Each time a new image was found, we automatically downloaded it over FTP to the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California and stored it on a high-performance `Lustre` parallel filesystem, making use of the ESNet energy sciences high-speed internet backbone connecting US Department of Energy facilities. The typical data transfer rate from Tucson to Berkeley was 40MB/s, enabling each 550 MB `fits` focal plane exposure to be delivered in an average transfer time of 14 seconds. Exposure Segmentation and Parallelization ----------------------------------------- When each raw image arrived at NERSC, a job was programmatically launched via `slurm`[^3] to process it, beginning the real-time search. Jobs were executed on the Cray XC40 `cori` supercomputer. Each exposure was delegated for processing to a single 64-logical core `haswell` compute node. In each job, each of the 62 DECam science CCDs was assigned to a single logical core. We arranged a special, low-latency “realtime” job queue for this project to provide near-immediate access to NERSC computer resources. 
Our realtime queue gave us on-demand access to 18 `haswell` compute nodes, allowing us to process up to 18 exposures simultaneously. We found that this allocation of computer resources was sufficient to ensure fast turnaround. As a first step in the processing, each raw DECam `fits` file was split into 62 separate `fits` files, one for each CCD. Except for template generation, all subsequent pipeline steps were performed on a per-CCD basis, using the Message Passing Interface (MPI) to facilitate the concurrent execution of 62 independent copies of the pipeline in each of up to 18 jobs running simultaneously. The top-level pipeline code was written in the Python programming language and run inside a high-performance `shifter`[^4] container to increase performance on the NERSC hardware. Detrending and Astrometric Calibration -------------------------------------- The raw frames we ingested from the NOAO archive underwent no calibration, containing only [`fits`]{} header keywords and integer pixel values, so we first performed a series of detrending and preprocessing steps to transform them into usable science frames. For each frame, we made an overscan correction as described in [@2017PASP..129k4502B]. We also generated a mask frame for each CCD, masking out any pixels above the saturation value of their amplifier. Because our observations were time-sensitive, and because DECam is a very stable instrument, we used flat and bias frames from a previous night for the real-time processing (i.e., we did not take flats or bias frames in our observing sequence). The flat and bias frames we used to process the data for S190426c were taken on 2018 Nov 1 as part of the DECam Legacy Survey [@2019AJ....157..168D]. We subtracted the bias frames from the raw pixels and then divided by the flat frames. Any science pixel values rendered invalid by the flat-fielding were masked. 
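The per-CCD detrending arithmetic above reduces to a bias subtraction, a flat-field division, and the propagation of a pixel mask. The following is a toy sketch in `numpy` (the function and the saturation default are illustrative, not the pipeline's actual code; overscan correction and the additional standard masks applied by the real pipeline are omitted):

```python
import numpy as np

def detrend(raw, bias, flat, saturation=65535.0):
    """Bias-subtract and flat-field one CCD image, masking saturated pixels
    and pixels rendered invalid by the flat division."""
    mask = raw >= saturation                      # saturated pixels
    with np.errstate(divide="ignore", invalid="ignore"):
        sci = (raw - bias) / flat
    mask |= ~np.isfinite(sci)                     # e.g. zero-valued flat pixels
    sci = np.where(mask, 0.0, sci)                # zero out masked science pixels
    return sci, mask
```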
We applied the standard DECam bad pixel masks, but to achieve fast turnaround did not apply crosstalk corrections or correct for the brighter-fatter effect. These effects are only relevant to high-precision photometry and have little impact on transient discovery. We processed all science CCDs from each pointing, including those that have been deemed defective (N30 and S30). We produced a source catalog of each detrended science image using `SExtractor` that we fed into a development-branch version of `SCAMP` [@2006ASPC..351..112B] to perform astrometric calibration against the *Gaia* DR1 catalog, which consistently provided extremely reliable astrometry. Template Generation {#sec:templates} ------------------- ![image](r_template.png){width="100.00000%"} ![image](z_template.png){width="100.00000%"} To perform image subtraction, we assembled a library of template images from three publicly available DECam datasets: the Dark Energy Survey DR1 [@2016MNRAS.460.1270D; @2018ApJS..239...18A], the DECam Legacy Survey DR7 [@2019AJ....157..168D], and the Blanco Imaging of the Southern Sky Survey [BLISS; @bliss] stacked images distributed by the NOAO archive. We downloaded all of these astrometrically and photometrically calibrated template images to disk at NERSC. In total, the templates required about 50 TB of disk space and covered about 14,500 deg$^2$ of the sky below a declination of $+30^\circ$. Our $r$- and $z$-band template coverage relative to the sky map of S190426c is shown in Figure \[fig:templates\]. Only a small region of the sky map for this event had template coverage (about 120 deg$^2$ in $z$-band, and 100 deg$^2$ in $r$- and $z$-band) from the DECaLS and BLISS surveys. We used `SWarp` [@2010ascl.soft10068B] to combine and crop the individual template images into references for each CCD. The coaddition employed clipped mean stacking to suppress artifacts and increase signal-to-noise [@2014PASP..126..158G]. 
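Clipped mean stacking of the kind performed here can be caricatured in a few lines: per-pixel values discrepant from the stack mean by more than a few standard deviations are rejected before averaging, suppressing cosmic rays and other artifacts. A simplified sketch of this idea, with an illustrative threshold and a naive per-pixel sigma estimate (see [@2014PASP..126..158G] for the actual algorithm):

```python
import numpy as np

def clipped_mean_stack(images, nsigma=3.0):
    """Mean-combine aligned images, rejecting per-pixel outliers more than
    nsigma standard deviations from the per-pixel mean."""
    stack = np.asarray(images, dtype=float)         # shape (n_img, ny, nx)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= nsigma * std + 1e-12   # tolerate std == 0
    summed = np.where(keep, stack, 0.0).sum(axis=0)
    counts = keep.sum(axis=0)
    return summed / np.maximum(counts, 1)
```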
The pipeline produced template images on the fly for each CCD and pointing. For images with no template coverage, the pipeline exited gracefully. We are currently working to improve the template coverage of our pipeline by integrating more exposures that are publicly available from the NOAO archive. Photometric Calibration ----------------------- To photometrically calibrate our science images, we compared the magnitudes of stars extracted with `SExtractor` to the same stars on the reference images. We then derived a zeropoint for the science images by taking the median zeropoint derived from each calibrator. We also used this procedure to estimate the seeing on the science images, taking the median FWHM of each calibrator. To choose calibrators, we selected only objects with no `SExtractor` extraction error flags and a signal-to-noise ratio of at least 5. Image Subtraction, Source Identification, and Artifact Rejection ---------------------------------------------------------------- For each pair of photometrically and astrometrically calibrated science images and templates, we used `scamp` to align the images to a common $x$-$y$ grid and the `HOTPANTS` [@2015ascl.soft04004B] implementation of the [@1998ApJ...503..325A] algorithm to convolve the images to a common PSF and perform a pixel-by-pixel subtraction. We then ran `SExtractor` on the resulting difference images to identify sources of variability. We rejected any objects that overlapped masked pixels on either the template or science images, had `SExtractor` extraction flags, had an axis ratio greater than 1.5, had a FWHM more than twice the seeing, had a PSF magnitude greater than 30, had a signal-to-noise ratio less than 5, or had a semi-major axis less than 1 pixel. After making these initial cuts, we used the publicly available `autoScan` code [@2015AJ....150...82G], based on the machine learning technique Random Forest, to probabilistically classify the “realness” of the remaining extracted sources. 
The code has been successfully used in past DECam searches for GW counterparts in independent difference imaging pipelines [e.g., @2017ApJ...848L..16S]. We pushed the candidates immediately and automatically to the GROWTH marshal, a dynamic web portal for time-domain astronomy [@2019PASP..131c8003K], where they were scanned by a team of roughly 10 scientists. We reported nine promising counterpart candidates via GCN 2.5 hours before the end of our observations [@gcn5]. We used the numerical score assigned by `autoScan` to each candidate to determine the order in which we looked at objects. Using `autoScan` we were able to identify the transients we reported in the GCN by looking at less than 1% of the candidate pool. We also cross-matched each of our candidates against *Gaia* DR2 to reject variable stars, the Minor Planet Center online checker[^5] to reject asteroids, and the Transient Name Server[^6] to reject known transients. Figure \[fig:cutouts\] shows images of two example candidates identified by the pipeline that were reported in the GCN, and Table \[tab:candidates\] gives DECam photometry of all candidates.

Search Results and Pipeline Performance
---------------------------------------

Processing each exposure with the pipeline required 16.7 minutes of wall-clock time, on average. This fast turnaround time allowed us to detect transients quickly and rapidly communicate them to the community. We identified 84,007 candidates: 45,587 in $r$-band and 48,931 in $z$-band images. 15,432 of our candidates had at least 2 detections. The measured depth reached during our observations would have likely enabled the detection in both $r$ and $z$ bands of a GW170817-like event (Figure \[fig: models\], left panel). Under the hypothesis that S190426c was in fact an NS-BH merger, the detection would have been more uncertain (Figure \[fig: models\], right panel) and longer exposure times would have aided the search.
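The score-ordered scanning and star rejection described above can be sketched as follows (a simplified stand-in; the 2-arcsec match radius and the data layout are assumptions, and the real cross-match runs against *Gaia* DR2):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcsec between two positions in degrees."""
    d2r = math.pi / 180.0
    s = (math.sin(d2r * (dec2 - dec1) / 2) ** 2
         + math.cos(d2r * dec1) * math.cos(d2r * dec2)
         * math.sin(d2r * (ra2 - ra1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(s)) / d2r * 3600.0

def vet(candidates, star_catalog, match_radius=2.0):
    """Sort candidates by descending ML score, then drop any candidate
    within match_radius arcsec of a known star."""
    out = []
    for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if all(ang_sep_arcsec(c["ra"], c["dec"], ra, dec) > match_radius
               for ra, dec in star_catalog):
            out.append(c)
    return out
```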
One mildly red transient ($r-z = 0.3$ in DECam images), labelled DG19vkgf, was spectroscopically and photometrically followed up by our team using the Hale 200-inch telescope (P200) at Palomar Observatory [@gcn6]. A spectrum was obtained with the Double Beam Spectrograph [@1982PASP...94..586O] on P200. Due to the high airmass and poor seeing conditions, the transient was not clearly identified in the trace, but the host redshift was confirmed to be $z = 0.04$ using the host emission lines. Imaging with the Wafer Scale Imager for Prime (WASP) on P200 confirmed the presence of a point source at the transient location. [@gcn9] followed up two events that we reported in [@gcn5], DG19kplb and DG19ytre. The authors performed photometric follow-up using the 1.5m telescope at the Observatorio de Sierra Nevada (Spain) starting on 2019-04-27 20:53 UT, and spectroscopic follow-up using the 10.4m Gran Telescopio Canarias equipped with OSIRIS at La Palma (Spain) starting on 2019-04-27 21:40 UT. Those observations allowed [@gcn9] to classify DG19kplb as a broad-line Type Ic supernova at redshift $z = 0.09123$ and DG19ytre as a Type Ia supernova at $z = 0.1386$. The association of DG19kplb or DG19ytre with S190426c was therefore excluded. When LIGO released a skymap [@gcn7] that completely ruled out the possible association of DG19vkgf or any of the other transients we discovered using our pipeline with S190426c, we interrupted our photometric and spectroscopic follow-up of those sources. We then focused our follow-up efforts on transients discovered with northern hemisphere facilities that could access regions of higher localization probability (Kasliwal et al., in preparation; Bhalerao et al., in preparation).

Conclusion
==========

We carried out follow-up observations of the LIGO/Virgo gravitational wave trigger S190426c with DECam.
Using an automated difference imaging pipeline, we were able to rapidly search our data and publish candidates to the community before we completed our observations. Although we did not identify a counterpart, these observations enabled us to validate our DECam infrastructure for future events, demonstrating that we can readily trigger, observe, scan, and detect transients on the timescales, sky areas, and magnitude limits relevant for the discovery of gravitational wave counterparts. Availability of updated LVC sky maps on an even shorter timescale would allow us to use our telescope resources more prudently. In the future, we expect DECam to continue its important role as a discovery engine for gravitational wave counterparts.

  ---------- ------------ ------------- -------- -------- ------------ -------------
  Name       RA           Dec           Filter   $m$      $\sigma_m$   MJD
             (J2000)      (J2000)
  ---------- ------------ ------------- -------- -------- ------------ -------------
  DG19ftnb   167.595555   $-$4.358792   $z$      20.393   0.086        58599.99056
                                        $r$      20.651   0.055        58599.96644
  DG19kqxe   163.781705   $-$0.237631   $z$      21.059   0.117        58600.17142
                                        $r$      22.075   0.125        58600.13044
  DG19nmaf   163.752355   $-$1.486911   $z$      21.603   0.102        58600.17142
                                        $r$      22.899   0.209        58600.13044
  DG19ouub   171.473410   $-$9.488396   $z$      21.615   0.119        58600.00142
                                        $r$      22.123   0.102        58599.97506
  DG19vkgf   165.844300   $-$7.917442   $z$      19.580   0.031        58600.19049
                                        $r$      19.888   0.017        58600.15045
  DG19zdwb   167.296930   $-$2.268391   $z$      22.007   0.097        58599.99542
                                        $r$      22.803   0.117        58599.97024
  DG19zyaf   163.471788   $-$1.151129   $z$      21.559   0.091        58600.17142
                                        $r$      22.665   0.125        58600.13044
  DG19kplb   168.658618   $-$6.975466   $z$      21.274   0.146        58599.99355
                                        $r$      20.829   0.110        58599.96570
  DG19ytre   167.760365   0.527199      $z$      21.298   0.072        58600.11954
                                        $r$      20.693   0.040        58600.08185
  ---------- ------------ ------------- -------- -------- ------------ -------------

![image](cands.pdf){width="100.00000%"}

![image](models_plot.pdf){width="100.00000%"}

D.A.G. and I.A. gratefully acknowledge Kathy Vivas, Steve Heathcote, and the NOAO staff for facilitating these target-of-opportunity observations.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. D.A.G. acknowledges support from Hubble Fellowship grant HST-HF2-51408.001-A. Support for Program number HST-HF2-51408.001-A is provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. P.E.N. acknowledges support from the DOE through DE-FOA-0001088, Analytical Modeling for Extreme-Scale Computing Environments. E.O.O. is grateful for support by a grant from the Israeli Ministry of Science, ISF, Minerva, BSF, BSF transformative program, and the I-CORE Program of the Planning and Budgeting Committee and The Israel Science Foundation (grant No 1829/12). M. W. Coughlin is supported by the David and Ellen Lee Postdoctoral Fellowship at the California Institute of Technology. J.S. Bloom, J.Martinez-Palomera, and K.Zhang are partially supported by a Gordon and Betty Moore Foundation Data-Driven Discovery grant. J. Cooke is supported in part by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), CE170100004. This work was supported by the GROWTH (Global Relay of Observatories Watching Transients Happen) project funded by the National Science Foundation under PIRE Grant No 1545949. 
GROWTH is a collaborative project among California Institute of Technology (USA), University of Maryland College Park (USA), University of Wisconsin Milwaukee (USA), Texas Tech University (USA), San Diego State University (USA), University of Washington (USA), Los Alamos National Laboratory (USA), Tokyo Institute of Technology (Japan), National Central University (Taiwan), Indian Institute of Astrophysics (India), Indian Institute of Technology Bombay (India), Weizmann Institute of Science (Israel), The Oskar Klein Centre at Stockholm University (Sweden), Humboldt University (Germany), Liverpool John Moores University (UK), University of Sydney (Australia) and Swinburne University of Technology (Australia). This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Funda[ç]{}[ã]{}o Carlos Chagas Filho de Amparo [à]{} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient[í]{}fico e Tecnol[ó]{}gico and the Minist[é]{}rio da Ci[ê]{}ncia, Tecnologia e Inova[ç]{}[ã]{}o, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ[é]{}ticas, Medioambientales y Tecnol[ó]{}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgen[ö]{}ssische Technische Hochschule (ETH) Z[ü]{}rich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci[è]{}ncies de l’Espai (IEEC/CSIC), the Institut de F[í]{}sica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit[ä]{}t M[ü]{}nchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; NOAO Proposal ID \# 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Proposal ID \# 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; NOAO Proposal ID \# 2016A-0453; PI: Arjun Dey). 
DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO); the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOAO. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. NOAO is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. [^1]: <https://gcn.gsfc.nasa.gov/gcn3_archive.html> [^2]: <https://github.com/growth-astro/growth-too-marshal> [^3]: <https://slurm.schedmd.com/overview.html> [^4]: A docker-like containerization service for high-performance computing, see @Gerhardt_2017. [^5]: <https://minorplanetcenter.net/cgi-bin/checkmp.cgi> [^6]: <https://wis-tns.weizmann.ac.il/>
---
abstract: 'Based on the notion of a construction process consisting of the stepwise addition of particles to the pure fluid, a discrete model for the apparent viscosity as well as for the maximum packing fraction of polydisperse suspensions of spherical, non-colloidal particles is derived. The model connects the approaches by and and is valid for large size ratios of consecutive particle classes during the construction process. Furthermore, a new general form of the well-known equation allowing for the choice of a second-order coefficient for the volume fraction ($\phi^2$-coefficient) is proposed and then applied as a monodisperse reference equation in the course of polydisperse modeling. By applying the polydisperse viscosity model to two different particle size distributions (<span style="font-variant:small-caps;">Rosin-Rammler</span> and uniform distribution), the influence of polydispersity on the apparent viscosity is examined. The extension of the model to the case of small size ratios as well as to the inclusion of shear rate effects is left for future work.'
author:
- Aaron Dörr
- Amsini Sadiki
- Amirfarhang Mehdizadeh
title: A discrete model for the apparent viscosity of polydisperse suspensions including maximum packing fraction
---

Introduction {#chap:Grundlagen}
============

Multiphase flow is a very important yet not fully explored field of research, as the variety of unsolved questions shows. In both nature and technology, multiphase flow is the rule rather than the exception. The field of applications includes sprays, bubbly flows, process and environmental engineering, combustion, the rheology of blood and suspension multiphase systems as well as electro- and magnetorheological fluids. At this point in time, researchers agree neither on the cause of various effects nor on their theoretical description. In this paper, focus is put on suspensions.
Depending on the length scale of observation as well as on the effects to be described, different approaches may be chosen. On the scale of individual particles in a microscopic approach, besides the strongly restricted possibilities for exact calculation [@1906_Einstein], <span style="font-variant:small-caps;">Stokes</span>ian dynamics ([@1989_Russel; @1993_Brady]) and Lattice-<span style="font-variant:small-caps;">Boltzmann</span> methods ([@2005_Hyvaeluoma; @2005_Kehrwald]) are employed, for instance. These methods provide a means to investigate the mechanisms occurring in suspensions and to explain the origin of macroscopically observable effects. However, resolving individual particles requires a large computational effort and is therefore not applicable for engineering purposes. Increasing the observation length scale thus leads to a reduced computational effort but also to a loss of information because of coarser sampling. The <span style="font-variant:small-caps;">Euler-Lagrange</span> method uses groups of particles—so-called parcels—to represent the particle phase within the fluid carrier phase, whereas the <span style="font-variant:small-caps;">Euler-Euler</span> method considers both phases as interacting continuous media (see e.g. [@2005_Chrigui]). Both methods are frequently used to investigate transport, dispersion and reaction processes in dilute and dense suspension systems. If a purely macroscopic description of the suspension is sufficient, one may model certain flow parameters of the suspension as a whole. In that case only the volume fraction of the particle phase, for instance, has to be determined during the computation to provide a basis for the calculation of macroscopic suspension properties such as the apparent viscosity. Here, micromechanics models ([@2004_Pabst; @2002_Torquato]) are well suited, especially the notion of construction processes employed in the present work.
In the following, we intend to describe the apparent viscosity as well as the maximum packing fraction of disperse systems by means of the volume fractions of the particle phase. In particular, we consider polydisperse suspensions, as they are the most general case of dispersions. The apparent viscosity is a macroscopic quantity resulting from the presence of the particles. This quantity was first described by [@1906_Einstein] for the dilute limit, that is, for small volume fractions of the particle phase. Later, various attempts to extend the validity of the viscosity relation were made, as reported in ([@2004_Pabst; @2009_Mendoza]). Taking into account that the apparent viscosity is a macroscopic quantity, individual particles are not considered; rather, the particle size is represented by particle size classes. The suspension is assumed to be built up from a finite number of such classes in the course of a construction process that allows for the calculation of the macroscopic suspension properties and thus needs to be described in detail. Since suspensions in general show non-Newtonian behavior, the influence of the shear rate on the apparent viscosity will be modeled in a future study. The present paper is organized as follows. In section \[sec:basics\] some basics on apparent viscosity are provided along with an overview of recent work. Section \[chap:viskogleichung\] is dedicated to a generalization of the viscosity correlation for monodisperse suspensions. In section \[chap:Grundmodell\] a construction process approach used to describe the polydisperse suspension viscosity by monodisperse viscosity correlations is presented. Accordingly, a model for the maximum packing fraction of polydisperse systems is outlined. The resulting model is then applied to two different particle size distributions, demonstrating the effect of polydispersity on the apparent viscosity. Section \[chap:conclusions\] is devoted to conclusions.
Basics of apparent viscosity and review of recent works {#sec:basics}
=======================================================

Following the common quasi-single-phase formulation, we assume the rheological behavior of suspensions to be described by a stress tensor of the form [@2004_Pabst] $$\Tij = \Tij(\rho,T,\Sij) \label{eq:allgT}$$ where $T$ and $\Sij$ denote the temperature and the symmetric part of the velocity gradient tensor $\Sij=\frac{1}{2}\left[\nabla\ui + (\nabla\ui)^T\right]$, respectively. We confine ourselves to the case of incompressible flow with a constant deformation history (so-called viscometric flow, see [@2000_Boehme]). Within the framework of continuum mechanics ([@1977_Truesdell; @2008_Irgens]) these presumptions are met by the constitutive relation of the so-called generalized <span style="font-variant:small-caps;">Newton</span>ian fluid $$\Tij = -p \mathbf{I} +2\eta(T,\gpunkt)\,\Sij \label{eq:Newtonallg2}$$ with the generalized shear rate $$\gpunkt {\mathrel{\mathop:}=}\sqrt{2\,{\mathrm{tr}\,}\Sij^2} \label{eq:gpunktallg}$$ which is equivalent to the second invariant of the symmetric part of the velocity gradient tensor $\Sij$. The quantity $\eta$ expresses the apparent viscosity to be determined. Throughout the present work, we will disregard the existence of single particles and instead represent the particles summarily by the properties of so-called *particle size classes*. The presence of particles within the flow increases the viscous dissipation compared with the pure fluid phase with viscosity $\eta_0$, which leads to the observability of the apparent viscosity $\eta$.
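The generalized shear rate $\gpunkt=\sqrt{2\,\mathrm{tr}\,\Sij^2}$ can be evaluated numerically; a small sketch using the conventional factor $\frac{1}{2}$ in the symmetric part of the velocity gradient. For a simple shear flow $u_x=\dot\gamma\,y$ it reduces to the imposed shear rate:

```python
import numpy as np

def generalized_shear_rate(grad_u):
    """gamma_dot = sqrt(2 tr(S^2)) with S = (grad u + grad u^T) / 2."""
    S = 0.5 * (grad_u + grad_u.T)
    return float(np.sqrt(2.0 * np.trace(S @ S)))
```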
In order to isolate the influence of the particle phase on the apparent viscosity, one defines the *relative viscosity* $\eta_r$ as $$\eta_r{\mathrel{\mathop:}=}\frac{\eta}{\eta_0} \label{eq:defetar}$$ The apparent viscosity is mainly dependent on the *volume fraction* $\phi$ $$\phi{\mathrel{\mathop:}=}\frac{V_{\text{particle}}}{V_{\text{fluid}}+V_{\text{particle}}} \label{eq:defphi}$$ of the particle phase, where $V_{\text{particle}}$ and $V_{\text{fluid}}$ denote the volumes of the particle and fluid phase, respectively. The volume fraction is sometimes called packing fraction as well. From experiments [@1997_Gondret] it is well known that the relative viscosity increases monotonically with increasing volume fraction and exhibits a singular behavior at a value $\phi<1$. The point where the relative viscosity diverges is commonly denoted as the *maximum packing fraction* $\phi_c$. By experiment, it has been found that for monodisperse suspensions $\phi_c$ coincides with the so-called *random close packing* state of the suspension, which equals $$\phi_c\approx0.64 \label{eq:phic064}$$ for spheres (see [@2004_Pabst]). A value for the maximum packing fraction of course also exists in the case of polydisperse systems but in general differs from the monodisperse case. We will propose a model for the polydisperse maximum packing fraction in [section \[sec:Packungsdichte\]]{}. Suspensions having a fixed particle size distribution but a variable volume fraction of the particle phase behave qualitatively similarly to the monodisperse suspension described above. The quantitative changes concern the maximum packing fraction $\phi_c$ and the slope of the curve $\eta_r=\eta_r(\phi)$. These effects have been examined systematically in [@1999_Luckham], see [Figure \[fig:1999\_Luckham\_01\]]{}.
![Dependence of the relative viscosity on the volume fraction and the width of the particle size distribution from the experiment [@1999_Luckham]: narrow (ND), moderately broad (BD1) and very broad (BD2) distributions[]{data-label="fig:1999_Luckham_01"}](1999_Luckham_01.pdf)

Therein the relative viscosity is depicted as a function of the volume fraction for three distributions of various widths. Two trends can be observed. On the one hand, the viscosity of a polydisperse system with a broad size distribution always lies below the viscosity of a system with a narrow distribution, and thus below the viscosity of a monodisperse system as well (see also [@1997_Gondret; @1997_Greenwood; @2001_He]). On the other hand, the maximum packing fraction increases with increasing polydispersity, that is, with the broadening of the particle size distribution. As already mentioned, the fundamental work on the apparent viscosity of disperse systems has been written by <span style="font-variant:small-caps;">Einstein</span> (see [@1906_Einstein], correction [@1911_Einstein]). Therein, the creeping-flow equation $$\eta\,\Delta\ui=\nabla p \label{eq:Stokes}$$ is solved in a three-dimensional dilatational flow around a spherical particle at rest. Afterwards, the solution is transferred to the case of a suspension with a finite number of particles (volume fraction $\phi$). The dissipation change due to the presence of the particles leads to the well-known relation $$\eta_r=1+2.5\phi \label{eq:Einstein}$$ Though fluid inertia is neglected in the equation , the [equation ]{} serves as an exact limit for dilute suspensions, that is for $\phi\to0$. The relation is commonly considered as the first-order series expansion of every correlation between viscosity and volume fraction [@2004_Pabst].
We denote the $\phi$-coefficient as *first-order intrinsic viscosity* ${\left[\eta_{1}\right]}$ and analogously ${\left[\eta_{m}\right]}$ as *$m$th-order intrinsic viscosity*, so that we can write down the series expansion of the relative viscosity in the form $$\begin{split} \eta_r & \approx1+{\left[\eta_{1}\right]}\phi+{\left[\eta_{2}\right]}\phi^2+\dotsb+{\left[\eta_{n}\right]}\phi^n+\dotsb \\ &=1+\sum_{m=1}^{\infty}{\left[\eta_{m}\right]}\phi^m \\ \end{split} \label{eq:nEntwicklung}$$ This representation will be used later in this work. In the literature a great number of equations of the form $$\eta_r=\eta_r(\phi) \label{eq:H}$$ are provided. Tables \[tab:etaint2\] and \[tab:Korrelationen\] summarize the most important correlations without considering their derivation or the underlying models. Most of the equations are empirical or semi-empirical; only few are exact. The reason for the existence of such a great number of different correlations is that relations of the form  do not cover the entire parameter space governing the physical problem. In [@2004_Pabst] this is expressed by the formulation $$\eta_r=\eta_r(\phi,\,\text{all other details of microstructure}) \label{eq:etarpabst}$$ Clearly the parameter space must be confined to allow for useful modeling, and thus the range of validity has to be restricted a priori. In the present context, the correlations , denoted as *viscosity relations* in the following, can be classified in two groups: #### Series expansions with respect to the volume fraction for $\boldsymbol{\phi\ll1}$ according to the equation (\[eq:nEntwicklung\]):\ While the relation  with the intrinsic viscosity ${\left[\eta_{1}\right]}=2.5$ is commonly accepted as the first-order series expansion of the relative viscosity $\eta_r$ of suspensions with spherical particles, there is no unique value of ${\left[\eta_{2}\right]}$ because this parameter depends strongly on the individual case.
In [Table \[tab:etaint2\]]{} some examples taken from the literature are listed.

  Author(s)                     Reference         Annotations                                                                           ${\left[\eta_{2}\right]}$
  ----------------------------- ----------------- ------------------------------------------------------------------------------------- ---------------------------
  Batchelor & Green (1972)      [@2009_Mendoza]   Brownian motion neglected, random spatial particle distribution                       5.2
  Batchelor (1977)              [@2009_Mendoza]   Brownian motion included, random spatial particle distribution                        6.17
  Bedeaux et al. (1977)         [@2009_Mendoza]   formalism in wave number space                                                        4.8
  Cichocki & Felderhof (1991)   [@2009_Mendoza]   Brownian motion neglected, Smoluchowski equations                                     5.00
  Cichocki & Felderhof (1991)   [@2009_Mendoza]   Brownian motion included, Smoluchowski equations                                      5.91
  Kim & Karrila (1991)          [@2003_Cheng]     Stokesian dynamics similar to Batchelor (1972)                                        6.95
  Krieger & Dougherty (1959)    [@1959_Krieger]   second-order Taylor coefficient of the relation (\[eq:krieger1\]) for $\phi_c=0.64$   5.08
  Mooney (1951)                 [@2004_Pabst]     second-order Taylor coefficient of the relation (\[eq:Mooney\]) for $\phi_c=0.64$     7.03
  ----------------------------- ----------------- ------------------------------------------------------------------------------------- ---------------------------

#### Correlations for the entire range of $\phi$:\ The correlations listed in [Table \[tab:Korrelationen\]]{} are intended to be valid over the entire range of $\phi$.
Krieger & Dougherty (1959) [@1959_Krieger]:
$$\eta_r= \left(1-\frac{\phi}{\phi_c}\right)^{-{\left[\eta_{1}\right]}\phi_c} \label{eq:krieger1}$$

Mooney (1951) [@2001_He]:
$$\eta_r=\mathrm{exp}\left(\frac{{\left[\eta_{1}\right]}\phi}{1-\nicefrac{\phi}{\phi_c}}\right) \label{eq:Mooney}$$

Frankel & Acrivos (1967) [@2001_He]:
$$\eta_r=\frac{9}{8}\frac{\left(\frac{\phi}{\phi_c}\right)^{\nicefrac{1}{3}}}{1-\left(\frac{\phi}{\phi_c}\right)^{\nicefrac{1}{3}}} \label{eq:Frankel}$$

Eilers (1941) [@2004_Pabst]:
$$\eta_r= \left[1+\frac{1}{2}{\left[\eta_{1}\right]}\left(\frac{\phi}{1-\nicefrac{\phi}{\phi_c}}\right)\right]^2 \label{eq:Eilers}$$

Quemada (1977) [@1977_Quemada]:
$$\eta_r=\left(1-\frac{\phi}{\phi_c}\right)^{-2} \label{eq:Quemada}$$

Robinson (1949) [@2004_Pabst]:
$$\eta_r=1+{\left[\eta_{1}\right]}\left(\frac{\phi}{1-\nicefrac{\phi}{\phi_c}}\right) \label{eq:Robinson}$$

They all exhibit a singular behavior at the point $\phi=\phi_c$, that is, when the maximum packing fraction is reached. For $\phi\to0$ the equations by Frankel & Acrivos and by Quemada do not reduce to the relation .
Therefore, in [@2004_Pabst] a correction to the <span style="font-variant:small-caps;">Frankel & Acrivos</span> relation  is proposed in the form $$\begin{split} \eta_r=1&+\frac{9}{8}\phi_c\frac{\left(\frac{\phi}{\phi_c}\right)^{\nicefrac{1}{3}}}{1-\left(\frac{\phi}{\phi_c}\right)^{\nicefrac{1}{3}}}\cdot\\ &\cdot\left[\frac{\phi}{\phi_c}+\frac{20}{9}\left(1-\frac{\phi}{\phi_c}\right)\left(\frac{\phi}{\phi_c}\right)^{\nicefrac{2}{3}}\right] \end{split}$$ [Figure \[fig:Korrelationen\]]{} shows the relations  to  from [Table \[tab:Korrelationen\]]{} for the parameters $\phi_c=0.64$ and ${\left[\eta_{1}\right]}=2.5$.

![Correlations between relative viscosity $\eta_r$ and volume fraction $\phi$ for $0\leq\phi<\phi_c$ according to the equations (\[eq:krieger1\]) to (\[eq:Robinson\]) with $\phi_c=0.64$ and ${\left[\eta_{1}\right]}=2.5$[]{data-label="fig:Korrelationen"}](Korrelationen.pdf)

In the next section, an attempt to generalize the viscosity correlation for monodisperse suspensions is presented.

Generalization of the viscosity correlation for monodisperse suspensions {#chap:viskogleichung}
========================================================================

As shown in the previous section, there are two main types of viscosity correlations, namely polynomial and closed correlations. Polynomial correlations are well suited for describing the low-concentration range but do not show divergence for $\phi\to\phi_c$. Closed correlations diverge for $\phi\to\phi_c$ and therefore seem to be suited for the whole concentration range, but cannot show proper asymptotic behavior for $\phi\to0$ because the second-order <span style="font-variant:small-caps;">Taylor</span> coefficient ${\left[\eta_{2}\right]}$ is determined a priori by the viscosity relation. So in the following section we combine the low-concentration behavior of polynomial correlations with the high-concentration behavior of a special closed viscosity correlation based on the well-known <span style="font-variant:small-caps;">Krieger & Dougherty</span> equation.
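As a quick numerical consistency check of the quoted second-order coefficients, the values ${\left[\eta_{2}\right]}=5.08$ and $7.03$ listed in Table \[tab:etaint2\] for the Krieger–Dougherty and Mooney relations can be recovered from the closed forms by finite differences (a sketch with $\phi_c=0.64$, ${\left[\eta_{1}\right]}=2.5$; the step sizes are arbitrary choices):

```python
import math

ETA1, PHIC = 2.5, 0.64

def krieger_dougherty(phi):
    return (1.0 - phi / PHIC) ** (-ETA1 * PHIC)

def mooney(phi):
    return math.exp(ETA1 * phi / (1.0 - phi / PHIC))

def eta2_of(eta_r, phi0=1e-3, h=1e-4):
    """[eta_2] = eta_r''(0) / 2, approximated by a central second
    difference evaluated just above phi = 0."""
    return (eta_r(phi0 + h) - 2.0 * eta_r(phi0) + eta_r(phi0 - h)) / (2.0 * h * h)
```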
Derivation of the viscosity correlation --------------------------------------- We derive a correlation that in addition to the properties of the closed correlations in [Table \[tab:Korrelationen\]]{} allows for choosing the second-order coefficient ${\left[\eta_{2}\right]}$. This value depends on particle interactions and may differ considerably from case to case. So the modified viscosity correlation shall include the three parameters ${\left[\eta_{1}\right]}$, ${\left[\eta_{2}\right]}$ and $\phi_c$. Starting from the [equation ]{} $$\eta_r= \left(1-\frac{\phi}{\phi_c}\right)^{-{\left[\eta_{1}\right]}\phi_c} \label{eq:krieger2}$$ we assume the modified correlation to be of the form $$\eta_r=A+B\left(1-\frac{\phi}{\phi_c}\right)^{-C} \label{eq:viskansatz}$$ This ansatz has to fulfill the conditions $$\begin{aligned} \eta_r(\phi=0)&=1\quad \text{and}\label{eq:RB1visko}\\\eta_r(\phi\to0)&=1+{\left[\eta_{1}\right]}\phi+{\left[\eta_{2}\right]}\phi^2\label{eq:RB2visko}\end{aligned}$$ Condition  implies that $$A+B=1 \label{eq:bed1}$$ The second condition  is equivalent to $$\begin{aligned} \left.\frac{{\mathrm{d}}\eta_r}{{\mathrm{d}}\phi}\right|_{\phi=0} &= {\left[\eta_{1}\right]} \label{eq:abl1}\\[0.3cm]\left.\frac{{\mathrm{d}}^2\eta_r}{{\mathrm{d}}\phi^2}\right|_{\phi=0} &=2{\left[\eta_{2}\right]}\label{eq:abl2}\end{aligned}$$ In order to ensure the fulfillment of this condition we calculate the first two derivatives of the ansatz  with respect to $\phi$ and additionally use the [equation ]{}. We obtain $$\begin{aligned} \frac{{\mathrm{d}}\eta_r}{{\mathrm{d}}\phi} &= \frac{BC}{\phi_c}\left(1-\frac{\phi}{\phi_c}\right)^{-C-1}\label{eq:RB2bvisko}\\[0.3cm]\frac{{\mathrm{d}}^2\eta_r}{{\mathrm{d}}\phi^2} &=\left(C+1\right)\frac{BC}{\phi_c^2}\left(1-\frac{\phi}{\phi_c}\right)^{-C-2}\label{eq:RB2cvisko}\end{aligned}$$ and evaluate the derivatives at the point $\phi=0$. 
It follows from the conditions  and  $$\begin{aligned} A&=1-\frac{\phi_c{\left[\eta_{1}\right]}^2}{2\phi_c{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}\label{eq:a}\\[0.3cm]B&=\frac{\phi_c{\left[\eta_{1}\right]}^2}{2\phi_c{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}\label{eq:b}\\[0.3cm] C&=\frac{2\phi_c{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}{{\left[\eta_{1}\right]}}\label{eq:c}\end{aligned}$$ so the final form of the modified viscosity equation according to the ansatz  is $$\boxed{ \begin{aligned} \eta_r(\phi)=1&-\frac{\phi_c{\left[\eta_{1}\right]}^2}{2\phi_c{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}\cdot\\ &\cdot\left[1-\left(1-\frac{\phi}{\phi_c}\right)^{\frac{2\phi_c{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}{{\left[\eta_{1}\right]}}}\right] \end{aligned} } \label{eq:viskogleichung}$$

Special cases of the viscosity correlation
------------------------------------------

Since [equation ]{} has been derived from formal considerations rather than from a physical point of view, it has to be understood as an empirical equation to be fitted to experimental data. Because [equation ]{} reduces to the [equation ]{} for the choice $${\left[\eta_{2}\right]}=\frac{\left(1+{\left[\eta_{1}\right]}\phi_c\right){\left[\eta_{1}\right]}}{2\phi_c} \label{eq:etaint2krieger}$$ it is of course superior to the latter for fitting purposes. As a consequence, we do not learn anything from fitting [equation ]{} to experimental results for testing. Instead we examine whether the closed correlations in [Table \[tab:Korrelationen\]]{} can be reproduced by the modified viscosity equation. From [Figure \[fig:Korrelationen\]]{} we chose two extreme examples, namely the relations by <span style="font-variant:small-caps;">Mooney</span> and <span style="font-variant:small-caps;">Robinson</span> (equations  and ). Figure \[fig:Vergleich\_Mooney\_Robinson\] shows the results.
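The modified correlation can be checked numerically against its defining conditions; in the sketch below $A$, $B$, $C$ are taken directly from the requirements $\eta_r(0)=1$, $\eta_r'(0)={\left[\eta_{1}\right]}$, $\eta_r''(0)=2{\left[\eta_{2}\right]}$, and the sample value ${\left[\eta_{2}\right]}=5.5$ is arbitrary:

```python
def eta_r_mod(phi, eta1=2.5, eta2=5.5, phic=0.64):
    """Ansatz A + B (1 - phi/phic)^(-C), with the coefficients fixed by
    eta_r(0) = 1, eta_r'(0) = eta1, eta_r''(0) = 2 * eta2."""
    C = (2.0 * phic * eta2 - eta1) / eta1   # from (C + 1) * eta1 / phic = 2 * eta2
    B = eta1 * phic / C                     # from B * C / phic = eta1
    A = 1.0 - B                             # from eta_r(0) = 1
    return A + B * (1.0 - phi / phic) ** (-C)
```

With the choice of ${\left[\eta_{2}\right]}$ from equation (\[eq:etaint2krieger\]) the ansatz collapses to $B=1$, $A=0$, $C={\left[\eta_{1}\right]}\phi_c$, i.e. exactly the Krieger–Dougherty form (\[eq:krieger1\]).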
![Comparison of equation (\[eq:viskogleichung\]) with the relations by Mooney (\[eq:Mooney\]) and Robinson (\[eq:Robinson\]); ${\left[\eta_{1}\right]}=2.5$[]{data-label="fig:Vergleich_Mooney_Robinson"}](Vergleich_Mooney_Robinson.pdf) We find good agreement in both cases. However, it was necessary to decrease the value of the maximum packing fraction $\phi_c$ in order to reproduce the behavior of the Mooney equation, because of its exponential variation compared with the algebraic variation of the [equation ]{}. The remaining correlations in [Table \[tab:Korrelationen\]]{} vary algebraically and may therefore be reproduced by the [equation ]{} with high accuracy. Because of these reproduction properties we will in the following consider the modified viscosity [equation ]{} as a general viscosity correlation for monodisperse suspensions. Development of a viscosity correlation for polydisperse suspensions {#chap:Grundmodell} =================================================================== In this section we develop the polydisperse viscosity model based on the notion of a construction process. This approach is first described in detail in [section \[sec:Aufbauprozess\]]{}. Subsequently, in [section \[sec:Diskretes\_Modell\]]{}, the construction process is applied to the determination of relative viscosity. Then, a model for the maximum packing fraction of polydisperse suspensions is developed in [section \[sec:Packungsdichte\]]{} to complete the viscosity model. A graphical scheme provided in Figure \[fig:Ablauf\] may serve as a guide for the reader during the calculation. It is important to note that the basic model developed in this section can only be applied to polydisperse suspensions with large diameter ratios of consecutive particle size classes in the construction process. Work on an extension of the model to small particle size ratios is in progress.
Starting point: The differential Bruggeman model {#sec:Bruggeman} ------------------------------------------------ The differential Bruggeman model (see also [@2009_Hsueh] and, in more detail, [@2002_Torquato]) makes it possible to derive a closed viscosity relation for the full concentration range starting from a first-order relation of the form $\eta_r=1+{\left[\eta_{1}\right]}\phi$. The model is also known as *Differential Effective Medium approach (DEM)*. A generalization of the DEM approach is presented in [@1985_Norris]. The model is based on the notion that an infinitesimal volume fraction of particles $\phi^*$ is added to an existing suspension with effective viscosity $\eta$ and volume fraction $\phi$. In the course of this addition it is assumed that the existing suspension can be treated as a homogeneous medium. This can only be valid if the newly added particles have a large diameter compared with the particles already present in the suspension. We now ask for the change in effective viscosity due to the newly added volume fraction $\phi^*$. The infinitesimal volume fraction of the newly added particles in the resulting suspension is, according to [@2009_Hsueh] (compare the later relations  and ) $$\phi_{P} = \frac{\phi^*}{1 + \phi^*} = \frac{{\mathrm{d}}\phi}{1-\phi} \label{eq:diffphistern}$$ Because of the small size of the volume fraction $\phi_{P}$ we may use the first-order relation to describe the change in effective viscosity by $$\eta+{\mathrm{d}}\eta = \eta\left(1+{\left[\eta_{1}\right]}\phi_{P}\right) \label{eq:etaplusdeta}$$ By inserting the [equation ]{} into the relation  we obtain $$\eta+{\mathrm{d}}\eta = \eta\left(1+{\left[\eta_{1}\right]}\,\frac{{\mathrm{d}}\phi}{1-\phi}\right) \label{eq:etaplusdetaeingesetzt}$$ Simplification and separation of variables yield $$\frac{{\mathrm{d}}\eta}{{\left[\eta_{1}\right]}\eta}=\frac{{\mathrm{d}}\phi}{1-\phi} \label{eq:detasterndphi}$$ We integrate the [equation ]{} under the initial condition $\eta(\phi=0) = \eta_0$ for the pure fluid and find $$\eta = \eta_0
\left(1-\phi\right)^{-{\left[\eta_{1}\right]}} \label{eq:roscoeohnephic}$$ The equation  is known as the Roscoe equation. Since the model requires a large diameter ratio of consecutively added particle classes, the suspension must consist of a solid phase that can be divided into particle size classes with large diameter ratios. This structure is called hierarchical, see also [@1985_Norris] and [@2002_Torquato]. So the differential model can be applied to polydisperse suspensions. Another important assumption of the differential model is the validity of [equation ]{}. The volume fraction of newly added spheres in [equation ]{} has to be small enough for the relation to be valid. We note that the volume fractions $\phi^*$ and accordingly $\phi_{P}$ may be finite in principle. However, by introducing the infinitesimal increment ${\mathrm{d}}\eta$ into the differential model the volume fractions $\phi^*$ and $\phi_{P}$ are required to be infinitesimal. The advantage of this limitation is the possibility to derive the closed [equation ]{}. Assumptions ----------- In the following we assume the suspension to consist of a solid particle phase suspended in a fluid carrier phase with constant viscosity $\eta_0$. The particle phase consists of spheres with different diameters $d_i$ that can be categorized in a finite number $n$ of size or diameter classes. So with $i$ as the index of the size class we have $i=1\dotsc n$. The size classes shall be sorted by diameter in ascending order, so that $d_{i}<d_{i+1}$. The ratio of two consecutive diameters $$u_i{\mathrel{\mathop:}=}\frac{d_{i+1}}{d_i} \label{eq:uidef1}$$ should be larger than 7 according to [@2010_Brouwers] so that the existing suspension behaves like a homogeneous medium towards the newly added spheres. In the completed suspension resulting from the construction process the $i$th size class occupies a volume $V_i$ while the fluid phase occupies the volume $V_f$.
So the total volume $\Vges$ of the suspension is given by $$\Vges = V_f + \sum_{m=1}^n V_m \label{eq:Vges}$$ We assume that the suspension can be entirely described by volume fractions, so that no higher moments of the so-called indicator function (see [@2002_Torquato]) have to be considered. The indicator function is defined on the entire space occupied by the suspension; it is unity at all points belonging to phase 1 (e.g. the solid phase) and zero at all remaining points (e.g. the fluid phase). The function is commonly used for a statistical description of phase interactions by means of its moments. The lowest-order moment is given by the particle phase volume fraction. Confining ourselves to a description based on volume fractions corresponds to the assumption of an isotropic and homogeneous suspension throughout the control volume. As already outlined in [section \[sec:basics\]]{}, the total volume fraction is defined by $$\phi=\frac{V_{\text{particle}}}{V_{\text{fluid}}+V_{\text{particle}}}=\frac{\sum_{m=1}^n V_m}{\Vges} \label{eq:phivol}$$ Analogously it is useful to define volume fractions of single size classes, both during the construction process and in the completed suspension. Construction process {#sec:Aufbauprozess} -------------------- The models for the apparent viscosity and the maximum packing fraction that are developed in the following sections are based on the notion that the suspension is constructed by successive addition of new size classes. We call this process the *construction process*. In the following we generalize the considerations in [@2004_Pabst] and especially [@1985_Norris].
There are two possible approaches for the construction process shown in [Figure \[fig:Aufbauprozess2\]]{}: Variable total volume : In this case the volume of the fluid phase $V_f$ is held constant during the construction process, so that the total volume of the suspension increases with each step until the suspension occupies the final volume $\Vges$. The construction process thus only consists of additions of size classes. Constant total volume : In order to keep the total volume of the suspension constant throughout the construction process, it is necessary to extract a suspension volume with a size equal to the added particle volume in each construction step (see also [@1985_Norris]). So the extracted volume represents the composition of the existing suspension.\[Verschmierung\] Though we will show in a later section that both approaches are equivalent, we will choose the constant-volume approach for reasons of simplicity. ![Scheme of the construction process for a bidisperse suspension ($n=2$) at variable (above) and constant volume (below) including homogenization[]{data-label="fig:Aufbauprozess2"}](Aufbauprozess2){width="\columnwidth"} The homogenization step in [Figure \[fig:Aufbauprozess2\]]{} symbolizes the fact that the existing suspension acts as a homogeneous medium towards the newly added particles because of the large diameter ratio. ### Construction process at variable total volume In the case of variable total suspension volume the added volumes of the respective size classes are identical to the volumes of the size classes in the completed suspension at the end of the construction process. In contrast, this is not the case when the total volume is constant. First we show explicitly the addition of the first two size classes with volumes $V_1$ and $V_2$ to the fluid phase with volume $V_f$. The total volume fraction of the particle phase after the $i$th construction step $\phi_i$ is given by (see e.g.
[@2004_Pabst]) $$\begin{split} \phi_0 &= \frac{0}{V_f+0}=0 \\ \rightarrow\,\phi_1 &= \frac{V_1}{V_f+V_1} \\ \rightarrow\,\phi_2 &= \frac{V_1+V_2}{V_f+V_1+V_2} \end{split} \label{eq:aufbauvariabel}$$ A general $(i+1)$th construction step can thus be written in the form $$\phi_i = \frac{\sum_{m=1}^{i}V_m}{V_f+\sum_{m=1}^{i}V_m} \,\rightarrow\, \phi_{i+1} = \frac{\sum_{m=1}^{i+1}V_m}{V_f+\sum_{m=1}^{i+1}V_m} \label{eq:aufbauvariabelallg}$$ If we relate the added volume $V_{i+1}$ to the total suspension volume before the addition of $V_{i+1}$ and call this ratio $\phi_{i+1}^*$ $$\phi_{i+1}^* {\mathrel{\mathop:}=}\frac{V_{i+1}}{V_f+\sum_{m=1}^{i}V_m} \label{eq:phiisternvar}$$ we can modify the expression for the total volume fraction of the particle phase $\phi_{i+1}$ in [equation ]{} so that $$\phi_{i+1} = \frac{\phi_i + \phi_{i+1}^*}{1 + \phi_{i+1}^*} \label{eq:phiiphisternvariabel}$$ The concentration of the $(i+1)$th size class after the $(i+1)$th construction step is consequently given by $$\phi_{P,i+1} {\mathrel{\mathop:}=}\frac{V_{i+1}}{V_f+\sum_{m=1}^{i+1}V_m} = \frac{\phi_{i+1}^*}{1 + \phi_{i+1}^*} \label{eq:phipplus1variabel}$$ The volume fraction $\phi_{P,i+1}$ in [equation ]{} will be important for the calculation of the apparent viscosity because it serves as an argument in the viscosity relation (compare [equation ]{}). 
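The variable-volume recursion can be checked directly against the volume-based definition with a short numerical sketch; the fluid volume and the class volumes below are illustrative values, not taken from the text.

```python
# Variable-volume construction process: check the recursion for phi_{i+1}
# and the partial fraction phi_{P,i+1} against the direct definition.
V_f = 1.0                    # illustrative fluid volume
V = [0.2, 0.5, 0.8]          # illustrative class volumes V_1..V_n

phi = 0.0                    # phi_0 = 0
for i in range(1, len(V) + 1):
    V_before = V_f + sum(V[:i - 1])           # suspension volume before step i
    phi_star = V[i - 1] / V_before            # phi_i*: added volume over existing volume
    phi_P = phi_star / (1 + phi_star)         # fraction of the new class after the step
    phi_new = (phi + phi_star) / (1 + phi_star)   # recursion for the total fraction
    # the delta-phi form: phi_P = (phi_{i+1} - phi_i) / (1 - phi_i)
    assert abs(phi_P - (phi_new - phi) / (1 - phi)) < 1e-12
    phi = phi_new
    # cross-check against the direct volume-based definition
    assert abs(phi - sum(V[:i]) / (V_f + sum(V[:i]))) < 1e-12
```

Every step reproduces the direct definition to machine precision, confirming that the recursive and explicit descriptions coincide.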
For a different formulation we denote the change in total volume fraction during the $(i+1)$th construction step as $$\delta\phi_{i+1}{\mathrel{\mathop:}=}\phi_{i+1}-\phi_i \label{eq:deltaphiallg}$$ Using [equation ]{} we first obtain $$\phi_i+\delta\phi_{i+1} = \frac{\phi_i + \phi_{i+1}^*}{1 + \phi_{i+1}^*}$$ and finally under consideration of definition  $$\phi_{P,i+1} = \frac{\delta\phi_{i+1}}{1 - \phi_i} \label{eq:deltaphivariabel}$$ ### Construction process at constant total volume {#sec:Aufbaukonst} If the total suspension volume is held constant throughout the construction process, in every construction step we have to extract as much volume as is added with the new size class. The extracted volume represents, as already explained on [page ]{}, the composition of the existing suspension and therefore contains particles of smaller size classes (see also [@1985_Norris]). This implies that the volume of the respective size class added in each construction step has to be larger than the volume of this size class contained in the completed suspension. We notice that the last size class (the largest particles) is an exception and is therefore added with its final volume. The volume added in the $i$th construction step is thus called $V_i^*$ in order to be distinguished from the volume $V_i$ of the $i$th size class in the completed suspension. The constant total volume is still called $\Vges$. Now we consider the construction process in detail. 
The first addition implies a simple change in the total volume fraction of the particle phase $\phi_i$: $$\phi_0 = \frac{0}{\Vges}=0 \quad\longrightarrow \quad\phi_1 = \frac{V_1^*}{\Vges} \label{eq:aufbaukonst1}$$ The more complex second construction step is given by $$\phi_1 = \frac{V_1^*}{\Vges} \quad\longrightarrow\quad \phi_2 = \frac{\left(V_1^*-V_2^*\frac{V_1^*}{\Vges}\right)+V_2^*}{\Vges} \label{eq:aufbaukonst2}$$ In the [equation ]{} the term in brackets represents the volume of the first size class still present after the second construction step. Therein the term $V_2^*V_1^*/\Vges$ describes the loss of volume of the first size class due to the necessary extraction of volume. If we introduce the notation $$\phi_k^* = \frac{V_k^*}{\Vges} \label{eq:phikstern}$$ we may, analogously to the [equation ]{}, express the concentration of the $(i+1)$th size class in the existing suspension by $$\phi_{P,i+1} = \phi_{i+1}^* \label{eq:phipplus1konstant}$$ Unlike in the case of variable total volume we obviously do not need to distinguish between the volume fractions $\phi_{P,i+1}$ and $\phi_{i+1}^*$. By rearranging the expressions in the [equation ]{} we find $$\begin{aligned} \phi_1 = \underbrace{\phi_1^*}_{{=\mathrel{\mathop:}}\dphi_1^1}\quad\text{and} \label{eq:aufbaukonst2dim1}\\ \phi_2 = \underbrace{\phi_1^*(1-\phi_2^*)}_{{=\mathrel{\mathop:}}\dphi_1^2}+\underbrace{\phi_2^*}_{{=\mathrel{\mathop:}}\dphi_2^2} \label{eq:aufbaukonst2dim2}\end{aligned}$$ In the equations  and  we have introduced a notation for the volume fractions of the individual size classes during the construction process. The representation $\dphi_k^i$ refers to the volume fraction of the $k$th size class after the $i$th construction step, that is after the addition of the $i$th size class. Thus $i$ is an index, not an exponent. This should cause no confusion because the volume fraction will always occur linearly in all of the following expressions.
It can easily be shown, using the equations  and , that the total volume fraction after the third construction step is given by $$\label{eq:phi3konstant} \begin{split} \phi_3 &= \phi_2(1-\phi_3^*)+\phi_3^* \\ &= [\phi_1^*(1-\phi_2^*)+\phi_2^*](1-\phi_3^*)+\phi_3^* \\ &= \underbrace{\phi_1^*{(1-\phi_{2}^*)}{(1-\phi_{3}^*)}}_{{=\mathrel{\mathop:}}\dphi_1^3} + \underbrace{\phi_2^*{(1-\phi_{3}^*)}}_{{=\mathrel{\mathop:}}\dphi_2^3}+\underbrace{\phi_3^*}_{{=\mathrel{\mathop:}}\dphi_3^3} \end{split}$$ The underlying pattern can be recognized clearly and so we may generalize intuitively: $$\phi_{i+1} = \phi_i{(1-\phi_{i+1}^*)}+\phi_{i+1}^* \label{eq:phiiplus1allg}$$ Table \[tab:Aufbau\] schematically outlines the construction process. 0.5ex ---------- ----------- --------------- ------------- -------------- ---------- -------------- $ 1 $ $\phi_1 $ $ \dphi_1^1 $ $2$ $\phi_2$ $\dphi_1^2$ $\dphi_2^2$ $3$ $\phi_3$ $\dphi_1^3$ $\dphi_2^3$ $\dphi_3^3$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\ddots$ $n$ $\phi_n$ $\dphi_1^n$ $\dphi_2^n$ $ \dphi_3^n$ $\dotso$ $\dphi_n^n $ ---------- ----------- --------------- ------------- -------------- ---------- -------------- By introducing the difference $\delta\phi_{i+1}$ from the [equation ]{} into the [equation ]{}, we find $$\begin{aligned} \phi_i + \delta\phi_{i+1} &= \phi_i{(1-\phi_{i+1}^*)}+\phi_{i+1}^* \quad\text{and} \\ \phi_{P,i+1}&= \phi_{i+1}^* = \frac{\delta\phi_{i+1}}{1-\phi_i} \label{eq:deltaphikonstant}\end{aligned}$$ which is formally identical with the [equation ]{}. Attention should be paid to the fact that the volume fractions $\phi_{i+1}^*$ used in the cases of variable and constant volume, respectively, are not identical. The proof of the equivalence of the equations  and  is provided in appendix \[sec:Beweisderaequivalenz\] for clarity. From this proof we draw the conclusion that the construction processes at variable and constant total volume are equivalent.
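The constant-volume bookkeeping of Table \[tab:Aufbau\] can be reproduced numerically. The sketch below tracks the per-class fractions $\dphi_k^i$ through each construction step; the added fractions $\phi_k^*$ are illustrative values.

```python
# Constant-total-volume construction process: each step dilutes the earlier
# classes by the factor (1 - phi*) and adds the new class with fraction phi*.
phi_star = [0.2, 0.3, 0.4]    # illustrative phi_1*, phi_2*, phi_3*

phi = 0.0                     # total particle fraction phi_i
dphi = []                     # per-class fractions (one row of Table tab:Aufbau)
for ps in phi_star:
    phi_new = phi * (1 - ps) + ps                       # phi_{i+1} = phi_i(1-phi*) + phi*
    assert abs(ps - (phi_new - phi) / (1 - phi)) < 1e-12  # phi_P = delta_phi / (1 - phi_i)
    dphi = [x * (1 - ps) for x in dphi] + [ps]          # dilution + newly added class
    phi = phi_new
    assert abs(phi - sum(dphi)) < 1e-12                 # total = sum of class fractions
```

After the loop, `dphi` holds the class fractions of the completed suspension and `phi` their sum, matching the row-wise structure of the table.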
The only difference within the framework of our dimensionless representation, which is a representation by volume fractions, lies within the [equation ]{}. We choose the case of constant total volume because of the intuitive meaning of volume fractions originating from the constant volume $\Vges$. Considerations involving volumes may therefore easily be transferred to the notion of volume fractions. In the case of variable total volume, by contrast, the total volume depends on the amount of added volumes. ##### Representation of the quantities by the volume fractions $\boldsymbol{\dphi_k}$\ Up to now, the equations contain quantities influenced by the construction process. However, in an actual calculation we only know the volume fractions $\dphi_k^n$ of the size classes in the completed suspension. It is therefore useful to express the quantities describing the construction process by the composition of the completed suspension. For simplification we define $$\dphi_k {\mathrel{\mathop:}=}\dphi_k^n \label{eq:defdphi}$$ These volume fractions in the completed suspension are given by $$\dphi_k = \frac{V_k}{V_f+\sum_{m=1}^{n}V_m} = \frac{V_k}{\Vges} \label{eq:dphiivol}$$ From the equations  to  we deduce a general expression for $\dphi_k^i$: $$\begin{aligned} {2} \dphi_k^i =& \phi_k^* \prod_{m=k+1}^i {(1-\phi_{m}^*)} &\quad\text{for } k<i \label{eq:dphikhochia}\\ \dphi_k^k =& \phi_k^* &\quad\text{for } k=i \label{eq:dphikhochib}\end{aligned}$$ We recall that there are no volume fractions with $k>i$ (compare [Table \[tab:Aufbau\]]{}). If we succeed in calculating the volume fractions $\phi_m^*$, we are in a position to calculate the volume fractions of the individual size classes $\dphi_k^i$ and the total volume fractions after each step $\phi_i$ from the equations  and . By definition we have $$\phi_i = \sum_{k=1}^i \dphi_k^i \label{eq:summedphi}$$ (consider the [equation ]{} as an example).
Writing down the volume fractions of the last size classes in the completed suspension and using the definition  as well as the equations  and  reveals the possibility to calculate the volume fractions $\phi_k^*$: $$\begin{aligned} \dphi_n &= \phi_n^* \label{eq:dphin1}\\ \dphi_{n-1} &= \phi_{n-1}^*{(1-\phi_{n}^*)}\label{eq:dphin2}\\ \dphi_{n-2} &= \phi_{n-2}^*{(1-\phi_{n-1}^*)}{(1-\phi_{n}^*)}\label{eq:dphin3}\\ \dphi_{n-3} &=\dotso \nonumber\end{aligned}$$ So we find by recursive insertion that $$\begin{aligned} \phi_n^* &= \dphi_n \label{eq:phistern1}\\ \phi_{n-1}^* &= \frac{\dphi_{n-1}}{1-\dphi_n}\label{eq:phistern2}\\ \phi_{n-2}^* &= \frac{\dphi_{n-2}}{1-\dphi_n-\dphi_{n-1}}\label{eq:phistern3}\\ \phi_{n-3}^* &=\dotso \nonumber\end{aligned}$$ This may be generalized in the form $$\boxed{ \phi_{P,k}=\phi_k^* = \frac{\dphi_k}{1-\sum_{m=k+1}^n \dphi_m}} \label{eq:phimstern}$$ We now introduce the abbreviation $$\sigma_k{\mathrel{\mathop:}=}1-\sum_{m=k}^n \dphi_m \label{eq:defsigma}$$ into the [equation ]{} and find $$1-\phi_k^* = \frac{1-\sum_{m=k}^n \dphi_m}{1-\sum_{m=k+1}^n \dphi_m} = \frac{\sigma_{k}}{\sigma_{k+1}} \label{eq:einsminusphistern}$$ Therefore, we may write $$\begin{aligned} \dphi_k^i & =\phi_k^* {(1-\phi_{k+1}^*)}{(1-\phi_{k+2}^*)}\dotsm{(1-\phi_{i}^*)} \nonumber\\ & = \phi_k^* \frac{\sigma_{k+1}}{\sigma_{k+2}} \frac{\sigma_{k+2}}{\sigma_{k+3}} \frac{\sigma_{k+3}}{\sigma_{k+4}} \dotsm \frac{\sigma_{i}}{\sigma_{i+1}} \nonumber \\ & = \phi_k^* \frac{\sigma_{k+1}}{\sigma_{i+1}} \label{eq:dphikhochi2}\end{aligned}$$ instead of the expression . 
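The recursive inversion can be verified by a round trip: starting from illustrative final fractions $\dphi_k$, recover the step-wise additions $\phi_k^*$ and rebuild the suspension.

```python
# Round-trip check of phi_k* = dphi_k / (1 - sum_{m>k} dphi_m);
# the final class fractions dphi are illustrative values.
dphi = [0.084, 0.18, 0.4]
n = len(dphi)

# recover the step-wise additions from the final composition
phi_star = [dphi[k] / (1 - sum(dphi[k + 1:])) for k in range(n)]

# rebuilding the suspension at constant volume reproduces the final fractions
rebuilt = []
for ps in phi_star:
    rebuilt = [x * (1 - ps) for x in rebuilt] + [ps]
assert all(abs(a - b) < 1e-12 for a, b in zip(rebuilt, dphi))
```

The forward construction applied to the recovered $\phi_k^*$ returns exactly the fractions we started from, which is the content of the boxed inversion formula.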
Using the result  and the definition  it follows from the [equation ]{} that $$\boxed{ \dphi_k^i = \frac{\dphi_k}{1-\sum_{m=i+1}^n \dphi_m}} \label{eq:dphikhochi3}$$ Combination of the equations  and  finally yields $$\phi_i = \frac{\sum_{m=1}^i \dphi_m}{1-\sum_{m=i+1}^n \dphi_m} \label{eq:phiikonstant}$$ So we have represented all of the quantities occurring in the construction process by the volume fractions in the complete suspension. A discrete model for the relative viscosity {#sec:Diskretes_Modell} ------------------------------------------- As we have already noted, the differential model lacks any information about the volume fraction of the individual particle size classes. For that reason we have described the construction process of the suspension in a discrete form in [section \[sec:Aufbauprozess\]]{}. In preparation for the development of the discrete viscosity model we need to make a connection between the model and the maximum packing fraction. ### Introduction of the maximum packing fraction into the differential Bruggeman model The [equation ]{} diverges as the total volume fraction $\phi$ approaches unity. In a real suspension the achievable value of $\phi$ is limited by the maximum packing fraction. In order to introduce the maximum packing fraction into the differential model we proceed in a way proposed in [@2009_Hsueh]. A quite similar way can be found in [@2009_Mendoza]. In both publications it emerges that the notion of maximum packing fraction is introduced without convincing justification. The approach followed in [@2009_Hsueh] consists of modifying [equation ]{} by using the maximum packing fraction $\phi_c$ in the description of the volume fraction $\phi_P$.
Therefore, it is supposed without derivation that $\phi_P$ may be written as $$\phi_{P} = \frac{{\mathrm{d}}\phi}{1-\frac{\phi}{\phi_c}} \label{eq:diffphisternphic}$$ In combination with the [equation ]{} one finds $$\eta+{\mathrm{d}}\eta = \eta\left(1+{\left[\eta_{1}\right]}\,\frac{{\mathrm{d}}\phi}{1-\frac{\phi}{\phi_c}}\right) \label{eq:etaplusdetaphic}$$ Rearrangement of the [equation ]{} analogously to the [equation ]{} yields $$\frac{{\mathrm{d}}\eta}{{\left[\eta_{1}\right]}\eta}=\frac{{\mathrm{d}}\phi}{1-\frac{\phi}{\phi_c}} \label{eq:detasterndphiphic}$$ which can be integrated under the initial condition $\eta(\phi=0) = \eta_0$ for the pure fluid. This results in the relation  $$\eta = \eta_0 \left(1-\frac{\phi}{\phi_c}\right)^{-{\left[\eta_{1}\right]}\phi_c} \label{eq:krieger}$$ In [@2009_Mendoza] the introduction of a so-called effective volume fraction leads to the equation $$\eta = \eta_0 \left(1-\frac{\phi}{\phi_c}\right)^{-{\left[\eta_{1}\right]}} \label{eq:Mendoza}$$ The expressions  and  obviously differ with respect to their exponents only. Both results show divergence for $\phi\to\phi_c$ but lack a physical rationale for the respective approaches. Therefore, we will state a different principle in the next section. ### Introduction of the maximum packing fraction into the discrete construction process It would formally be possible to transfer the modification  to the discrete construction process, that is the volume fractions , and employ the result for the viscosity calculation. In the following it will be explained why this approach cannot be valid in general. Partially anticipating the later viscosity calculation, we raise the following two points. Firstly, the construction process described in [section \[sec:Aufbauprozess\]]{} is by no means dependent on the particle geometry. This is emphasized by the notion of homogenization between two construction steps. In contrast, the maximum packing fraction is strongly influenced by the particle geometry.
So it would be artificial to introduce this quantity into the description of volume fractions during the construction process. Secondly, the consideration of the volume fractions during the construction process is independent of the physical quantity that is calculated (here: the viscosity). It does not make any difference whether one calculates the viscosity or, for instance, the electric conductivity (or both at the same time). In both cases the construction process is constituted by the same volume fractions. Parameters like the maximum packing fraction enter only through the relation employed between the volume fractions and the change in the physical quantity of interest. This will be the approach followed during the later viscosity calculation. The above considerations imply that the differential model only allows for the derivation of the equation  because, as a consequence of the differential approach, the right-hand side of the [equation ]{} may only consist of a linear expression (the first-order relation) that cannot contain the maximum packing fraction. So the approaches presented in [@2009_Hsueh] and [@2009_Mendoza] are formally possible but physically questionable. ##### Notation for the maximum packing fraction in the construction process\ {#sec:NotationPackung} At this point it is necessary to introduce a distinct notation for the maximum packing fraction in order to avoid misinterpretations. The calculation of the maximum packing fraction will be conducted in [section \[sec:Packungsdichte\]]{}. We have to distinguish between $\phi_c$ as a parameter within the correlations listed in [Table \[tab:Korrelationen\]]{} and the modeled maximum packing fraction that may change during the construction process as well as under the influence of the shear rate.
We choose the following notations, referring to [@2010_Brouwers]: $\varphi_{Tk}^i$ denotes the maximum packing fraction of a polydisperse suspension consisting of $k$ size classes after the $i$th construction step ($i$th line in [Table \[tab:Aufbau\]]{}). $\varphi_c{\mathrel{\mathop:}=}\varphi_{T1}^i$ denotes the monodisperse packing fraction; it is constant throughout the construction process and thus carries no upper index $i$. This notation is visualized in [Table \[tab:Packung\]]{} (compare [Table \[tab:Aufbau\]]{}). 0.5ex ---------- ----------- -------------- ------- ------------------ ------- ------------------- ------- ---------- ------- ------------------- $1$ $\phi_1 $ $ \varphi_c$ $2$ $\phi_2$ $\varphi_c$ $\to$ $\varphi_{T2}^2$ $3$ $\phi_3$ $\varphi_c$ $\to$ $\varphi_{T2}^3$ $\to$ $\varphi_{T3}^3$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $n$ $\phi_n$ $\varphi_c$ $\to$ $\varphi_{T2}^n$ $\to$ $ \varphi_{T3}^n$ $\to$ $\dotso$ $\to$ $\varphi_{Tn}^n $ ---------- ----------- -------------- ------- ------------------ ------- ------------------- ------- ---------- ------- ------------------- After each construction step the maximum packing fraction is newly calculated according to the new composition of the suspension. This is conducted within a recursive process (arrows in [Table \[tab:Packung\]]{}). Starting from the monodisperse maximum packing fraction $\varphi_c$, all previously added size classes are taken into account, and so for each step the value $\varphi_{Ti+1}^{i+1}$ is calculated. We will need this value for the [equation ]{} yet to be derived. ### Transition to a discrete relative viscosity model We now build on the description of the discrete construction process given in [section \[sec:Aufbauprozess\]]{} and focus on the effect of the construction process on the relative viscosity.
When during the construction process the $(i+1)$th size class is added, the total volume fraction of the particle phase changes from $\phi_i$ to $\phi_{i+1}$. This is associated with a change in apparent viscosity from $\eta_i$ to $\eta_{i+1}$. To emphasize the analogy to the differential model, we temporarily confine ourselves to the linear relation as a description of the viscosity change and subsequently in [section \[sec:hoeher\]]{} we will extend the model to higher orders. According to sections \[sec:Aufbaukonst\] and \[sec:Bruggeman\] at constant total volume the volume fraction of the newly added $(i+1)$th size class related to the resulting suspension volume is $$\phi_{P,i+1}=\phi_{i+1}^* = \frac{\delta\phi_{i+1}}{1-\phi_i} \label{eq:diskretphistern}$$ So the new apparent viscosity is given by $$\eta_{i+1} = \eta_i \left( 1+{\left[\eta_{1}\right]} \phi_{P,i+1} \right) \label{eq:etasterniplus1}$$ In the [equation ]{} we have used the volume fraction $\phi_{P,i+1}$ related to the suspension volume after the $(i+1)$th construction step according to the notion of the construction process. The volume fraction $\phi_{i+1}^*$ may not be used in general because the volume fractions $\phi_{P,i+1}$ and $\phi_{i+1}^*$ only coincide in the case of constant total suspension volume (see [section \[sec:Aufbaukonst\]]{}). All the following relations containing $\phi_{P,i+1}$ may thus be used both at variable and constant total volume. In the latter case, the [equation ]{} can be cast in a different form employing the volume fractions $\dphi_k$ from the [equation ]{}: $$\eta_{i+1} = \eta_i \left( 1+{\left[\eta_{1}\right]}\, \frac{\dphi_{i+1}}{1-\sum_{m=i+2}^n \dphi_m} \right) \label{eq:etasternmitdphi}$$ This equation is a recursive formula that can alternatively be written in an explicit form. However, we note that a numerical evaluation has to be conducted in a recursive way because the explicit formula offers no simplification compared with the recursive formula. 
The explicit form of the [equation ]{} is given by $$\eta_{i+1} = \eta_0\prod_{m=1}^{i+1} \left( 1+{\left[\eta_{1}\right]}\, \frac{\dphi_{m}}{1-\sum_{l=m+1}^n \dphi_l} \right) \label{eq:expetasternmitdphi}$$ with $\phi_0=0$. For $i+1 = n$ we find the apparent viscosity  of the complete suspension (note that the empty sum $\sum_{l=n+1}^{n}$ vanishes by convention, which may be important for a numerical implementation). ##### Extension of the discrete model to higher order terms of $\boldsymbol{\phi_{P}}$\ {#sec:hoeher} The differential [equation ]{} contains only the first-order relation because of the infinitesimal character of ${\mathrm{d}}\phi$. In the case of the discrete model represented by [equation ]{} it is not necessary to confine oneself to linear terms. Therefore, we are allowed to describe the modification of the apparent viscosity more accurately by higher order terms of $\phi_{P,k}$. So, in accordance with the general expression $$\eta_{i+1} = \eta_0\prod_{m=1}^{i+1} \left[ 1+{\left[\eta_{1}\right]}\,\phi_{P,m} +{\left[\eta_{2}\right]}\left(\phi_{P,m}\right)^2+ \dotsb\right]\label{eq:allgetasterniplus1}$$ the [equation ]{} can be extended to $$\boxed{ \begin{aligned} \eta_{i+1} =\,\, &\eta_0\prod_{m=1}^{i+1} \Bigg[ 1 +{\left[\eta_{1}\right]}\, \frac{\dphi_{m}}{1-\sum_{l=m+1}^n \dphi_l} +\\ &+ {\left[\eta_{2}\right]} \left( \frac{\dphi_{m}}{1-\sum_{l=m+1}^n \dphi_l}\right)^2 + \dotsb\Bigg] \end{aligned} } \label{eq:allgexpetasternmitdphiphic}$$ which according to the [equation ]{} may alternatively be written in the recursive form $$\boxed{ \begin{aligned} \eta_{i+1} =\,\, &\eta_i \Bigg[ 1+{\left[\eta_{1}\right]}\, \frac{\dphi_{i+1}}{1-\sum_{l=i+2}^n \dphi_l} +\\ & + {\left[\eta_{2}\right]} \left( \frac{\dphi_{i+1}}{1-\sum_{l=i+2}^n \dphi_l}\right)^2 + \dotsb\Bigg] \end{aligned}} \label{eq:rekursetasternmitdphi}$$ ##### Comparison with relations present in the literature\ {#sec:VergleichLiteratur} In the following we draw a connection between the [equation ]{} and relations existing in
the literature. Selecting the coefficients from the expansion of a viscosity relation (for instance, the [equation ]{}) at the point $\phi=0$ for the coefficients ${\left[\eta_{k}\right]}$ in the [equation ]{} and formally considering an infinite number of terms, we may substitute the bracketed term in the [equation ]{} with the viscosity relation itself. This step requires the introduction of the maximum packing fraction. Using the [equation ]{} we find $$\boxed{ \begin{aligned} \eta_{i+1} =\,\, &\eta_0\prod_{m=1}^{i+1} \left\{\rule{0cm}{7ex}1-\frac{\varphi_{Tm}^{m}{\left[\eta_{1}\right]}^2}{2\varphi_{Tm}^{m}{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}\cdot\right.\\ &\left.\rule{0cm}{7ex}\cdot\left[1-\left(1-\frac{\phi_{P,m}}{\varphi_{Tm}^{m}}\right)^{-\dfrac{2\varphi_{Tm}^{m}{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}{{\left[\eta_{1}\right]}}}\right]\right\} \end{aligned} \label{eq:allgvisko}}$$ In the [equation ]{} $\varphi_{Tm}^{m}$ refers to the maximum packing fraction after the $m$th construction step that will be calculated in [section \[sec:Packungsdichte\]]{}. This notation has already been outlined in [section \[sec:NotationPackung\]]{}.
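The product over construction steps can be sketched numerically. One simplifying assumption is made: the maximum packing fraction is held constant at the monodisperse value, since the step-dependent values $\varphi_{Tm}^{m}$ are modeled only in the next section; the parameter values are illustrative.

```python
# Sketch of the product formula over construction steps, with the maximum
# packing fraction held constant (placeholder for the step-dependent model).
def eta_r(phi_p, eta1, eta2, phi_t):
    # monodisperse correlation evaluated at the partial fraction phi_p
    B = phi_t * eta1**2 / (2 * phi_t * eta2 - eta1)
    C = (2 * phi_t * eta2 - eta1) / eta1
    return 1 - B * (1 - (1 - phi_p / phi_t) ** (-C))

def eta_polydisperse(dphi, eta0=1.0, eta1=2.5, eta2=6.0, phi_t=0.64):
    eta = eta0
    for m in range(len(dphi)):
        phi_P = dphi[m] / (1 - sum(dphi[m + 1:]))   # partial fraction at step m+1
        eta *= eta_r(phi_P, eta1, eta2, phi_t)
    return eta

# a single size class reduces to the monodisperse correlation
assert abs(eta_polydisperse([0.3]) - eta_r(0.3, 2.5, 6.0, 0.64)) < 1e-12
```

With only one size class the product collapses to the monodisperse correlation, as it must; in the full model the constant `phi_t` would be replaced by the recomputed packing fraction after each step.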
The general recursive representation of the [equation ]{} is given by $$\boxed{ \eta_{i+1}=\eta_i\,\eta_r(\phi_{P,i+1},\varphi_{Ti+1}^{i+1}) } \label{eq:etamitvarphi}$$ and more specifically using the viscosity relation  $$\boxed{ \begin{aligned} \eta_{i+1} =\,\, & \eta_i\left\{\rule{0cm}{8ex}1-\frac{\varphi_{Ti+1}^{i+1}{\left[\eta_{1}\right]}^2}{2\varphi_{Ti+1}^{i+1}{\left[\eta_{2}\right]}-{\left[\eta_{1}\right]}}\cdot\right.\\ &\left.\rule{0cm}{8ex}\cdot\left[1-\left(1-\frac{\phi_{P,i+1}}{\varphi_{Ti+1}^{i+1}}\right)^{\dfrac{{\left[\eta_{1}\right]}-2\varphi_{Ti+1}^{i+1}{\left[\eta_{2}\right]}}{{\left[\eta_{1}\right]}}}\right]\right\} \end{aligned} } \label{eq:rekursallgvisko}$$ Despite the use of the variable maximum packing fraction the [equation ]{} corresponds to the so-called model [@1968_Farris] also referred to in [@2010_Brouwers] and [@1990_Cheng]. So we have found an interesting connection between the differential model and the model which is drawn by the discrete construction process. Remarkably, the approach followed in [@1990_Cheng] includes a viscosity relation which depends on the particle size class, that is the particle radius. Although this additional degree of freedom is useful for fitting purposes, it is inconsistent with the assumptions of the models by and , which only consider a relative but not an absolute influence of the particle radius. Therefore, we will not use this approach in the current work. In later sections we will use the result  as a general formula for the apparent viscosity of hierarchical polydisperse suspensions (compare [section \[sec:Aufbauprozess\]]{}). It is important to note that the extension of the [equation ]{} to  has been conducted by considering the complete expansion of relation , suggesting the admissibility of arbitrary volume fractions of the individual size classes, which is questionable in view of the notion of the construction process.
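As a concrete illustration, the recursive step above can be evaluated numerically. The sketch below is a minimal implementation, assuming the parameter values $\varphi_c=0.64$, ${\left[\eta_{1}\right]}=2.5$ and ${\left[\eta_{2}\right]}=6.17$ used in the later figures; the exponent is taken in the form $({\left[\eta_{1}\right]}-2\varphi_T{\left[\eta_{2}\right]})/{\left[\eta_{1}\right]}$, which reproduces the expansion coefficients at $\phi=0$ and the divergence of the viscosity at $\phi\to\varphi_T$.

```python
def eta_r(phi_P, phi_T, e1=2.5, e2=6.17):
    """Relative viscosity of a single monodisperse construction step:
    particle loading phi_P, maximum packing fraction phi_T."""
    pref = phi_T * e1 ** 2 / (2.0 * phi_T * e2 - e1)
    expo = (e1 - 2.0 * phi_T * e2) / e1   # negative, so eta diverges at phi_T
    return 1.0 - pref * (1.0 - (1.0 - phi_P / phi_T) ** expo)

def eta_recursive(loadings, packings, eta0=1.0):
    """eta_{i+1} = eta_i * eta_r(phi_P,i+1, phi_T,i+1^{i+1}) over all steps."""
    eta = eta0
    for phi_P, phi_T in zip(loadings, packings):
        eta *= eta_r(phi_P, phi_T)
    return eta

# monodisperse suspension, phi = 0.1, phi_T = phi_c = 0.64:
print(round(eta_recursive([0.1], [0.64]), 3))  # 1.328
```

For small loadings one step reduces to $1+{\left[\eta_{1}\right]}\phi_P+{\left[\eta_{2}\right]}\phi_P^2+\dotsb$, consistent with the expansion used above.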
Since it is not possible to state a limiting value of $\phi_{P,i+1}$ for the validity of the result , we will nevertheless use this equation at least as a reasonable approximation also for higher values of $\phi_{P,i+1}$. To sum up, we state that the [equation ]{} resulting from the construction process is at least justified for the first two orders of $\phi_{P,i+1}$, while the model (corresponding to the [equation ]{}) provides a useful but possibly unjustified extrapolation for high values of $\phi_{P,i+1}$. Determination of the maximum packing fraction {#sec:Packungsdichte} --------------------------------------------- In the following we will present two models for computing the polydisperse maximum packing fraction. The first model has been proposed by (see also [@2010_Brouwers]). Although this model is insufficient for reasons to be given in [section \[sec:DasModellvonFurnas\]]{}, we will use the underlying considerations for developing an improved model in [section \[sec:DasneueModell\]]{}. The approaches differ with respect to the underlying treatment of the particle size distribution. During the calculation we will use the quantities occurring in the construction process according to the tables \[tab:Aufbau\] and \[tab:Packung\]. For instance, after the third construction step the values $\varphi_{T2}^3$ and $\varphi_{T3}^3$ are calculated starting from the monodisperse maximum packing fraction $\varphi_{T1}^3=\varphi_c$ and considering the existing volume fractions $\dphi_1^3$, $\dphi_2^3$ and $\dphi_3^3$. According to [section \[sec:NotationPackung\]]{} we denote as $\varphi_{Tk}^i$ the maximum packing fraction after the $i$th construction step so far containing $k$ size classes while the completed suspension consists of $n$ size classes. ### The Furnas model {#sec:DasModellvonFurnas} The model is based on the assumption that the state of maximum packing fraction is constructed successively from size classes with decreasing diameter.
At first, the larger spheres fill the entire available suspension volume with the monodisperse maximum packing fraction $\varphi_c$. So the total volume fraction is $$\varphi_{T1}^i = \varphi_1^i = \varphi_c \label{eq:FurnasT0}$$ Subsequently, the remaining volume fraction $(1-\varphi_c)$ is filled by the spheres with the next smaller diameter, also to a realizable part of $\varphi_c$. So the total volume fraction $\varphi_{T2}^i$ after adding the smaller spheres is given by $$\begin{split} \varphi_{T2}^i &= \varphi_1^i+\varphi_2^i\\ &= \varphi_c + (1-\varphi_c)\varphi_c = 1 - (1-\varphi_c)^2 \end{split} \label{eq:FurnasT1}$$ Generalizing these considerations to a number of $k$ size classes, one finds $$\varphi_{Tk}^i = 1 - (1-\varphi_c)^{k} \label{eq:FurnasTn}$$ According to [@2010_Brouwers] this approach requires a minimal size ratio of 7 to 10 between consecutive size classes in order to avoid interactions between the packings of the individual size classes. ##### Discussion of the Furnas model\ The advantages of the model are its simple derivation, the applicability to an unlimited number of size classes and the simplicity of the result. Remarkably, the maximum packing fraction does not depend on the volume fractions of the individual size classes but only on their number. This is the crucial disadvantage of the model. As an illustration, imagine a given polydisperse suspension with a total volume fraction of $\phi_i<\varphi_{Ti}^i$. In the first case, we add a large amount of small spheres, in the second case a small amount. The model would yield the same value of the maximum packing fraction for both cases, which runs contrary to intuition. In this model the maximum packing fraction is calculated without effectively considering the particle size distribution in the suspension.
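The Furnas estimate is straightforward to evaluate; the short sketch below assumes the monodisperse value $\varphi_c=0.64$ used elsewhere in the text.

```python
def furnas_packing(k, phi_c=0.64):
    """Furnas estimate of the maximum packing fraction of k size
    classes; note it depends only on the number of classes, not on
    their individual volume fractions."""
    return 1.0 - (1.0 - phi_c) ** k

for k in (1, 2, 3):
    print(k, round(furnas_packing(k), 4))  # 0.64, 0.8704, 0.9533
```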
### An improved model for the maximum packing fraction {#sec:DasneueModell} We will derive the new model in a way that at first the vivid cases of bi- and tridisperse suspensions are treated explicitly in order to prepare the subsequent general derivation. The general formulae will contain the bi- and tridisperse suspensions as special cases. ##### Description of the volume fractions\ A consistent calculation of the maximum packing fraction requires the consideration of the individual particle size classes’ volume fractions. This can be achieved by retaining the particle size distribution existing in the flowing suspension state during the composition of the maximum packing fraction. This assumption can be expressed in the form $$\frac{\varphi_{k+1}^i}{\varphi_k^i} = \frac{\dphi^i_{k+1}}{\dphi^i_k} \label{eq:verhaeltnis}$$ where $\varphi_k^i$ in the state of maximum packing fraction corresponds to $\dphi^i_k$ in the flowing state, that is the volume fraction of the $k$th size class. Introducing the abbreviation $$\phi^{i}_k {\mathrel{\mathop:}=}\sum_{m=1}^{k}\dphi^i_m=\dphi^i_1+\dphi^i_2+\ldots+\dphi^i_k\label{eq:phi_idef}$$ we conclude that $$\begin{aligned} \phi^{i}_1 &= \dphi^i_1 \label{eq:phi_1ident}\quad\text{as well as}\\ \phi^{i}_k &= \phi^{i}_{k-1}+\dphi^i_k \label{eq:phi_irekurs}\end{aligned}$$ ##### Derivation for bidisperse systems\ Bidisperse (or bimodal) suspensions contain two different particle sizes. Given the ratio $$\frac{\varphi_2^i}{\varphi_1^i} = \frac{\dphi^i_{2}}{\dphi^i_1} = \frac{\dphi^i_{2}}{\phi_1^i} \label{eq:alpha1a}$$ we ask how the state of maximum packing fraction can be achieved retaining the volume fraction ratio . There are two possibilities differing with respect to the value of $\dphi^i_2$. Both situations are visualized in [Figure \[fig:Bimodal\]]{}.
#### Situation 1\ The volume fraction $\varphi_2^i$ of the large spheres lies below the monodisperse packing fraction $\varphi_c$ and the remaining interstice $1-\varphi_2^i$ is filled to an amount of $\varphi_c$ with small spheres, compare [Figure \[fig:Bimodal1\]]{}. So the total particle volume fraction is $$\varphi_{T2}^{i\eins} = \varphi_1^i+\varphi_2^i = \varphi_2^i + (1-\varphi_2^i)\varphi_c \label{eq:IchT2}$$ (compare [equation ]{}, where restrictively $\varphi_2^i = \varphi_c$). Using the [equation ]{} we find from the [equation ]{} $$\varphi_2^i = \frac{\varphi_c}{\varphi_c + \frac{\phi_1^i}{\dphi^i_2}} \label{eq:phi2bi}$$ and furthermore $$\label{eq:phiT1sit1} \begin{split} \varphi_{T2}^{i\eins} &= \varphi_1^i+\varphi_2^i =\left(\frac{\phi_1^i}{\dphi^i_2}+1 \right)\frac{\varphi_c}{\varphi_c + \frac{\phi_1^i}{\dphi^i_2}}\\ &= \frac{\varphi_c\phi_2^i}{\varphi_c\dphi^i_2 + \phi_1^i} \end{split}$$ In the last step of the [equation ]{} we have used the identity . The limitation for the validity of the [equation ]{} follows from the realizability condition   (arbitrarily, we assign the equal sign to situation 2 in order to avoid an ambiguous definition of the case $\varphi_2^i=\varphi_c$) which in combination with the [equation ]{} yields $$\dphi^i_2 < \frac{\phi_1^i}{1-\varphi_c} \label{eq:alphagrenz1sit1}$$ as the condition for the validity of situation 1. #### Situation 2\ In this situation the volume fraction $\varphi_2^i$ of the large spheres equals the monodisperse maximum packing fraction $\varphi_c$ (it is not possible to exceed the value of $\varphi_c$ in a monodisperse loading) and so we have $\varphi_2^i = \varphi_c$, see [Figure \[fig:Bimodal2\]]{}.
Using the [equation ]{} we immediately find $$\begin{aligned} \varphi_1^i&=\varphi_2^i\frac{\phi_1^i}{\dphi^i_2}\quad\text{and} \label{eq:phi1sit2}\\ \varphi_{T2}^{i\zwei} &= \varphi_1^i+\varphi_2^i = \varphi_c\left(1+\frac{\phi_1^i}{\dphi^i_2}\right) = \varphi_c\frac{\phi_2^i}{\dphi^i_2} \label{eq:phiT1sit2}\end{aligned}$$ It follows from the fundamental condition $\varphi_{T2}^{i\zwei}<1$ and the [equation ]{} that $$\dphi^i_2>\frac{\varphi_c\phi_1^i}{1-\varphi_c} \label{eq:alpha1schwach}$$ whereas the realizability condition $\varphi_1^i\leq(1-\varphi_2^i)\varphi_c$ under consideration of $\varphi_2^i=\varphi_c$ and the [equation ]{} yields $$\dphi^i_2\geq\frac{\phi_1^i}{1-\varphi_c} \label{eq:alphagrenz1sit2}$$ The condition includes the [inequality ]{}. So the [equation ]{} serves as a condition for the validity of situation 2. #### Summary for bidisperse systems\ The inequalities  and cover all possible values of $\dphi^i_2$. On the basis of a given volume fraction $\dphi^i_2$ one has to decide whether situation 1 or 2 is present. Subsequently the respectively valid equation  or  may be used to calculate the maximum packing fraction $\varphi_{T2}^i$. The relations  and  are—if notation is adapted—identical to the equations (3) and (4) presented in [@1997_Gondret]. ##### Derivation for tridisperse systems\ The following considerations refer to tridisperse systems, that is systems with three particle size classes. The pattern underlying the derivation will later be turned into a general formulation for an arbitrary number of size classes. With tridisperse systems we also distinguish between two situations because, regardless of the number of size classes, only these two situations can occur. This relies on the geometric fact that no particle class except the one with the largest particle diameter can reach the monodisperse packing fraction.
If hypothetically a particle class with a smaller diameter reached the monodisperse maximum packing fraction, the larger particles would not fit into the interstices between the smaller particles and could thus not contribute to this state of maximum packing fraction. #### Situation 1\ The volume fraction $\varphi_3^i$ of the large spheres is smaller than the monodisperse maximum packing fraction $\varphi_c$ and so the remaining interstice $1-\varphi_3^i$ is filled by the medium and small particles to an amount of $\varphi_{T2}^i$ (equations  or ) which is the *bidisperse* maximum packing fraction calculated previously. The volume fraction of the medium and small spheres is thus given by $$\varphi_1^i+\varphi_2^i = (1-\varphi_3^i)\varphi_{T2}^i \label{eq:phi1phi2mitphi3}$$ Rearranging the [equation ]{} and using the [equation ]{} we find $$\begin{split} \varphi_1^i+\varphi_2^i &= \varphi_3^i\frac{\dphi^i_1}{\dphi^i_3}+\varphi_3^i\frac{\dphi^i_2}{\dphi^i_3}\\ &= \phi_2^i\frac{\varphi_3^i}{\dphi^i_3} \end{split} \label{eq:phi1phi2mitalpha1}$$ It is now possible to insert the [equation ]{} into the relation which yields $$\varphi_3^i = \frac{\varphi_{T2}^i}{\varphi_{T2}^i + \frac{\phi_2^i}{\dphi^i_3}} \label{eq:phi3tri}$$ From the equations  and  we can deduce the total volume fraction of spheres $$\label{eq:phiT2sit1} \varphi_{T3}^{i\eins} = \varphi_1^i+\varphi_2^i+\varphi_3^i = \frac{\varphi_{T2}^i\phi_3^i}{\varphi_{T2}^i\dphi^i_3 + \phi_2^i}$$Using [equation ]{}, we can write the realizability condition $\varphi_3^i<\varphi_c$ in the form $$\dphi^i_3 < \frac{\varphi_c\phi_2^i}{\varphi_{T2}^i(1-\varphi_c)} \label{eq:alphagrenz2sit1}$$ The inequality serves as a condition for the validity of situation 1. #### Situation 2\ Analogously to the bidisperse case, situation 2 is characterized by a volume fraction $\varphi_3^i \equiv \varphi_c$ of the largest spheres.
Under consideration of the [equation ]{} (see also [equation ]{}) we have $$\label{eq:phiT2sit2} \begin{split} \varphi_{T3}^{i\zwei}&=\varphi_1^i+\varphi_2^i+\varphi_3^i = \varphi_c\left(\frac{\dphi^i_1}{\dphi^i_3}+\frac{\dphi^i_2}{\dphi^i_3}+1\right)\\ &= \varphi_c \frac{\phi_3^i}{\dphi^i_3} \end{split}$$ for the total volume fraction of spheres in situation 2 of a tridisperse suspension. The bound for $\dphi^i_3$ follows from the condition $\varphi_1^i+\varphi_2^i \leq (1-\varphi_c)\varphi_{T2}^i$ in the form $$\dphi^i_3 \geq \frac{\varphi_c\phi_2^i}{\varphi_{T2}^i(1-\varphi_c)} \label{eq:alphagrenz2sit2}$$ corresponding to [inequality ]{} as expected. #### Summary for tridisperse systems\ By means of the inequalities  and one decides whether situation 1 or 2 is present. Afterwards, $\varphi_{T3}^i$ is calculated according to the equations  or . ##### Generalization for polydisperse systems\ Now we will generalize the model for polydisperse systems, which corresponds to deriving expressions for the total volume fraction $\varphi_{Tk+1}^i$ in the situations 1 and 2 and for the limiting value of $\dphi^i_{k+1}$. #### Situation 1\ In the polydisperse case the interstices between the largest particles are filled by the smaller particle size classes with the packing fraction $\varphi_{Tk}^i$ which is determined by the volume fractions $\dphi^i_1$ to $\dphi^i_k$. So the volume fraction of the largest particles is $\varphi_{k+1}^i$ and the one of all the smaller particles is $(1-\varphi_{k+1}^i)\varphi_{Tk}^i$.
The sum of both fractions yields the maximum packing fraction, that is $$\varphi_{Tk+1}^{i\eins} = \varphi_{k+1}^i + (1-\varphi_{k+1}^i)\varphi_{Tk}^i \label{eq:phiTiplus1urspr}$$ So we may express the volume fractions of all the small particles by $$\sum_{m=1}^k \varphi_m = (1-\varphi_{k+1}^i)\varphi_{Tk}^i \label{eq:summephik}$$ Equation  allows for representing all the occurring volume fractions by the fraction $\varphi_{k+1}^i$ of the largest particles, so we can restate the sum in the [equation ]{} with the help of the [equation ]{}: $$\sum_{m=1}^k \varphi_m = \varphi_{k+1}^i {\frac{1}{\dphi^i_{k+1}}} \sum_{m=1}^k \dphi^i_m = \varphi_{k+1}^i \frac{\phi^{i}_k}{\dphi^i_{k+1}} \label{eq:summephik2}$$ It immediately follows from the equations  and that $$(1-\varphi_{k+1}^i)\varphi_{Tk}^i = \varphi_{k+1}^i \frac{\phi^{i}_k}{\dphi^i_{k+1}}$$ that can be rearranged into $$\varphi_{k+1}^i = \frac{\varphi_{Tk}^i\dphi^i_{k+1}}{\varphi_{Tk}^i\dphi^i_{k+1} + \phi^{i}_k} \label{eq:phiiplus1}$$ Additionally, under consideration of the [equation ]{} the following relation holds for both situations 1 and 2: $$\begin{split} \varphi_{Tk+1}^i &= \varphi_{k+1}^i + \sum_{m=1}^k \varphi_m = \varphi_{k+1}^i \left(1+\frac{\phi^{i}_k}{\dphi^i_{k+1}}\right)\\ &= \varphi_{k+1}^i\frac{\phi^{i}_{k+1}}{\dphi^i_{k+1}} \end{split} \label{eq:phiTiplus1allg}$$ Inserting the [equation ]{} into , we find the result for the maximum packing fraction in situation 1: $$\boxed{ \varphi_{Tk+1}^{i\eins} = \frac{\varphi_{Tk}^i\phi^{i}_{k+1}}{\varphi_{Tk}^i\dphi^i_{k+1} + \phi^{i}_k} \label{eq:phiTiplus11} }$$ The limiting value for $\dphi^i_{k+1}$ follows analogously to the bi- and tridisperse cases from the requirement $\varphi_{k+1}^i<\varphi_c$ and is thus given by $$\boxed{ \dphi^i_{k+1} < \frac{\varphi_c\phi^{i}_k}{\varphi_{Tk}^i(1-\varphi_c)} \label{eq:alphagrenzsit1} }$$ #### Situation 2\ In situation 2 the largest particles occupy a volume fraction equal to the monodisperse maximum packing
fraction, so $\varphi_{k+1}^i=\varphi_c$. Therefore, by means of the [equation ]{} we may write $$\boxed{ \varphi_{Tk+1}^{i\zwei} = \varphi_c \frac{\phi^{i}_{k+1}}{\dphi^i_{k+1}} \label{eq:phiTiplus12} }$$ for the maximum packing fraction in situation 2. The limiting value of $\dphi^i_{k+1}$ in situation 2 is determined by the condition $$\sum_{m=1}^k \varphi_m \leq (1-\varphi_c)\varphi_{Tk}^i \label{eq:forderung2}$$ and combined with  and  takes the form $$\boxed{ \dphi^i_{k+1} \geq \frac{\varphi_c\phi^{i}_k}{\varphi_{Tk}^i(1-\varphi_c)} \label{eq:alphagrenzsit2} }$$ #### Properties of the bounds\ {#Schranken} The bounds  and  can alternatively be derived by equating the prescriptions  and . This underlines the consistency between situations 1 and 2, because the transition between them is continuous. This is equivalent to choosing the smaller of the two calculated maximum packing fractions because for all values of $\dphi_{k+1}^i$ the situation that is present is always the one with the smaller maximum packing fraction. Thus we are led to the prescription $$\boxed{ \varphi_{Tk+1}^i=\text{min}\left[\varphi_{Tk+1}^{i\eins},\varphi_{Tk+1}^{i\zwei}\right] } \label{eq:minphiT1}$$ Scheme for the viscosity estimation ----------------------------------- [Figure \[fig:Ablauf\]]{} shows the procedure for the calculation of maximum packing fraction and relative viscosity after the $i$th construction step. The complete suspension shall contain $n$ particle size classes. ![Scheme for the calculation of maximum packing fraction and relative viscosity after the $i$th construction step for $n$ particle size classes with references to the respective equations[]{data-label="fig:Ablauf"}](Ablauf.pdf) Influence of polydispersity on relative viscosity ------------------------------------------------- In this section we examine the change in relative viscosity due to the number of particle size classes, provided that the size distribution is given.
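The successive construction of the maximum packing fraction, combining the situation-1 and situation-2 results with the minimum prescription, can be sketched as follows; $\varphi_c=0.64$ is an assumed monodisperse value, and the size classes are taken to be ordered from smallest to largest diameter with sufficiently large diameter ratios.

```python
def max_packing(dphis, phi_c=0.64):
    """Successive construction of the polydisperse maximum packing
    fraction: dphis lists the volume fractions of the size classes,
    ordered from smallest to largest diameter.  At each step the
    smaller of the situation-1 and situation-2 values is taken."""
    phi_T = phi_c            # one class: monodisperse packing
    phi_cum = dphis[0]       # running sum phi_k of the fractions
    for dphi in dphis[1:]:
        sit1 = phi_T * (phi_cum + dphi) / (phi_T * dphi + phi_cum)
        sit2 = phi_c * (phi_cum + dphi) / dphi
        phi_T = min(sit1, sit2)
        phi_cum += dphi
    return phi_T

# two and three classes with equal volume fractions:
print(round(max_packing([0.05, 0.05]), 4))                # 0.7805
print(round(max_packing([1 / 30, 1 / 30, 1 / 30]), 4))    # 0.8421
```

Note that both situation formulas are homogeneous of degree zero in the volume fractions, so any common renormalization of the fractions leaves the packing fraction unchanged.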
As already noted in [section \[sec:Aufbauprozess\]]{}, the model developed so far and presented in the scheme on [page ]{} is only valid for diameter ratios between consecutive size classes of about 7 to 10. A special choice of the diameter ratio—in the following we will choose $u_i=10$—results in a logarithmic sampling of the continuous size distribution. Therefore, the sampling points, that is the first diameter $d_1$, have to be adequately chosen in order to reproduce the characteristics of the distribution. If the volume fractions $\dphi_k$ are not equally distributed among the different diameter values, the choice of the sampling points thus depends on the number of size classes. This will be the case in the following examination concerning the <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution. Subsequently, we will consider the case of a uniform distribution. ### Rosin-Rammler distribution The <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution is frequently used to describe the drop size distribution of sprays. Its cumulative distribution function $R_{\text{CDF}}$ is given by $$R_{\text{CDF}}(d)=1-{\mathrm{e}^{-\left(\frac{d}{X}\right)^q}} \label{eq:CDF}$$ where $d$ is the particle diameter and $X$ and $q$ are model parameters. By differentiation of  the probability density function $R_{\text{PDF}}$ (we do not rigorously distinguish between probability and relative frequency) can be deduced: $$R_{\text{PDF}}(d)=\frac{q}{d}\left(\frac{d}{X}\right)^q{\mathrm{e}^{-\left(\frac{d}{X}\right)^{q}}} \label{eq:PDF}$$ The function $R_{\text{PDF}}(d)$ describes the relative frequency of occurrence of a particle diameter between $d$ and $d+{\mathrm{d}}d$ and is normalized to unity.
So we find the volume fraction occupied by the particles with diameters between $d$ and $d+{\mathrm{d}}d$ through multiplication of the total volume fraction $\phi$ by $R_{\text{PDF}}(d)$ (it is $\phi=\phi_n=\sum_{m=1}^{n}\dphi_m$ for a number of $n$ size classes). [Figure \[fig:Verteilung\_01\]]{} shows the probability density function $R_{\text{PDF}}(d)$ for a special choice of the parameters $X$ and $q$ in [equation ]{}. ![Probability density function by Rosin-Rammler (equation (\[eq:PDF\])) with parameters  and  and sampling for bi- and tridisperse representations ($n=2$: $[d_1,d_2]=[5,50]\,\text{\textmu{}m}$ and $n=3$: $[d_1,d_2,d_3]=[1,10,100]\,\text{\textmu{}m}$, respectively)[]{data-label="fig:Verteilung_01"}](Verteilung_01.pdf) With a diameter ratio of $u_i=10$, the distribution only allows for useful sampling at no more than three points because of its asymptotic decay. The samplings for two and three particle size classes are shown in [Figure \[fig:Verteilung\_01\]]{}. We proceed as follows: In order to distribute the total volume fraction $\phi$ among the size classes we divide the individual values of the PDF at the sampling points by their sum and multiply them by $\phi$. In this way we obtain the volume fractions distributed according to the continuous size distribution. Of course, it is not necessary to choose sampling points for the monodisperse suspension. We now evaluate the assignment between the diameter $d_k$ and the relative volume fraction $\dphi_k$ given by the <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution  for different values of $\phi$. Thereby we apply the [equation ]{} for the viscosity as well as the equations  and  for the maximum packing fraction. We therefore assume the relation  to be valid and so we de facto apply the  model (see [section \[sec:VergleichLiteratur\]]{}). The results are presented in [Figure \[fig:Verteilung\_02\]]{}.
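The sampling procedure just described can be sketched as follows. The parameter values `X` and `q` below are hypothetical placeholders (the values used for the figure are not reproduced here), and the density is written with the exponent $q$ in the exponential, as obtained by differentiating the cumulative distribution.

```python
import math

def rr_pdf(d, X, q):
    """Rosin-Rammler probability density, d(R_CDF)/dd."""
    return (q / d) * (d / X) ** q * math.exp(-((d / X) ** q))

def sample_fractions(phi_total, diameters, X, q):
    """Distribute the total volume fraction among the size classes in
    proportion to the PDF values at the sampling points."""
    weights = [rr_pdf(d, X, q) for d in diameters]
    total = sum(weights)
    return [phi_total * w / total for w in weights]

# logarithmic sampling d = [1, 10, 100] um with hypothetical X, q:
fracs = sample_fractions(0.3, [1.0, 10.0, 100.0], X=30.0, q=1.2)
print(abs(sum(fracs) - 0.3) < 1e-12)  # True: the fractions sum to phi
```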
![Relative viscosity as a function of the volume fraction for various numbers of size classes $n$ from equation (\[eq:rekursallgvisko\]) with parameters $\varphi_c=0.64$, ${\left[\eta_{1}\right]}=2.5$ and ${\left[\eta_{2}\right]}=6.17$; Rosin-Rammler size distribution (samplings shown in Figure \[fig:Verteilung\_01\]); maximum packing fraction $\varphi_c$, $\varphi_{T2}^2$ and $\varphi_{T3}^3$[]{data-label="fig:Verteilung_02"}](Verteilung_02.pdf) It is clearly visible that the relative viscosity decreases as the number of particle size classes increases. The strongest influence can be noticed during the transition from a monodisperse to a bidisperse suspension while adding a third size class only causes relatively small changes. The curves diverge at the respective values of the final maximum packing fraction $\varphi_{Tn}^n$ which are also depicted in [Figure \[fig:Verteilung\_02\]]{}. ### Uniform distribution The uniform size distribution is characterized by the equality of all the volume fractions $\dphi_k$, so one has $\dphi_k=\phi/n$ for each size class. Analogously to the case of the <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution, we calculate the relative viscosity as a function of the total volume fraction $\phi$ using the [equation ]{} and the maximum packing fraction applying the relations  and . The results are depicted in [Figure \[fig:Verteilung\_03\]]{}, where the behavior qualitatively coincides with the results shown in [Figure \[fig:Verteilung\_02\]]{}. ![Relative viscosity as a function of the volume fraction for various numbers of size classes $n$ from equation (\[eq:rekursallgvisko\]) with parameters $\varphi_c=0.64$, ${\left[\eta_{1}\right]}=2.5$ and ${\left[\eta_{2}\right]}=6.17$ for a uniform distribution[]{data-label="fig:Verteilung_03"}](Verteilung_03.pdf) The relative viscosity decreases at constant $\phi$ and increasing $n$.
We confine ourselves to at most four size classes because the trend is already clearly visible with that number. ### Range of influence of polydispersity Table \[tab:Verteilung\] shows the viscosity decrease as a function of the polydispersity by means of different values of $\phi$, related to the monodisperse viscosity. 1ex

  Distribution    Ratio in %                          $\phi=0$   0.1    0.2    0.3    0.4    0.5    0.6
  --------------- ----------------------------------- ---------- ------ ------ ------ ------ ------ -----
  Rosin-Rammler   $\frac{\eta_r(n=2)}{\eta_r(n=1)}$   100.0      99.1   95.4   87.0   71.1   43.6   7.1
  Rosin-Rammler   $\frac{\eta_r(n=3)}{\eta_r(n=1)}$   100.0      98.7   94.0   83.9   66.1   38.2   5.5
  Uniform         $\frac{\eta_r(n=2)}{\eta_r(n=1)}$   100.0      99.0   95.1   86.9   71.9   46.2   8.6
  Uniform         $\frac{\eta_r(n=3)}{\eta_r(n=1)}$   100.0      98.6   93.6   83.4   65.7   38.6   5.9
  Uniform         $\frac{\eta_r(n=4)}{\eta_r(n=1)}$   100.0      98.5   92.9   81.7   63.0   35.5   5.1

The numerical values for the <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution differ only slightly from the values for the uniform distribution. At a total volume fraction of $\phi=0.1$ the reduction relative to the monodisperse viscosity amounts to approximately one percentage point, for $\phi=0.2$ partially to more than five percentage points. If, for instance, the deviation is to be kept below one percentage point, it is necessary to calculate the viscosity using the polydisperse formulae at volume fractions higher than 0.1, provided that the particle size distribution is broad enough. A polydisperse calculation only makes sense if the size distribution allows for at least two size classes with significant volume fractions having a diameter ratio of 7 to 10. For example, the inclusion of a fourth size class with a particle diameter of $d_4=1000\micm$ in the case of the <span style="font-variant:small-caps;">Rosin-Rammler</span> distribution in [Figure \[fig:Verteilung\_01\]]{} has no effect on the relative viscosity because the relative frequency is approximately equal to $9\times10^{-16}$ and therefore negligible.
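The uniform-distribution entries of the table can be reproduced with a few lines. The sketch below combines the recursive viscosity relation (with the exponent taken in the expansion-consistent form $({\left[\eta_{1}\right]}-2\varphi_T{\left[\eta_{2}\right]})/{\left[\eta_{1}\right]}$) and the successively built maximum packing fraction, assuming $\varphi_c=0.64$, ${\left[\eta_{1}\right]}=2.5$ and ${\left[\eta_{2}\right]}=6.17$.

```python
def eta_r(phi_P, phi_T, e1=2.5, e2=6.17):
    """One monodisperse step of the recursive viscosity relation."""
    pref = phi_T * e1 ** 2 / (2.0 * phi_T * e2 - e1)
    expo = (e1 - 2.0 * phi_T * e2) / e1
    return 1.0 - pref * (1.0 - (1.0 - phi_P / phi_T) ** expo)

def eta_uniform(phi, n, phi_c=0.64):
    """Relative viscosity of a suspension with n equal volume
    fractions phi/n (diameter ratios between classes assumed large)."""
    dphi = phi / n
    eta, phi_T, phi_cum = 1.0, phi_c, 0.0
    for m in range(1, n + 1):
        if m > 1:   # maximum packing for m classes, min of both situations
            sit1 = phi_T * (phi_cum + dphi) / (phi_T * dphi + phi_cum)
            sit2 = phi_c * (phi_cum + dphi) / dphi
            phi_T = min(sit1, sit2)
        phi_P = dphi / (1.0 - (n - m) * dphi)   # constant-volume loading
        eta *= eta_r(phi_P, phi_T)
        phi_cum += dphi
    return eta

for n in (2, 3):
    ratio = 100.0 * eta_uniform(0.1, n) / eta_uniform(0.1, 1)
    print(n, round(ratio, 1))   # 99.0 and 98.6, as in the table
```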
Consideration of particle deformation {#sec:Deformation} ------------------------------------- In [@2009_Hsueh] an approach to the consideration of deformable particles is developed. Thereby, the particle deformability is represented by a particle viscosity $\eta_p$. Using the so-called modified <span style="font-variant:small-caps;">Eshelby</span> model (see also [@2002_Torquato]) as well as the elastic-viscous analogy the derivation of the equation $$\eta_r=\frac{2\eta_p+3\eta_0+3\phi\left(\eta_p-\eta_0\right)}{2\eta_p+3\eta_0-2\phi\left(\eta_p-\eta_0\right)} \label{eq:etarhsueh}$$ is outlined. This equation accounts only for long-range particle interactions and is thus invalid for dense suspensions. However, as presented in [@2009_Hsueh], the first-order series expansion of the relation  can be used for the differential model (see [section \[sec:Bruggeman\]]{}), analogously to the use of the relation in [equation ]{}. So for the differential approach we have $$\eta+{\mathrm{d}}\eta = \eta\left[1+2.5\,\phi_{P}\left(\frac{\eta_p-\eta}{\eta_p+1.5\eta}\right)\right] \label{eq:betarhsueh}$$ To be consistent with the [equation ]{}, one has to set ${\left[\eta_{1}\right]}=2.5$. For $\eta_p\to\infty$ (rigid particles) the [equation ]{} reduces to the relation . In the course of the discrete construction process higher order terms out of the series expansion of the [equation ]{} can be considered. However, the limited validity of this equation for larger values of $\phi_{P}$ must be kept in mind.
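As a minimal sketch of the first-order update above, one construction step can be written as a single function; the rigid-particle limit $\eta_p\to\infty$ recovers the factor $1+2.5\,\phi_P$.

```python
def eta_step_deformable(eta, phi_P, eta_p):
    """One construction step for deformable particles (first-order
    Hsueh-type update); the bracket tends to 1 + 2.5*phi_P as
    eta_p -> infinity (rigid particles)."""
    return eta * (1.0 + 2.5 * phi_P * (eta_p - eta) / (eta_p + 1.5 * eta))

print(round(eta_step_deformable(1.0, 0.1, 1e12), 4))  # 1.25 (rigid limit)
```

For finite $\eta_p$ the viscosity increase per step is weaker than in the rigid case, and for $\eta_p$ below the suspension viscosity the factor even drops below unity.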
If we include terms up to the second order we can express the relative viscosity in the discrete construction process by $$\begin{split} \eta_{i+1} = &\eta_i \Big[ 1+2.5\,\phi_{P,i+1}\left(\frac{\eta_p-\eta_i}{\eta_p+1.5\eta_i}\right)\\ &+2.5\,\phi_{P,i+1}^{2}\left(\frac{\eta_p-\eta_i}{\eta_p+1.5\eta_i}\right)^2\Big] \end{split} \label{eq:betahsueh}$$ The [equation ]{} allows for the effect of the particle viscosity $\eta_p$ on the relative viscosity of polydisperse systems to be described approximately. Since in this work we do not focus on particle deformation, we simply state the result  without further validation. Conclusions {#chap:conclusions} =========== In the present work we provided a model for the relative viscosity of polydisperse suspensions of spherical non-colloidal particles. Using monodisperse viscosity correlations, we described polydisperse suspensions by means of a construction process consisting of successive additions of particle size classes. As a starting point, we proposed a generalized form of the well-known equation that allows for the choice of the second order intrinsic viscosity ${\left[\eta_{2}\right]}$. This modified equation can be used to approximate the various monodisperse viscosity relations existing in the literature and can therefore be regarded as a generic relation. Later, we described the construction process in detail applying a dimensionless way of description based on volume fractions. This rigorous description served as a basis for the calculation of the relative viscosity during the construction process. Starting from the model, we finally arrived at the model, connecting two approaches commonly regarded as uncorrelated. As an entirely new component, we introduced the polydisperse maximum packing fraction into the model. Here, we followed a physically consistent approach in contrast to the approaches presented in the literature.
Consistently with the relative viscosity calculation, we derived a formalism to determine the polydisperse maximum packing fraction by means of a common construction process. The entire formalism for calculating the relative viscosity—whether including the maximum packing fraction or not—is depicted on page . We evaluated the model in the case of two different particle size distributions in order to observe the influence of polydispersity represented by the number of particle size classes. Additionally, we revealed a possible approach for integrating particle deformability, represented by a particle phase viscosity, into the viscosity model using a result from the literature. So far, our model is only valid for large diameter ratios of consecutive size classes during the construction process. An attempt to generalize the model to the case of small size ratios as well as to the shear rate dependence of the relative viscosity will be presented in a future work. Proof of equivalence of the construction processes at variable and constant volume {#sec:Beweisderaequivalenz} ================================================================================== The Definition  implies for both cases—variable and constant total volume—the corollary $$\sum_{m=1}^i \delta\phi_m = \phi_i \label{eq:summedeltaphi}$$ Obviously both approaches must result in the same total volume fraction $\phi_n$ at the end of the construction process. So we ask if all partial sums $\phi_i$ ($i\leq n$) are identical in both cases. As a consequence, the values of $\delta\phi_{i+1}$ for $i=1\ldots n$ would coincide, too. The validity of these identities is important for the derivations in [section \[sec:Diskretes\_Modell\]]{} because it ensures that the viscosities calculated in both cases are equal. This is a prerequisite for a consistent model. In the following we show the equivalence of the two approaches.
To distinguish between the volume fractions $\phi_{i+1}^*$ occurring in each case we introduce the notations $\phi_{i+1}^{*\variabel}$ for the case of variable total volume and $\phi_{i+1}^{*\konstant}$ for the case of constant total volume. In a first step we show that the volume fractions $\phi_i$ coincide for $i=1\ldots n$, so that $$\phi_i^{\variabel} = \phi_i^{\konstant} \label{eq:ident1}$$ According to the [equation ]{}, the volume fraction $\phi_i^{\variabel}$ is given by $$\phi_i^{\variabel} = \frac{\sum_{m=1}^{i}V_m}{V_f+\sum_{m=1}^{i}V_m} \label{eq:phiivariabel}$$ while the volume fraction $\phi_i^{\konstant}$ from the [equation ]{} is $$\phi_i^{\konstant} = \frac{\sum_{m=1}^i \dphi_m}{1-\sum_{m=i+1}^n \dphi_m} \label{eq:phiikonstant2}$$ Insertion of the relation  into the [equation ]{} yields after an intermediate step $$\begin{split} \phi_i^{\konstant} &= \frac{\sum_{m=1}^i V_m}{V_f+\sum_{m=1}^{n}V_m - \sum_{m=i+1}^{n}V_m}\\ &= \frac{\sum_{m=1}^{i}V_m}{V_f+\sum_{m=1}^{i}V_m} = \phi_i^{\variabel} \end{split} \label{eq:ident1gezeigt}$$ and so the validity of the proposition  has been proved. The coincidence of the differences $\delta\phi_{i+1}$ in the [equation ]{} follows directly.
In a second and final step we prove that $$\phi_{P,i+1}^\variabel=\frac{\phi_{i+1}^{*\variabel}}{1+\phi_{i+1}^{*\variabel}} = \phi_{i+1}^{*\konstant}=\phi_{P,i+1}^\konstant \label{eq:ident2}$$ Using the definition  we may write the left-hand side of the [equation ]{} in the form $$\frac{\phi_{i+1}^{*\variabel}}{1+\phi_{i+1}^{*\variabel}} = \frac{V_{i+1}}{V_f+\sum_{m=1}^{i+1} V_m} \label{eq:linksident2}$$ Rearranging the right-hand side of the representation  and using the equations  and  we find $$\begin{split} \phi_{i+1}^{*\konstant}&=\frac{\dphi_{i+1}}{1-\sum_{m=i+2}^n \dphi_m}\\ &= \frac{ V_{i+1}}{V_f+\sum_{m=1}^{n}V_m - \sum_{m=i+2}^{n}V_m}\\ &= \frac{V_{i+1}}{V_f+\sum_{m=1}^{i+1} V_m} = \frac{\phi_{i+1}^{*\variabel}}{1+\phi_{i+1}^{*\variabel}} \end{split} \label{eq:rechtsident2}$$ This completes the proof of the proposition .
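The identities proved above are easy to check numerically. The sketch below (Python with NumPy; the particle volumes, fluid volume and number of size classes are arbitrary illustrative choices) uses the relation $\dphi_m = V_m/(V_f+\sum_{k=1}^n V_k)$ from the intermediate step of the proof and confirms that both construction processes produce the same partial volume fractions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                  # number of particle size classes
V = rng.uniform(0.1, 2.0, n)           # particle volumes V_1 ... V_n
Vf = 1.5                               # fluid volume V_f

# variable total volume:  phi_i = sum_{m<=i} V_m / (V_f + sum_{m<=i} V_m)
cum = np.cumsum(V)
phi_var = cum / (Vf + cum)

# constant total volume:  dphi_m = V_m / (V_f + sum_m V_m),
# phi_i = sum_{m<=i} dphi_m / (1 - sum_{m>i} dphi_m)
dphi = V / (Vf + V.sum())
phi_konst = np.cumsum(dphi) / (1 - (dphi.sum() - np.cumsum(dphi)))

# the equivalence proved in the appendix
assert np.allclose(phi_var, phi_konst)
```

The agreement is exact up to floating-point rounding, as the algebra above shows both expressions reduce to the same quotient.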
--- abstract: 'In this paper we use a recent version of the Ruelle-Perron-Frobenius Theorem to compute, in terms of the maximal eigendata of the Ruelle operator, the pressure derivative of translation invariant spin systems taking values on a general compact metric space. In this setting the absence of metastable states for continuous potentials on the one-dimensional one-sided lattice is proved. We apply our results to show that the pressure of an essentially one-dimensional Heisenberg-type model, on the lattice $\mathbb{N}\times \mathbb{Z}$, is Fréchet differentiable on a suitable Banach space. Additionally, exponential decay of the two-point function, for this model, is obtained for any positive temperature.' author: - 'E. A. Silva' title: 'Pressure Derivative on Uncountable Alphabet Setting: a Ruelle Operator Approach' --- Introduction ============ The Ruelle operator was introduced by David Ruelle in the seminal paper [@Ruelle-1968], in order to prove the existence and uniqueness of Gibbs measures for some long-range Statistical Mechanics models on the one-dimensional lattice. Ever since, the Ruelle operator has become a standard tool in a variety of mathematical fields, for instance in Dynamical Systems and other branches of Mathematics and Mathematical Physics. The Ruelle operator was generalized in several directions and its generalizations are commonly called transfer operators. Transfer operators appear in IFS theory, Harmonic Analysis and $C^{*}$-algebras, see for instance [@Excel-Lopes; @Lau; @Straub], respectively. In Dynamical Systems the existence of Markov partitions allows one to conjugate uniformly hyperbolic maps on compact differentiable manifolds with the shift map on the Bernoulli space. For more details see, for example, [@bo] and references therein. A field in which the Ruelle operator formalism has also proved useful is Multifractal Analysis. 
Bowen, in the seminal work [@boo], established a relationship between the Hausdorff dimension of certain fractal sets and the topological pressure; for more details see [@boo; @ma] and also the introductory texts [@Barreira; @Pesin]. The classical Thermodynamic Formalism was originally developed in the Bernoulli space $M^{\mathbb{N}}$, with $M$ being a finite alphabet, see [@PP]. The motivation to consider more general alphabets from the dynamical systems point of view is given in [@Sarig; @1; @Sarig; @2], where models with the infinite alphabet $M=\mathbb{N}$ are proposed to describe some non-uniformly hyperbolic maps, for instance the Manneville-Pomeau maps. Unbounded alphabets given by general standard Borel spaces, which include both compact and non-compact spaces, are considered in detail in [@Geogii88]. In [@le] the authors considered the alphabet $M=S^1$ and a Ruelle operator formalism is developed. Subsequently, in [@LMMS], this formalism was extended to general compact metric alphabets. Those alphabets do not fit in the classical theory, since the number of preimages under the shift map may not be countable. To circumvent this problem the authors considered an a priori measure $\mu$ defined on $M$, so that a generalized Ruelle operator can be defined, and a version of the Ruelle-Perron-Frobenius Theorem is proved. In this general setting concepts of entropy and pressure are also introduced. A variational principle is obtained in [@LMMS]. The authors also show that their theory can be used to recover some results of Thermodynamic Formalism for countable alphabets, by taking the one-point compactification of $\mathbb{N}$ and choosing a suitable a priori measure $\mu$. In Classical Statistical Mechanics uncountable alphabets show up, for example, in the so-called $O(n)$ models with $n\geq 2$. 
These are models on a $d$-dimensional lattice for which a vector in the $(n-1)$-dimensional sphere is assigned to every lattice site and the vectors at adjacent sites interact ferromagnetically via their inner product. More specifically, let $n\geq 1$ be an integer and let $G=(V(G), E(G))$ be a finite graph. A configuration of the [*Spin $O(n)$ model*]{} on $G$ is an assignment $\sigma:V(G)\to S^{n-1}$; we denote by $\Omega:=(S^{n-1})^{V(G)}$ the space of configurations. At inverse temperature $\beta\in (0, \infty)$, configurations are randomly chosen from the probability measure $\mu_{G,n, \beta}$ given by $$d\mu_{G,n, \beta}(\sigma):=\dfrac{1}{Z_{G, n, \beta}} \exp ( \beta \sum_{u,v \in E(G)} \sigma_u\cdot \sigma_v) \ d\sigma,$$ where $\sigma_u\cdot \sigma_v$ denotes the inner product in $\mathbb{R}^n$ of the assignments $\sigma_u$ and $\sigma_v$, $Z_{G, n, \beta}=\int_{\Omega} \exp( \beta \sum_{u,v \in E(G)} \sigma_u\cdot \sigma_v)\, d\sigma $ and $d\sigma$ is the uniform probability measure on $\Omega$. Special cases of these models have names of their own: when $n=0$ the model is the self-avoiding walk (SAW); when $n=1$, the Ising model; when $n=2$, the $XY$ model; finally, for $n=3$, the Heisenberg model, see [@le; @Ellis; @GJ-Book; @Geogii88; @ONModels] for details. In [@CL-SIL] the authors generalize the previous Ruelle-Perron-Frobenius Theorem to a more general class of potentials, satisfying what the authors called the weak and strong Walters conditions, which in turn are natural generalizations of the classical Walters condition. The regularity properties of the pressure functional are studied and its analyticity is proved on the space of Hölder continuous potentials. An exponential decay rate for correlations is obtained, in the case where the Ruelle operator exhibits the spectral gap property. 
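For concreteness, the measure $\mu_{G,n,\beta}$ defined above can be sampled on a small graph with a standard Metropolis chain. The following Python sketch is a minimal illustration, not part of the paper: the graph (a cycle), $\beta$, the seed and the sweep count are arbitrary choices. A proposed spin is accepted with probability $\min(1, e^{\beta\Delta})$, where $\Delta$ is the change in $\sum_{u,v\in E(G)}\sigma_u\cdot\sigma_v$.

```python
import numpy as np

def random_sphere(n, rng):
    """Uniform point on the sphere S^{n-1}."""
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

def metropolis_on(n, edges, sites, beta, sweeps, rng):
    """Metropolis sampler for the spin O(n) measure mu_{G,n,beta}."""
    sigma = np.array([random_sphere(n, rng) for _ in range(sites)])
    for _ in range(sweeps * sites):
        i = rng.integers(sites)
        proposal = random_sphere(n, rng)
        # change of sum_{edges at i} sigma_u . sigma_v under the proposal
        dE = sum((proposal - sigma[i]) @ sigma[v if u == i else u]
                 for u, v in edges if i in (u, v))
        if np.log(rng.random()) < beta * dE:   # accept w.p. min(1, e^{beta dE})
            sigma[i] = proposal
    return sigma

rng = np.random.default_rng(0)
sites = 8                                      # Heisenberg model (n = 3) on a cycle
edges = [(i, (i + 1) % sites) for i in range(sites)]
sigma = metropolis_on(3, edges, sites, beta=1.0, sweeps=200, rng=rng)
energy = sum(sigma[u] @ sigma[v] for u, v in edges)
```

Because only the edges incident to the updated site enter $\Delta$, each update costs $O(\deg(i))$ regardless of the graph size.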
An example, derived from the long-range Ising model, of a potential in the Walters class for which the associated Ruelle operator has no spectral gap is given. One of the main results of this work provides an explicit expression for the derivative of the pressure $P:C^{\alpha}(\Omega)\to \mathbb{R}$, where $\Omega \equiv M^{\mathbb{N}}$ and $M$ is a general compact metric space. To be more precise, we show that $$\tag{1}\label{pressure-derivative-0} P^{'}(f)\varphi=\int_{\Omega}\varphi h_f\, d\nu_f,$$ where $h_f$ and $\nu_f$ are the eigenfunction and eigenmeasure of the associated Ruelle operator. The proof follows closely the one given in [@ma], where the expression is obtained in the context of a finite alphabet. We also prove the existence of the limit $$P(f)=\lim_{n\to \infty}\dfrac{1}{n}\log \mathscr{L}_{f}^n{\bf 1}(x)$$ in the uniform sense, for any continuous potential $f$. We would like to point out that the existence of this limit in this setting has been proved in [@CL16]. We give a new proof of this fact here for two reasons: first, it is different from the proof presented in [@CL16] and we believe it is more flexible to be adapted to other contexts; second, some pieces of it are used to compute the pressure derivative. In the last section we apply our results in the case where $M=(S^2)^{\mathbb{Z}}$, endowed with a suitable a priori DLR-Gibbs measure. We introduce a Heisenberg-type model on the lattice $\mathbb{N}\times \mathbb{Z}$, depending on a real parameter $\alpha$, and use the Ruelle operator to obtain differentiability of the pressure and an exponential decay rate for the two-point function at any positive temperature. Basic Definitions and Results ============================= In this section we set up the notation and present some preliminary results. Let $M=(M,d)$ be a compact metric space, equipped with a Borel probability measure $\mu:\mathscr{B}(M)\to [0,1]$ having the whole space $M$ as its support. 
In this paper, the set of positive integers is denoted by $\mathbb{N}$. We shall denote by $\Omega=M^{\mathbb{N}}$ the set of all sequences $x=(x_1,x_2,\ldots )$, where $x_i \in M$, for all $i\in {\mathbb N}$. We denote the left shift mapping by $\sigma:\Omega\to\Omega,$ which is given by $\sigma(x_1,x_2,\ldots)=(x_2,x_3,\ldots)$. We consider the metric $d_{\Omega}$ on $\Omega$ given by $$d_{\Omega} (x,y) = \sum_{n=1}^{\infty} \frac{1}{2^n}d(x_n,y_n).$$ The metric $d_{\Omega}$ induces the product topology and therefore it follows from Tychonoff’s theorem that $(\Omega,d_{\Omega})$ is a compact metric space. The space of all continuous real functions $C(\Omega, \mathbb R)$ is denoted simply by $C(\Omega)$ and is endowed with the norm $\|\cdot\|_0$ defined by $\|f\|_0=\sup_{x\in \Omega} |f(x)|,$ which makes it a Banach space. For any fixed $0< \alpha< 1$ we denote by $C^{\alpha}(\Omega)$ the space of all $\alpha$-Hölder continuous functions, that is, the set of all functions $f:\Omega\to\mathbb{R}$ satisfying $$\mathrm{Hol}_{\alpha}(f) = \sup_{x,y\in\Omega: x\neq y} \dfrac{|f(x)-f(y)|}{d_{\Omega}(x,y)^{\alpha}} <+\infty.$$ We equip $C^{\alpha}(\Omega),~0< \alpha< 1,$ with the norm given by $\|f\|_{\alpha}= \|f\|_0+ \mathrm{Hol}_{\alpha}(f)$. We recall that $(C^{\alpha}(\Omega),\|\cdot\|_{\alpha}) $ is a Banach space for any $0< \alpha< 1$. Our *potentials* will be elements of $C(\Omega)$ and, in order to have a well defined Ruelle operator when $(M,d)$ is a general compact metric space, we need to consider an [*a priori measure*]{}, which is simply a Borel probability measure $\mu:\mathscr{B}(M)\to \mathbb{R}$, where $\mathscr{B}(M)$ denotes the Borel $\sigma$-algebra of $M$. For many of the most popular choices of an uncountable space $M$, there is a natural a priori measure $\mu$. Throughout this paper the a priori measure $\mu$ is supposed to have the whole space $M$ as its support. 
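Since the weights $2^{-n}$ are summable, $d_\Omega$ can be evaluated in practice from finitely many coordinates: truncating after $N$ terms incurs an error of at most $\operatorname{diam}(M)\,2^{-N}$. A small Python sketch makes this explicit, with the illustrative choice $M=[0,1]$ and $d(a,b)=|a-b|$ (not the paper's general setting).

```python
import numpy as np

def d_omega(x, y, N, diam=1.0):
    """Partial sum of d_Omega(x, y) over the first N coordinates,
    together with the a priori bound on the neglected tail."""
    n = np.arange(1, N + 1)
    partial = np.sum(np.abs(x[:N] - y[:N]) / 2.0 ** n)
    tail_bound = diam * 2.0 ** (-N)   # sum_{n>N} 2^{-n} d(x_n, y_n) <= diam 2^{-N}
    return partial, tail_bound

rng = np.random.default_rng(1)
x, y = rng.random(60), rng.random(60)
d40, err40 = d_omega(x, y, 40)
d60, err60 = d_omega(x, y, 60)
# refining the truncation changes the value by no more than the coarser tail bound
```

This geometric tail bound is also what makes $\alpha$-Hölder estimates with respect to $d_\Omega$ effectively local in the first coordinates.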
The Ruelle operator $\mathscr{L}_f: C(\Omega) \to C(\Omega)$ is the mapping sending $\varphi$ to $\mathscr{L}_{f}(\varphi)$ defined for any $x\in\Omega$ by the expression $$\mathscr{L}_f(\varphi)(x) = \int_M e^{f(ax)}\varphi(ax)d\mu(a),$$ where $ax$ denotes the sequence $ax=(a, x_1, x_2, \ldots)\in \Omega$. The classical Ruelle operator can be recovered in this setting by considering $M=\{0,1,\ldots,n\}$ and the a priori measure $\mu$ as the normalized counting measure. \[teo-RPF-compacto\] Let $(M,d)$ be a compact metric space, $\mu$ a Borel probability measure on $M$ having full support and $f$ a potential in $C^{\alpha}(\Omega)$, where $0<\alpha<1$. Then $\mathscr{L}_f: C^{\alpha}(\Omega) \to C^{\alpha}(\Omega)$ has a simple positive eigenvalue of maximal modulus $\lambda_f$, and there are a strictly positive function $h_f$ and a Borel probability measure $\nu_{f}$ on $\Omega$ such that: - $\mathscr{L}_f h_f=\lambda_f h_f,$ $\mathscr{L}^{*}_f\nu_f=\lambda_f\nu_f$; - the remainder of the spectrum of $\mathscr{L}_f: C^{\alpha}(\Omega) \to C^{\alpha}(\Omega)$ is contained in a disc of radius strictly smaller than $\lambda_f$; - for all continuous functions $\varphi\in C(\Omega)$ we have $$\lim_{n\to\infty} \left\| \lambda_{f}^{-n}\mathscr{L}^{n}_{f}\varphi-h_f\int \varphi\, d\nu_{f} \right\|_0 = 0.$$ See [@le] for the case $M=S^1$ and [@LMMS] for a general compact metric space. Following the references [@le; @LMMS] we define the entropy of a shift invariant measure and the pressure of the potential $f$, respectively, as follows: $$h(\nu) = \inf_{f \in C^{\alpha}(\Omega)} \left\{-\int_{\Omega}f d\nu+ \log\lambda_{f} \right\} \; \text{and} \;\, P(f) = \sup_{\nu\in \mathscr{M}_{\sigma}} \left\{h(\nu)+\int_{\Omega}f\, d\nu \right \},$$ where $\mathscr{M}_{\sigma}$ is the set of all shift invariant Borel probability measures. 
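The maximal eigendata $(\lambda_f, h_f)$ can be approximated numerically in simple instances. In the sketch below (an illustration under stated assumptions, not the paper's construction) the alphabet is $M=[0,1]$ with $\mu$ the Lebesgue measure, the potential depends only on the first two coordinates, $f(x)=g(x_1,x_2)$ with a hypothetical $g$, and a midpoint quadrature rule turns $\mathscr{L}_f$ into a matrix acting on functions of the first coordinate; power iteration then reproduces item (iii) of the theorem above.

```python
import numpy as np

# Discretize M = [0, 1] with a midpoint rule: for f(x) = g(x_1, x_2) the
# Ruelle operator acts on functions of the first coordinate as
#   (L_f phi)(b) ~ sum_a w * exp(g(a, b)) * phi(a).
N = 200
a = (np.arange(N) + 0.5) / N                 # quadrature nodes on M = [0, 1]
w = 1.0 / N                                  # quadrature weights (d mu)

g = lambda x1, x2: 0.5 * np.cos(2 * np.pi * x1) * x2   # hypothetical potential
K = w * np.exp(g(a[None, :], a[:, None]))    # K[b, a] = w * e^{g(a, b)}

# Power iteration: lambda_f^{-n} L_f^n 1 converges to h_f (RPF theorem)
phi = np.ones(N)
for _ in range(200):
    phi = K @ phi
    lam = phi.max()                          # running estimate of lambda_f
    phi /= lam                               # renormalize to avoid overflow
h = phi                                      # approximate eigenfunction h_f
residual = np.max(np.abs(K @ h - lam * h))   # check  L_f h = lambda_f h
```

The spectral gap guaranteed by the theorem is what makes the power iteration converge geometrically here.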
\[Principio variacional\] For each $f \in C^{\alpha}(\Omega)$ we have for all $x\in \Omega$ that $$P(f) = \lim_{n\to\infty} \frac{1}{n} \log[ \mathscr{L}_{f}^n({\bf 1})(x)] = \log\lambda_{f} =\sup_{\nu\in \mathscr{M}_{\sigma}} \left\{h(\nu)+\int_{\Omega}f\, d\nu \right \}.$$ Moreover the supremum is attained by $m_f=h_f \nu_f$. See [@LMMS] Corollary 1. \[Pressure Analiticity\] The function defined by $C^{\alpha}(\Omega)\ni f\mapsto P(f)\in \mathbb{R}$ is a real analytic function. See [@CL-SIL] for a proof. Let $f\in C^{\alpha}(\Omega)$ be a Hölder continuous potential, $\nu_f$ be the measure given by Theorem \[teo-RPF-compacto\] and $\varphi, \psi\in C^{\alpha}(\Omega)$. For each $n\in \mathbb{N}$ we define the correlation function $$\label{correlation-function} C_{\varphi,\psi,m_f}(n) = \int (\varphi\circ \sigma^n)\psi \, dm_f -\int \varphi \,dm_f \int \psi\, dm_f.$$ The above correlation function decays exponentially; more precisely, we have the following proposition: \[Decay Correlation\] For each $n\in \mathbb{N}$ let $C(n)$ denote the correlation function defined by . Then there exist constants $K>0$ and $0<\tau<1$ such that $|C(n)|\leq K \tau^n$. The proof when $M$ is finite is given in [@Baladi]. Due to Theorem \[teo-RPF-compacto\] this proof can be easily adapted to the case where $M$ is a compact metric space. We will include the details here for completeness. Before proving the above proposition, we present two auxiliary lemmas. \[lema-proj-spectral-pif\] Let $f\in C^{\alpha}(\Omega)$, and let $\partial D$ be the boundary of a disc $D$ centered at $\lambda_f$; then the spectral projection $$\pi_{f}\equiv \pi_{\mathscr{L}_{f}}=\int_{\partial D}(\lambda I-\mathscr{L}_{f})^{-1}d\lambda$$ is given by $ \pi_{f}(\varphi) = \big(\int \varphi\, d\nu_{f}\big) \cdot h_{f}. $ \[lema-phi1-fora\] Let $\varphi,\psi \in C^{\alpha}(\Omega)$; then $ \mathscr{L}_{f}^n(\varphi\circ \sigma^n \cdot \psi \cdot h_{f}) = \varphi \mathscr{L}_{f}^n( \psi h_{f}). 
$ The proofs of both lemmas are straightforward computations, so they are omitted. [**Proof of Proposition \[Decay Correlation\]**]{}. Since $m_f=h_{f} d\nu_{f}$ it follows from the definition of the correlation function that $$\begin{aligned} \label{D. Correlation 2} |C_{\varphi,\psi,m_f }(n)| &= \left| \int (\varphi\circ \sigma^n)\psi h_{f}\, d\nu_{f} - \int\varphi h_{f}\, d\nu_{f} \int\psi h_{f}\, d\nu_{f} \right|.\end{aligned}$$ Notice that $ (\mathscr{L}^{*}_{f})^n\nu_{f} = \lambda_{f}^n\nu_{f} $ and therefore the right-hand side above is equal to $$\left| \int\lambda_{f}^{-n} \mathscr{L}_{f}^n((\varphi\circ \sigma^n)\psi h_{f}) \, d\nu_{f} - \int\varphi h_{f}\, d\nu_{f} \int\psi h_{f}\, d\nu_{f} \right|.$$ By using Lemma \[lema-phi1-fora\] and performing simple algebraic computations we get $$\begin{aligned} \label{estimativa-1-dec-pol} |C_{\varphi,\psi,m_f }(n)| \leq \left( \int|\varphi|\, d\nu_{f} \right) \left\| \lambda^{-n}_{f} \mathscr{L}_{f}^n \left( \psi h_{f}- h_{f}\int\psi h_{f}\, d\nu_{f} \right) \right\|_{0}.\end{aligned}$$ By Theorem \[teo-RPF-compacto\] we know that the spectrum of $\mathscr{L}_{f}: C^{\alpha}(\Omega)\to C^{\alpha}(\Omega)$ consists of a simple eigenvalue $\lambda_{f}>0$ and a subset of a disc of radius strictly smaller than $\lambda_{f}.$ Set $ \tau=\sup\{|z|; |z|<1~\textnormal{and}~z\cdot\lambda_{f} \in \mathrm{Spec}(\mathscr{L}_{f})\}. $ The existence of the spectral gap guarantees that $\tau<1$. Let $\pi_{f}$ be the spectral projection associated to the eigenvalue $\lambda_{f}$; then the spectral radius of the operator $\mathscr{L}_{f}(I-\pi_{f})$ is exactly $\tau\cdot \lambda_f$. Since the commutator $[\mathscr{L}_{f},\pi_{f}]=0$, we get for all $n \in\mathbb{N}$ that $ [\mathscr{L}_{f}(I-\pi_{f})]^n = \mathscr{L}_{f}^{n}(I-\pi_{f}). 
$ From the spectral radius formula it follows that for each choice of $\widetilde{\tau}>\tau$ there is $n_0\equiv n_0(\widetilde{\tau})\in\mathbb{N}$ so that for all $n\geq n_0$ we have $ \|\mathscr{L}_{f}^{n}(\varphi-\pi_{f}\varphi)\|_0 \leq \lambda_{f}^n\widetilde{\tau}^n \|\varphi\|_0,~\forall\varphi\in C^{\alpha}(\Omega). $ Therefore there is a constant $C( \widetilde{\tau})>0$ such that for every $n\geq 1$ $$\|\mathscr{L}_{f}^n(\varphi-\pi_{f}\varphi)\|_0 \leq C( \widetilde{\tau}) \ \lambda_{f}^n\ \widetilde{\tau}^n\ \|\varphi\|_0 \qquad\forall\varphi\in C^{\alpha}(\Omega).$$ By using Lemma \[lema-proj-spectral-pif\] and the above upper bound in the inequality  we obtain $$\begin{aligned} |C_{\varphi,\psi,m_f}(n)| &\leq \left( \int |\varphi|\, d\nu_{f} \right) C\ \widetilde{\tau}^n\ \|\psi h_{f}\|_0 \\ &\leq C(\widetilde{\tau}) \|h_{f}\|_{0} \left( \int |\varphi|d\nu_{f} \right) \|\psi\|_0 \ \widetilde{\tau}^n. \tag*{\qed} \end{aligned}$$ Main Results ============ Proposition \[Principio variacional\] ensures for any Hölder potential $f$ that the limit $P(f) = \lim_{n\to\infty} n^{-1} \log[ \mathscr{L}_{f}^n({\bf 1})(x)] $ always exists and is independent of $x\in \Omega$. In what follows, we extend this result to all continuous potentials $f\in C(\Omega)$. This is indeed a surprising result and it does not have a counterpart on one-dimensional two-sided lattices, due to the existence of metastable states discovered by Sewell in [@Sewell]. Before presenting the proof of this fact, we want to explain the mechanism behind the absence of metastable states for one-dimensional systems on the lattice $\mathbb{N}$. The absence of metastable states for continuous potentials and finite state space $M$, on one-dimensional one-sided lattices, was, as far as we know, first proved by Ricardo Mañé in [@ma], although apparently he did not realize it. 
A generalization of this result for $M$ being a compact metric space appears in [@CL16], and again no mention of metastable states is made in that paper. The proof of this result presented in [@CL16] is completely different from ours, and we believe that the one presented here is more suitable to be adapted to other contexts. An alternative explanation of this fact can be found in [@CER17], Remark 2.2, and according to the first author of [@CER17] the first person to realize this fact was Aernout van Enter. Let $(\widetilde{\mathscr{B}},|\!|\!|\cdot|\!|\!|)$ and $(\mathscr{B},\|\cdot\| )$ be the classical Banach spaces of interactions defined as in [@Israel]. In the case of free boundary conditions, we have for any $\Phi\in\widetilde{\mathscr{B}}$ and a finite volume $\Lambda_n\subset\mathbb{Z}$ that $ |P_{\Lambda_n}(\Phi)-P_{\Lambda_n}(\Psi)| \leq |\!|\!|\Phi-\Psi|\!|\!| $ and therefore the finite volume pressure with [**free boundary conditions**]{} is a 1-Lipschitz function from $(\widetilde{\mathscr{B}},|\!|\!|\cdot|\!|\!|)$ to $\mathbb{R}$. On the lattice $\mathbb{Z}$, when boundary conditions are considered the best we can prove is $ |\tau_nP_{\Lambda_n}(\Phi)-\tau_nP_{\Lambda_n}(\Psi)| \leq \|\Phi-\Psi\|, $ for interactions $\Phi,\Psi\in \mathscr{B}$. From this we have that the finite volume pressure with boundary conditions is a 1-Lipschitz function from the smaller Banach space $(\mathscr{B},\|\cdot\|)$ to $\mathbb{R}$. The last inequality cannot be improved for general boundary conditions and interactions in the big Banach space $(\widetilde{\mathscr{B}},|\!|\!|\cdot|\!|\!|)$, because, as we will see in the proof of Theorem \[Pressao-assintotica\], it would imply that the infinite volume pressure is independent of the boundary conditions, contradicting Sewell's theorem. 
The mechanism that prevents the existence of metastable states for continuous potentials on the lattice $\mathbb{N}$ is the possibility of proving that the analogue of the finite volume pressure with boundary conditions on the lattice $\mathbb{N}$ is indeed a 1-Lipschitz function. More precisely: \[Pressao-assintotica\] For each continuous potential $f\in C(\Omega)$ there is a real number $P(f)$ such that $$\lim_{n\to \infty} \left\| \dfrac{1}{n}\log\mathscr{L}_f^n{\bf 1} -P(f)\right\|_{0} =0.$$ It is sufficient to prove that $\Phi_{n}:C(\Omega) \to C(\Omega)$, given by $$\Phi_{n}(f)(x)=\frac{1}{n}\log \mathscr{L}_f^n{\bf 1}(x)$$ converges to a Lipschitz continuous function $\Phi:C(\Omega) \to C(\Omega)$ in the following sense: $\|\Phi_n(f)-\Phi(f)\|_{0}\to 0$ as $n\to\infty$. Indeed, by Proposition \[Principio variacional\], for any fixed $0<\alpha< 1$ we have $\Phi(C^{\alpha}(\Omega))\subset \langle 1\rangle$, where $\langle 1 \rangle$ denotes the subspace generated by the constant functions in $C(\Omega)$. Since $C^{\alpha}(\Omega)$ is a dense subset of $(C(\Omega),\|\cdot\|_{0})$ and $\Phi$ is Lipschitz, it follows that $\Phi(C(\Omega))=\Phi(\overline{C^{\alpha}(\Omega)})\subset \langle 1 \rangle$. In order to deduce the convergence of $ (\Phi_{n})_{n\in \mathbb{N}}, $ it is more convenient to identify $ \Phi_{n}:C(\Omega)\to C(\Omega) $ with the function $ \Phi_{n}:C(\Omega)\times \Omega\rightarrow\mathbb{R}, $ given by $ \Phi_{n}(f,x)=(1/n)\log \mathscr{L}_f^n{\bf 1}(x). 
$ For any fixed $x\in\Omega$, it follows from the Dominated Convergence Theorem that the Fréchet derivative of $ \Phi_{n}:C(\Omega)\times \Omega\rightarrow\mathbb{R}, $ evaluated at $f$ and applied to $\varphi$, is given by $$\frac{\partial}{\partial f}\Phi_n(f,x)\cdot\varphi = \frac{1}{n} \frac{\displaystyle\int_{M^n}(S_{n}\varphi)({{\bf a}}x)\exp(S_{n}f)({{\bf a}}x)\, d\mu({{\bf a}})} {\mathscr{L}_f^n{\bf 1}(x)},$$ where $(S_{n}\varphi)(x)\equiv \sum_{j=0}^{n-1}\varphi\circ\sigma^{j}(x).$ Clearly, $ \displaystyle\left\| n^{-1}S_n\varphi\right\|_{0} \leq \|\varphi\|_{0}, $ and therefore we have the following estimate $$\begin{aligned} \label{pressao 3} \left|\frac{\partial}{\partial f}\Phi_{n}(f,x)\cdot\varphi\right| &= \left|\frac{\displaystyle\int_{M^n}\frac{1}{n}(S_{n}\varphi)({{\bf a}}x)\exp(S_{n} f)({{\bf a}}x)\, d\mu({{\bf a}})} {\mathscr{L}_f^n{\bf 1}(x)}\right| \nonumber \\ &\leq \frac{\displaystyle\int_{M^n}\left|\frac{1}{n}(S_{n}\varphi)({{\bf a}}x)\exp(S_{n}f)({{\bf a}}x)\, \right| d\mu({{\bf a}})} {\left|\displaystyle\int_{M^n}\exp(S_{n}f)({{\bf a}}x)\, d\mu({{\bf a}})\right|} \nonumber \\ &\leq \|\varphi\|_0,\end{aligned}$$ for any $f\in C(\Omega)$ independently of $x\in \Omega$. Taking the supremum in  over $x\in\Omega$ we obtain $$\left\|\frac{\partial}{\partial f}\Phi_{n}(f,\cdot)\cdot\varphi\right\|_0 \leq \|\varphi\|_0,$$ for all $n\in\mathbb{N}$. The above inequality allows us to conclude that $\|\frac{\partial}{\partial f}\Phi_{n}(f)\| \leq 1$, where $\|\cdot\|$ denotes the operator norm. Fix $f$ and $\tilde{f}$ in $C(\Omega)$ and define for each $n\in \mathbb{N}$ the map $\hat{\Phi}_{n}(t)=\Phi_{n}(\alpha(t),x),$ where $\alpha(t)=t f+(1-t)\tilde{f}$ with $0\leq t\leq 1$. Obviously, $\hat{\Phi}_{n}$ is a differentiable map when seen as a map from $[0,1]$ to $\mathbb{R}$ and we have that $ |\hat{\Phi}_{n}(1)-\hat{\Phi}_{n}(0)|=|\frac{d}{dt}\hat{\Phi}_{n}(\hat{t})(1-0)| $ for some $\hat{t}\in (0,1)$. 
Using the above estimate of the Fréchet derivative norm we have that $$\begin{aligned} \label{estimativa-dif-Phin} |\Phi_{n}(f,x)-\Phi_{n}(\tilde{f},x)| = \left|\frac{\partial}{\partial f}\Phi_{n}(\alpha(\hat{t}),x)(f-\tilde{f})\right|\leq\|f-\tilde{f}\|_{0}.\end{aligned}$$ As a consequence, for any fixed $f\in C(\Omega)$ the sequence $(\Phi_{n}(f))_{n\in\mathbb{N}}$ is uniformly equicontinuous. Moreover, $\sup_{n\in\mathbb{N}}\|\Phi_{n}(f)\|_{0}<\infty$. Indeed, from inequality $\eqref{estimativa-dif-Phin}$, the triangle inequality and the existence of the limit $\lim_{n\to\infty} \Phi_n({\bf 1})$, it follows that $$\begin{aligned} |\Phi_n(f)| \leq |\Phi_n(f)-\Phi_n({\bf 1})|+|\Phi_n({\bf 1})| &\leq \|f-{\bf 1} \|_{0} +|\Phi_n({\bf 1})| \\ &\leq \|f-{\bf 1}\|_{0} +\sup_{n\in\mathbb{N}}|\Phi_n({\bf 1})| \\ &\equiv M(f). \end{aligned}$$ Now, we are able to apply the Arzelà-Ascoli Theorem to obtain a subsequence $ (\Phi_{n_k}(f))_{k\in\mathbb{N}}, $ which converges to a function $\Phi(f)\in C(\Omega)$. We now show that $\Phi_n(f)\to \Phi(f)$, when $n\to\infty$. Let $\varepsilon>0$ and $g\in C^{\alpha}(\Omega)$ such that $\|f-g\|_0<\varepsilon$. Choose $n_k$ and $n$ sufficiently large so that the inequalities $\|\Phi_{n_k}(f)- \Phi(f)\|_{0}<\varepsilon$, $\|\Phi_n(g)-\Phi_{n_k}(g)\|_{0}<\varepsilon$ and $\|\Phi_{n_k}(g)-\Phi_{n_k}(f)\|_{0}<\varepsilon$ are satisfied. For these choices of $g,n$ and $n_k$, we have by the triangle inequality and inequality  that $$\begin{aligned} \|\Phi_n(f)- \Phi(f)\|_{0} &\leq \|\Phi_n(f)-\Phi_{n_k}(f)\|_{0} + \|\Phi_{n_k}(f)- \Phi(f)\|_{0} \\ &< \|\Phi_n(f)-\Phi_{n_k}(f)\|_{0} + \varepsilon \\ &\leq \|\Phi_n(f)-\Phi_{n}(g)\|_{0} + \|\Phi_n(g)-\Phi_{n_k}(f)\|_{0} + \varepsilon \\ &< \|\Phi_n(g)-\Phi_{n_k}(f)\|_{0} + 2\varepsilon \\ &\leq \|\Phi_n(g)-\Phi_{n_k}(g)\|_{0} + \|\Phi_{n_k}(g)-\Phi_{n_k}(f)\|_{0} + 2\varepsilon \\ &< 4\varepsilon,\end{aligned}$$ thus proving the desired convergence. 
To finish the proof it is enough to observe that the inequality \[estimativa-dif-Phin\] implies that $\Phi$ is a Lipschitz continuous function. \[Pressure derivative\] For each fixed $0<\alpha<1$ and $f\in C^{\alpha}(\Omega)$ the Fréchet derivative of the pressure functional $P:C^{\alpha}(\Omega)\rightarrow\mathbb{R}$ is given by $$\label{pressure-derivative} P'(f)\varphi=\int\varphi h_{f}\, d\nu_{f}.$$ For each fixed $f \in C^{\alpha}(\Omega),$ $0<\alpha< 1,$ we have $$\lim_{n\to \infty} \left\| \frac{1}{n} \frac{\mathscr{L}_{f}^{n}(S_{n}\varphi)} {\mathscr{L}^{n}_{f}{\bf 1}} - \int\varphi h_{f}\, d\nu_{f} \right\|_0 = 0$$ for every $\varphi\in C(\Omega)$; in particular the convergence is uniform in $x$. A straightforward calculation shows that $$\begin{aligned} \label{desigualdade} \left\| \frac{1}{n}\frac{\mathscr{L}_{f}^{n}(S_{n}\varphi)}{\mathscr{L}^{n}_{f}{\bf 1}} - \int\varphi h_{f}\, d\nu_{f} \right\|_{0} &= \left\| \frac{\lambda_f^{n}}{\mathscr{L}^{n}_{f}{\bf 1}}\frac{1}{n} \lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi) - \int\varphi h_{f}\, d\nu_{f} \right\|_{0} \\[0.4cm] & \hspace*{-3,175cm} \leq \sup_{n\in\mathbb{N}} \left\|\frac{\lambda_f^{n}}{\mathscr{L}^{n}_{f}{\bf 1}}\right\|_{0} \left\|\frac{1}{n}\lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi)- (\lambda_f^{-n}\mathscr{L}^{n}_{f}{\bf 1})\int\varphi h_{f}\, d\nu_{f}\right\|_{0}. \nonumber\end{aligned}$$ Therefore to get the desired result it is sufficient to show that: - $ \displaystyle\sup_{n}\left\|\frac{\lambda_f^{n}}{\mathscr{L}^{n}_{f}{\bf 1}}\right\|_{0} $ is finite; - $ \displaystyle \quad (\lambda_f^{-n}\mathscr{L}^{n}_{f}{\bf 1})\int\varphi h_{f}\, d\nu_{f} $ converges to $ \displaystyle h_f\int\varphi h_{f}\, d\nu_{f}; $ - $ \displaystyle \quad\frac{1}{n}\lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi) $ converges to $ \displaystyle h_f\int\varphi h_{f}\, d\nu_{f}. $ The first two items are immediate consequences of the Ruelle-Perron-Frobenius Theorem. 
Indeed, the convergence $ \lambda_f^{-n}\mathscr{L}_f^n {\bf 1}\stackrel{\|\cdot\|_0}{\longrightarrow}h_f $ immediately gives that $$\begin{aligned} (\lambda_f^{-n}\mathscr{L}^{n}_{f}{\bf 1}) \int\varphi h_{f}\, d\nu_{f} \stackrel{\|\cdot\|_0}{\longrightarrow} h_f\int\varphi h_{f}\, d\nu_{f}.\end{aligned}$$ Since $h_f$ is a continuous strictly positive function, it follows from the compactness of $\Omega$ that $h_f$ is bounded away from zero, and consequently $ \lambda_f^{n}/\mathscr{L}_f^n{\bf 1} \stackrel{\|\cdot\|_0}{\longrightarrow} 1/h_f. $ Since $h_f$ is strictly positive and continuous, $1/h_f$ is also positive and bounded, which gives that $$\begin{aligned} \sup_{n}\left\|\frac{\lambda_f^{n}}{\mathscr{L}^{n}_{f}{\bf 1}}\right\|_{0}<\infty.\end{aligned}$$ The third expression in (c) is harder to analyze than the previous two, so we will split the analysis into three claims. *Claim 1.* For all $\varphi\in C(\Omega)$ and $n\in\mathbb{N}$ we have that, $$\label{Claim 1} \lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi) = \sum_{j=0}^{n-1}\lambda_f^{-(n-j)}\mathscr{L}_{f}^{n-j} (\varphi\lambda_f^{-j}\mathscr{L}^{j}_{f}{\bf 1}).$$ We first observe that $ \mathscr{L}_{f}^{n}(\varphi\circ\sigma^{n}) = \varphi\mathscr{L}^{n}_{f}{\bf 1}, $ which is an easy consequence of the definition of the Ruelle operator. From that and the linearity of the Ruelle operator it follows that $$\begin{aligned} \mathscr{L}_{f}^{n}(S_{n}\varphi) &= \mathscr{L}_{f}^{n}(\varphi)+\mathscr{L}_{f}^{n-1}(\mathscr{L}_{f}(\varphi\circ\sigma))+ \ldots+\mathscr{L}_{f}(\mathscr{L}_{f}^{n-1}(\varphi\circ\sigma^{n-1})) \\ & \hspace*{-1cm} = \mathscr{L}_{f}^{n}(\varphi)+\mathscr{L}_{f}^{n-1}(\varphi\mathscr{L}_{f}{\bf 1})+ \ldots+\mathscr{L}_{f}(\varphi\mathscr{L}^{n-1}_{f}{\bf 1}) = \sum_{j=0}^{n-1}\mathscr{L}_{f}^{n-j}(\varphi\mathscr{L}^{j}_{f}{\bf 1})\end{aligned}$$ finishing the proof. 
From the linearity of the Ruelle operator we easily get $$\begin{aligned} \lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi) = \sum_{j=0}^{n-1}\lambda_f^{-(n-j)}\mathscr{L}_{f}^{n-j} (\varphi\lambda_f^{-j}\mathscr{L}^{j}_{f}{\bf 1}).\end{aligned}$$ *Claim 2.* For each $\varphi\in C(\Omega)$ we have $$\label{eq:5} \lim_{n\to \infty}\left\| \frac{1}{n}\left( \lambda_f^{-n}\mathscr{L}_{f}^{n}(S_{n}\varphi)-\sum_{j=0}^{n-1}\lambda_f^{-(n-j)}\mathscr{L}_{f}^{n-j}\varphi h_{f} \right) \right\|_{0} = 0.$$ To verify the claim we use the identity from Claim 1 to obtain the following estimate: $$\begin{aligned} \left\| \frac{1}{n} \left\{ \lambda_f^{-n}\mathscr{L}_f^{n}(S_{n}\varphi) - \sum_{j=0}^{n-1}\lambda_f^{-(n-j)}\mathscr{L}_f^{(n-j)}\varphi h_f \right\} \right\|_{0} \\[0.4cm] &\hspace*{-4cm}= \left\|\frac{1}{n}\sum_{j=0}^{n-1}\lambda_f^{-(n-j)}\mathscr{L}_f^{n-j}\left(\varphi\lambda_f^{-j}\mathscr{L}^{j}_f{\bf 1}-\varphi h_f\right)\right\|_{0} \\[0.4cm] &\hspace*{-4cm}\leq \frac{const.}{n}\, \sum_{j=0}^{n-1}\left\|\varphi\lambda_f^{-j}\mathscr{L}^{j}_f{\bf 1}-\varphi h_f\right\|_{0}.\end{aligned}$$ The last term in the above inequality converges to zero as $n\to \infty$, because it is a Cesàro mean of the sequence $\varphi\lambda_f^{-j}\mathscr{L}^{j}_f{\bf 1}-\varphi h_f$, which converges to zero in the uniform norm by the Ruelle-Perron-Frobenius Theorem; so the claim is proved. 
*Claim 3.* For each $\varphi\in C(\Omega)$ the following limit holds: $$\label{eq:7} \lim_{n\to \infty}\left\|\frac{1}{n}\sum_{j=0}^{n-1}\lambda_{f}^{-(n-j)}\mathscr{L}_{f}^{n-j}\varphi h_{f}- h_{f}\int\varphi h_{f}d\nu_{f}\right\|_{0}=0.$$ Define $A_{n,j}:=\lambda_f^{-(n-j)}\mathscr{L}_{f}^{n-j}\varphi h_{f}$ and $B:=h_{f}\int\varphi h_{f}\, d\nu_{f}.$ On the one hand, by the Ruelle-Perron-Frobenius Theorem, we must have for any fixed $j\in\mathbb{N}$ that $\lim_{n\to \infty}\left\|A_{n,j}-B\right\|_{0}=0.$ On the other hand, we have by the triangle inequality and the convergence in the Cesàro sense that $$\begin{aligned} \left\|\dfrac{1}{n}\sum_{j=0}^{n-1}A_{n,j}-B\right\|_{0} \leq \dfrac{1}{n}\sum_{j=0}^{n-1}\left\|A_{n,j}-B\right\|_{0}\longrightarrow 0,\end{aligned}$$ as $n\rightarrow\infty,$ finishing the proof of Claim 3. This establishes the Lemma. [**Proof of Theorem \[Pressure derivative\]**]{} Fix $x\in \Omega $ and define the function $\Phi_{n}:C^{\alpha}(\Omega)\rightarrow\mathbb{R}$ by $$\Phi_{n}(f)\equiv \frac{1}{n}\log(\mathscr{L}_{f}^{n}{\bf 1})(x).$$ As we have seen in the proof of Theorem \[Pressao-assintotica\], the Fréchet derivative of $\Phi_n$ at $f$ evaluated at $\varphi\in C^{\alpha}(\Omega)$ is given by $$\Phi_{n}'(f)\varphi = \frac{1}{n} \frac{\mathscr{L}_{f}^{n}(S_{n}\varphi)}{\mathscr{L}^{n}_{f}{\bf 1}}.$$ Since the pressure functional is analytic on $C^{\alpha}(\Omega)$ (Theorem \[Pressure Analiticity\]), it follows from the previous Lemma that $$P'(f)\varphi = \lim_{n\to \infty} \Phi_{n}'(f)\varphi = \int\varphi h_{f}\, d\nu_{f}.$$ A Heisenberg type Model ======================== The aim of this section is to introduce a Heisenberg type model on the half-space $\mathbb{N}\times \mathbb{Z}$ and to prove the absence of phase transition and the exponential decay of correlations for this model. The construction of this model is split into two steps. First step. We consider $(S^2)^\mathbb{Z}$ as the configuration space. 
At inverse temperature $\beta\in (0, \infty),$ the configurations are randomly chosen according to the following probability measures $\mu_{n, \beta}$ $$\label{Heisenberg-measure} d\mu_{n, \beta}(\sigma):=\dfrac{1}{Z_{n, \beta}} \exp ( \beta \sum_{i,j\in \Lambda_n} \sigma_i\cdot \sigma_j )\, d\sigma,$$ where $\Lambda_n$ denotes the symmetric interval of integers $[-n,n]$, $\sigma_i\cdot \sigma_j$ denotes the inner product in $\mathbb{R}^3$ of the nearest neighbors $\sigma_i$ and $\sigma_j$, $$Z_{n, \beta}=\int_{(S^2)^{\Lambda_n}}\exp ( \beta \sum_{i,j} {\sigma_i\cdot \sigma_j } )\, d\sigma$$ and $d\sigma$ is the uniform probability measure on $(S^{2})^{\Lambda_n}$. Let the measure $\hat{\nu}$ be the unique accumulation point of the sequence of probability measures given by , see [@Geogii88] for details. The measure $\hat{\nu}$ will be used as the a priori measure in the second step. Second step. Now we introduce a Heisenberg type model. We begin with the compact metric space $(S^2)^{\mathbb{Z}}$, where $S^2$ is the 2-dimensional unit sphere in $\mathbb{R}^3$, as our alphabet. Now the configuration space is the Cartesian product $\Omega =((S^2)^{\mathbb{Z}})^{\mathbb{N}},$ that is, a configuration is a point $\sigma=(\sigma(1),\sigma(2), \cdots)\in \Omega$, where each ${\sigma}(i)$ is of the form $\sigma(i)=(\ldots,\sigma_{(i,-2)},\sigma_{(i,-1)},\sigma_{(i,0)}, \sigma_{(i,1)}, \ldots)$, and each $\sigma_{(i,j)}\in S^2$. We denote by $\|\cdot\|$ the Euclidean norm on $\mathbb{R}^3$, and by $v\cdot w$ the inner product of two elements of $\mathbb{R}^3$. Fix a summable ferromagnetic translation invariant interaction $J$ on $\mathbb{Z}$, that is, a function $J:\mathbb{Z}\to (0,\infty)$, and assume that $J(n)= e^{-|n|\alpha}$, for some $\alpha>0$. Of course, we have $ \sum_{n\in \mathbb{Z}} J(n)<\infty.
$ Now we consider the potential $f:\Omega \to \mathbb{R}$ given by $$\label{Heisenberg Potential} f(\sigma)= \sum_{ n\in \mathbb{Z}} J(n)\ \sigma_{(1,n)}\cdot\sigma_{(2,n)}.$$ Note that this potential involves only nearest-neighbor interactions, between the first two rows. ![The configuration space $((S^2)^{\mathbb{Z}})^{\mathbb{N}}$[]{data-label="fig:FigPaper"}](FigPaper2){width="0.6\linewidth"} The potential $f$ given by is actually an $\alpha$-Hölder continuous function. Indeed, $$\begin{aligned} \label{Holder-1} |f(\sigma)-f(\omega)| &\leq \left |\sum_{n\in \mathbb{Z}}J(n)\sigma_{(1,n)}\cdot (\sigma_{(2,n)}-\omega_{(2,n)})\right|\nonumber \\ &\hspace*{3.15cm} + \left|\sum_{n\in \mathbb{Z}}J(n)\omega_{(2,n)}\cdot (\sigma_{(1,n)}-\omega_{(1,n)})\right| \nonumber \\[0.5cm] &\leq \sum_{n\in \mathbb{Z}}J(n)\|\sigma_{(2,n)}-\omega_{(2,n)}\|+ \sum_{n\in \mathbb{Z}}J(n)\|\sigma_{(1,n)}-\omega_{(1,n)}\|. \end{aligned}$$ From the very definition of the distance, we have $$\begin{aligned} \label{Holder-2} d(\sigma,\omega) \geq \dfrac{1}{2^n}\sum_{j\in \mathbb{Z}}\dfrac{1}{2^{|j|}}\|\sigma_{(n,j)}-\omega_{(n,j)}\| \geq \dfrac{1}{2^{n+|j|}}\|\sigma_{(n,j)}-\omega_{(n,j)}\|.\end{aligned}$$ By using and we get that $$\dfrac{|f(\sigma)-f(\omega)|}{d(\sigma,\omega)^{\alpha}} \leq K_1\sum_{n\in \mathbb{N}}J(n)2^{\alpha n} + K_2\sum_{n\in \mathbb{N}}J(n)2^{\alpha n}.$$ Since for all $n\in\mathbb{N}$ we have $ J(n)2^{\alpha n} \leq \exp({-n\alpha(1-\log 2)}), $ and the constant $\alpha (1-\log 2)$ is positive, it follows that the series $\sum_{n\in \mathbb{N}}J(n)2^{\alpha n}$ is convergent. Therefore, $f$ is an $\alpha$-Hölder continuous function. From Theorem \[teo-RPF-compacto\], we have that there is a unique probability measure $\nu_{f}$ such that $\mathscr{L}_{f}^{*}\nu_f=\lambda_{f}\nu_f$. This probability measure, following [@CL16], is the unique DLR-Gibbs measure associated to a quasilocal specification associated to $f$; see [@CL14] for the construction of this specification.
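Returning to the first-step measures $\mu_{n,\beta}$: their single-bond structure is explicit. By rotation invariance, the normalizer of one bond is $\int_{S^2} e^{\beta\,\sigma\cdot\sigma'}\,d\sigma = \sinh\beta/\beta$, with $d\sigma$ the uniform probability measure on the sphere. A short Monte Carlo sketch checking this closed form; the sampler and the sample sizes below are our own illustrative choices, not part of the model:

```python
import math, random

random.seed(0)

def uniform_sphere():
    """Uniform point on S^2: by Archimedes' theorem the z-coordinate is
    uniform on [-1, 1], and the azimuth is uniform on [0, 2*pi)."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def bond_normalizer_mc(beta, n_samples=200_000):
    """Monte Carlo estimate of the single-bond integral of exp(beta sigma . sigma')
    over S^2 with the uniform probability measure; by rotation invariance we may
    fix sigma' = (0, 0, 1), so the integrand is exp(beta * z)."""
    return sum(math.exp(beta * uniform_sphere()[2]) for _ in range(n_samples)) / n_samples

beta = 1.0
exact = math.sinh(beta) / beta   # closed form: (1/2) * int_{-1}^{1} e^{beta t} dt
assert abs(bond_normalizer_mc(beta) - exact) < 0.02
```

The closed form follows from reducing the surface integral to the one-dimensional integral $\frac12\int_{-1}^{1}e^{\beta t}\,dt$.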
Since the horizontal interactions in our model decay fast in the $y$-direction, it is natural to expect that the model is essentially a one-dimensional model. This feature allows us to obtain the following result Let $\beta f$ be a potential, where $f$ is given by and the inverse temperature $\beta \in (0,\infty)$. Consider the a priori measure $\hat{\nu}$ on $(S^2)^{\mathbb{Z}}$, constructed from . Then for any fixed $\beta>0$ and $m\in\mathbb{Z}$ there are positive constants $K(\beta)$ and $c(\beta)$ such that for all $n\in\mathbb{N}$ we have $$\int_{\Omega} (\sigma_{(1,m)}\cdot\sigma_{(n+1,m)})\ d\nu_{\beta f} \leq K(\beta)e^{-c(\beta)n}.$$ Furthermore, the pressure functional is differentiable at $\beta f$ and its derivative is given by expression . Fix $m\in\mathbb{Z}$ and let $\sigma_{(n,m)}\equiv(\sigma_{(n,m)}^{x},\sigma_{(n,m)}^{y}, \sigma_{(n,m)}^{z})$. Consider the following continuous potentials $\varphi^u, \xi^u$ given by $$\varphi^u(\sigma) = \sigma_{(1,m)}^{u} \qquad\text{and}\qquad \xi^u(\sigma) = \frac{\sigma_{(1,m)}^{u}}{h_{\beta f}(\sigma)}, \quad u=x,y,z.$$ Note that $$\int_{\Omega} \xi^u \, dm_{\beta f} = \int_{\Omega} \sigma_{(1,m)}^{u} \, d\nu_{\beta f}(\sigma) = 0,\quad u=x,y,z,$$ where the last equality comes from the $O(3)$-invariance of the eigenmeasure. Since $h_{\beta f}(\sigma)=h_{\beta f}(-\sigma)$, see [@CL17], it follows again from the $O(3)$-invariance of $\nu_{\beta f}$ that $$\int_{\Omega} \sigma_{(1,m)}^{u} \, dm_{\beta f} = \int_{\Omega} \sigma_{(1,m)}^{u} h_{\beta f}(\sigma) \, d\nu_{\beta f} = 0, \quad u=x,y,z.$$ Therefore $$\begin{aligned} \label{Decay-Two-Point} \int_{\Omega} (\sigma_{(1,m)}^{u}\sigma_{(n+1,m)}^u)\ d\nu_{\beta f} &= \int (\varphi^u\circ \sigma^n)\xi^u \, dm_{\beta f} \nonumber \\ &= C_{\varphi^{u},\xi^{u},m_{\beta f}}(n) = O(e^{-c(\beta)n}), \end{aligned}$$ $u=x,y,z.$ By summing over $u=x,y,z$ we get the claimed exponential decay.
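For the one-dimensional chain of the first step, the exponential decay can be seen explicitly: growing the chain site by site via its Markov property (the conditional law of a spin given its neighbor is a von Mises-Fisher step on $S^2$), the two-point function is $\langle\sigma_0\cdot\sigma_n\rangle = u(\beta)^n$ with $u(\beta)=\coth\beta - 1/\beta$. The sketch below illustrates this one-dimensional mechanism only, not the full half-space model; the chain construction, seeds and tolerances are our own choices:

```python
import math, random

random.seed(1)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def sample_neighbor(mu, beta):
    """Sample sigma on S^2 with density proportional to exp(beta mu . sigma).
    The cosine t = mu . sigma has density beta e^{beta t} / (2 sinh beta) on
    [-1, 1], sampled by inverse CDF; the azimuth around mu is uniform."""
    u = random.random()
    t = math.log(math.exp(-beta) + u * (math.exp(beta) - math.exp(-beta))) / beta
    a = (1.0, 0.0, 0.0) if abs(mu[0]) < 0.9 else (0.0, 1.0, 0.0)
    e1 = normalize(cross(mu, a))          # orthonormal frame perpendicular to mu
    e2 = cross(mu, e1)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - t * t))
    return tuple(t * mu[i] + r * (math.cos(phi) * e1[i] + math.sin(phi) * e2[i])
                 for i in range(3))

def two_point(beta, gap, length=100_000):
    """Estimate <sigma_0 . sigma_gap> by growing the chain with its Markov property."""
    spins, s = [], normalize((1.0, 1.0, 1.0))
    for _ in range(length):
        spins.append(s)
        s = sample_neighbor(s, beta)
    return sum(sum(spins[i][k] * spins[i + gap][k] for k in range(3))
               for i in range(length - gap)) / (length - gap)

beta = 1.0
u = 1.0 / math.tanh(beta) - 1.0 / beta    # u(beta) = coth(beta) - 1/beta
for gap in (1, 2, 3):
    assert abs(two_point(beta, gap) - u ** gap) < 0.03
```

The key identity behind the assertion is $E[\sigma_{i+1}\,|\,\sigma_i] = u(\beta)\,\sigma_i$, so correlations contract by the factor $u(\beta)$ at each step, giving the exponential rate $c(\beta) = -\log u(\beta)$ for this chain.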
The last statement follows from the fact that $(S^2)^{\mathbb{Z}}$ is a compact metric space, $\hat{\nu}$ is a fully supported a priori measure on $(S^2)^{\mathbb{Z}}$ and $\beta f$ is $\alpha$-Hölder. Therefore Proposition \[Decay Correlation\] applies and the corollary follows. Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank Leandro Cioletti, Artur Lopes and Andréia Avelar for fruitful discussions and comments. . *On the general one-dimensional XY Model: positive and zero temperature, selection and non-selection*. Rev. Math. Phys., v. 23, p. 1063–1113, [**2011**]{}. . *Positive Transfer Operators and Decay of Correlations*. World Scientific Publishing Co., [**2000**]{}. . *Thermodynamic Formalism and Applications to Dimension Theory*. Birkhauser, [**2010**]{}. . *Equilibrium states and the ergodic theory of Anosov diffeomorphisms*. Lecture Notes in Mathematics, v. 470, Springer, [**1994**]{}. . *Hausdorff dimensions of quasicircles*. IHES Publ. Math., v. 50, p. 11–25, [**1977**]{}. . *Renormalization group and analyticity in one dimension: a proof of [D]{}obrushin’s theorem*. Comm. Math. Phys., v. 80, p. 255–269, [**1981**]{}. . *The Double Transpose of the Ruelle Operator*. ArXiv e-print:1710.03841, p. 1–19, [**2017**]{}. . *Interactions, Specifications, DLR probabilities and the Ruelle Operator in the One-Dimensional Lattice*. Disc. Cont. Dyn. Sys.-A, v. 37, n. 12, p. 6139–6152, [**2017**]{}. . *Ruelle Operator for Continuous Potentials and DLR-Gibbs Measures*. arXiv:1608.03881, [**2016**]{}. . *Correlation Inequalities and Monotonicity Properties of the Ruelle Operator*. arXiv:1703.06126, [**2017**]{}. . *Spectral properties of the Ruelle operator on the Walters class over compact spaces*. Nonlinearity, v. 29, p. 2253–2278, [**2016**]{}. *Entropy, Large Deviation and Statistical Mechanics*. Springer, [**2005**]{}. .
*$C^{*}$-Algebras, approximately proper equivalence relations and thermodynamic formalism*. Erg. Theo. and Dyn. Syst., v. 24(4), p. 1051–1082, [**2004**]{}. . *Quantum physics: A functional integral point of view*. Second edition, Springer-Verlag, New York, [**1987**]{}. , *Gibbs Measures and Phase Transitions.* de Gruyter, Berlin, [**1988**]{}. . *Convexity in the theory of lattice gases*. Princeton University Press, Princeton, N.J., 1979. Princeton Series in Physics, with an introduction by Arthur S. Wightman. . *Ruelle Operator with nonexpansive IFS*. Studia Mathematica, v. 148(2), p. 143–169, [**2001**]{}. . *Entropy and Variational Principle for one-dimensional Lattice Systems with a general a-priori measure: finite and zero temperature*. Erg. Theo. and Dyn. Syst., v. 35, p. 1925–1961, [**2015**]{}. . *The [H]{}ausdorff dimension of horseshoes of diffeomorphisms of surfaces*, Bol. Soc. Brasil. Mat., v. 20, p. 1–24, [**1990**]{}. . *Zeta functions and the periodic orbit structure of hyperbolic dynamics*. Astérisque, v. 187–188, [**1990**]{}. . *Lecture Notes on the Spin and Loop $O(n)$ models.* Notes, University of Bath, [**2016**]{}. . *Dimension theory in dynamical systems: contemporary views and applications*. Chicago Lectures in Mathematics Series, [**1997**]{}. . *Statistical mechanics of a one-dimensional lattice gas.* Comm. Math. Phys., v. 9, p. 267–278, [**1968**]{}. . *Thermodynamic formalism for countable Markov shifts*. Erg. Theo. and Dyn. Syst., v. 19, p. 1565–1593, [**1999**]{}. . *Lecture Notes on Thermodynamic Formalism for Topological Markov Shifts*. Preprint, Penn State, USA, [**2009**]{}. . *Metastable states of quantum lattice systems*. Comm. Math. Phys., v. 55(1), p. 63–66, [**1977**]{}. . *The analyticity of a generalized Ruelle operator*. Bull. Braz. Math. Soc., v. 45, p. 1–20, [**2014**]{}. . *The Ruelle Transfer Operator in the Context of Orthogonal Polynomials*. Complex Anal. Oper. Theory, v. 8(3), p. 709–732, [**2014**]{}. .
*Invariant Measures and Equilibrium States for Some Mappings which Expand Distances*. Trans. Amer. Math. Soc., v. 236, p. 121–153, [**1978**]{}.
--- abstract: 'Considering initial data in $\dot{H}^s$, with $\frac{1}{2} < s < \frac{3}{2}$, this paper is devoted to the study of possible blowing-up Navier-Stokes solutions such that ${\displaystyle}{({T_{*}}(u_{0}) -t)^{\frac{1}{2} (s- \frac{1}{2})} \,\, \| u \|_{\dot{H}^s} }$ is bounded. Our result is in the spirit of the remarkable works of L. Escauriaza, G. Seregin and V. $\breve{\mathrm{S}}$ver$\acute{\mathrm{a}}$k and of I. Gallagher, G. Koch and F. Planchon, where they proved that there is no blowing-up solution which remains bounded in $L^3(\R^3)$. The main idea is that if such blowing-up solutions exist, they satisfy critical properties.' address: '([[Eugénie Poulon]{}]{}) Laboratoire Jacques-Louis Lions - UMR 7598, Université Pierre et Marie Curie, Boîte courrier 187, 4 place Jussieu, 75252 Paris Cedex 05, France' author: - Eugénie Poulon title: 'About the possibility of minimal blow up for Navier-Stokes solutions with data in $\dot{H}^s(\R^3)$' --- Introduction and statement of main result ========================================= We consider the Navier-Stokes system for incompressible viscous fluids evolving in the whole space $\R^{3}$. Denoting by $u$ the velocity, a vector field in $\R^3$, and by $p$ in $\R$ the pressure function, the Cauchy problem for the homogeneous incompressible Navier-Stokes system is given by $$\left \lbrace \begin {array}{ccc} \partial_{t}u + u\cdot\nabla{u}-\Delta{u}&=&-\nabla{p}\\ {\mathop{\rm div}\nolimits}u&=&0\\ u_{|t=0}&=&u_{0}.\\ \end{array} \right.$$ We recall a crucial property of the Navier-Stokes equation: the scaling invariance.
Let us define the operator $$\label{notation} \begin{split} \forall \alpha \in \R^{+},\,\, \forall \lambda \in \R^{+}_{*},\,\, &\forall x_{0} \in \R^3,\,\,\,\, \Lambda^{\alpha}_{\lambda,x_{0}}\,u(t,x) \eqdefa \frac{1}{\lambda^{\alpha}}u\Bigl(\frac{t}{\lambda^2} \virgp \frac{x-x_{0}}{\lambda} \Bigr).\\ &\hbox{If} \,\,\, \alpha =1,\,\,\, \hbox{we write} \,\,\, \Lambda^{1}_{\lambda,x_{0}} = \Lambda_{\lambda,x_{0}}. \end{split}$$ Clearly, if $u$ is a smooth solution of the Navier-Stokes system on  $[0,T] \times \R^3$ with pressure $p$ associated with the initial data $u_{0}$, then, for any positive $\lambda$, the vector field and the pressure $$u_{\lambda} \eqdefa \Lambda_{\lambda,x_{0}}\,u \quad \hbox{and} \quad p_{\lambda} \eqdefa \Lambda^{2}_{\lambda,x_{0}}\,p$$ are a solution of the Navier-Stokes system on the interval $[0,\lambda^2T] \times \R^3$, associated with the initial data $$u_{0,\lambda} = \Lambda_{\lambda,x_{0}}\, u_{0}.$$ This leads to the definition of a scaling invariant space. *[ A Banach space $X$ is said to be scaling invariant (or critical) if its norm is invariant under the scaling transformation defined by $u \mapsto u_{\lambda}$ $$|| u_{\lambda} ||_{X} = || u ||_{X}.$$]{}* Let us give some examples of critical spaces in dimension $3$ $$\dot{H}^{\frac{1}{2}}(\R^3) \hookrightarrow L^3(\R^3) \hookrightarrow \dot{B}^{-1 +\frac{3}{p}}_{p,\infty}(\R^3)_{p < \infty} \hookrightarrow \mathcal{BMO}^{-1}(\R^3) \hookrightarrow \dot{B}^{-1 }_{\infty,\infty}(\R^3).$$ The framework of this work is that of functional spaces which are above the natural scaling of the Navier-Stokes equations.
More precisely, our statements will take place in some Sobolev and Besov spaces, with a regularity index $s$ such that ${\displaystyle}{\frac{1}{2} < s < \frac{3}{2}\cdotp}$\ **Notations.** We shall constantly be using the following simplified notations: $$L^{\infty}_{T}(\dot{H}^s) \eqdefa L^{\infty}([0,T],\dot{H}^s)\quad \hbox{and} \quad L^{2}_{T}(\dot{H}^{s+1}) \eqdefa L^{2}([0,T],\dot{H}^{s+1}),$$ and the relevant function space we shall be working with in the sequel is $$X^{s}_{T} \eqdefa L^{\infty}_{T}(\dot{H}^s)\, \cap\, L^{2}_{T}(\dot{H}^{s+1}),\quad \hbox{endowed with the norm} \quad\| u \|^2_{X^{s}_{T}} \eqdefa \| u \|^2_{L^{\infty}_{T}(\dot{H}^s)} + \| u \|^2_{L^{2}_{T}(\dot{H}^{s+1})}.$$ Let us start by recalling the local existence theorem for data in the Sobolev space $\dot{H}^s$. *[ \[key theorem\] Let $u_{0}$ be in $\dot{H}^s$, with ${\displaystyle}{\frac{1}{2} < s < \frac{3}{2}}\cdotp$ Then there exist a time $T$ and a unique solution $NS(u_{0})$ such that ${\displaystyle}{NS(u_{0}) \quad\hbox{belongs to} \quad L^{\infty}_{T}(\dot{H}^s) \cap L^{2}_{T}(\dot{H}^{s+1})}$.\ Moreover, denoting by ${T_{*}}(u_{0})$ the maximal time of existence of such a solution, there exists a positive constant  $c$ such that $$\label{relation Ac et ts} {T_{*}}(u_{0})\,\, \| u_{0} \|^{\sigma_{s}}_{\dot{H}^s} \geqslant c,\quad \hbox{with} \quad \sigma_{s} \eqdefa \frac{1}{\frac{1}{2}(s-\frac{1}{2})}\cdotp$$]{}* Throughout this paper, we will adopt the notation $NS(u_{0})$ to denote the maximal solution of the Navier-Stokes system associated with the initial data $u_{0}$. Notice that our whole work relies on the hypothesis that there exist some blowing-up $NS$-solutions, i.e. some $NS$-solutions with a finite lifespan ${T_{*}}(u_{0})$. This is still an open question. We point out that the infimum of the quantity ${\displaystyle}{{T_{*}}(u_{0})\,\, \| u_{0} \|^{\sigma_{s}}_{\dot{H}^s}}$ exists and is positive (because of the constant $c$).
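Note that the exponent $\sigma_{s}$ is precisely the one making the quantity ${T_{*}}(u_{0})\, \| u_{0} \|^{\sigma_{s}}_{\dot{H}^s}$ invariant under the scaling $u_{0}\mapsto \Lambda_{\lambda,x_{0}}u_{0}$: the lifespan scales as $\lambda^{2}\,{T_{*}}(u_{0})$, while in $\R^3$ one has $\|\Lambda_{\lambda,x_{0}}u_{0}\|_{\dot{H}^s} = \lambda^{\frac{1}{2}-s}\,\|u_{0}\|_{\dot{H}^s}$. A quick sketch of this exponent bookkeeping (the sampled values of $s$ are arbitrary points of the admissible range):

```python
# Scale invariance of T_*(u_0) * ||u_0||^{sigma_s}: the product picks up the
# factor lambda**(2 + (1/2 - s) * sigma_s), so invariance amounts to
# 2 + (1/2 - s) * sigma_s == 0, which pins down sigma_s = 2 / (s - 1/2).

def sigma(s):
    return 1.0 / (0.5 * (s - 0.5))   # the definition used in the text

for s in (0.6, 0.75, 1.0, 1.25, 1.4):   # the admissible range is 1/2 < s < 3/2
    assert abs(2.0 + (0.5 - s) * sigma(s)) < 1e-12
```

In particular $\sigma_{1/2}$ degenerates to $+\infty$, consistent with $\dot{H}^{\frac12}$ being the critical (scaling invariant) space.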
It has been proved in [@P] that there exist some initial data which attain this infimum and that the set of such data is compact, up to dilations and translations. Theorem \[key theorem\] implies there exists a constant $c>0$, such that $$\label{remark c} ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} \geqslant c,$$ and thus we get in particular the blow up of the $\dot{H}^s$-norm $$\lim_{t \to {T_{*}}(u_{0})} \| NS(u_{0})(t)\|^{\sigma_{s}}_{\dot{H}^s} = +\infty.$$ Our motivation here is to wonder if there exist some Navier-Stokes solutions which stop living in finite time (i.e. ${T_{*}}(u_{0}) < \infty$) and which blow up at a minimal rate, namely: there exists a positive constant $M$ such that ${\displaystyle}{({T_{*}}(u_{0}) - t)\, \| NS(u_{0}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M}$. In other terms,\ *Question:* *[ Do there exist some blowing-up $NS$-solutions such that ${\displaystyle}{({T_{*}}(u_{0}) - t)\, \| NS(u_{0}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M}$?\ If yes, what do they look like? ]{}* We assume an affirmative answer and we seek to characterize such solutions.\ *Hypothesis $\mathcal{H}$:* *[ There exist some blowing-up $NS$-solutions such that ${\displaystyle}{({T_{*}}(u_{0}) - t)\, \| NS(u_{0}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M}$. ]{}* Notice that a question very close to this one is the following: $$\hbox{If} \quad {T_{*}}(u_{0}) < \infty, \quad \hbox{does} \quad \limsup_{t \to {T_{*}}(u_{0}) } ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} = +\infty \quad ?$$ We underline that this question about blowing-up Navier-Stokes solutions has been extensively studied in the context of critical spaces, namely $\dot{H}^{\frac{1}{2}}(\R^3)$ and $L^{3}(\R^3)$. Indeed, L. Escauriaza, G. Seregin and V.
$\breve{\mathrm{S}}$ver$\acute{\mathrm{a}}$k showed in the fundamental work [@ESS] that any “Leray-Hopf” weak solution which remains bounded in $L^{3}(\R^3)$ cannot develop a singularity in finite time. Alternatively, it means that $$\hbox{If} \,\, {T_{*}}(u_{0}) < +\infty,\,\,\,\, \hbox{then} \,\, \limsup_{t \to {T_{*}}(u_{0})} \, \| NS(u_{0})(t) \|_{L^3} = +\infty.$$ I. Gallagher, G. Koch and F. Planchon revisited the above criterion in the context of mild Navier-Stokes solutions. They proved in [@GKP] that strong solutions which remain bounded in $L^{3}(\R^3)$ do not become singular in finite time. To do so, they developed an alternative viewpoint: the method of “critical elements” (or “concentration-compactness”), which was introduced by C. Kenig and F. Merle to treat critical dispersive equations. Recently, the same authors extended the method in [@GKP2] to prove the same result in the case of the critical Besov space ${\displaystyle}{\dot{B}^{-1 +\frac{3}{p}}_{p,q}(\R^3)}$, with ${\displaystyle}{3<p,q<\infty}$. Notice the work of J.-Y. Chemin and F. Planchon in [@CP], who give the same answer in the case of the Besov space ${\displaystyle}{\dot{B}^{-1 +\frac{3}{p}}_{p,q}(\R^3)}$, with ${\displaystyle}{3<p<\infty}$, $q<3$ and with an additional regularity assumption on the data. To conclude the non-exhaustive list of blow-up results, we mention the work of C. Kenig and G. Koch, who carried out in [@KK] such a program of critical elements for solutions in the simpler case $\dot{H}^{\frac{1}{2}}(\R^3)$.
More precisely, they proved for any data $u_{0}$ belonging to the smaller critical space  $\dot{H}^{\frac{1}{2}}(\R^3)$, $$\hbox{If} \,\, {T_{*}}(u_{0}) < +\infty,\,\,\,\, \hbox{then} \,\, \lim_{t \to {T_{*}}(u_{0})} \, \| NS(u_{0})(t) \|_{\dot{H}^{\frac{1}{2}}} = +\infty.$$ In our case (recall: we consider Sobolev spaces $\dot{H}^{s}(\R^3)$ with ${\displaystyle}{ \frac{1}{2} < s < \frac{3}{2}}$, which are non-invariant under the natural scaling of the Navier-Stokes equations), we cannot expect to prove our result in the same way, because of the scaling. Indeed, a similar proof would lead us to define the critical quantity $M^{\sigma_{s}}_{c}$ $$M^{\sigma_{s}}_{c} = \sup \bigl\{ A>0,\,\, \sup_{t < {T_{*}}(u_{0}) } ({T_{*}}(u_{0}) - t)\, \| NS(u_{0}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant A\,\,\,\, \Rightarrow\, {T_{*}}(u_{0}) = +\infty \bigr\}.$$ But unfortunately, such a point of view makes no sense, owing to the meaning of $({T_{*}}(u_{0}) - t)$ when ${T_{*}}(u_{0}) = +\infty$. We have to proceed in another way, and the difficulty may be removed by defining a new object $M^{\sigma_{s}}_{c}$ $$M^{\sigma_{s}}_{c} \eqdefa \inf_{\substack{u_{0} \in \dot{H}^s \\ {T_{*}}(u_{0}) < \infty }} \bigl\{ \limsup_{t \to {T_{*}}(u_{0})}({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} \bigr\}.$$ Clearly, (\[remark c\]) implies that $M^{\sigma_{s}}_{c} $ exists and is positive. As we have decided to work under Hypothesis $\mathcal{H}$, *a fortiori*, this implies that $M^{\sigma_{s}}_{c} $ is finite. The definition below is the key notion of critical solution in this context. *[(Sup-critical solution)\ Let $u_{0}$ be an element in $\dot{H}^s$. We say that $u = NS(u_{0})$ is a sup-critical solution if $NS(u_{0})$ satisfies the two following assumptions: $${T_{*}}(u_{0}) < \infty \quad \hbox{and} \quad \limsup_{t \to {T_{*}}(u_{0})} ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.$$ ]{}* A natural question is to know if such elements exist.
The statement below gives an affirmative answer and provides a general procedure to build some sup-critical solutions. Our main result follows. *[(Key Theorem)\ \[Big key theorem\] Let us assume that there exists $u_{0}$ in $\dot{H}^s $ and $M$ in $\R^{+}_{*}$ such that $${T_{*}}(u_{0}) < \infty \quad \hbox{and} \quad ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M.$$ Then, there exists $\Phi_{0} \in \dot{H}^s \cap \dot{B}^{\frac{1}{2}}_{2,\infty}$ such that $\Phi \eqdefa NS(\Phi_{0})$ is a sup-critical solution, blowing up at time $1$, such that $$\label{key theorem point 1} \sup_{ \tau < 1} \,(1 - \tau)\, \| NS(\Phi_{0})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, \limsup_{\tau\to 1} (1 - \tau)\, \| NS(\Phi_{0})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.$$ In addition, there exists a positive constant $C$ such that $$\label{key theorem point 2} \hbox{for any} \quad \tau <1, \quad \| NS(\Phi_{0})(\tau) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} \leqslant C,$$ where the Besov norm (for a regularity index $0<\alpha <1$) is defined by $$\| u \|_{\dot{B}^{\alpha}_{2,\infty}} \eqdefa \sup_{x \in \R^d} \,\, \frac{\| u(\cdotp -x) - u\|_{L^{2}}}{|x| ^{\alpha}}\cdotp$$ ]{}* We postpone the proof of (\[key theorem point 1\]) of the Key Theorem \[Big key theorem\] to the next section. The proof of (\[key theorem point 2\]) will be given in Section $5$. We stress that (\[key theorem point 2\]) is somewhat close to a question raised by the paper of I. Gallagher, G. Koch and F. Planchon [@GKP2], in which they prove that for any initial data in the critical Besov space $\dot{B}^{-1+ \frac{3}{p}}_{p, q}$, with $3<p,q<\infty$, the $NS$-solution (the lifespan of which is assumed finite) becomes unbounded at the blow-up time. Let us say a few words about the limit case $\dot{B}^{-1+ \frac{3}{p}}_{p, \infty}$. We may wonder if the result holds in the limit case $q=\infty$. As far as the author is aware, the answer is still open.
Actually, if it holds, *a fortiori* it holds in the smaller space $\dot{B}^{\frac{1}{2}}_{2, \infty}$, by virtue of the embedding ${\displaystyle}{\dot{B}^{\frac{1}{2}}_{2, \infty} \hookrightarrow \dot{B}^{-1+ \frac{3}{p}}_{p, \infty}}$. In other terms, it would mean there is no blowing-up solution bounded in the critical space $\dot{B}^{\frac{1}{2}}_{2, \infty}$. This is related to the concern of our paper, since we build some blowing-up solutions bounded in this critical space, under the assumption of blow up at minimal rate. We mention the very interesting work of H. Jia and V. $\breve{\mathrm{S}}$ver$\acute{\mathrm{a}}$k [@JV], where they prove that $-1$-homogeneous initial data generate global $-1$-homogeneous solutions. Unfortunately, the uniqueness of such solutions is not guaranteed. Existence of sup-critical solutions =================================== The goal of this section is to give a partial proof of Key Theorem \[Big key theorem\]. It relies on the two Lemmas below.
*[(Existence of sup-critical solutions in $\dot{H}^s $)\ \[general lemma for critical element \] Let $(v_{0,n})_{n \in \N}$ be a bounded sequence in $\dot{H}^s$ such that $$\tau^{*}(v_{0,n}) = 1 \quad \hbox{ and} \quad \hbox{ for any} \,\,\, \tau < 1, \quad (1-\tau)\, \| NS(v_{0,n})(\tau,\cdotp) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant\,\, M^{\sigma_{s}}_{c} + \varepsilon_{n},$$ where $\varepsilon_{n}$ is a generic sequence which tends to $0$ when $n$ goes to $+\infty$.\ Then, there exists $\Psi_{0}$ in $\dot{H}^s$ such that $\Psi \eqdefa NS(\Psi_{0})$ is a sup-critical solution blowing up at time $1$ and satisfying $$\sup_{\tau < 1} (1 - \tau)\, \| NS(\Psi_{0})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \,=\, \limsup_{\tau \to 1} (1 - \tau)\, \| NS(\Psi_{0})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.$$ Moreover, the initial data of such an element is a weak limit of translates of the sequence $(v_{0,n})$, i.e. $${\displaystyle}{\exists\,\, (x_{0,n})_{n \geqslant 0}, \quad v_{0,n}(\cdotp + x_{0,n} ) \rightharpoonup_{n \to +\infty} \Psi_{0}}.$$ ]{}* The proof of Lemma \[general lemma for critical element \] will be the purpose of Section $4$. It relies essentially on a scaling argument and on the profile theory, which will be introduced in Section $3$.
*[(Fluctuation estimates)\ \[fluctuation lemma\] Let $u=NS(u_{0})$ be an NS-solution associated with a data $ u_{0} \in \dot{H}^s $, with ${\displaystyle}{\frac{1}{2} < s < \frac{3}{2}}$, such that $$({T_{*}}(u_{0}) - t)^{\frac{1}{\sigma_{s}}}\, \| NS(u_{0})(t) \|_{\dot{H}^s} \leqslant M.$$ Then, the following estimates on the fluctuation part ${\displaystyle}{B(u,u)(t) \eqdefa u - e^{t\, \Delta}u_{0} }$ hold: $$\hbox{for any}\quad s <s' < 2s-\frac{1}{2}, \quad ({T_{*}}(u_{0})- t)^{\frac{1}{\sigma_{s'}}}\, \| B(u,u)(t) \|_{\dot{H}^{s'}}\, \leqslant F_{s'}(M^2).$$ Moreover, for the critical case ${\displaystyle}{\alpha = \frac{1}{2}}$, we have $$\| B(u,u)(t) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}}\, \leqslant C\,M^2.$$ ]{}* The proof of this lemma is postponed to Section $8$. It merely stems from product laws in Besov spaces, interpolation inequalities and from a judicious splitting into low and high frequencies, in the following sense $$({T_{*}}-t)2^{2j} \leqslant 1 \quad \hbox{and} \quad ({T_{*}}-t)2^{2j} \geqslant 1.$$ Let us point out that the estimates of Lemma \[fluctuation lemma\] do not hold if ${\displaystyle}{0<\alpha < \frac{1}{2}}$, owing to low frequencies. Indeed, arguments similar to the ones used in the proof of Lemma \[fluctuation lemma\] lead only to the following estimate $$\| B(u,u)(t) \|_{\dot{B}^{\alpha}_{2,\infty}}\, \leqslant C\, M^2\, {T_{*}}(u_{0})^{\frac{1}{2}(\alpha-\frac{1}{2})}.$$ *Partial proof of Key Theorem \[Big key theorem\]*\ Throughout this text, we denote by $(\varepsilon_{n}) $ a nonincreasing sequence which tends to $0$ as $n$ tends to $+\infty$.\ $\bullet$ Step $1$: Existence of sup-critical elements in $ \dot{H}^s$, with ${\displaystyle}{\frac{1}{2} < s < \frac{3}{2}}\cdotp$\ Let us consider the sequence ${\displaystyle}{(M^{\sigma_{s}}_{c} + \varepsilon_{n})_{n \geqslant 0}}$.
By definition of $M^{\sigma_{s}}_{c}$, there exists a sequence $(u_{0,n})$ belonging to  $\dot{H}^s$, with finite lifespans ${T_{*}}(u_{0,n})$, such that $$\limsup_{t \to {T_{*}}(u_{0,n})} ({T_{*}}(u_{0,n}) - t)\, \| NS(u_{0,n}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M^{\sigma_{s}}_{c} + \varepsilon_{n}.$$ By definition of the ${\displaystyle}{\limsup}$, for each $n$ there exists a time  $t_{n} < {T_{*}}(u_{0,n})$ such that $$\label{limsup tn} \forall t \geqslant t_{n},\,\, ({T_{*}}(u_{0,n}) - t)\, \| NS(u_{0,n})(t) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M^{\sigma_{s}}_{c} + \varepsilon_{n}.$$ By rescaling, we consider the sequence $$v_{0,n}(y) = \bigl({T_{*}}(u_{0,n}) - t_{n}\bigr)^{\frac{1}{2}} \, NS(u_{0,n})\bigl( t_{n},\bigl({T_{*}}(u_{0,n}) - t_{n}\bigr)^{\frac{1}{2}}\,y \bigr)$$ and we have $$\label{rescaling donnee initiale} \begin{split} \|v_{0,n} \|^{\sigma_{s}}_{\dot{H}^s} &= \bigl({T_{*}}(u_{0,n})-t_{n}\bigr)\, \|NS(u_{0,n})(t_{n}) \|^{\sigma_{s}}_{\dot{H}^s}. \\ \end{split}$$ By virtue of (\[limsup tn\]), the sequence $(v_{0,n})_{n \geqslant 1}$ is bounded $\bigl(\hbox{by}\,\, {\displaystyle}{M^{\sigma_{s}}_{c} + \varepsilon_{0}}\bigr)$ in the space $\dot{H}^s$. Moreover, such a sequence generates a Navier-Stokes solution, which keeps on living until the time $\tau^* = 1$ and satisfies $$\label{rescaling solution} \begin{split} NS(v_{0,n})(\tau,y)&= \bigl({T_{*}}(u_{0,n})-t_{n}\bigr)^{\frac{1}{2}} \, NS(u_{0,n})\bigl( t_{n} + \tau\,\bigl({T_{*}}(u_{0,n})-t_{n}\bigr)\, ,\bigl({T_{*}}(u_{0,n})-t_{n}\bigr)^{\frac{1}{2}}\,y \bigr). \end{split}$$ We introduce ${\displaystyle}{\widetilde{t_{n}} = t_{n} + \tau\,\bigl({T_{*}}(u_{0,n})-t_{n}\bigr)}$.
Notice that, because of the scaling, an easy computation yields $$\label{proposition comparative} (1-\tau)\, \|NS(v_{0,n})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} = \bigl({T_{*}}(u_{0,n}) - \widetilde{t_{n}}\bigr) \, \|NS(u_{0,n})\bigl( \widetilde{t_{n}} \bigr)\|^{\sigma_{s}}_{\dot{H}^s}.$$ As $\widetilde{t_{n}} \geqslant t_{n}$ for any $n$ (by definition of $\widetilde{t_{n}}$), we combine (\[proposition comparative\]) with (\[limsup tn\]) and we get, for any $\tau \in [0,1[$, $$(1-\tau)\| NS(v_{0,n})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M^{\sigma_{s}}_{c} + \varepsilon_{n}.$$ The sequence $(v_{0,n})$ satisfies the hypothesis of Lemma \[general lemma for critical element \]. Applying it, we build a sup-critical solution $ \Psi = NS(\Psi_{0}) $ in $\dot{H}^s$ which blows up at time $1$, i.e. $$\limsup_{\tau \to 1} (1 - \tau)\, \| NS(\Psi_{0})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.$$ This proves the first part of the statement of Theorem \[Big key theorem\].\ $\bullet$ Step $2$: Existence of sup-critical elements in $\dot{H}^s \cap \dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^{s'} $, with $s$ and $s'$ such that ${\displaystyle}{s<s'< 2s -\frac{1}{2}\cdotp}$\ This will be proved in Section $6$. Notice that proving that $NS(\Psi_{0})$ is bounded in the Besov space $\dot{B}^{\frac{1}{2}}_{2,\infty}$ is equivalent to proving that $\Psi_{0}$ belongs to $\dot{B}^{\frac{1}{2}}_{2,\infty}$, since, by virtue of Lemma \[fluctuation lemma\], the fluctuation part is bounded in $\dot{B}^{\frac{1}{2}}_{2,\infty}$ and obviously we have $$\| NS(\Psi_{0})(t)\|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} \leqslant\| NS(\Psi_{0})(t) \,\,-\,\, e^{t\Delta}\Psi_{0}\|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} \quad + \quad \| e^{t\Delta}\Psi_{0}\|_{\dot{B}^{\frac{1}{2}}_{2,\infty}}.$$ The paper is structured as follows. In Section $3$, we recall the main tools of this paper. Essentially, it deals with the profile theory of P.
Gérard [@PG] and a structure lemma concerning an $NS$-solution associated with a sequence which satisfies the hypotheses of the profile theory. We also recall some basic facts on Besov spaces.\ In Section $4$, we establish the proof of the crucial Lemma \[general lemma for critical element \], which provides the proof of the first part of Theorem \[Big key theorem\]: there exist some sup-critical elements in $\dot{H}^s$. The second part of the proof is postponed to Section $6$, where we build some sup-critical elements not only in $\dot{H}^s$, but also in other spaces, such as $\dot{B}^{\frac{1}{2}}_{2,\infty} $ and $\dot{B}^{s'}_{2,\infty} $, with ${\displaystyle}{s<s'< 2s -\frac{1}{2}\cdotp}$ To carry this out, we need some estimates on the fluctuation part of the solution, which will be provided in Section $5$.\ Then in Section $7$, we give an analogous sup-inf critical criterion. It turns out that among sup-critical solutions, there exist some which are sup-inf-critical, in the sense that they attain the largest limit infimum. Section $8$ is devoted to the proof of Lemma \[lemme allure de la solution\], which gives the structure of a Navier-Stokes solution associated with a bounded sequence of data in $\dot{H}^s$. We recall to the reader that such a structure result has been partially proved in [@P], except for the orthogonality property of the Navier-Stokes solution in the $\dot{H}^s$-norm. As a result, we give the proof of such a property, after recalling the ideas of the complete proof. Profile theory and Tool Box =========================== We recall the fundamental result due to P. Gérard: the profile decomposition of a bounded sequence in the Sobolev space $\dot{H}^s$. The original motivation of this theory was the description, up to extractions, of the defect of compactness in Sobolev embeddings (see for instance the pioneering works of P.-L. Lions in [@PLL], [@PLL2] and of H. Brezis, J.-M. Coron in [@BC]). Here, we will use the theorem of P.
Gérard [@PG], which gives, up to extractions, the structure of a bounded sequence of $\dot{H}^s$, with $s$ between $0$ and ${\displaystyle}{\frac{3}{2}} \cdotp$ More precisely, the defect of compactness in the critical Sobolev embedding ${\displaystyle}{\dot{H}^s \subset L^p}$ is described in terms of a sum of rescaled and translated orthogonal profiles, up to a small term in $L^p$. For more details about the history of the profile theory, we refer the reader to the paper [@P]. *[(Profile Theorem [@PG])\ \[theo profiles\] Let $(u_{0,n})_{n \in \N}$ be a bounded sequence in $\dot{H}^s$. Then, up to an extraction:\ - There exists a sequence of vector fields, called profiles, $(\varphi^{j})_{j \in \N}$ in $\dot{H}^s$.\ - There exists a sequence of scales and cores $(\lambda_{n,j},x_{n,j})_{n,j \in \N}$ such that $$\forall J \geqslant 0,\,\, u_{0,n}(x) = \sum_{j=0}^{J} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x) \quad \hbox{with}\quad \lim_{J \to +\infty}\limsup_{n \to+\infty}\|\psi_{n}^{J}\|_{L^{p}} =0, \quad \hbox{and} \quad p =\frac{6}{3-2s}\cdotp$$ Here, $(\lambda_{n,j},x_{n,j})_{n \in \N,j \in \N^*}$ are sequences of $(\R_{+}^* \times \R^3)^{\N}$ with the following orthogonality property: for all integers $(j,k)$ such that $j \neq k$, we have $$\hbox{either}\lim_{n \to +\infty}\Bigl(\frac{\lambda_{n,j}}{\lambda_{n,k}} + \frac{\lambda_{n,k}}{\lambda_{n,j}}\Bigr) = +\infty \quad\hbox {or} \quad \lambda_{n,j} = \lambda_{n,k} \quad\hbox {and} \quad \lim_{n \to +\infty}\frac{|x_{n,j} - x_{n,k}|}{\lambda_{n,j}} = +\infty.$$ Moreover, for any $J \in \N$, we have the following orthogonality property $$\label{ortho de la norme} \| u_{0,n} \|^2_{\dot{H}^s} = \sum_{j=0}^{J} \| \varphi^{j} \|^2_{\dot{H}^s} + \| \psi_{n}^J \|^2_{\dot{H}^s} + o(1), \quad\hbox {when} \quad n \to +\infty.$$ ]{}* Let us recall a structure Lemma, based on the crucial profile theorem of P. Gérard (see [@PG]).
Let $(u_{0,n})$ be a bounded sequence in the Sobolev space $\dot{H}^s$, whose profile decomposition is given by $$u_{0,n}(x) = \sum_{j=0}^{J} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}} \varphi^{j}(x) + \psi_{n}^{J}(x),$$ with the appropriate properties on the error term  $\psi_{n}^{J}$. By virtue of the orthogonality of scales and cores given by Theorem \[theo profiles\], we sort profiles according to their scales $$\label{decomposition 1} \begin{split} u_{0,n}(x) &= \sum_{\stackrel{j \in \mathcal{J}_{1}}{j \leqslant J}} \varphi^{j}(x-x_{n,j}) + \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x)\\ \end{split}$$\ where for any $j \in \mathcal{J}_{1}$ and any $n \in \N$, $\lambda_{n,j} \equiv 1$. With these notations, we claim the following structure Lemma on the Navier-Stokes solutions, whose proof will be provided in Section $8$. *[(Profile decomposition of a sequence of Navier-Stokes solutions)\ \[lemme allure de la solution\] Let $(u_{0,n})_{n \geqslant 0}$ be a bounded sequence of initial data in $\dot{H}^s$ whose profile decomposition is given by $$u_{0,n}(x) = \sum_{j=0}^{J} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x).$$ Then, ${\displaystyle}{\liminf_{n \geqslant 0}{T_{*}}(u_{0,n}) \geqslant \widetilde{T} \eqdefa {\displaystyle}{\inf_{j \in \mathcal{J}_{1}}{{T_{*}}(\varphi^{j})}} }$ and for any $t < {T_{*}}(u_{0,n}) $, we have $$\label{decomposition de la solution} \begin{split} NS(u_{0,n})(t,x) &= \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,x-x_{n,j})\, +\, e^{t\Delta} \Bigl(\sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x) \Bigr)\,+\, R_{n}^{J}(t,x) \end{split}$$ where the remaining term $R_{n}^{J}$ satisfies, for any $T < \tilde{T}$, ${\displaystyle}{\lim_{J \to +\infty}\lim_{n \to +\infty} \| R_{n}^{J} \|_{X^{s}_{T}}=0}$.\ Moreover,
we have the orthogonality property on the $\dot{H}^s$-norm for any $t < \tilde{T}$ $$\label{Pythagore} \begin{split} \| NS(u_{0,n})(t) \|^{2}_{\dot{H}^s} &= \sum_{j \in \mathcal{J}_{1}} \| NS(\varphi^{j})(t) \|^{2}_{\dot{H}^s} \,+ \, \Bigl\| e^{t\Delta} \Bigl(\sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J} \Bigr) \Bigr\|^{2}_{\dot{H}^s} + \gamma_{n}^{J}(t). \end{split}$$ with ${\displaystyle}{\lim_{J \to +\infty} \limsup_{n \to +\infty} \sup_{t' < t} | \gamma_{n}^{J}(t')| = 0}$. ]{}* For the convenience of the reader, we recall the usual definition of Besov spaces. We refer the reader to [@BCD], from page $63$, for a detailed presentation of the theory and analysis of homogeneous Besov spaces. *[ Let $s$ be in $\R$, $(p,r)$ in $[1,+\infty]^2$ and $u$ in $\mathcal{S'}$. A tempered distribution $u$ is an element of the Besov space $\dot{B}^{s}_{p,r}$ if $u$ satisfies ${\displaystyle}{\lim_{j \to -\infty}\, || \dot{S}_{j} u ||_{L^\infty} = 0 }$ and $$\| u\|_{\dot{B}^{s}_{p,r}} \eqdefa \Bigl(\sum_{j \in \Z} 2^{jrs}\,\,|| \dot{\Delta}_{j} u ||^{r}_{L^p}\Bigr)^{\frac{1}{r}} < \infty,$$ where $\dot{\Delta}_{j}$ is a frequency localization operator (the Littlewood-Paley operator), defined by $$\dot{\Delta}_{j}u \eqdefa \mathcal{F}^{-1}\bigl(\varphi(2^{-j}|\xi|)\widehat{u}(\xi)\bigr),$$ with $\varphi \in \mathcal{D}([\frac{1}{2},2])$, such that ${\displaystyle}{\sum_{j \in \Z} \varphi(2^{-j}t) = 1}$, for any $t>0$. ]{}* \[equivalence norm besov\] Notice that the characterization of Besov spaces with positive indices in terms of finite differences is equivalent to the above definition (cf. [@BCD]). In the case where the regularity index is between $0$ and $1$, one has the following property. Let $s$ be in $]0,1[$ and $(p,r)$ in $[1,\infty]^2$.
A constant $C$ exists such that, for any $u \in \mathcal{S{'}}$, $$C^{-1}\, \| u\|_{\dot{B}^{s}_{p,r}} \leqslant \Bigl\| \frac{\| u(\cdotp -y) - u\|_{L^{p}}}{|y| ^s} \Bigr\|_{L^r(\R^d ; \frac{dy}{|y| ^d})} \, \leqslant C\, \| u\|_{\dot{B}^{s}_{p,r}}.$$ Notice that $\dot{H}^s \subset \dot{B}^{s}_{2,2}$ and both spaces coincide if ${\displaystyle}{s < \frac{3}{2}\cdotp}$ We recall an interpolation property in Besov spaces, which will be useful in the sequel. *[ \[interpolation\] A constant $C$ exists which satisfies the following property. If $s_{1}$ and $s_{2}$ are real numbers such that $s_{1} < s_{2}$ and $\theta \in ]0,1[$, then we have for any $p\in [1,+\infty]$\ $$\| u\|_{\dot{B}^{\theta \,s_{1} + (1-\theta)\, s_{2} }_{p,1}} \,\, \leqslant\,\, C(s_{1},s_{2},\theta)\, \| u\|^{\theta}_{\dot{B}^{s_{1}}_{p,\infty}} \,\, \| u\|^{1-\theta}_{\dot{B}^{s_{2}}_{p,\infty}}.$$ ]{}* Application of profile theory to sup-critical solutions ======================================================= This section is devoted to the proof of Lemma \[general lemma for critical element \]. The statement given below is actually a bit stronger and clearly entails Lemma \[general lemma for critical element \]. We shall prove the following proposition.
*[ \[proposition critical element \] Let $(v_{0,n})_{n \in \N}$ be a bounded sequence in $\dot{H}^s$ such that $$\tau^{*}(v_{0,n}) = 1 \quad \hbox{ and} \quad \hbox{ for any} \,\,\, \tau < 1, \quad (1-\tau)\, \| NS(v_{0,n})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant\,\, M^{\sigma_{s}}_{c} + \varepsilon_{n},$$ where $\varepsilon_{n}$ is a generic sequence which tends to $0$ when $n$ goes to $+\infty$.\ Then, up to extractions, we get the statements below\ $\bullet$ the profile decomposition of such a sequence of data has a unique profile $\varphi^{j_{0}} $ with constant scale such that $NS(\varphi^{j_{0}} )$ is a sup-critical solution which blows up at time $1$, i.e. $$\label{prop point 1} \limsup_{\tau \to 1} (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.\\$$ $\bullet$ “The limsup is actually a sup” $$\label{prop point 2} \sup_{\tau < 1} (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \, =\, M^{\sigma_{s}}_{c}.\\$$ ]{}* Let $(v_{0,n})_{n \geqslant 1}$ be a bounded sequence in $\dot{H}^s $, satisfying the assumptions of Proposition \[proposition critical element \]. Therefore, $(v_{0,n})_{n \geqslant 1}$ has the profile decomposition below $$\begin{split} v_{0,n}(x) &= \sum_{\stackrel{j \in \mathcal{J}_{1}}{j \leqslant J}} \varphi^{j}(x-x_{n,j}) + \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x). \end{split}$$ We denote by ${\displaystyle}{ \tau^{*}_{j_{0}} \eqdefa \inf_{j \in \mathcal{J}_{1}}{{T_{*}}(\varphi^{j})}}$.\ $\bullet$ Step $1$ : we start by proving, by a contradiction argument, that $\tau^{*}_{j_{0}} =1$.\ \ We already know, by virtue of Lemma \[lemme allure de la solution\], that $\tau^{*}_{j_{0}} \leqslant 1$. Assuming that $\tau^{*}_{j_{0}} <1$, we shall derive a contradiction.
Moreover, the orthogonality estimate (\[Pythagore\]) can be bounded from below: $$\label{equation de reference 2} \begin{split} \| NS(v_{0,n})(\tau) \|^2_{\dot{H}^s} \geqslant \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s} - |\gamma_{n}^{J}(\tau)|. \end{split}$$ On the one hand, for any $\tau < \tau^{*}_{j_{0}}$, we clearly have $$(1 -\tau^{*}_{j_{0}})^{\frac{2}{\sigma_{s}}} \leqslant (1-\tau)^{\frac{2}{\sigma_{s}}}.$$ On the other hand, the hypothesis on $NS(v_{0,n})$ yields $$(1-\tau)^{\frac{2}{\sigma_{s}}} \, \| NS(v_{0,n})(\tau) \|^{2}_{\dot{H}^s} \leqslant\,\, M^{2}_{c} + \varepsilon_{n}.$$ Therefore, from the above remarks, we get $$\| NS(v_{0,n})(\tau) \|^{2}_{\dot{H}^s} \leqslant\,\, \frac{M^{2}_{c} + \varepsilon_{n}}{ (1 -\tau^{*}_{j_{0}})^{\frac{2}{\sigma_{s}}}} \cdotp$$ Combining the above estimate with (\[equation de reference 2\]), we finally get, after multiplication by the factor $(\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}} $, $$\label{equation de reference 3} \begin{split} \frac{M^{2}_{c} + \varepsilon_{n}}{ (1 - \tau^{*}_{j_{0}})^{\frac{2}{\sigma_{s}}}} \,\, (\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}} \geqslant (\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}} \, \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s} - (\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}}\, |\gamma_{n}^{J}(\tau)|. \end{split}$$ Notice that $(\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}}$ is always less than $1$, which allows us to get rid of it in front of the remaining term $|\gamma_{n}^{J}(\tau)|$. In addition, applying (\[remark c\]) and the hypothesis on the sequence $\varepsilon_{n}$, one has $$\begin{split} \frac{M^{2}_{c} + \varepsilon_{0}}{ (1 -\tau^{*}_{j_{0}})^{\frac{2}{\sigma_{s}}}} \,\, (\tau^{*}_{j_{0}} - \tau)^{\frac{2}{\sigma_{s}}} \geqslant c\, - \, |\gamma_{n}^{J}(\tau)|.
\end{split}$$ We first choose $\tau = \tau_{c}$ such that $\tau_{c} < \tau^{*}_{j_{0}}$ and ${\displaystyle}{ \frac{M^{2}_{c} + \varepsilon_{0}}{ (1 -\tau^{*}_{j_{0}})^{\frac{2}{\sigma_{s}}}} \,\, (\tau^{*}_{j_{0}} - \tau_{c})^{\frac{2}{\sigma_{s}}} = \frac{c}{4}}\cdotp$ Then, we take $J$ and $n$ large enough such that ${\displaystyle}{ |\gamma_{n}^{J}(\tau_{c})| \leqslant \frac{c}{2}}\cdotp$ Therefore, we get a contradiction, which proves that $\tau^{*}_{j_{0}} = 1$. $\bullet$ Step $2$ : we prove here that $ NS(\varphi^{j_{0}})$ is a sup-critical solution in $\dot{H}^s$.\ \ Let us come back to Inequality (\[equation de reference 2\]), which we multiply by the factor $(1 -\tau)^{\frac{2}{\sigma_{s}}}$. As we have shown that $\tau^{*}_{j_{0}} = 1$, the hypothesis on $NS(v_{0,n})$ implies that for any $\tau < 1$, $$M^{2}_{c} + \varepsilon_{n} \geqslant (1 - \tau)^{\frac{2}{\sigma_{s}}} \, \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s}\, -\, |\gamma_{n}^{J}(\tau)|.$$ Our aim is to prove that the particular profile $\varphi^{j_{0}}$ generates a sup-critical solution. If not, it means that $$\exists \alpha_{0} >0, \forall \varepsilon >0, \,\, \exists \tau_{\varepsilon},\,\, \hbox{such that} \,\,0< ( 1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, < \varepsilon \quad \hbox{and} \quad (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \geqslant M_{c}^2 + \alpha_{0}.$$ Taking the above inequality at time $\tau_{\varepsilon}$, one has $$M^{2}_{c} + \varepsilon_{n} \geqslant\, M_{c}^2 + \alpha_{0} -\, |\gamma_{n}^{J}(\tau_{\varepsilon})|.$$ Moreover, the assumption on the remaining term $\gamma_{n}^{J}$ implies that $$\forall \eta >0,\,\, \exists \widetilde{J}(\eta) \in \N, \,\, \exists N_{\eta} \in \N\,\, \hbox{such that}\,\, \forall J \geqslant \widetilde{J}(\eta),\,\, \forall n \geqslant N_{\eta},\,\, |\gamma_{n}^{J}(\tau_{\varepsilon})|\leqslant \eta.$$ Let $\eta >0$.
For any $J \geqslant \widetilde{J}(\eta)$ and for any $n \geqslant N_{\eta}\,\, $, we get at time $\tau_{\varepsilon}$, $$M^{2}_{c} \geqslant M_{c}^2 + \alpha_{0} \, - \, \eta.$$ Now, choosing $\eta$ small enough (namely ${\displaystyle}{\eta = \frac{\alpha_{0}}{2}}$) we get a contradiction, which proves that $NS(\varphi^{j_{0}})$ is a sup-critical solution. This concludes the proof of Step $2$, and thus the point (\[prop point 1\]) is proved.\ $\bullet$ Step $3$ : let us prove the point (\[prop point 2\]) of Proposition \[proposition critical element \]. The proof is a straightforward adaptation of the previous one. We shall use that $NS(\varphi^{j_{0}})$ is a sup-critical solution: $$\limsup_{\tau \to 1} (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}.$$ As we always have ${\displaystyle}{\sup_{\tau < 1}\, (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \geqslant \limsup_{\tau \to 1} (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s}}$, we get a first inequality: ${\displaystyle}{\sup_{\tau < 1}\, (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \geqslant M^{\sigma_{s}}_{c} }$.\ According to the previous computations, we have, for any $\tau < 1$, $$M^{2}_{c} + \varepsilon_{n} \geqslant (1 - \tau)^{\frac{2}{\sigma_{s}}} \, \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s}\, -\, |\gamma_{n}^{J}(\tau)|.$$ The hypothesis on the remaining term $|\gamma_{n}^{J}|$ implies that ${\displaystyle}{\sup_{\tau < 1}\, (1 - \tau)\, \| NS(\varphi^{j_{0}})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M^{\sigma_{s}}_{c} },$ which provides the second desired inequality. This completes the proof of (\[prop point 2\]).\ Let us recall some notation and add a few words about profiles with constant scale.
Thanks to Lemma \[lemme allure de la solution\] and obvious lower bounds, we get for any ${\displaystyle}{\tau < {\displaystyle}{ \tau^{*}_{j_{0}} \eqdefa \inf_{j \in \mathcal{J}_{1}}{{T_{*}}(\varphi^{j})}} =1 }$ $$\begin{split} \| NS(v_{0,n})(\tau) \|^2_{\dot{H}^s} \geqslant \sum_{j \in \mathcal{J}_{1}} \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s} - |\gamma_{n}^{J}(\tau)|.\\ \end{split}$$ Among profiles with a scale equal to $1$ (i.e. $j \in \mathcal{J}_{1}$), we distinguish profiles with a lifespan equal to $\tau^{*}_{j_{0}} =1$ from profiles with a lifespan $\tau^{*}_{j}$ strictly greater than $1$. In other words, we consider the set $$\tilde{\mathcal{J}_{1}} \eqdefa \{ j \in \mathcal{J}_{1} \, \, | \,\, \tau^{*}_{j} = 1 \}.$$ Therefore, for any $\tau < 1$, $$\begin{split} \| NS(v_{0,n})(\tau) \|^2_{\dot{H}^s} &\geqslant \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s} + \sum_{j \in \tilde{\mathcal{J}_{1}}, \, j \neq j_{0}} \, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}\\ &+ \sum_{j \in \mathcal{J}_{1} \setminus \tilde{\mathcal{J}_{1}}} \,\| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}\, - |\gamma_{n}^{J}(\tau)|, \end{split}$$ which can be bounded from below once again by $$\label{equation de reference} \begin{split} \| NS(v_{0,n})(\tau) \|^2_{\dot{H}^s} \geqslant \| NS(\varphi^{j_{0}})(\tau) \|^2_{\dot{H}^s} + \sum_{j \in \tilde{\mathcal{J}_{1}}, \, j \neq j_{0}} \, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}\, - |\gamma_{n}^{J}(\tau)|, \end{split}$$ since obviously the term ${\displaystyle}{\sum_{j \in \mathcal{J}_{1} \setminus \tilde{\mathcal{J}_{1}}} \,\| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}\,}$ is nonnegative.\ \ $\bullet$ Step $4$ : in order to complete the proof of Lemma \[general lemma for critical element \], we have to prove that there exists a unique profile with a lifespan $\tau^{*}_{j_{0}} =1$, namely $|\tilde{\mathcal{J}_{1}} | = 1$. Once again, we assume that there exist at least two profiles in $\tilde{\mathcal{J}_{1}}$, and we shall derive a contradiction.
The arguments of the proof are similar to the ones used in Step $2$. We shall use the fact that ${\displaystyle}{(1-\tau)^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}}$ cannot be made arbitrarily small, by virtue of (\[remark c\]). Indeed, let us come back to Inequality (\[equation de reference\]). We have already proved that $ \varphi^{j_{0}}$ generates a sup-critical solution, blowing up at time $1$. It means that for any $\varepsilon >0$, there exists a time $\tau_{\varepsilon}$ such that $$0< ( 1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, < \varepsilon \quad \hbox{and} \quad M_{c}^2 - \varepsilon \leqslant(1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \leqslant M_{c}^2 + \varepsilon.$$ Therefore, Inequality (\[equation de reference\]) becomes at time $\tau_{\varepsilon} $ $$M^2_{c} + \varepsilon_{n} \geqslant M_{c}^2 - \varepsilon \, +\, \sum_{j \in \tilde{\mathcal{J}_{1}}, j \neq j_{0}} (1-\tau_{\varepsilon})^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \, -\, |\gamma_{n}^{J}(\tau_{\varepsilon})|.$$ By virtue of (\[remark c\]), there exists a universal constant $c >0$ such that for any $j \in \tilde{\mathcal{J}_{1}}$ and $j \neq j_{0}$ $$(1-\tau)^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s} \geqslant c^2.$$ As a result, taking the limit for $n$ and $J$ large enough, we infer that (still under the hypothesis  ${\displaystyle}{|\tilde{\mathcal{J}_{1}}| > 1}$) $$M^{2}_{c} \geqslant M_{c}^2 - \varepsilon \, + (|\tilde{\mathcal{J}_{1}}|-1)\,c^2 - \eta.$$ Choosing $\varepsilon$ and $\eta$ small enough, we get a contradiction and, as a consequence, ${\displaystyle}{|\tilde{\mathcal{J}_{1}}| = 1}$. It means there exists a unique profile generating a sup-critical solution, blowing up at time $1$. This completes the proof of Proposition \[proposition critical element \], and thus the proof of Lemma  \[general lemma for critical element \].
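For later use, let us record the particular instance of the interpolation Property \[interpolation\] which underlies the boundedness arguments of the next sections (a sketch; the exponent $\theta$ below is introduced only for this illustration): take $p=2$, $s_{1}=\frac{1}{2}$, $s_{2}=s'$ with $s<s'<2s-\frac{1}{2}$, and choose $\theta$ so that the interpolated index equals $s$.

```latex
% Choice of theta: solve  theta/2 + (1 - theta) s' = s, which gives
\[
  \theta \eqdefa \frac{s'-s}{s'-\frac{1}{2}} \;\in\; ]0,1[\,,
\]
% since 1/2 < s < s'. Property [interpolation] then yields
\[
  \| u\|_{\dot{B}^{s}_{2,1}}
  \;\leqslant\; C(s,s')\,
  \| u\|^{\theta}_{\dot{B}^{\frac{1}{2}}_{2,\infty}}\,
  \| u\|^{1-\theta}_{\dot{B}^{s'}_{2,\infty}}.
\]
```

In particular, a family bounded in $\dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{B}^{s'}_{2,\infty}$ is automatically bounded in $\dot{B}^{s}_{2,1} \hookrightarrow \dot{B}^{s}_{2,\infty}$.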
Fluctuation estimates in Besov spaces ===================================== This section is devoted to the proof of Lemma \[fluctuation lemma\]. We shall prove some estimates on the fluctuation part, which is given by the bilinear form $$B(u,u)(t) \eqdefa NS(u_{0})(t) - e^{t\Delta}u_{0} = u - e^{t\Delta}u_{0}.$$ We distinguish the case $\dot{B}^{\frac{1}{2}}_{2,\infty}$ from the case $\dot{B}^{s'}_{2,\infty}$, even if the ideas of the proofs are similar: we cut off according to low and high frequencies, in the following sense: $$({T_{*}}-t)2^{2j} \leqslant 1 \quad \hbox{and} \quad ({T_{*}}-t)2^{2j} \geqslant 1.$$ Concerning high frequencies, we shall use the regularization effect of the Laplacian. Let us start by proving the critical part of Lemma \[fluctuation lemma\]. \[fluctuation 1/2\]*[ Let ${\displaystyle}{\frac{1}{2} < s< \frac{3}{2}}$ and $u_{0} \in \dot{H}^s $. There exists a positive constant $C_{s}$ such that $$\hbox{If} \quad {T_{*}}(u_{0}) < \infty \quad \hbox{and} \quad M^{\sigma_{s}}_{u} \eqdefa \sup_{t < {T_{*}}(u_{0})}\,({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} < \infty,$$ then, for any $t < {T_{*}}(u_{0})$, we have $$\| u - e^{t\Delta}u_{0} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} \leqslant C_{s}\, M^2_{u}.$$ ]{}* The Duhamel formula gives $$u - e^{t\Delta}u_{0} \eqdefa B(u,u) = - \int_{0}^{t} e^{(t-t')\Delta}\, \P\,{\mathop{\rm div}\nolimits}(u \otimes u)(t')\, dt'.$$ By virtue of classical estimates on the heat flow (see for instance Lemma $2.4$ in [@BCD]), we have $$\| \Delta_{j}e^{t\Delta}\, a \|_{L^2} \leqslant C\, e^{-ct\, 2^{2j}}\, \| \Delta_{j} a \|_{L^2}.$$ Therefore, the fluctuation part satisfies $$\begin{split} \| \Delta_{j} B(u,u)(t) \|_{L^2} &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j}\, \| \Delta_{j} (u \otimes u)(t') \|_{L^2}\, dt'\\ &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j}\, 2^{-j(2s-\frac{3}{2})}\, \| u \otimes u(t') \|_{\dot{B}^{2s-\frac{3}{2}}_{2,\infty}}\, dt'.
\end{split}$$ We thus infer, thanks to product laws in Sobolev spaces, $$\begin{split} 2^{\frac{j}{2}}\, \| \Delta_{j} B(u,u) (t)\|_{L^2} &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(3-2s)}\, \| u(t')\|^{2}_{\dot{H}^s}\, dt'. \end{split}$$ By hypothesis, we have $$M^{\sigma_{s}}_{u} \eqdefa \sup_{t < {T_{*}}(u_{0})}\,({T_{*}}(u_{0}) - t)\,\| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} < \infty, \quad \hbox{so that} \quad \| u(t')\|^{2}_{\dot{H}^s} \leqslant \frac{M^2_{u}}{({T_{*}}(u_{0})-t')^{\frac{2}{\sigma_{s}}}}\cdotp$$ As a result, $$\begin{split} 2^{\frac{j}{2}}\, \| \Delta_{j} B(u,u)(t) \|_{L^2} &\leqslant C_{s}\, \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(3-2s)}\, \frac{M^2_{u}}{ ({T_{*}}(u_{0})-t')^{\frac{2}{\sigma_{s}}}}\, dt'\\ &= \int_{0}^{t}\, 1_{\{({T_{*}}(u_{0})-t')2^{2j} \leqslant 1\}}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(3-2s)}\, \frac{M^2_{u}}{ ({T_{*}}(u_{0})-t')^{\frac{2}{\sigma_{s}}}} \, dt'\\ &\qquad + \, \int_{0}^{t}\, 1_{\{({T_{*}}(u_{0})-t')2^{2j} \geqslant 1\}}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(3-2s)}\, \frac{M^2_{u}}{ ({T_{*}}(u_{0})-t')^{\frac{2}{\sigma_{s}}}}\, dt'\cdotp \end{split}$$ We apply Young's inequality: in the first integral, we consider $L^{\infty} \star L^{1}$, whereas in the second one, we consider $L^{1} \star L^{\infty}$ in order to use the regularization effect of the Laplacian. $$\begin{split} 2^{\frac{j}{2}}\, \| \Delta_{j} B(u,u) (t)\|_{L^2} &\leqslant C_{s}\, M^2_{u} \, \int_{{T_{*}}(u_{0})-2^{-2j}}^{{T_{*}}(u_{0})} \frac{2^{j(3-2s)}\,dt'}{ ({T_{*}}(u_{0})-t')^{\frac{2}{\sigma_{s}}}} + C_{s}\, M^2_{u} \, \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(3-2s)}\, 2^{2j(s-\frac{1}{2})} \, dt'. \end{split}$$ We recall that ${\displaystyle}{\frac{2}{\sigma_{s}}\eqdefa s-\frac{1}{2}}$ and ${\displaystyle}{s-\frac{1}{2} < 1}$. As a result, $$\begin{split} 2^{\frac{j}{2}}\, \| \Delta_{j} B(u,u)(t) \|_{L^2} &\leqslant\, C_{s}\, M^2_{u} \, \Bigl(2^{j(2s-3)}\, 2^{j(3-2s)}\, \, + \, \frac{1}{2^{2j}}\, 2^{j(3-2s)}\, 2^{2j(s-\frac{1}{2})} \Bigr) \lesssim C_{s}\, M^2_{u}. \end{split}$$ This concludes the proof of the fluctuation estimate in the critical case.
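For the reader's convenience, here is the exponent bookkeeping hidden in the last estimate (a sketch, using only ${\displaystyle}{\frac{2}{\sigma_{s}} = s-\frac{1}{2}}$ and $\frac{1}{2}<s<\frac{3}{2}$):

```latex
% Low frequencies: the time integral converges because s - 1/2 < 1,
\[
  \int_{{T_{*}}(u_{0})-2^{-2j}}^{{T_{*}}(u_{0})}
    \frac{dt'}{({T_{*}}(u_{0})-t')^{\,s-\frac{1}{2}}}
  = \frac{\bigl(2^{-2j}\bigr)^{\frac{3}{2}-s}}{\frac{3}{2}-s}
  = C_{s}\, 2^{\,j(2s-3)},
  \qquad\hbox{so that}\qquad
  2^{\,j(3-2s)}\, 2^{\,j(2s-3)} = 1.
\]
% High frequencies: the heat kernel provides the factor 2^{-2j},
\[
  \int_{0}^{t} e^{-c(t-t')\,2^{2j}}\, dt' \leqslant \frac{1}{c\,2^{2j}},
  \qquad\hbox{so that}\qquad
  2^{-2j}\, 2^{\,j(3-2s)}\, 2^{\,2j(s-\frac{1}{2})} = 2^{\,0} = 1.
\]
```

Both contributions are therefore uniform in $j$, which yields the bound $C_{s}\,M^{2}_{u}$.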
The statement given below is a bit more general than the one of Lemma \[fluctuation lemma\], which we deduce immediately by an interpolation argument (the same as given at the end of the proof of Theorem \[Big key theorem\]).\ \[fluctuation s’\]*[ Let ${\displaystyle}{\frac{1}{2} < s< \frac{3}{2}}$ and $u_{0} \in \dot{H}^s $. $$\hbox{If} \quad {T_{*}}(u_{0}) < \infty \quad \hbox{and} \quad M^{\sigma_{s}}_{u} \eqdefa \sup_{t < {T_{*}}(u_{0})}\,({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} < \infty,$$ then we have for any ${\displaystyle}{s < s'< 2s-\frac{1}{2}}$ $$\sup_{t < {T_{*}}(u_{0})}\,({T_{*}}(u_{0})-t)^{\frac{1}{2}(s'-\frac{1}{2})}\,\, \| u(t) - e^{t\Delta}u_{0} \|_{\dot{B}^{s'}_{2,\infty}} < \infty.$$]{}* The same arguments as above yield $$\begin{split} \| \Delta_{j} B(u,u)(t) \|_{L^2} &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j}\, 2^{-j(2s-\frac{3}{2})}\, \| u \otimes u(t') \|_{\dot{B}^{2s-\frac{3}{2}}_{2,\infty}}\, dt'. \end{split}$$ Product laws in Sobolev spaces and the hypothesis on $u$ imply $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t) \|_{L^2} &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(\frac{5}{2}-2s+s')}\, \| u(t')\|^{2}_{\dot{H}^s}\, dt'\\ &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(\frac{5}{2}-2s+s')}\, \frac{C}{({T_{*}}(u_{0})-t')^{s-\frac{1}{2}}}\, dt'\cdotp \end{split}$$ We split (with the same cut-off as before) according to low and high frequencies.
Concerning high frequencies, since ${T_{*}}(u_{0}) -t \leqslant {T_{*}}(u_{0})-t'$, we get $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t)\, 1_{\{({T_{*}}-t)2^{2j} \geqslant 1\}} \|_{L^2}\, &\lesssim \int_{0}^{t}\, e^{-c(t-t')\, 2^{2j}}\, 2^{j(\frac{5}{2}-2s+s')}\, \frac{C}{ ({T_{*}}(u_{0})-t)^{s-\frac{1}{2}}}\, dt'\\ & \lesssim \, 2^{j(\frac{1}{2}-2s+s')}\, \frac{C}{ ({T_{*}}(u_{0})-t)^{s-\frac{1}{2}}}\cdotp \end{split}$$ Choosing $s'$ such that ${\displaystyle}{\frac{1}{2}-2s+s' < 0}$, we get $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t) \, 1_{\{({T_{*}}-t)2^{2j} \geqslant 1\}} \|_{L^2}\, & \lesssim \, C\, \frac{ ({T_{*}}(u_{0}) - t)^{\frac{1}{2}(-\frac{1}{2}+2s-s')}}{ ({T_{*}}(u_{0})-t)^{s-\frac{1}{2}}} \,= C\,\, ({T_{*}}(u_{0}) - t)^{-\frac{1}{2}(s'-\frac{1}{2})}, \end{split}$$ which yields the desired estimate, as far as high frequencies are concerned.\ Concerning low frequencies, let us go back to the very beginning: $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t) \, 1_{\{({T_{*}}(u_{0})-t)2^{2j} \leqslant 1\}} \|_{L^2}\, & \lesssim \, 2^{j(s'-s)}\, 2^{js}\, \| \Delta_{j} B(u,u) \|_{L^2}\\ &\lesssim 2^{j(s'-s)}\, \| u(t) - e^{t\Delta}u_{0} \|_{\dot{B}^{s}_{2,\infty}}. \end{split}$$ As ${\displaystyle}{ \| u(t) - e^{t\Delta}u_{0} \|_{\dot{B}^{s}_{2,\infty}} \leqslant \frac{C}{({T_{*}}(u_{0}) - t)^{\frac{1}{2}(s-\frac{1}{2})}}}$, we infer that $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t) \, 1_{\{({T_{*}}(u_{0})-t)2^{2j} \leqslant 1\}}\, \|_{L^2}\, &\lesssim 2^{j(s'-s)}\, \frac{C}{({T_{*}}(u_{0}) - t)^{\frac{1}{2}(s-\frac{1}{2})}}\cdotp \end{split}$$ The low-frequency condition, which gives $2^{j(s'-s)} \leqslant ({T_{*}}(u_{0})-t)^{-\frac{1}{2}(s'-s)}$, implies $$\begin{split} 2^{js'}\, \| \Delta_{j} B(u,u)(t) \, 1_{\{({T_{*}}(u_{0})-t)2^{2j} \leqslant 1\}}\, \|_{L^2}\, &\lesssim \, \frac{C}{({T_{*}}(u_{0}) - t)^{\frac{1}{2}(s-\frac{1}{2}) + \frac{1}{2}(s'-s)} }\, = \, \frac{C}{({T_{*}}(u_{0}) - t)^{\frac{1}{2}(s'-\frac{1}{2})}}\cdotp \end{split}$$ This completes the proof for the low-frequency part.
The proof of Lemma \[fluctuation s’\] is thus complete. Existence of sup-critical solutions bounded in $\dot{B}^{\frac{1}{2}}_{2,\infty}$ ================================================================================= This section is devoted to completing the proof of Theorem \[Big key theorem\], namely the part concerning the $\dot{B}^{\frac{1}{2}}_{2,\infty}$-norm of the sup-critical solutions. We have already built some sup-critical elements in the space $\dot{H}^s$. It turns out that, starting from this statement, we shall prove that the data generating a sup-critical element lie not only in $\dot{H}^s$, but also in some other spaces such as $\dot{B}^{\frac{1}{2}}_{2,\infty}\cap \dot{B}^{s'}_{2,\infty}$, with $s'$ satisfying the condition given below, which stems from the proof of Lemma \[fluctuation lemma\].\ The statement given below is actually a bit stronger than the one we want to prove, since we are going to capture some sup-critical solutions not only in $\dot{B}^{\frac{1}{2}}_{2,\infty}$ (as claimed by Theorem \[Big key theorem\]) but also in $\dot{B}^{s'}_{2,\infty}$. The main idea to get such information on the regularity is to focus on the fluctuation part, which is more regular than the solution itself. Notice that, throughout this section, we use a regularity index $s'$ satisfying $$s < s' < 2s- \frac{1}{2}\cdotp$$ \[theorem section5\]*[ There exists an initial datum $\Phi_{0} \in \dot{B}^{\frac{1}{2}}_{2,\infty}\cap \dot{H}^s \cap \dot{B}^{s'}_{2,\infty}$, such that ${T_{*}}(\Phi_{0}) < \infty$ and $$\sup_{t < {T_{*}}(\Phi_{0}) }\, ({T_{*}}(\Phi_{0}) - t)\, \| NS(\Phi_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} \, =\,\limsup_{t \to {T_{*}}(\Phi_{0})} ({T_{*}}(\Phi_{0}) - t)\, \| NS(\Phi_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} = M^{\sigma_{s}}_{c},$$ $$\hbox{and for any} \quad t<{T_{*}}(\Phi_{0}), \quad \| NS(\Phi_{0})(t) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} < \infty.$$ ]{}* The idea of the proof is to start with the existence of sup-critical elements in $\dot{H}^s$.
Indeed, we have proved previously that there exists an initial datum $\Psi_{0} \in \dot{H}^s$, such that $\Psi \eqdefa NS(\Psi_{0})$ is sup-critical. Therefore, by definition of $\limsup$, there exists a sequence $t_{n} \nearrow {T_{*}}(\Psi_{0})$ such that $$\lim_{n \to +\infty} ({T_{*}}(\Psi_{0}) - t_{n})\, \| NS(\Psi_{0})(t_{n}) \|^{\sigma_{s}}_{\dot{H}^s} = M^{\sigma_{s}}_{c}.$$ Let us introduce as before the rescaled sequence $$v_{0,n}(y) = \bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)^{\frac{1}{2}}\, NS(\Psi_{0})(t_{n}, \bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)^{\frac{1}{2}}y).$$ Each term of this sequence generates a solution whose lifespan is equal to $1$, and satisfies $$\begin{split} \|v_{0,n} \|^{\sigma_{s}}_{\dot{H}^s} &= \bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)\, \|NS(\Psi_{0})(t_{n}) \|^{\sigma_{s}}_{\dot{H}^s}. \\ \end{split}$$ For the sake of simplicity, we set $$\tau_{n} \eqdefa {T_{*}}(\Psi_{0})-t_{n}.$$ The previous computations imply that $(v_{0,n})$ is a bounded sequence of $\dot{H}^s$. Now, inspired by the idea of Y.
Meyer (the fluctuation-tendency method, [@YM]), we decompose the sequence $(v_{0,n})$ as $$v_{0,n}(y) = \Bigl(v_{0,n}(y) -\tau_{n}^{\frac{1}{2}}\, e^{t_{n}\Delta}\Psi_{0}( \tau_{n}^{\frac{1}{2}}\,y)\Bigr) \,\,\, + \,\,\, \tau_{n}^{\frac{1}{2}}\, e^{t_{n}\Delta}\Psi_{0}(\tau_{n}^{\frac{1}{2}}\,y),$$ where we recall that $$v_{0,n}(y) \eqdefa \tau_{n}^{\frac{1}{2}}\, NS(\Psi_{0})(t_{n}, \tau_{n}^{\frac{1}{2}}\, y).$$ It follows that $$v_{0,n}(y) = \tau_{n}^{\frac{1}{2}}\, \underbrace{\Bigl( NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \Bigr)}_{B(\Psi,\Psi)(t_{n}) = \hbox{fluctuation part}} (\tau_{n}^{\frac{1}{2}}\, y) \,\,\, + \,\,\, \tau_{n}^{\frac{1}{2}} \underbrace{e^{t_{n}\Delta}\Psi_{0}}_{\hbox{tendency part}}(\tau_{n}^{\frac{1}{2}}\,y).$$ \[fluctuation bornée dans 3 espaces\]*[ The rescaled fluctuation part $\phi_{n} \eqdefa \tau_{n}^{\frac{1}{2}}\, B(\Psi,\Psi)(t_{n}, \tau_{n}^{\frac{1}{2}}\, \cdotp)$ is bounded in $\dot{H}^s \cap \dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{B}^{s'}_{2,\infty}$. ]{}* Indeed, concerning the $\dot{B}^{\frac{1}{2}}_{2,\infty}$-norm, we first use the scaling invariance of this norm, and then apply Lemma \[fluctuation lemma\], which gives $$\sup_{n}\, \| \phi_{n} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} = \sup_{n}\, \| NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} < \infty.$$ Concerning the $\dot{H}^s$-norm, we apply successively the following arguments: scaling, the triangle inequality, and the fact that $NS(\Psi_{0})$ is a sup-critical element in $\dot{H}^s$.
$$\begin{split} \| \phi_{n} \|^{\sigma_{s}}_{\dot{H}^{s}} &= \tau_{n} \, \| NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s}}_{\dot{H}^{s}}\\ &\lesssim \tau_{n} \, \| NS(\Psi_{0})(t_{n},\cdotp) \|^{\sigma_{s}}_{\dot{H}^{s}} + \tau_{n} \, \| e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s}}_{\dot{H}^{s}}\\ &\lesssim \Bigl(M_{c} + \frac{1}{n}\Bigr)^{\sigma_{s}} + \tau_{n} \, \| \Psi_{0} \|^{\sigma_{s}}_{\dot{H}^{s}} < \infty.\\ \end{split}$$ Therefore, ${\displaystyle}{\sup_{n}\, \| \phi_{n} \|^{\sigma_{s}}_{\dot{H}^{s}} < \infty}$.\ Concerning the $\dot{B}^{s'}_{2,\infty}$-norm, a scaling argument combined with Lemma \[fluctuation s’\] yields $$\begin{split} \| \phi_{n} \|^{\sigma_{s'}}_{\dot{B}^{s'}_{2,\infty}} &= \tau_{n} \, \| NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s'}}_{\dot{B}^{s'}_{2,\infty}} \lesssim \tau_{n}\, \bigl(\tau_{n}^{-\frac{1}{2}(s'-\frac{1}{2})}\bigr)^{\sigma_{s'}} = 1,\\ \end{split}$$ since ${\displaystyle}{\frac{2}{\sigma_{s'}} = s'-\frac{1}{2}}\cdotp$ This concludes the proof of Lemma \[fluctuation bornée dans 3 espaces\]. By virtue of the profile theory, we perform a profile decomposition of the sequence $\phi_{n}$ in the Sobolev space  $\dot{H}^s$. But in this decomposition, only profiles with constant scale are left, as the Lemma below will show. The idea is clear. As $\phi_{n}$ is bounded in $\dot{H}^s \cap \dot{B}^{\frac{1}{2}}_{2,\infty}$, large scales vanish. Likewise, the fact that $\phi_{n}$ is bounded in $\dot{H}^s \cap \dot{B}^{s'}_{2,\infty}$ implies that small scales vanish. That is the point of the Lemma below.
\[petite et grande echelle\]*[ $\bullet$ If $(f_{n})$ is a bounded sequence in $\dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^s$ and if ${\displaystyle}{\limsup_{n \to +\infty}\|f_{n}\|_{\dot{B}^{s}_{2,\infty}} = L >0}$, then there are no large scales in the profile decomposition of the sequence $f_{n}$ in $\dot{H}^s$.\ $\bullet$ If $(f_{n})$ is a bounded sequence in $\dot{B}^{s'}_{2,\infty} \cap \dot{H}^s$, with ${\displaystyle}{s' >s>\frac{1}{2}}$ and if ${\displaystyle}{\limsup_{n \to +\infty}\|f_{n}\|_{\dot{B}^{s}_{2,\infty}} = L >0}$, then there are no small scales in the profile decomposition of the sequence $f_{n}$ in $\dot{H}^s$.\ ]{}* We only prove the first part of the Lemma; the other one is similar. If  ${\displaystyle}{\limsup_{n \to +\infty}\|f_{n}\|_{\dot{B}^{s}_{2,\infty}} = L >0}$, then there exists an extraction $\varphi(n)$ such that ${\displaystyle}{\| f_{\varphi(n)} \|_{\dot{B}^{s}_{2,\infty}} \geqslant \frac{2L}{3}} \cdotp$ Otherwise, beyond some rank we would have $$\| f_{n} \|_{\dot{B}^{s}_{2,\infty}} < \frac{2L}{3} \quad \hbox{and thus} \quad \limsup_{n \to +\infty} \| f_{n} \|_{\dot{B}^{s}_{2,\infty}} \leqslant \frac{2L}{3} < L,$$ which contradicts the hypothesis. Moreover, by definition of the Besov norm, for each $n$ we can find an integer $k_{n} \in \Z$ such that $$2^{k_{n}s} \| \Delta_{k_{n}}\, f_{\varphi(n)}\|_{L^2} \geqslant \frac{3}{4}\, \| f_{\varphi(n)} \|_{\dot{B}^{s}_{2,\infty}}.$$ Therefore, for any $n$, ${\displaystyle}{2^{k_{n}s} \| \Delta_{k_{n}}\, f_{\varphi(n)}\|_{L^2} \geqslant \frac{L}{2} }$.\ Let us introduce the scale ${\displaystyle}{\lambda_{n} \eqdefa 2^{-k_{n}}}$.
As (up to extraction) ${\displaystyle}{2^{k_{n}s} \| \Delta_{k_{n}}\, f_{\varphi(n)}\|_{L^2} \geqslant \frac{L}{2}}$, one has $$2^{k_{n}(s-\frac{1}{2})}\,\, \| f_{\varphi(n)} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}}\geqslant \frac{L}{2}\cdotp$$ Hence, the inferior limit of the sequence $k_{n}$ is not $-\infty$: otherwise, the term ${\displaystyle}{ 2^{k_{n}(s-\frac{1}{2})}}$ would tend to $0$ along a subsequence (since the sequence $\| f_{\varphi(n)} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}}$ is bounded by hypothesis) and thus $L=0$, which contradicts the hypothesis $L>0$. Therefore, $\lambda_{n} \nrightarrow +\infty$: big scales are excluded from the profile decomposition of the sequence $f_{n}$. This concludes the proof of Lemma \[petite et grande echelle\]. *Continuation of the proof of Theorem \[theorem section5\]*.\ Let us come back to the proof of the existence of a sup-critical element in the space $\dot{B}^{\frac{1}{2}}_{2,\infty}\cap \dot{B}^{s'}_{2,\infty}$. Firstly, we check that $\phi_{n}$ satisfies the hypotheses of Lemma \[petite et grande echelle\]. As already checked previously, $\phi_{n}$ is bounded in $\dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^s \cap \dot{B}^{s'}_{2,\infty}$. Concerning the assumption ${\displaystyle}{\limsup_{n \to +\infty} \| \phi_{n}\|_{\dot{B}^{s}_{2,\infty}} >0}$, a scaling argument gives $$\begin{split} \|\phi_{n}\|^{\sigma_{s}}_{\dot{B}^{s}_{2,\infty}} &= \tau_{n} \, \| NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s}}_{\dot{B}^{s}_{2,\infty}} = ({T_{*}}(\Psi_{0}) - t_{n}) \| NS(\Psi_{0})(t_{n},\cdotp) - e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s}}_{\dot{B}^{s}_{2,\infty}}\\ &\geqslant ({T_{*}}(\Psi_{0}) - t_{n}) \| NS(\Psi_{0})(t_{n},\cdotp)\|^{\sigma_{s}}_{\dot{B}^{s}_{2,\infty}} \,\, - \,\,({T_{*}}(\Psi_{0}) - t_{n}) \| \Psi_{0} \|^{\sigma_{s}}_{\dot{H}^{s}}. \end{split}$$ Obviously, the term ${\displaystyle}{({T_{*}}(\Psi_{0}) - t_{n}) \| \Psi_{0} \|^{\sigma_{s}}_{\dot{H}^{s}}}$ tends to $0$ when $n$ goes to $+\infty$.
By virtue of (\[remark c\]) and [@LR], there exists a constant $c>0$ such that ${\displaystyle}{({T_{*}}(\Psi_{0}) - t_{n}) \| NS(\Psi_{0})(t_{n},\cdotp)\|^{\sigma_{s}}_{\dot{B}^{s}_{2,\infty}} \geqslant c}$. Therefore, $$\limsup_{n \to +\infty} \|\phi_{n}\|_{\dot{B}^{s}_{2,\infty}} >0$$ and thus the profile decomposition of $\phi_{n}$ in the space $\dot{H}^s$ reduces to (with the notations of Theorem \[theo profiles\]) $$\phi_{n} = \sum_{j \geqslant 0}^{J} V^{j}(\cdotp-x_{n,j}) \, + \, r_{n}^{J}.$$ Moreover, as the sequence $\phi_{n}$ is bounded in $\dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{B}^{s'}_{2,\infty}$, the profiles $V^{j}$ also belong to $\dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{B}^{s'}_{2,\infty}$. That is the crucial point of the proof. Indeed, each profile $V^{j}$ can be seen as a translated (by $x_{n,j}$) weak limit of the sequence $\phi_{n}$. As a result, we immediately get $$\| V^{j} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} \leqslant \liminf_{n \to +\infty} \| \phi_{n} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} < \infty \quad \hbox{and} \quad \| V^{j} \|_{\dot{B}^{s'}_{2,\infty}} \leqslant \liminf_{n \to +\infty} \| \phi_{n} \|_{\dot{B}^{s'}_{2,\infty}} < \infty.$$ Let us come back to the sequence $(v_{0,n})$ defined by $$v_{0,n} \eqdefa \phi_{n} \,\, +\,\, \tau_{n}^{\frac{1}{2}}\ e^{t_{n}\Delta}\Psi_{0}(\tau_{n}^{\frac{1}{2}}\,\cdotp).$$ As already underlined previously, the term ${\displaystyle}{\gamma_{n} \eqdefa \tau_{n}^{\frac{1}{2}}\ e^{t_{n}\Delta}\Psi_{0}(\tau_{n}^{\frac{1}{2}}\,\cdotp)}$ tends to $0$ in the $\dot{H}^s$-norm (and thus in the $L^p$-norm, by Sobolev embedding) since $$\| \tau_{n}^{\frac{1}{2}}\ e^{t_{n}\Delta}\Psi_{0}(\tau_{n}^{\frac{1}{2}}\,\cdotp) \|^{\sigma_{s}}_{\dot{H}^s} = \tau_{n} \, \| \ e^{t_{n}\Delta}\Psi_{0} \|^{\sigma_{s}}_{\dot{H}^s} \leqslant \tau_{n} \, \| \ \Psi_{0}\|^{\sigma_{s}}_{\dot{H}^s}.$$ Combining the profile decomposition of $(\phi_{n})$ with the definition of $(v_{0,n})$, we finally get $$v_{0,n} = \sum_{j
\geqslant 0}^{J} V^{j}(\cdotp-x_{n,j}) \, + \, r_{n}^{J} \, +\, \gamma_{n},$$ with ${\displaystyle}{\lim_{J \to +\infty}\limsup_{n \to+\infty}\|r_{n}^{J}\|_{L^{p}} =0}$ and ${\displaystyle}{ \lim_{n \to+\infty}\| \gamma_{n} \|_{L^{p}} =0}$. By virtue of Lemma \[lemme allure de la solution\], one has for any $\tau <1$ $$NS(v_{0,n})(\tau) = \sum_{j \geqslant 0}^{J} NS(V^{j})(\tau,\cdotp-x_{n,j}) + e^{\tau \Delta} (r_{n}^{J} + \gamma_{n} ) + R_{n}^{J}(\tau).$$ By definition of the sequence $(v_{0,n})$, $ NS(v_{0,n})$ is given by $$NS(v_{0,n})(\tau,\cdotp) = \bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)\,^\frac{1}{2} \, NS(\Psi_{0})\bigl( t_{n} + \tau\,\bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)\, ,\, \bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)\,^\frac{1}{2}\,\cdotp \bigr).$$ Once again, we denote ${\displaystyle}{\widetilde{t_{n}} = t_{n} + \tau\,\bigl({T_{*}}(\Psi_{0})-t_{n}\bigr)\,}$ and one has $$(1-\tau)\, \|NS(v_{0,n})(\tau,\cdotp) \|^{\sigma_{s}}_{\dot{H}^s} = \bigl({T_{*}}(\Psi_{0}) - \widetilde{t_{n}}\bigr) \, \|NS(\Psi_{0})\bigl( \widetilde{t_{n}} ,\cdotp \bigr)\|^{\sigma_{s}}_{\dot{H}^s}.$$ As $\widetilde{t_{n}} \geqslant t_{n}$ for any $n$, we get $$(1-\tau)\| NS(v_{0,n})(\tau) \|^{\sigma_{s}}_{\dot{H}^s} = ({T_{*}}(\Psi_{0})-\widetilde{t_{n}})\| NS(\Psi_{0})(\widetilde{t_{n}}) \|^{\sigma_{s}}_{\dot{H}^s} \leqslant M^{\sigma_{s}}_{c} + \frac{2}{n}\cdotp$$ Hence, Proposition \[proposition critical element \] implies that there exists a unique profile $ \Phi_{0} $ in $\dot{B}^{\frac{1}{2}}_{2,\infty}~\cap ~\dot{H}^s~\cap \dot{B}^{s'}_{2,\infty}$ such that the $NS$-solution generated by this profile is a sup-critical solution. As $ \Phi_{0} $ belongs to $\dot{B}^{\frac{1}{2}}_{2,\infty}$, Lemma \[fluctuation lemma\] implies that $NS(\Phi_{0})$ is bounded in the same space. This completes the proof of Theorem \[theorem section5\].\ Hence, the proof of Theorem \[Big key theorem\] is complete: indeed, it stems from an interpolation argument.
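For the reader's convenience, we recall how the interpolation exponent $\theta$ used in the estimate below is chosen (a standard convexity relation between the regularity indices; this is only a sketch of that elementary step):

```latex
% The intermediate regularity s_1 is a convex combination of s and s':
s_{1} = \theta\, s + (1-\theta)\, s',
\qquad\hbox{hence}\qquad
\theta = \frac{s' - s_{1}}{s' - s} \,\in\, (0,1)
\quad\hbox{for}\quad s < s_{1} < s'.
```

Since $s < s_{1} < s'$, both exponents $\theta$ and $1-\theta$ are positive, so the right-hand side of the interpolation inequality is finite as soon as $\Phi_{0}$ belongs to $\dot{H}^{s} \cap \dot{B}^{s'}_{2,\infty}$.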
By virtue of Proposition \[interpolation\], we have for any ${\displaystyle}{s < s_{1} < s'}$ $$\begin{split} \| \Phi_{0} \|_{\dot{H}^{s_{1}}} \leqslant \| \Phi_{0} \|_{\dot{B}^{s_{1}}_{2,1}} \leqslant \, \| \Phi_{0} \|^{\theta}_{\dot{B}^{s}_{2,\infty}} \,\, \| \Phi_{0} \|^{1-\theta}_{\dot{B}^{s'}_{2,\infty}} \,\, \leqslant \, \| \Phi_{0} \|^{\theta}_{\dot{H}^{s}}\,\, \| \Phi_{0} \|^{1-\theta}_{\dot{B}^{s'}_{2,\infty}}. \end{split}$$ This concludes the proof of Theorem \[Big key theorem\]. Another notion of critical solution =================================== In this section, we wonder whether, among sup-critical solutions, we can find some which reach the largest inferior limit of the quantity ${\displaystyle}{({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s}}$. We define the following set $\mathcal{E}_{c}$ by $$\begin{split} \mathcal{E}_{c} \eqdefa &\Bigl\{ u_{0} \in \dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^s \cap \dot{B}^{s'}_{2,\infty} \quad \hbox{such that} \,\, {T_{*}}(u_{0}) < \infty \,\, ;\\ &\sup_{t <{T_{*}}(u_{0})}\, ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} =\limsup_{t \to {T_{*}}(u_{0})} ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} =\, M^{\sigma_{s}}_{c}\,\, ;\\ & \hbox{for any} \quad t<{T_{*}}(u_{0}), \quad \| NS(u_{0})(t) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} < \infty \quad \hbox{and} \quad ({T_{*}}(u_{0}) - t)\,\, \| NS(u_{0})(t) \|^{\sigma_{s'}}_{\dot{B}^{s'}_{2,\infty}} < \infty \Bigr\}.
\end{split}$$ Let us introduce the following quantity $m^{\sigma_{s}}_{c}$ $$m^{\sigma_{s}}_{c} \eqdefa \sup_{u_{0} \,\in\, \mathcal{E}_{c} } \bigl\{ \liminf_{t \to {T_{*}}(u_{0})}({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} \bigr\}.$$ *[(sup-inf-critical solution)\ A solution $u = NS(u_{0})$ is said to be a sup-inf-critical solution if $u_{0}$ belongs to $\mathcal{E}_{c}$ and $$\begin{split} \liminf_{t \to {T_{*}}(u_{0})} ({T_{*}}(u_{0}) - t)\, \| NS(u_{0})(t) \|^{\sigma_{s}}_{\dot{H}^s} =\, m^{\sigma_{s}}_{c}. \end{split}$$ ]{}* Notice that we need to look for such elements among sup-critical solutions, otherwise the definition of $m^{\sigma_{s}}_{c}$ would be meaningless. We claim that such elements exist. \[sup inf critical lemma\]*[ There exist elements of $\mathcal{E}_{c}$ which are sup-inf-critical. ]{}* By definition of $m^{\sigma_{s}}_{c}$, we can find a sequence $(u_{0,n})$ in $\dot{H}^s$ and a sequence $t_{n} \nearrow {T_{*}}(u_{0,n}) \equiv {T_{*}}$ (we can assume this, up to a rescaling) such that $$\label{liminf tn} m_{c} - \varepsilon_{n} \leqslant ({T_{*}}- t_{n})^{\frac{1}{\sigma_{s}}}\, \| NS(u_{0,n})(t_{n}) \|_{\dot{H}^s} \leqslant m_{c} + \varepsilon_{n}$$ and $$\hbox{for any} \quad t \geqslant t_{n}, \quad m_{c} - \varepsilon_{n} \leqslant ({T_{*}}- t)^{\frac{1}{\sigma_{s}}}\, \| NS(u_{0,n})(t) \|_{\dot{H}^s}.$$ Assume in addition that the sequence $(u_{0,n})$ belongs to the set $\mathcal{E}_{c}$.
As a consequence, we have $$\label{hypothese M_c} \hbox{for any} \quad t \geqslant t_{n}, \quad m_{c} - \varepsilon_{n} \leqslant ({T_{*}}- t)^{\frac{1}{\sigma_{s}}}\, \| NS(u_{0,n})(t) \|_{\dot{H}^s} \leqslant M_{c} + \varepsilon_{n}.$$ Consider the rescaled sequence $$v_{0,n}(y) = \bigl({T_{*}}- t_{n}\bigr)^\frac{1}{2} \, NS(u_{0,n})\bigl( t_{n},\bigl({T_{*}}- t_{n}\bigr)^\frac{1}{2}\,y \bigr).$$ Hence, $v_{0,n}$ satisfies the properties below, by a scaling argument $$\begin{split} \|v_{0,n} \|^{\sigma_{s}}_{\dot{H}^s} = \bigl({T_{*}}-t_{n}\bigr)\,& \|NS(u_{0,n})(t_{n}) \|^{\sigma_{s}}_{\dot{H}^s}, \quad \|v_{0,n} \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} = \|NS(u_{0,n})(t_{n}) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}}\\ &\hbox{and} \quad \|v_{0,n} \|^{\sigma_{s'}}_{\dot{B}^{s'}_{2,\infty}} = \bigl({T_{*}}-t_{n}\bigr)\, \|NS(u_{0,n})(t_{n}) \|^{\sigma_{s'}}_{\dot{B}^{s'}_{2,\infty}}. \end{split}$$ Combining (\[liminf tn\]) with the fact that $(u_{0,n})$ belongs to $\mathcal{E}_{c}$, we infer that the sequence $(v_{0,n})_{n \geqslant 1}$ is bounded in $ \dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^s \cap \dot{B}^{s'}_{2,\infty} $. Moreover, concerning the Navier-Stokes solution generated by such data, $NS(v_{0,n}) $, we know that its lifespan is $\tau^* = 1$ and that it satisfies once again (with ${\displaystyle}{\widetilde{t_{n}} = t_{n} + \tau\,\bigl({T_{*}}-t_{n}\bigr)\,}$) $$(1-\tau)^{\frac{1}{\sigma_{s}}}\, \|NS(v_{0,n})(\tau) \|_{\dot{H}^s} = ({T_{*}}- \widetilde{t_{n}})^{\frac{1}{\sigma_{s}}}\, \|NS(u_{0,n})( \widetilde{t_{n}})\|_{\dot{H}^s}.$$ As $\widetilde{t_{n}} \geqslant t_{n}$ for any $n$, we infer that for any $\tau <1$ $$(1-\tau)^{\frac{1}{\sigma_{s}}}\, \| NS(v_{0,n})(\tau) \|_{\dot{H}^s} \geqslant m_{c} - \varepsilon_{n}.$$ Let us sum up the information we have on the sequence $(v_{0,n})$. Firstly, the lifespan of the Navier-Stokes solution associated with the sequence $v_{0,n}$ is equal to $1$.
Then, $$\limsup_{\tau \to 1} (1-\tau)^{\frac{1}{\sigma_{s}}}\, \,\| NS(v_{0,n})(\tau) \|_{\dot{H}^s} = \limsup_{ \widetilde{t_{n}} \to {T_{*}}} \,\, ({T_{*}}-\widetilde{t_{n}})^{\frac{1}{\sigma_{s}}}\, \| NS(u_{0,n})(\widetilde{t_{n}}) \|_{\dot{H}^s},$$ which implies, thanks to (\[hypothese M\_c\]) and the definition of $M_{c}$, that for any $\tau <1$, $$\limsup_{\tau \to 1} \,\,(1-\tau)^{\frac{1}{\sigma_{s}}}\,\, \| NS(v_{0,n})(\tau) \|_{\dot{H}^s} = M_{c} \quad \hbox{and} \quad \| NS(v_{0,n})(\tau) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} = \| NS(u_{0,n})(\widetilde{t_{n}}) \|_{\dot{B}^{\frac{1}{2}}_{2,\infty}} < \infty.$$ In addition, $$\begin{split} (1-\tau)^{\frac{1}{\sigma_{s'}}}\, \,\| NS(v_{0,n})(\tau) \|_{\dot{B}^{s'}_{2,\infty}} = ({T_{*}}-\widetilde{t_{n}})^{\frac{1}{\sigma_{s'}}}\, \| NS(u_{0,n})(\widetilde{t_{n}}) \|_{\dot{B}^{s'}_{2,\infty}} < \infty. \end{split}$$ To summarize, from the minimizing sequence $(u_{0,n})$ of the set $\mathcal{E}_{c} $, we have built another sequence $(v_{0,n})$ (the rescaling of $(u_{0,n})$) which also belongs to the set $\mathcal{E}_{c}$.
Moreover, as the sequence $(v_{0,n})$ is bounded in the spaces $ \dot{B}^{\frac{1}{2}}_{2,\infty} \cap \dot{H}^s \cap \dot{B}^{s'}_{2,\infty} $ and satisfies ${\displaystyle}{\limsup_{n \to +\infty}\,\, \| v_{0,n} \|_{\dot{B}^{s}_{2,\infty}} } > 0$, Lemma \[petite et grande echelle\] implies that the profile decomposition in $\dot{H}^s$ of such a sequence reduces, up to extraction, to a sum of translated profiles and a remainder term (with the notations of Theorem \[theo profiles\]) $$v_{0,n} = \sum_{j \in \mathcal{J}_{1}} \varphi^{j}(\cdotp -x_{n,j}) + \psi_{n}^{J}.$$ By virtue of Lemma \[lemme allure de la solution\], combined with Proposition \[proposition critical element \], we infer that there exists only one profile $\varphi^{j_{0}}$ which blows up at time $1$, and such that $$NS(v_{0,n})(\tau,\cdotp) = NS(\varphi^{j_{0}})(\tau,\cdotp -x_{n,j_{0}}) \,\, +\, \sum_{\stackrel{j \in \mathcal{J}_{1}, j \neq j_{0}}{ \tau^{j}_{*} > 1}} NS(\varphi^{j})(\tau,\cdotp -x_{n,j}) +\, \, e^{\tau \Delta}\psi_{n}^J (\cdotp) \,\, + \,\, R_{n}^J(\tau,\cdotp).$$ By orthogonality, we have $$\label{relation1} \begin{split} \| NS(v_{0,n})(\tau) \|^{2}_{\dot{H}^s} &= \| NS(\varphi^{j_{0}})(\tau) \|^{2}_{\dot{H}^s} \,\, +\, \sum_{\stackrel{j \in \mathcal{J}_{1}, j \neq j_{0}}{ \tau^{j}_{*} > 1}} \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s}\, + \,\|e^{\tau \Delta}\psi_{n}^J \|^{2}_{\dot{H}^s}\,\, + \gamma_{n}^{J}(\tau), \end{split}$$ where $\gamma_{n}^{J}(\tau)$ gathers the cross terms. We want to prove that ${\displaystyle}{\liminf_{\tau \to 1} \,\, (1 -\tau )^{\frac{1}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau) \|_{\dot{H}^s} \geqslant m_{c}}$. By definition of $m_{c}$, this will imply that ${\displaystyle}{\liminf_{\tau \to 1} \,\, (1 -\tau )^{\frac{1}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau) \|_{\dot{H}^s} = m_{c}}$. Let us assume this is not the case.
Therefore, $$\exists \alpha_{0} >0,\,\, \forall \varepsilon >0, \,\, \exists \tau_{\varepsilon},\,\,\, \hbox{such that} \,\,\, 0< (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, < \varepsilon \,\, \hbox{and} \,\, (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \leqslant m_{c}^2 - \alpha_{0}.$$ From (\[relation1\]), we deduce that $$\begin{split} (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \| NS(v_{0,n})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} &\leqslant (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \,\, +\, (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \Bigl\{ \sum_{\stackrel{j \in \mathcal{J}_{1}, j \neq j_{0}}{ \tau^{j}_{*} > 1}} \| NS(\varphi^{j})(\tau_{\varepsilon}) \|^{2}_{\dot{H}^s} \, \\ &\quad+\, \|e^{\tau_{\varepsilon} \Delta}\psi_{n}^J \|^{2}_{\dot{H}^s}\, + \, \vert\gamma_{n}^{J}(\tau_{\varepsilon}) \vert\Bigr\}. \end{split}$$ By hypothesis, ${\displaystyle}{(1 -\tau_{\varepsilon} )^{\frac{1}{\sigma_{s}}}\, \| NS(v_{0,n})(\tau_{\varepsilon}) \|_{\dot{H}^s} \geqslant m_{c} - \varepsilon_{n} }$, and ${\displaystyle}{1 -\tau_{\varepsilon} \leqslant 1}$. Hence, we get $$\begin{split} \bigl(m_{c} - \varepsilon_{n}\bigr)^2 &\leqslant \,\, m_{c}^2 - \alpha_{0} \,\, +\, (1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \Bigl\{ \sum_{\stackrel{j \in \mathcal{J}_{1}, j \neq j_{0}}{ \tau^{j}_{*} > 1}} \sup_{\tau \in [0,1]}\, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s} \, +\, \|\psi_{n}^J \|^{2}_{\dot{H}^s}\Bigr\}\,\, + \vert\gamma_{n}^{J}(\tau_{\varepsilon})\vert . \end{split}$$ On the one hand, as the profiles $\varphi^{j}$ have lifespan $\tau^{j}_{*} > 1$, the quantity ${\displaystyle}{\sup_{\tau \in [0,1]}\, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s} }$ is finite.
On the other hand, by virtue of the profile decomposition of the sequence $(v_{0,n})$, we obviously have ${\displaystyle}{ \|\psi_{n}^J \|^{2}_{\dot{H}^s} \leqslant \|v_{0,n} \|^{2}_{\dot{H}^s}}$. As we have proved that $(v_{0,n})$ is an element of the set $\mathcal{E}_{c}$, we get in particular that ${\displaystyle}{\sup_{\tau < 1}\, (1-\tau)^{\frac{1}{\sigma_{s}}}\, \| NS(v_{0,n})(\tau) \|_{\dot{H}^s} = M_{c}}$, which leads (at $\tau =0$) to ${\displaystyle}{\|v_{0,n} \|}_{\dot{H}^s} \leqslant M_{c} $. Finally, choosing $\varepsilon$ small enough so that, for all  $\tau_{\varepsilon}$, $$(1 -\tau_{\varepsilon} )^{\frac{2}{\sigma_{s}}}\, \Bigl\{ \sum_{\stackrel{j \in \mathcal{J}_{1}, j \neq j_{0}}{ \tau^{j}_{*} > 1}} \sup_{\tau \in [0,1]}\, \| NS(\varphi^{j})(\tau) \|^{2}_{\dot{H}^s} \, +\, \|\psi_{n}^J \|^{2}_{\dot{H}^s}\Bigr\} \leqslant \frac{\alpha_{0}}{4},$$ we get $$\begin{split} \bigl(m_{c} - \varepsilon_{n}\bigr)^2 &\leqslant \,\, m_{c}^2 - \alpha_{0} \,\, +\, \frac{\alpha_{0}}{4} + \vert\gamma_{n}^{J}(\tau_{\varepsilon})\vert . \end{split}$$ Now, by the assumption on $\gamma_{n}^{J}$, taking the limit for $n$ and $J$ large enough, we get $$m_{c}^2 \leqslant m_{c}^2 \,-\, \frac{3\,\alpha_{0}}{4} \, +\, \frac{\alpha_{0}}{4},$$ which is obviously absurd. Thus, we have proved that $$\liminf_{\tau \to 1} \,\,\, (1 -\tau )^{\frac{1}{\sigma_{s}}}\, \| NS(\varphi^{j_{0}})(\tau) \|_{\dot{H}^s} = m_{c}.$$ This concludes the proof of Lemma \[sup inf critical lemma\]. Structure Lemma for Navier-Stokes solutions with bounded data ============================================================= Let $(v_{0,n})_{n \geqslant 0}$ be a bounded sequence of initial data in $\dot{H}^s$.
Thanks to Theorem \[theo profiles\], $(v_{0,n})_{n \geqslant 0}$ can be written as follows, up to an extraction $$v_{0,n}(x) = \sum_{j=0}^{J} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x),$$ which we rewrite as $$\begin{split} v_{0,n}(x) &= \sum_{\stackrel{j \in \mathcal{J}_{1}}{j \leqslant J}} \varphi^{j}(x-x_{n,j}) + \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}(x).\\ \end{split}$$ Let $\eta >0$ be the frequency cut-off parameter. We define $w_{\eta}$ and $w_{{}^{c}\eta}$ as the elements whose Fourier transforms are given by $$\label{notations cut off} \widehat{w_{\eta}}(\xi) = \widehat{w}(\xi) 1_{\{\frac{1}{\eta} \leqslant |\xi| \leqslant \eta \}} \quad \hbox{and} \quad \widehat{w_{{}^{c}\eta}}(\xi) = \widehat{w}(\xi) \bigl(1 -1_{\{\frac{1}{\eta} \leqslant |\xi| \leqslant \eta \}}\bigr).$$ After roughly cutting off frequencies with respect to the notations $(\ref{notations cut off})$, we sort the profiles supported in the annulus $\{\frac{1}{\eta}\leqslant |\xi| \leqslant \eta\}$ according to their scale (thanks to the orthogonality property of scales and cores given by Theorem \[theo profiles\]).
We get the following profile decomposition\ $$\label{decomposition apres regularisation} \begin{split} v_{0,n}(x) &= \sum_{j \in \mathcal{J}_{1}} \varphi^{j}(x-x_{n,j}) + \sum_{j \in \mathcal{J}_{0}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}_{\eta}(x) + \sum_{j \in \mathcal{J}_{\infty}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}_{\eta}(x) + \psi_{n,\eta}^{J}(x)\\ &\hbox{where} \quad \psi_{n,\eta}^{J}(x) \eqdefa \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1} \equiv \mathcal{J}_{0} \cup \mathcal{J}_{\infty} }{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}_{{}^{c}\eta}(x)\, +\, \psi_{n}^{J}(x), \end{split}$$\ where for any $j$ in $\mathcal{J}_{1}$, $\lambda_{n,j} =1$, for any $j$ in $\mathcal{J}_{0}$, $\displaystyle{\lim_{n \to +\infty} \lambda_{n,j} = 0}$ and for any  $j$ in $\mathcal{J}_{\infty}$,  $\displaystyle{\lim_{n \to +\infty} \lambda_{n,j} = +\infty}$.\ As mentioned in the introduction, the whole Lemma \[lemme allure de la solution\] has already been proved in [@P], except for the orthogonality property of the Navier-Stokes solution associated with such a sequence of initial data. Therefore, we refer the reader to [@P] for the details of the proof and here, we focus on the “Pythagore property”. Let us recall the notations $$U^{0}_{n,\eta}\eqdefa \sum_{j \in \mathcal{J}_{0}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}_{\eta} \quad \hbox{and} \quad U^{\infty}_{n,\eta}\eqdefa \sum_{j \in \mathcal{J}_{\infty}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}_{\eta}.$$ We recall some properties of the profiles with small and large scales and of the remainder term, and we refer the reader to [@P] for the proof of the two propositions below.
*[ \[smallbigscaling\] $$\hbox{For any} \,\, s_{1}<s,\,\,\hbox{for any} \,\, \eta >0, \,\, \hbox{for any} \,\, j \in \mathcal{J}_{0}, \,\, (\hbox{i.e.} \,\, \lim_{n \to +\infty} \lambda_{n,j} = 0), \,\, \hbox{then} \,\, \lim_{n \to +\infty}\bigl\|U^{0}_{n,\eta}\bigr\|_{\dot{H}^{s_{1}}} = 0.$$ $$\hbox{For any} \,\, s_{2}>s,\,\,\,\,\hbox{for any} \,\, \eta >0, \,\, \hbox{for any} \,\, j \in \mathcal{J}_{\infty}, \,\, (\hbox{i.e.} \,\, \lim_{n \to +\infty} \lambda_{n,j} = +\infty), \,\, \hbox{then} \,\, \lim_{n \to +\infty}\bigl\|U^{\infty}_{n,\eta}\bigr\|_{\dot{H}^{s_{2}}} = 0.$$ ]{}* Concerning the remainder term, we can show that it tends to $0$, thanks to the Lebesgue theorem. *[ \[reste perturbé petit\] $$\lim_{J \to +\infty} \lim_{\eta \to +\infty} \limsup_{n \to +\infty} \| \psi_{n,\eta}^{J}\|_{L^p} = 0.$$ ]{}* *Continuation of the proof of Lemma \[lemme allure de la solution\].* By virtue of (\[decomposition de la solution\]) in Lemma \[lemme allure de la solution\], one has for any $t < \tilde{T}$ $$\begin{split} \label{Pyt 2} \| NS(v_{0,n})(t,\cdot) \|^2_{\dot{H}^s} &= \Bigl\| \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \Bigr\|^{2}_{\dot{H}^s} + \Bigl\| e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}\Bigr)\Bigr\|^{2}_{\dot{H}^s}\\ &+ \| R_{n}^{J}(t,\cdot) \|^{2}_{\dot{H}^s}\, + 2\, \Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s}\\& + 2\, \Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid R_{n}^{J} \Bigr)_{\dot{H}^s} + 2\, \Bigl( e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}\Bigr) \, \mid \, R_{n}^{J}
\Bigr)_{\dot{H}^s}. \end{split}$$ Therefore, proving (\[Pythagore\]) is equivalent to proving Propositions \[prop orthogonalié profiles echelle 1\] and \[prop orthogonalié profiles echelle 0 et infini\] below. Both of them essentially stem from the orthogonality of the cores and a compactness argument. *[ Let $\varepsilon >0$. Then, for any $t \in [0,\tilde{T}-\varepsilon] $, \[prop orthogonalié profiles echelle 1\] $$\Bigl\| \, \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdotp - x_{n,j}) \Bigr\|^{2}_{\dot{H}^s} = \sum_{j \in \mathcal{J}_{1}} \bigl \| NS(\varphi^{j})(t,\cdotp) \bigr\|^{2}_{\dot{H}^s} + \gamma_{n,\varepsilon}(t),$$ with ${\displaystyle}{\lim_{n \to +\infty} \sup_{t \in [0,\tilde{T}-\varepsilon]} \vert \gamma_{n,\varepsilon}(t) \vert =0.}$ ]{}* Once again, we develop the square of the ${\displaystyle}{\dot{H}^s}$-norm and we get for any ${\displaystyle}{t < \tilde{T}}$ $$\begin{split} &\Bigl\| \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t, \cdotp-x_{n,j}) \Bigr\|^{2}_{\dot{H}^s} = \sum_{j \in \mathcal{J}_{1}} \bigl\| NS(\varphi^{j})(t, \cdotp-x_{n,j}) \bigr\|^{2}_{\dot{H}^s}\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad + 2\, \sum_{\stackrel{(j,k) \in \mathcal{J}_{1} \times \mathcal{J}_{1}}{j \neq k}} \left( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) \mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) \right)_{L^2},\\ \end{split}$$ where $\Lambda = \sqrt{-\Delta}$. Let $\varepsilon>0$.
Then, for any $t$ in $[0, \widetilde{T} - \varepsilon]$, we get $$\begin{split} \Bigl\| \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t, \cdotp-x_{n,j}) \Bigr\|^{2}_{\dot{H}^s} &= \sum_{j \in \mathcal{J}_{1}} \bigl\| NS(\varphi^{j})(t, \cdotp) \bigr\|^{2}_{\dot{H}^s} + 2\, \sum_{\stackrel{(j,k) \in \mathcal{J}_{1} \times \mathcal{J}_{1}}{j \neq k}} \Gamma^{s,j,k}_{\varepsilon,n},\\ \end{split}$$ where ${\displaystyle}{\Gamma^{s,j,k}_{\varepsilon,n} \eqdefa \left( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) \mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) \right)_{L^2}}$.\ We set $$K_{\varepsilon}^{J} \eqdefa \bigcup_{j \in J}\, \Lambda^s \,NS(\varphi^{j})([0, \widetilde{T} - \varepsilon]).$$ By virtue of the continuity of the map ${\displaystyle}{ t \in [0, \widetilde{T} - \varepsilon] \mapsto \Lambda^s \,NS(\varphi^{j})(t, \cdotp)\, \in L^2 }$, we deduce that $K_{\varepsilon}^{J}$ is compact in $L^2$. This means that it can be covered by finitely many open balls of arbitrary radius $\alpha >0$. Fix such a radius $\alpha$: there exist an integer $N_{\alpha}$ and elements $(\theta_{\ell})_{1 \leqslant \ell \leqslant N_{\alpha}}$ of $\mathcal{D}(\R^3)$, such that $${\displaystyle}{K_{\varepsilon}^{J} \subset \bigcup_{\ell =1}^{ N_{\alpha}} \, B(\theta_{\ell},\alpha)}.$$ Let us come back to the proof of Proposition \[prop orthogonalié profiles echelle 1\]. Thanks to the previous remark, we approximate each profile $\Lambda^s \,NS(\varphi^{j})(t, \cdotp)$ (resp. $\Lambda^s \,NS(\varphi^{k})(t, \cdotp)$) by a smooth function: i.e. there exist an integer $\ell \in \{1,\cdotp\cdotp\cdotp N_{\alpha} \} $ and a function $\theta_{\ell(j,t)}$ (resp.
$\theta_{\ell(k,t)}$) in $\mathcal{D}(\R^3)$ such that $$\begin{split} \Gamma^{s,j,k}_{\varepsilon,n} &= \left( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) - \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) - \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2}\\ &+ \left( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) - \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2}\\ &+ \left( \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) - \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2}\\ &+ \left( \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2}.\\ \end{split}$$ The first three terms on the right-hand side of the above identity are uniformly (in time) small, by virtue of the Cauchy-Schwarz inequality and the translation invariance of the $L^2$-norm (we just perform the estimate for the first term, the others are similar). For any $t \in [0,\tilde{T}-\varepsilon]$ $$\begin{split} \Bigl( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) - \theta_{\ell(j,t)}(\cdotp-x_{n,j}) &\mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) - \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \Bigr)_{L^2}\\ &\leqslant \| \Lambda^s\, NS(\varphi^{j})(t) - \theta_{\ell(j,t)} \|_{L^2} \,\, \| \Lambda^s\, NS(\varphi^{k})(t) - \theta_{\ell(k,t)} \|_{L^2}\\ &\leqslant \alpha^2.
\end{split}$$ Therefore, for any $\alpha >0$, we have $$\sup_{t \in [0,\tilde{T}-\varepsilon]} \, \left( \Lambda^s\, NS(\varphi^{j})(t, \cdotp-x_{n,j}) - \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \Lambda^s\, NS(\varphi^{k})(t, \cdotp-x_{n,k}) - \theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2} \leqslant \alpha^2.$$ For the last term ${\displaystyle}{\left( \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \,\theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2}}$, we have $$\left( \theta_{\ell(j,t)}(\cdotp-x_{n,j}) \mid \,\theta_{\ell(k,t)}(\cdotp-x_{n,k}) \right)_{L^2} = \int_{\R^3}\, \theta_{\ell(j,t)}(x)\, \theta_{\ell(k,t)}(x + x_{n,j} - x_{n,k})\, dx.$$ It follows immediately that the above term tends to $0$ when $n$ tends to $+\infty$, by virtue of the Lebesgue theorem combined with the orthogonality property of the cores (i.e. ${\displaystyle}{\lim_{n \to \infty} |x_{n,j}-x_{n,k}| = +\infty}$). To sum up, we have proved that $\Gamma^{s,j,k}_{\varepsilon,n}$ tends to $0$ when $n$ tends to $+\infty$, uniformly in time. This concludes the proof of Proposition \[prop orthogonalié profiles echelle 1\]. Concerning the cross terms in the profile decomposition, we have to prove that they are also negligible, uniformly in time. That is the point of the following proposition.
*[ Let $\varepsilon>0$. We denote by $$I_{n}(t,\cdotp) \eqdefa \Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s},$$ \[prop orthogonalié profiles echelle 0 et infini\] $$\label{estimate5} \begin{split} \hbox{then, one has} \quad \lim_{J \to +\infty}\lim_{\eta \to +\infty} \lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } \, I_{n}(t,\cdotp) = 0,\\ \end{split}$$ $$\label{estimate6} \begin{split} \lim_{J \to +\infty} \lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] }\Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid R_{n}^{J}(t) \Bigr)_{\dot{H}^s} = 0, \end{split}$$ $$\label{estimate7} \begin{split} \lim_{J \to +\infty}\lim_{\eta \to +\infty} \lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } \, \Bigl( e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}(x) + \psi_{n}^{J}\Bigr)\mid R_{n}^{J}(t) \Bigr)_{\dot{H}^s} = 0. \end{split}$$ ]{}* Let us start by proving (\[estimate5\]). We shall use once again an approximation argument. Let us define $$\Lambda_{\varepsilon}^{J} \eqdefa \bigcup_{j \in J}\, \,NS(\varphi^{j})([0, \widetilde{T} - \varepsilon]).$$ By virtue of the continuity of the map ${\displaystyle}{ t \in [0, \widetilde{T} - \varepsilon] \mapsto NS(\varphi^{j})(t, \cdotp)\, \in \dot{H}^s }$, we deduce that $\Lambda_{\varepsilon}^{J}$ is compact in $\dot{H}^s$. This means that it can be covered by finitely many open balls of arbitrary radius $\beta >0$. Fix such a radius $\beta$.
There exist an integer $N_{\beta}$ and elements $(\chi_{\ell})_{1 \leqslant \ell \leqslant N_{\beta}}$ of $\mathcal{D}(\R^3)$, such that $${\displaystyle}{\Lambda_{\varepsilon}^{J} \subset \bigcup_{\ell =1}^{ N_{\beta}} \, B(\chi_{\ell},\beta)}.$$ Let us come back to the proof of (\[estimate5\]). The same arguments as previously imply that there exist an integer $\ell \in \{ 1 \cdotp \cdotp \cdotp N_{\beta} \}$ and a smooth function $\chi_{\ell(t,j)}$ in $\mathcal{D}(\R^3)$ such that $$\begin{split} I_{n}(t,\cdotp) &\eqdefa \Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s}\\ &= \Bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) - \chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j}+ \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s}\\ &\qquad+ \Bigl( \sum_{j \in \mathcal{J}_{1}} \chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s}.
\end{split}$$ As ${\displaystyle}{ \bigl\| e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr) \bigr\|_{\dot{H}^s} \leqslant \| v_{0,n} \|_{\dot{H}^s} }$, we infer that $$\begin{split} I_{n}(t,\cdotp) &\leqslant |\mathcal{J}_{1}|\, \beta \, \bigl\| v_{0,n} \bigr\|_{\dot{H}^s} +\, \Bigl( \sum_{j \in \mathcal{J}_{1}} \chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s}.\\ \end{split}$$ Concerning the second part of the above inequality, we shall use the splitting with respect to the cut-off parameter $\eta$. We refer the reader to the beginning of this section for the notations. $$\Bigl( \sum_{j \in \mathcal{J}_{1}} \chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\Bigr)_{\dot{H}^s} = I^{1}_{n}(t,\cdotp) + I^{2}_{n}(t,\cdotp) + I^{3}_{n}(t,\cdotp),$$ $$\begin{split} \hbox{where} \quad &I^{1}_{n}(t,\cdotp) = \sum_{j \in \mathcal{J}_{1}} \left(\chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta} U^{0}_{n,\eta}\right)_{\dot{H}^s}\quad \hbox{;} \quad I^{2}_{n}(t,\cdotp) = \sum_{j \in \mathcal{J}_{1}} \left(\chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta} U^{\infty}_{n,\eta}\right)_{\dot{H}^s}\\&\qquad \qquad \qquad \qquad \hbox{and} \quad I^{3}_{n}(t,\cdotp) = \sum_{j \in \mathcal{J}_{1}} \left(\chi_{\ell(t,j)}(\cdotp - x_{n,j}) \mid e^{t\Delta} \psi_{n, \eta}^{J}\right)_{\dot{H}^s}. \end{split}$$ Let us start with $I^{1}_{n}(t,\cdotp)$.
One has $$\begin{split} |I^{1}_{n}(t,\cdotp)| &\leqslant |\mathcal{J}_{1}|\, \| \chi_{\ell(t,j)} \|_{\dot{H}^{2s-s_{1}}} \, \| e^{t\Delta} U^{0}_{n,\eta} \|_{\dot{H}^{s_{1}}}\\ &\leqslant |\mathcal{J}_{1}|\,\| \chi_{\ell(t,j)} \|_{\dot{H}^{2s-s_{1}}} \, \| U^{0}_{n,\eta}\|_{\dot{H}^{s_{1}}}. \end{split}$$ Proposition \[smallbigscaling\] (for $\eta$ and $j \in \mathcal{J}_{1}$ fixed) thus implies ${\displaystyle}{\lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } |I^{1}_{n}(t,\cdotp)| = 0}$.\ Concerning the profiles with large scale, the proof is similar, and we get for any $t \in [0, \tilde{T}- \varepsilon]$ $$\begin{split} |I^{2}_{n}(t,\cdotp)| &\leqslant\, |\mathcal{J}_{1}|\, \| \chi_{\ell(t,j)} \|_{\dot{H}^{2s-s_{2}}} \, \bigl\| U^{\infty}_{n,\eta} \,\bigr\|_{\dot{H}^{s_{2}}}. \end{split}$$ Once again, Proposition \[smallbigscaling\] implies the result: ${\displaystyle}{\lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } |I^{2}_{n}(t,\cdotp)| = 0}$.\ Concerning the last term $I^{3}_{n}$, Hölder's inequality with ${\displaystyle}{\frac{1}{p} + \frac{1}{p'} =1}$ yields $$\begin{split} |I^{3}_{n}(t,\cdotp)| &\leqslant \bigl|\left(\Lambda^{2s}\,\chi_{\ell(t,j)} \mid e^{t\Delta} \psi_{n, \eta}^{J}(\cdotp + x_{n,j})\right)_{L^2}\bigr| \\ &\leqslant \| \Lambda^{2s}\, \chi_{\ell(t,j)} \|_{L^{p'}} \, \| e^{t\Delta} \psi_{n, \eta}^{J}(\cdotp + x_{n,j}) \|_{L^p}.\\ \end{split}$$ By translation invariance of the $L^p$-norm and the boundedness of the heat semigroup on $L^p$, we get $$\begin{split} |I^{3}_{n}(t,\cdotp)| &\leqslant \| \Lambda^{2s}\, \chi_{\ell(t,j)} \|_{L^{p'}} \, \| \psi_{n, \eta}^{J}\|_{L^p}. \end{split}$$ Obviously the term ${\displaystyle}{\| \psi_{n, \eta}^{J}\|_{\dot{H}^s}}$ is bounded by the profile hypothesis, and the term ${\displaystyle}{\| \Lambda^{2s}\, \chi_{\ell(t,j)} \|_{L^{p'}}}$ is bounded too, since the function $\chi_{\ell(t,j)}$ is as regular as we need. 
By virtue of Proposition \[reste perturbé petit\], the term ${\displaystyle}{\| \psi_{n, \eta}^{J}\|_{L^p}}$ is small, in the sense that for any $\varepsilon >0$ there exists an integer $N_{0} \in \mathbb{N}$ such that for any $n \geqslant N_{0}$ there exist $\tilde{\eta} >0$ and $\tilde{J} \geqslant 0$ such that for any $\eta \geqslant \tilde{\eta}$ and any $J \geqslant \tilde{J}$, we have ${\displaystyle}{\| \psi_{n, \eta}^{J}\|_{L^p} \leqslant \varepsilon}$. As a result, we get $$\lim_{J \to +\infty}\lim_{\eta \to +\infty} \lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } |I^{3}_{n}(t,\cdotp)| = 0.$$ This completes the proof of estimate (\[estimate5\]). The proofs of (\[estimate6\]) and (\[estimate7\]) are very similar and rely on the fact that the error term $R_{n}^{J}$ tends to $0$ in the $L^{\infty}_{T}(\dot{H}^s)$-norm. For any $t \in [0, \tilde{T}- \varepsilon]$, we have $$\begin{split} \bigl| \bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid R_{n}^{J} \bigr)_{\dot{H}^s} \bigr| &\leqslant \sum_{j \in \mathcal{J}_{1}} \bigl| \bigl(NS(\varphi^{j})(t,\cdotp) \mid R_{n}^{J}(t,\cdotp + x_{n,j}) \bigr)_{\dot{H}^s} \bigr |\\ &\leqslant |\mathcal{J}_{1}|\, \| NS(\varphi^{j})(t,\cdotp) \|_{L^{\infty}_{T}(\dot{H}^s)} \, \| R_{n}^{J}(t,\cdotp) \|_{L^{\infty}_{T}(\dot{H}^s)}. \end{split}$$ Obviously, the term ${\displaystyle}{\| NS(\varphi^{j})(t,\cdotp) \|_{L^{\infty}_{T}(\dot{H}^s)}}$ is bounded since $t \in [0, \tilde{T}- \varepsilon]$. As a result, Lemma \[lemme allure de la solution\] implies that $$\lim_{J \to +\infty} \lim_{n \to +\infty} \, \sup_{t \in [0, \tilde{T}- \varepsilon] } \Bigl|\bigl( \sum_{j \in \mathcal{J}_{1}} NS(\varphi^{j})(t,\cdot-x_{n,j}) \mid R_{n}^{J} \bigr)_{\dot{H}^s}\Bigr| = 0.$$ As far as estimate (\[estimate7\]) is concerned, the idea is the same. 
For any $t \in [0, \tilde{T}- \varepsilon]$, $$\begin{split} \bigl| \bigl( e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\mid R_{n}^{J} \bigr)_{\dot{H}^s} \bigr| &\leqslant \| e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr) \|_{L^{\infty}_{T}(\dot{H}^s)} \| R_{n}^{J} \|_{L^{\infty}_{\tilde{T}- \varepsilon}(\dot{H}^s)} \\ &\leqslant \|U^{0}_{n,\eta} + U^{\infty}_{n,\eta} + \psi_{n, \eta}^{J}\|_{\dot{H}^s} \, \| R_{n}^{J} \|_{L^{\infty}_{\tilde{T}- \varepsilon}(\dot{H}^s)}. \end{split}$$ Thanks to the profile decomposition (\[decomposition apres regularisation\]), we get $$\begin{split} \|U^{0}_{n,\eta} + U^{\infty}_{n,\eta} + \psi_{n, \eta}^{J}\|^{2}_{\dot{H}^s} &\leqslant \| v_{0,n} \|^{2}_{\dot{H}^s} + o(1). \end{split}$$ Thus, finally, we get $$\begin{split} \bigl| \bigl( e^{t\Delta}\Bigl( \sum_{\stackrel{j \in \mathcal{J}^{{}{c}}_{1}}{j \leqslant J}} \Lambda^{\frac{3}{p}}_{\lambda_{n,j},x_{n,j}}\varphi^{j} + \psi_{n}^{J}\Bigr)\mid R_{n}^{J} \bigr)_{\dot{H}^s} \bigr| &\leqslant C\, \bigl( \bigl\| v_{0,n} \bigr\|^{2}_{\dot{H}^s} + o(1) \bigr) \, \| R_{n}^{J} \|_{L^{\infty}_{\tilde{T}- \varepsilon}(\dot{H}^s)}. \end{split}$$ We conclude as before, thanks to the hypothesis on $R_{n}^{J}$. This completes the proof of Proposition \[prop orthogonalié profiles echelle 0 et infini\] and thus Lemma \[lemme allure de la solution\].
--- abstract: 'Knowledge extraction is used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models. The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully. The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be comprehensible. In this paper, we propose a novel layerwise knowledge extraction method using *M-of-N* rules which seeks to obtain the best trade-off between the complexity and accuracy of rules describing the hidden features of a deep network. We show empirically that this approach produces rules close to an optimal complexity-error trade-off. We apply this method to a variety of deep networks and find that in the internal layers we often cannot find rules with a satisfactory complexity and accuracy, suggesting that rule extraction as a general-purpose method for explaining the internal logic of a neural network may be impossible. However, we also find that the softmax layer in Convolutional Neural Networks and Autoencoders using either *tanh* or *relu* activation functions is highly explainable by rule extraction, with compact rules consisting of as few as 3 units out of 128 often reaching over $99\%$ accuracy. This shows that rule extraction can be a useful component for explaining parts (or modules) of a deep neural network.' 
author: - | Simon Odense\ Department of Computer Science\ City, University of London\ London, EC1V 0HB, UK\ `{simon.odense}@city.ac.uk`\ Artur d’Avila Garcez\ Department of Computer Science\ City, University of London\ London, EC1V 0HB, UK\ `{a.garcez}@city.ac.uk`\ bibliography: - 'LayerwiseExtraction.bib' title: Layerwise Knowledge Extraction from Deep Convolutional Networks --- Introduction ============ Recently there has been an increase in interest in explainable Artificial Intelligence (AI). Although in the past decade there have been major advances in the performance of neural network models, these models tend not to be explainable [@NIPS2017]. In large part, this is due to the use of very large networks, specifically deep networks, which can contain thousands or even millions of hidden neurons. In contrast with symbolic AI, in which specific features are often hand picked for a problem, or symbolic Machine Learning (ML), which takes a localist approach [@Markov], the hidden neurons in a deep neural network do not necessarily correlate with obviously identifiable features of the data that a human would recognise. Knowledge extraction seeks to increase the explainability of neural networks by attempting to uncover the knowledge that a neural network has learned implicitly in its weights. One way of doing this is to translate trained neural networks into a set of symbolic rules or decision trees similar to the ones found in symbolic AI, ML and logic programming [@textbook]. Over the years, many rule extraction techniques have been developed [@Towell1993][@id2-of-3] [@Craven:1996:ECM:924305] [@Trann:2016] [@arturruleextraction] but none have been able to completely solve the black box problem for neural networks. The main barrier to comprehensible rule extraction is the complexity of the extracted rules. Even if it is possible to find a symbolic system which exactly describes a neural network, it may contain too many rules to be understandable. 
Perhaps the main reason this has proved to be a difficult problem is that reasoning in neural networks takes place in a *distributed* fashion [@DeepLearning:2016]. It has been argued that one of the fundamental properties of neural networks is that any abstract concepts they use are represented in a distributed way, that is, as patterns of activations across many hidden neurons rather than with a single hidden neuron [@proper_connectionism]. The distributed nature of neural networks has led many to conclude that attempting to explain the hidden neurons of large neural networks using symbolic knowledge extraction is a dead end [@SoftTree:2017]. Instead, alternative approaches to explanation have grown in popularity (see [@Survey] for a survey). Such approaches are so varied that four distinct explainability problems have been identified: *global explanations*, which attempt to give an explanation of a black box, *local explanations*, which attempt to give an explanation for a particular output of a black box, *visualization*, which gives a visual explanation of a latent feature or output, and *transparent box design*, which seeks to create new models which have some inherent explainability. Recent trends have favoured *model-agnostic* methods which opt to use the input-output relationship of a model to generate an explanation rather than assigning any meaning to hidden variables. From the point of view of transparency this may be adequate, but understanding the exact reasoning that a neural network uses with respect to its representation could shed new light on the kinds of knowledge that a deep neural network learns and how it uses that knowledge [@Garcez:2008:NCR:1478768]. This has the potential to accelerate the development of more robust models by illuminating any deficiencies that exist in current models and their learning algorithms. 
In this paper, we develop a rule extraction method that can control for the complexity of a rule via the scaling of an objective function. We do this by performing a parallel search through the space of *M-of-N* rules [@Towell1993] and measuring the error and complexity of each rule. By restricting our search space and using parallel techniques we are able to apply our algorithm to much larger networks than more exhaustive search techniques. We evaluate our algorithm against an optimal search technique (CORELS [@CORELS]) on a series of small networks before applying it to the layers of deep convolutional networks. By selecting various error/complexity trade-offs, we are able to map out a rule extraction landscape which shows the relationship between how complex the extracted rules are allowed to be and how accurately they capture the behaviour of a network. We find that the relative explainability between layers differs greatly and that changes to the network such as activation function can affect whether or not rule extraction will be useful in certain layers. In Section $2$, we provide an overview of previous algorithms used for knowledge extraction. In Section $3$, we give definitions of accuracy and complexity for *M-of-N* rules and present the extraction algorithm. In Section $4$, experimental results are reported and discussed. Section $5$ concludes and discusses directions for future work. Background and Related Work =========================== Approaches to rule extraction can, in general, be identified as *decompositional*, in which the parameters of the network are used to generate rules, *pedagogical*, in which the behaviour of the network is used to generate rules, or *eclectic* which are techniques with both decompositional and pedagogical components [@taxonomy]. 
One of the first attempts at knowledge extraction used a decompositional approach applied to feedforward networks, in particular the Knowledge-based Artificial Neural Networks (KBANN) [@KBANN]. This algorithm used the weights of a hidden variable to extract symbolic rules of the form *IF M out of a set of N neurons (or concepts) are activated (or hold) THEN a given neuron (concept) is activated (holds)*, called *M-of-N* rules [@Towell1993]. This was followed by more sophisticated algorithms which generate binary trees in which each node is an *M-of-N* rule [@id2-of-3] [@Craven:1996:ECM:924305] (notice that these binary trees can be reduced to IF-THEN propositional logic sentences as before). These more recent algorithms are pedagogical in that they select an *M-of-N* rule using the input units as the concepts (called *literals* in logic), based on the maximum information gain with respect to the output. Other algorithms extract rules in the form of decision sets, another rule-based structure equivalent to decision trees. Two-level decision sets have been used to generate both local explanations [@IDS] and global explanations [@DBLP:BDL][@CORELS], but only in a model-agnostic way, with no attempt to explain the internal variables of a model such as the hidden neurons in a deep network. Other methods abandon the knowledge extraction paradigm and opt for alternative techniques. In the context of computer vision, the use of visual importance methods might be preferred [@LIME][@importance]. Another approach is to design models which are explainable by design [@VisualRecurrent] [@SoftTree:2017] [@Anchors] [@DBLP:journals/corr/CourbariauxB16]. In the last example, we note the similarity of the restricted model to *M-of-N* rules: each hidden neuron in this case can be thought of as an *M-of-N* rule. Most decompositional rule extraction has been applied only to shallow networks. 
The multiple hidden layers in a deep network mean that in order to explain an arbitrary hidden feature in terms of the input, a decompositional technique has to produce a hierarchy of rules (see [@Trann:2016] for an example of hierarchical rule extraction). With many hidden layers, the extracted rules can quickly grow far too complex for a human to understand, unless each constituent of the rule hierarchy is exceedingly simple. Thus, the use of decompositional techniques to explain the features of a deep network end-to-end seems impractical, as argued in [@SoftTree:2017]. Nevertheless, experiments reported in this paper show that some layers of a deep network are associated with highly explainable rules opening up the possibility of rule extraction being used as a component in a modular explanation of network models. Layerwise Knowledge Extraction ============================== *M-of-N* Rules for Knowledge Representation: -------------------------------------------- In logic programming, a logical rule is an implication of the form $A \leftarrow B$, called $A$ *if* $B$. The literal $A$ is called the *head* of the rule and $B$ stands for a conjunction of literals, $B_1 \wedge B_2 \wedge ... \wedge B_n$ called the *body* of the rule. Disjunctions in the body can be modelled simply as multiple rules having the same head. Most logic programs adopt a *negation by failure* approach whereby $A$ is $true$ if and only if $B$ is $true$ [@LogicProgramming]. When using rules to explain a neural network, the literals will refer to the states of neurons. For example, if a neuron $x$ takes binary values {0,1} then we define the literal $X$ by $X=True$ if $x=1$, and $X=False$ if $x=0$. For neurons with continuous activation values, we can define a literal by including a threshold $a$ such that $X=True$ if $x > a$, and $X=False$ otherwise. In other words, the literal $X$ is shorthand for the statement $x>a$. 
In neural networks, a hidden neuron is usually poorly described by a single conjunctive rule since there are many different input configurations which will activate a neuron. Rather than simply adding a rule for each input pattern that activates a neuron (which essentially turns the network into a large lookup table), we look for *M-of-N* rules, which have been commonly used in rule extraction starting with [@Towell1993]. *M-of-N* rules soften the conjunctive constraint on the body of logical rules by requiring only $M$ of the variables in the body to be true for some specific value of $M\leq N$ (notice that when $M=N$ we are left with a conjunction). For example, the rule $H \leftarrow 2-of-\{X_1,X_2,\neg X_3\}$ is equivalent to $H \leftarrow (X_1 \wedge X_2)$ or $(X_2 \wedge \neg X_3)$ or $(X_1 \wedge \neg X_3)$, where $\neg$ stands for negation by failure. *M-of-N* rules are an attractive candidate for rule extraction because they share a structural similarity with neural networks. Indeed, every *M-of-N* rule can be thought of as a simple perceptron with binary weights and a threshold $M$. *M-of-N* rules were used in the early days of knowledge extraction but have since been largely forgotten. This paper brings *M-of-N* rules to the forefront of the debate on explainability again. When networks have continuous activation values, in order to define the literals to use for rule extraction we must choose a splitting value $a$ for each neuron, which will lead to a literal of the form $x>a$. In order to choose such values for continuous neurons we use *information gain* [@InormationGainDecisionTree][@informationgain]. Given a target neuron $h$ that we wish to explain, we generate a literal for the target neuron by selecting a split based on the information gain with respect to the output labels of the network. 
That is, given a set of test examples, choose the value of the target neuron $h$ which splits the examples in such a way as to result in the maximum decrease in entropy of the network outputs on the test examples. The input literals are then generated from the inputs to the target neuron by choosing splits for each input which maximize the information gain with respect to the target literal generated in the previous step. In practice this means that each target literal in a layer will have its own set of input literals, each corresponding to the same set of input neurons but with different splits. In the case that the layer is convolutional, each feature map corresponds to a group of neurons, each with a different input patch. Rather than test every single neuron in the feature map we only test the one whose optimal split has the maximum information gain with respect to the network output. This gives us a single rule for each feature map rather than a collection of rules. In summary, for each target neuron $h$: (1) generate a split $s$ for $h$ by choosing the value which maximizes the information gain with respect to the network output, and use it to define the literal $H$; (2) order the input literals by the magnitude of their weights. Soundness and Complexity Trade-off ---------------------------------- The two metrics we are concerned with in rule extraction are comprehensibility and accuracy. For a given rule we can define accuracy in terms of a *soundness* measure. This is simply the expected difference between the predictions made by the rules and the network. More concretely, given a neuron $h$ in a neural network with input neurons $x_i$, we can use the network to compute the state of $h$ from the state of the input neurons, which then determines the truth of the literal $H$. Thus we can use the network to determine the truth of $H$, call this $N(x)$. 
Furthermore, if we have some rule $R$ relating variables $H$ and $X_i$, we can use the state of the input $x$ to determine the value of the variables $X_i$, and then use $R$ to determine the value of $H$, call this $R(x)$. Given a set of input configurations to test $I$ (not necessarily from the test set of the network) we can measure the discrepancy between the output of the rules and the network as $$E(R):=\frac{1}{|I|}\sum\limits_{x\in I} |R(x)-N(x)|$$ In other words, we measure the average error of the rules when trying to predict the output of the network over a test set. Comprehensibility is more difficult to define, as there is a degree of subjectivity. The approach we take is to look at the *complexity* of a rule. Here, we think of complexity analogously to Kolmogorov complexity, which is determined by a minimal description. Thus we determine the complexity of a rule by the length of its body when expressed by a (minimal) rule in disjunctive normal form (DNF). For an *M-of-N* rule, the complexity is simply $M{{N}\choose{M}}$, where ${{N}\choose{M}}$ denotes the binomial coefficient. For our experiments we measure complexity in relative terms by normalizing w.r.t. a maximum complexity. Given $N$ possible input variables, the maximum complexity is $\lceil{\frac{N+1}{2}}\rceil {{N}\choose {\lceil{\frac{N+1}{2}}\rceil}}$, where $\lceil \cdot \rceil$ denotes the ceiling function (rounding to the next highest integer). Finally, in order to control for growth, we take the logarithm, giving the following normalized complexity measure: $$C(R) :=\frac{\log(M{{N}\choose{M}})}{\log(\lceil{\frac{N+1}{2}}\rceil {{N}\choose {\lceil{\frac{N+1}{2}}\rceil}})}$$ As an example, suppose we have a simple perceptron with two binary visible units with weights $w_{1,1}=1$ and $w_{2,1}=-0.5$ and whose output has a bias of $1$. Then consider the rule $h=1 \leftarrow 1$-of-$\{x_1=1,\neg (x_2=1)\}$. 
Over the entire input space we see that $R(x) \neq N(x)$ only when $x_1=0$ and $x_2=1$, giving us an error of $0.25$. Furthermore, a $1$-of-$2$ rule is maximally complex for $2$ variables, as it has the longest DNF of any *M-of-N* rule on them, giving us a complexity of $1$. Using Eqs. $(1)$ and $(2)$ we define a loss function for a rule $R$ as a weighted sum in which a parameter $\beta\in \mathbb{R}^+$ determines the trade-off between soundness and complexity. $$L(R):=E(R)+ \beta C(R)$$ By using a brute-force search with various values of $\beta$ we are able to explicitly determine the relationship between the allowed complexity of a rule and its maximum accuracy. For $\beta=0$ the rule with the minimum loss will simply be the rule with minimum error regardless of complexity, and for $\beta$ large enough the rule with the minimum loss will be a rule with $0$ complexity, either a $1$-of-$1$ rule or one of the trivial rules which either always predicts true or always predicts false (these can be represented as *M-of-N* rules by $0$-of-$N$ and $(N+1)$-of-$N$ respectively). Layerwise *M-of-N* Rule Extraction Algorithm -------------------------------------------- Given a neuron $h_j$ with $n$ input neurons $x_i$, we generate splits for each neuron using the technique just described to obtain a set of literals $H_j$ and $X_i$. Then, we negate the literals corresponding to neurons which have a negative weight to $h_j$. Using these we search through $\mathcal{O}(n^2)$ *M-of-N* rules with variables $X_i$ in the body and $H_j$ in the head, seeking the one which minimizes $L(R)$. To do this, as a heuristic, we reorder the variables according to the magnitude of the weight connecting $x_i$ to $h_j$ (such that we have $|w_{1,j}|\geq |w_{2,j}| \geq ... \geq |w_{n,j}|$). Then we consider the rule $M-of-\{X_1,...,X_N\}$ for each $1\leq N \leq n$ and each $0 \leq M \leq N+1$. The search procedure only relies on the ordering of the variables $X_i$. 
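As a concrete illustration, the ordered search just described, combined with the loss of Eq. (3), can be sketched in a few lines. This is a minimal sketch with our own function names, not the paper's implementation; it assumes binary inputs and models the target neuron as a step unit that is active iff $w \cdot x + b > 0$.

```python
import math
from itertools import product

def complexity(M, N, n_inputs):
    """Normalized complexity of Eq. (2); 1-of-1 and the trivial rules score 0."""
    dnf_len = M * math.comb(N, M) if 0 < M <= N else 0
    if dnf_len <= 1:
        return 0.0
    k = math.ceil((n_inputs + 1) / 2)
    return math.log(dnf_len) / math.log(k * math.comb(n_inputs, k))

def rule_predicts(M, literals, x):
    # literal = (input index, polarity); satisfied when x[index] == polarity
    return sum(1 for i, pol in literals if x[i] == pol) >= M

def rule_error(M, literals, weights, bias, inputs):
    """E(R) of Eq. (1): average disagreement between the rule and the neuron."""
    err = 0
    for x in inputs:
        # step-unit model of the neuron (an assumption for this sketch)
        net = int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)
        err += abs(int(rule_predicts(M, literals, x)) - net)
    return err / len(inputs)

def mofn_search(weights, bias, inputs, beta):
    """Scan M-of-N rules over weight-ordered literals, minimizing L = E + beta*C."""
    n = len(weights)
    order = sorted(range(n), key=lambda i: -abs(weights[i]))
    # literals for negative weights are negated, as in the text
    lits = [(i, 1 if weights[i] > 0 else 0) for i in order]
    best = None
    for N in range(1, n + 1):
        for M in range(0, N + 2):  # includes the trivial 0-of-N and (N+1)-of-N rules
            err = rule_error(M, lits[:N], weights, bias, inputs)
            loss = err + beta * complexity(M, N, n)
            if best is None or loss < best[0]:
                best = (loss, err, M, N)
    return best

# The two-input perceptron example from the text: w = (1, -0.5), bias 1.
inputs = list(product([0, 1], repeat=2))
err_text_rule = rule_error(1, [(0, 1), (1, 0)], [1.0, -0.5], 1.0, inputs)  # 1-of-{X1, not X2}
```

On this toy network the rule $1$-of-$\{X_1,\neg X_2\}$ disagrees with the neuron only on the input $(0,1)$, reproducing the $0.25$ error computed above.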
By ordering the literals according to the magnitude of their weights we reduce an exponential search space to a polynomial one. In the ideal case the set of possible input values to a hidden neuron is $X^n$ (where $X$ is the set of values that each input neuron can possibly take); it can be easily proved that the weight-ordering will then find an optimal solution. In practice, however, certain inputs may be highly correlated. When this is the case there is no guarantee that the weight-ordering will find the optimal *M-of-N* rule. Thus in the general case the search procedure is heuristic. This heuristic allows us to run our search in parallel. We do this by using Spark in IBM Watson Studio. To illustrate the entire process, let us examine rule extraction from the first hidden layer in the CNN trained on the Fashion-MNIST dataset. First we randomly select a set of examples and use them to compute the activations of each neuron in the CNN as well as the predicted labels of the network. With padding there are $28\times28=784$ neurons per feature map in the first hidden layer, each corresponding to a different $5\times 5$ patch of the input image. We then find the optimal splitting value of each neuron by computing the information gain of each splitting choice with respect to the network’s predicted labels. We find that the neuron with the maximum information gain is neuron $96$, which has an information gain of $0.015$ when split on the value $0.0004$. This neuron corresponds to the image patch centered at $(3,12)$. With this split we define the variable $H$ as $H:= 1$ iff $h_{96} \geq 0.0004$. Using this variable we define the input splits by choosing the values which result in the maximum information gain with respect to $H$. We then search through the *M-of-N* rules whose bodies consist of the input variables defined by the splits to determine an optimal *M-of-N* rule explaining $H$ for various error-complexity trade-offs. 
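The information-gain split selection used in this walkthrough can be sketched as follows. The helper names are ours; as a simple choice, candidate thresholds are taken midway between consecutive observed activation values.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of a list of discrete labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(activations, labels):
    """Return the threshold on a continuous neuron that maximizes information
    gain with respect to the given labels (e.g. the network's predicted classes)."""
    base = entropy(labels)
    vals = sorted(set(activations))
    best_a, best_gain = None, -1.0
    for lo, hi in zip(vals, vals[1:]):
        a = (lo + hi) / 2  # candidate split between consecutive observed values
        left = [y for h, y in zip(activations, labels) if h <= a]
        right = [y for h, y in zip(activations, labels) if h > a]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(labels)
        if gain > best_gain:
            best_a, best_gain = a, gain
    return best_a, best_gain
```

For a layer, this would be run once per target neuron against the network's predicted labels, and then once per input neuron against the resulting target literal.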
As we increase the complexity penalty, three different rules are extracted, which can be visualized in Figure 1. As can be seen, many of the weights are filtered out by the rules. The most complex rule is a $5$-of-$13$ rule which has a $0.025\%$ error. A mild complexity penalty changes the optimal rule to the much simpler $3$-of-$4$ rule, but raises the error to $0.043\%$. A heavy complexity penalty produces a $1$-of-$1$ rule with the significantly higher error of $0.13\%$. ![The leftmost image represents the weights of neuron $96$. The next three images are obtained from rules of decreasing complexity extracted from the CNN explaining that neuron. If a literal is true (resp. false) it is shown in white (resp. black). Grey indicates that the input feature is not present in the *M-of-N* rule. Notice how a rule can be seen as a discretization of the network into a three-valued logic, similar to what is proposed by binarized networks [@DBLP:journals/corr/CourbariauxB16] but without constraining the network training a priori.](weights "fig:") ![](rule_1 "fig:") ![](rule_2 "fig:") ![](rule_3 "fig:") Experimental Results ==================== Small Fully Connected Networks ------------------------------ In order to compare our search procedure with CORELS as an optimal baseline [@CORELS], we evaluate both methods on a series of small fully connected networks. The first is a deep neural network with $2$ fully connected layers of $16$ and $8$ hidden neurons, respectively, with a rectified linear (ReLU) activation function, trained on the car evaluation dataset [@car]. The second is a single-layer network with $100$ hidden neurons with ReLU activations trained on the E. Coli dataset [@ECOLI]. The final network is a single-layer, $25$-hidden-unit network trained on the DNA promoter dataset [@DNA]. Because the DNA promoter dataset is quite small, we produce $10,000$ synthetic examples to evaluate our rule extraction methods on the final network. We simply use the entire dataset for the other two networks.\ \ CORELS produces optimal rules for a given set of parameters (maximum cardinality, minimum support and a regularization parameter), also seeking to penalize complexity. 
Maximum cardinality refers to the maximum number of literals in the body of a rule; the minimum support refers to the minimum number of training examples an antecedent must capture to be considered in the search; finally, the regularization parameter is a scalar penalty on the complexity, equivalent to the parameter $\beta$ used in our *M-of-N* search. Because our extraction algorithm uses an ordering on the literals, each rule can be evaluated independently, so that the search procedure can run in parallel. This greatly speeds up the search compared to CORELS, which requires a sequential search. This faster search will allow us to apply the extraction algorithm to larger networks and to use more test examples. However, since we only search over *M-of-N* rules we are not guaranteed to find an optimal solution. For this reason we compare our layerwise results with CORELS to see how far from optimal our search method is. Since CORELS has multiple parameters to penalize complexity, we run CORELS multiple times with different parameters to generate a set of rules with higher complexity and one with lower complexity, and then compare these rules to rules of similar complexity found by our parallel search. In Table 1 we can see that rules found via our *M-of-N* search are only marginally worse than a set of optimal rules with similar complexity found by CORELS, and that CORELS can become quite slow when using too broad a search on a dataset with many inputs. When applied to the DNA promoter network, CORELS runs out of memory and we were unable to produce a result, showing that even for this relatively small network CORELS is too computationally demanding. Notice also that in this example the second hidden layer is much more explainable than the first, cf. the large difference in accuracy between layers.
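The weight-ordered search itself can be sketched as below: literals are sorted by $|w|$, and for each prefix of size $N$ every $M$ in $1..N$ defines a candidate rule "at least $M$ of these $N$ literals are true". Each $(M,N)$ candidate is scored independently, which is what makes the search parallelizable (e.g. with Spark, as in the text). The complexity measure (fraction of inputs used in the rule body) and the exact form of the $\beta$-penalized score are our assumptions for illustration.

```python
import numpy as np

def mofn_search(literals, weights, target, beta=0.0):
    """literals: (num_examples, num_inputs) boolean matrix, with each literal
    pre-negated so that 'true' pushes toward target = 1.
    Returns the (M, N, score) minimizing error + beta * complexity."""
    order = np.argsort(-np.abs(weights))      # weight-ordering heuristic
    lits = literals[:, order]
    num_inputs = lits.shape[1]
    best = (0, 0, np.inf)
    for n in range(1, num_inputs + 1):
        counts = lits[:, :n].sum(axis=1)      # satisfied literals per example
        for m in range(1, n + 1):
            pred = counts >= m                # the M-of-N rule's prediction
            error = np.mean(pred != target)
            score = error + beta * (n / num_inputs)
            if score < best[2]:
                best = (m, n, score)
    return best
```

On a toy neuron whose output is the conjunction of its two large-weight inputs, the search recovers the exact 2-of-2 rule and ignores the small-weight input.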
  **Method**            **Comp**   **Acc**   **Network**            **Time**
  --------------------- ---------- --------- ---------------------- ----------
  CORELS(1/0.01/0.01)   n/a        n/a       DNA promoter Layer 1   n/a
  Parallel *M-of-N*     0.239      89%       DNA promoter Layer 1   700s
  CORELS                0.124      93.4%     Cars Layer 1           $1s$
  CORELS                0.04       87.3%     Cars Layer 1           $1800s$
  Parallel *M-of-N*     0.131      90.3%     Cars Layer 1           $1s$
  Parallel *M-of-N*     0.031      85.4%     Cars Layer 1           $1s$
  CORELS                0.053      99.05%    Cars Layer 2           $1s$
  CORELS                0.079      99.42%    Cars Layer 2           $1s$
  Parallel *M-of-N*     0.057      98.4%     Cars Layer 2           $1s$
  Parallel *M-of-N*     0.069      98.6%     Cars Layer 2           $1s$
  CORELS                0.165      91.6%     E.COLI Layer 1         $1s$
  CORELS                0.287      92.6%     E.COLI Layer 1         $10s$
  Parallel *M-of-N*     0.132      89.4%     E.COLI Layer 1         $1s$
  Parallel *M-of-N*     0.189      90.2%     E.COLI Layer 1         $1s$

  : Comparison of rules extracted from different layers of networks trained on various datasets using (sequential) CORELS with different values for cardinality/support/regularization and our Parallel *M-of-N* extraction using different values for $\beta$. At a similar level of complexity (Comp), rules extracted by CORELS are only marginally more accurate (cf. Acc) than *M-of-N* rules, despite CORELS searching over a much larger sequential rule space; refer to computation time (Time). Above, n/a is used when CORELS exits without terminating. Finally, the rate of accuracy decrease vs. complexity of Parallel *M-of-N* seems to be lower than that of CORELS; this deserves further investigation.

In summary, the above results show that a parallel *M-of-N* search can provide a good approximation of the complexity/error trade-off for the rules describing the network. Next, we apply Parallel *M-of-N* to much larger networks for which sequential or exhaustive methods become intractable.
Deep Convolutional Networks
---------------------------

In order to evaluate the capability of compact *M-of-N* rules at explaining hidden features, we now apply the extraction algorithm to the hidden layers of three different networks trained on MNIST and compare results. Since applying extraction hierarchically can cause an accumulation of errors from previous layers, we use the network to compute the values of the inputs to the hidden layer that we wish to extract rules from. Hence, the errors from the extracted rules correspond to rule extraction at that layer. This allows us to examine the relative explainability at each layer. In practice, one could extract a hierarchical set of rules by choosing a single splitting value for each neuron. Our three networks are identical save for the activation function and training procedure. The network architecture consists of two convolutional layers with $16$ and $8$ filters respectively, each with a $3\times3$ convolutional window and using max pooling. This is followed by a $128$-unit densely connected layer with linear activation followed by a softmax layer. The first network uses ReLU units in the first two layers and is trained end-to-end. The second network is trained identically to the first but uses the hyperbolic tangent (Tanh) activation function in the first two layers. The third network uses an autoencoder to train the first three layers unsupervised before training the final softmax layer separately. We evaluate the rules using $5000$ examples from the test set. Comparing the network using ReLU to the one using Tanh shows that in both cases the minimum error for each layer remains approximately the same. However, the explainability in the Tanh network is greatly increased in the first three layers: rules extracted from the Tanh network can be made much less complex without significantly increasing the error.
This applies not only to the first two layers but also to layer 3, which uses a linear activation in both cases. In both cases the third layer is much less explainable than the first two, and the only layer for which we are truly able to produce an acceptably accurate and comprehensible explanation is the final one, in which we see rules with an average complexity of $0.087$ achieving an average error of $0.013\%$. In the third layer we believe that the higher minimum error is mainly the result of the number of input units. In these layers there appear to be a lot of input units which are not relevant enough alone to be included in an *M-of-N* rule, but collectively they add enough noise to have a significant effect on the output. Because our search procedure is heuristic, it’s possible that a more thorough search could produce rules which are simpler and more accurate, but our results at least tentatively back up the idea that the distributed nature of neural networks makes rule extraction from the hidden layers impractical if not infeasible. We hypothesize that the difference in complexity between rules extracted from the Tanh network and the ReLU network is due to the saturating effect of the tanh function. A hidden neuron in the Tanh network may have fewer ‘marginally relevant’ features than in the ReLU network. This would explain the steep decline in accuracy found in the Tanh network and the more gradual decline found in the ReLU network. The autoencoder has hidden features which are in general more explainable than either of the two previous networks. Compared to the ReLU network, the error of the extracted rules in the second layer is lower at every level of complexity. Compared to the Tanh network, the autoencoder has more accurate rules at medium levels of complexity ($6.1\%$ error at $0.144$ complexity vs. $6.6\%$ error at $0.18$ complexity).
However, as complexity is reduced the extracted rules in the Tanh network remain accurate for longer ($9.6\%$ error at $0.053$ complexity vs. $8.4\%$ at $0.048$ complexity). Interestingly, in the autoencoder the second layer is slightly less explainable than the first. The third layer is more explainable than it is in the other two networks, with significant increases in error only being seen with rules of average complexity less than $0.08$. In the softmax layer trained on top of the autoencoder we see that one cannot extract accurate rules of any complexity. This points to something fundamentally different from the previous two networks in the way that softmax uses the representations from the final layer to predict the output. This is the subject of further investigation. Our results indicate that, at least when it comes to extracting *M-of-N* rules with an assumption of weight-ordering, there are hard limitations to representing hidden units that cannot be overcome with any level of complexity. These limitations seem to be the result of the internal representations determined by the training procedure. Whether these limitations can be overcome by refining rule extraction methods or whether they are a fundamental part of the network is to be determined. However, we also find that the final layer of a CNN may be a promising target for rule extraction. We verify this by training $2$ more 4-layer CNNs on the Olivetti faces and fashion MNIST datasets. The network trained on the Olivetti faces dataset consists of two convolutional layers with $20$ and $10$ filters respectively, each with a $3 \times 3$ window and followed by $2 \times 2$ max pooling. This is followed by a $256$-unit fully connected hidden layer with a linear activation and then the softmax layer. The fashion MNIST network is larger. It has two convolutional layers with $32$ and $64$ filters with a $5 \times 5$ window followed by $2 \times 2$ max pooling.
This is followed by a $1024$-unit fully connected layer and then the softmax. Olivetti faces is evaluated using the entire dataset and fashion MNIST is evaluated with $1000$ samples. In Table 2 we can see that the Olivetti Faces dataset had the most accurate and interpretable rules of all; this is probably at least partially due to the smaller size of the dataset. In all cases one can see a large drop in the complexity with only a penalty of $\beta=0.1$, resulting in a less than $1\%$ decrease in accuracy. This suggests that in the softmax layer, relatively few of the input neurons are being used to determine the output. This shows that rule extraction, and in particular *M-of-N* rule extraction, can be an effective component in a multi-pronged approach to explainability. By extracting *M-of-N* rules from the final layer and using importance methods to explain the relevant hidden units, one should be able to reason about a network’s structure in ways that cannot be achieved with a strictly model-agnostic approach. Such a hybrid approach is expected to create explanations which can be accurate and yet less complex.

  **Dataset**      **Comp. ($\beta=0$)**   **Acc. ($\beta=0$)**   **Comp. ($\beta=0.1$)**   **Acc. ($\beta=0.1$)**
  ---------------- ----------------------- ---------------------- ------------------------- ------------------------
  Olivetti Faces   0.03                    100%                   0.024                     99.9%
  MNIST            0.7                     99.6%                  0.06                      98.7%
  Fashion MNIST    0.28                    99.3%                  0.06                      98.8%

  : Comparison of the complexity (Comp) and accuracy (Acc) of rules extracted from the final layer of three CNNs trained on different datasets, repeated for complexity penalties of $\beta=0$ and $\beta=0.1$.

Conclusion and Future Work
==========================

The black box problem has been an issue for neural networks since their creation. As neural networks become more integrated into society, explainability has attracted considerably more attention.
The success of knowledge extraction in this endeavor has overall been mixed, with most large networks today remaining difficult to interpret and explain. Traditionally, rule extraction has been a commonly used paradigm and it has been applied to various tasks. Critics, however, point out that the distributed nature of neural networks makes the specific method of decompositional rule extraction unfeasible, as individual latent features are unlikely to represent anything of significance. We test this claim by applying a novel search method of *M-of-N* rule extraction to generate explanations of varying complexity for hidden neurons in a deep network. We find that the complexity of neural representations does provide a barrier to comprehensible rule extraction from deep networks. However, we also find that within the softmax layer rule extraction can be both highly accurate and simple to understand. This shows that rule extraction, including *M-of-N* rule extraction, can be a useful tool to help explain parts of a deep network. As future work, softmax layer rule extraction will be combined with local explainability techniques. Additionally, our preliminary experiments suggest that replacing the output layer of a network with *M-of-N* rules may be more robust to certain adversarial attacks. Out of $1000$ adversarial examples generated using FGSM [@FGSM] for the CNN trained on MNIST, $376$ were classified correctly by the *M-of-N* rules with maximum complexity, by contrast with none classified correctly by the CNN. This is to be investigated next in comparison with various other defense methods.
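For reference, the FGSM attack [@FGSM] mentioned above perturbs an input along the sign of the loss gradient, $x' = x + \epsilon\,\mathrm{sign}(\nabla_x L)$. A minimal sketch against a toy binary logistic model follows; the model and the choice of $\epsilon$ are illustrative stand-ins for the CNN in the text.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step against the binary logistic loss L = -log p(y|x),
    with p = sigmoid(w.x + b), for which grad_x L = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)
```

With a sufficiently large $\epsilon$, a correctly classified point is pushed across the decision boundary.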
--- author: - 'Richard J. Szabo' title: 'D-BRANES AND BIVARIANT K-THEORY' --- We review various aspects of the topological classification of D-brane charges in K-theory, focusing on techniques from geometric K-homology and Kasparov’s KK-theory. The latter formulation enables an elaborate description of D-brane charge on large classes of noncommutative spaces, and a refined characterization of open string T-duality in terms of correspondences and KK-equivalence. The examples of D-branes on noncommutative Riemann surfaces and in constant $H$-flux backgrounds are treated in detail. Mathematical constructions include noncommutative generalizations of Poincaré duality and K-orientation, characteristic classes, and the Riemann-Roch theorem. Based on invited lectures given at the workshop “Noncommutative Geometry and Physics 2008 – K-Theory and D-Brane –”, February 18–22 2008, Shonan Village Center, Kanagawa, Japan. To be published in the volume [*Noncommutative Geometry and Physics III*]{} by World Scientific. HWM–08–10 , EMPG–08–16 ; September 2008 Introduction {#sec:1} ============ The subject of this paper concerns the intriguing relationship between D-branes and K-theory. As is by now well-known, D-brane charges in string theory are classified by the K-theory of the spacetime $X$ [@MM1]–[@RS1], or equivalently (in the absence of $H$-flux) by the $\operatorname{K}$-theory of the $C^*$-algebra $C_0(X)$ of continuous functions on $X$ vanishing at infinity. D-branes are sources for Ramond-Ramond fields, which are differential forms on spacetime and are correspondingly classified by a smooth refinement of K-theory called the differential K-theory of $X$ [@MW1]–[@SV1]. 
This topological classification has been used to explain a variety of effects in string theory that ordinary homology or cohomology alone cannot explain, such as the existence of stable non-BPS branes with torsion charges, the self-duality and quantization of Ramond-Ramond fields, and the appearance of certain subtle worldsheet anomalies and Ramond-Ramond field phase factors in the string theory path integral. It has also been used to predict many novel phenomena such as the instability of D-branes wrapping non-contractible cycles, and obstructions to the simultaneous measurement of electric and magnetic Ramond-Ramond fluxes. The classification of D-branes can be posed as the following problem. Given a closed string background $X$ (a Riemannian spin manifold with possibly other form fields), find all possible states of D-branes in $X$. At the worldsheet level, these states are described as consistent boundary conditions in an underlying boundary superconformal field theory. However, many of these states have no geometrical description. It has therefore proven useful in a variety of contexts to regard D-branes as objects in a suitable category. The classic example of this is in conjunction with topological string theory and Kontsevich’s homological mirror symmetry conjecture, in which B-model D-branes live in a bounded derived category of coherent sheaves, while A-model D-branes are objects in a certain Fukaya category [@Douglas1]. A more recent example has been used to clarify the relationship between boundary conformal field theory and K-theory, and consists in regarding open string boundary conditions in the category of a two-dimensional open/closed topological field theory [@MS1]. In the following we will argue that when one combines the worldsheet description with the target space classification in terms of Fredholm modules, one is led to regard D-branes as objects in a certain category of separable $C^*$-algebras [@BMRS1].
This is the category underlying Kasparov’s *bivariant K-theory* (or KK-theory), and it is related to the open string algebras which arise in string field theory [@BMRS2; @BMRS3]. The advantages of using the bivariant extension of K-theory are abundant and will be described thoroughly in what follows. It unifies the K-theory and K-homology descriptions of D-branes. It possesses an intersection product which provides the correct framework for formulating notions of duality between generic separable $C^*$-algebras, such as Poincaré duality. This can be used to explain the equivalence of the K-theory and K-homology descriptions of D-brane charge. It also leads to a new characterization of open string T-duality as a certain categorical KK-equivalence, which refines and generalizes the more commonly used characterizations in terms of Morita equivalence [@Schwarz1]–[@SW1]. The formalism is also well equipped to deal with examples of “non-geometric” backgrounds which have appeared recently in the context of flux compactifications [@STW1]. In certain instances, the noncommutative spacetimes can be viewed [@GSN1] as globally defined open string versions of Hull’s *T-folds* [@Hull1], which are backgrounds that fail to be globally defined Riemannian manifolds but admit a local description in which open patches are glued together using closed string T-duality transformations. KK-theory also provides us with a noncommutative version of K-orientation, which generalizes the Freed-Witten anomaly cancellation condition [@FW1] and enables us to select the consistent sets of D-branes from our category. Finally, bivariant K-theory yields a noncommutative version of the D-brane charge vector [@MM1]. In formulating the notions of D-brane charge and Ramond-Ramond fields on arbitrary $C^*$-algebras, one is faced with the problem of developing Poincaré duality and constructing characteristic classes in these general settings. 
From the mathematical perspective of noncommutative geometry alone, the formalism thus enables us to develop more tools for dealing with noncommutative spaces in the purely algebraic framework of separable $C^*$-algebras. These include noncommutative versions of Poincaré duality and orientation, topological invariants of noncommutative spaces such as the Todd genus, and a noncommutative version of the Grothendieck-Riemann-Roch theorem which is intimately tied to the formulation of D-brane charge. D-branes and K-homology {#sec:2} ======================= We will begin by explaining the topological classification of D-branes using techniques of geometric K-homology [@BD1; @Jakob1], following refs. [@RS1; @RSV1]. In this setting, brane charges are expressed in terms of the Chern character in $\operatorname{K}$-homology formulated topologically by the Baum-Douglas construction. Using the Fredholm module description available in analytic K-homology, this will lead to a description of brane charges later on more complicated spaces, in particular on noncommutative spacetime manifolds. Earlier work in this context can be found in refs. [@AST1; @Szabo1]. D-branes and K-cycles {#sec:2.1} --------------------- Throughout this paper we will work in the context of Type II superstring theory. Let $X$ be a compact spin$^c$-manifold, with no background $H$-flux (we will explain in detail later on what we mean precisely by this condition). A D-brane in $X$ may then be defined to be a Baum-Douglas K-cycle $(W,E,f)$ [@BD1], where $f:W\hookrightarrow X$ is a closed spin$^c$ submanifold called the worldvolume of the brane, and $E\rightarrow W$ is a complex vector bundle with connection called the Chan-Paton gauge bundle. The crucial feature about the Baum-Douglas construction is that $E$ defines a *stable* element of the K-theory group $\operatorname{K}^0(W)$. The set of all K-cycles forms an additive category under disjoint union. 
The quotient of the set of all K-cycles by Baum-Douglas “gauge equivalence” is isomorphic to the K-homology of $X$, defined as the collection of stable homotopy classes of Fredholm modules over the commutative $C^*$-algebra $\alg=C(X)$ of continuous functions on $X$. The isomorphism sends a K-cycle $(W,E,f)$ to the unbounded Fredholm module $(\hil,\rho,\Dirac_E^{(W)})$, where $\hil=L^2(W,S\otimes E)$ is the separable Hilbert space of square integrable $E$-valued spinors on $W$, $\rho(\phi)=m_{\phi\circ f}$ is the $*$-representation of $\phi\in\alg$ on $\hil$ by pointwise multiplication with the function $\phi\circ f$, and $\Dirac_E^{(W)}$ is the $E$-twisted Dirac operator associated to the spin$^c$ structure on $W$. The K-homology class $[W,E,f]$ of a D-brane depends only on the K-theory class $[E]\in\operatorname{K}^0(W)$ of its Chan-Paton bundle [@RS1]. Actually, to make this map surjective one has to work with more general K-cycles wherein $W$ is not necessarily a submanifold of spacetime. We will return to this point later on. It follows that D-branes naturally provide K-homology classes on $X$, dual to K-theory classes $f_!(E)\in\operatorname{K}^d(X)$, where $f_!$ is the K-theoretic Gysin map and $d=\dim(X)-\dim(W)$ is the codimension of the brane worldvolume in spacetime. The natural ${{\mathbb Z}}_2$-grading on K-homology $\operatorname{K}_\bullet(X)$ is by parity of dimension $\dim(W)=p+1$, and the K-cycle $(W,E,f)$ then corresponds to a D$p$-brane. Following ref. [@RS1], we will now describe the Baum-Douglas gauge equivalence relations explicitly, together with their natural physical interpretations. 
#### Bordism Two K-cycles $(W_1,E_1,f_1)$ and $(W_2,E_2,f_2)$ are said to be bordant if there exists a K-cycle with boundary $(M,E,f)$ such that $$\big(\partial M\,,\,E|_{\partial M}\,,\,f|_{\partial M}\big)~\cong~ \big(W_1\amalg(-W_2)\,,\,E_1\amalg E_2\,,\,f_1\amalg f_2\big) \ ,$$ where $-W_2$ denotes the manifold $W_2$ with the opposite spin$^c$ structure on its tangent bundle $TW_2$. If $X$ is locally compact, this relation generates a boundary condition which guarantees that D-branes have finite energy. In particular, it ensures that any K-cycle $(W,E,f)$ is equivalent to the closed string vacuum $(\emptyset,\emptyset,\emptyset)$ (with no D-branes) at “infinity” in $X$. #### Direct sum If $E_i$, $i=1,2$ are complex vector bundles over $W$, then we identify the K-cycles $$(W,E_1\oplus E_2,f)\sim(W,E_1,f)\amalg(W,E_2,f) \ .$$ This relation reflects gauge symmetry enhancement for coincident branes. The bundle $E=\bigoplus_i\,E_i$ is the Chan-Paton bundle associated to a bound state of D-branes with Chan-Paton bundles $E_i\rightarrow W$, bound by open string excitations given by classes of bundle morphisms $[\phi_{ij}]\in\operatorname{Hom}(E_i,E_j)$. Other open string degrees of freedom correspond to classes in $\operatorname{Ext}^p(E_i,E_j)$, $p\geq1$. #### Vector bundle modification Let $(W,E,f)$ be a K-cycle and let $F\rightarrow W$ be a real spin$^c$ vector bundle of rank $2n$, with associated bundles of Clifford modules $S_0(F),S_1(F)\rightarrow W$ and their pullbacks $S_\pm(F)\rightarrow F$ of rank $2^{n-1}$. Clifford multiplication induces a bundle map $\sigma:S_+(F)\rightarrow S_-(F)$ which is an isomorphism outside of the zero section. 
If ${{1\!\!1}}^{\mathbb{R}}$ denotes the trivial real line bundle over $W$, then upon choosing a Hermitian metric on the fibres of $F$ we can define the unit sphere bundle $$\widehat{W}~:=~{\mathbb{S}}\big(F\oplus{{1\!\!1}}^{\mathbb{R}}\,\big)\={\mathbb{B}}_+(F)\cup_{{\mathbb{S}}(F)}{\mathbb{B}}_-(F) \label{unitspherebun}$$ with bundle projection $$\pi\,:\,\widehat{W}~\longrightarrow~ W \ ,$$ where ${\mathbb{B}}_\pm(F)$ are two copies of the unit ball bundle ${\mathbb{B}}(F)$ of $F$ whose boundary is the unit sphere bundle ${\mathbb{S}}(F)$. We can glue ${{\mathcal S}}_\pm(F)=S_\pm(F)\big|_{{\mathbb{B}}_\pm}$ together by Clifford multiplication to define the bundle $$H(F)\={{\mathcal S}}_+(F)\cup_\sigma{{\mathcal S}}_-(F) \ .$$ The restriction $H(F)\big|_{\pi^{-1}(w)}$ is the Bott generator of the $2n$-dimensional sphere $\pi^{-1}(w)={\mathbb{S}}^{2n}$ for each $w\in W$. We impose the equivalence relation $$(W,E,f)~\sim~\big(\,\widehat{W}\,,\,H(F)\otimes\pi^*(E)\,,\,f\circ \pi\big) \ ,$$ where the right-hand side is called the vector bundle modification of $(W,E,f)$ by $F$. This relation can be understood as the K-homology version of the well-known dielectric effect in string theory [@Myers1]. To understand this point, consider the simple K-cycle $(W,E,f)=(\pt,{\mathbb{C}},\iota)$, where $\iota$ is the inclusion of the point $\pt$ in $X$. Let $F={\mathbb{R}}^{2n}$, $n\geq1$. Then, with the definitions above, one has $\widehat{W}\cong{\mathbb{S}}^{2n}$ with $\pi:{\mathbb{S}}^{2n}\rightarrow\pt$ the collapsing map $\varepsilon$. Moreover, $H(F)=H(F)\big|_{{\mathbb{S}}^{2n}}$ is the Bott generator of the K-theory group $\operatorname{K}^0({\mathbb{S}}^{2n})$.
By vector bundle modification, one has an equality of classes of K-cycles given by $$[\pt,{\mathbb{C}},\iota]\=\big[{\mathbb{S}}^{2n}\,,\,H(F)\otimes{\mathbb{C}}\,,\,\iota\circ \pi\big] \=\big[{\mathbb{S}}^{2n}\,,\,H(F)\,,\,\varepsilon\big] \ .$$ This equality represents the polarization or “blowing up” of a D0-brane (on the left) into a collection of spherical D$(2n)$-branes (on the right), together with “monopole” gauge fields corresponding to connections on the vector bundles $H(F)\to{\mathbb{S}}^{2n}$. It is essentially the statement of Bott periodicity. Tachyon condensation and the Sen-Witten construction {#sec:2.2} ---------------------------------------------------- The Sen-Witten construction [@Witten1; @Sen1] is the classic model establishing that D-brane charge is classified by K-theory. It relies on the physics of tachyon condensation and the realization of stable D-branes as decay products in unstable systems of spacetime filling branes and antibranes. In K-homology, this construction utilizes the fact that not all K-cycles are associated with submanifolds of spacetime, and correspond generically to non-representable D-branes arising as conformal boundary conditions with no direct geometric realization. For definiteness, let $X$ be a locally compact spin manifold of dimension $\dim(X)=10$. Let $W\subset X$ be a spin$^c$ submanifold of dimension $p+1$. Then the normal bundle $\nu_W\rightarrow W$ to $W$ in $X$ is a real spin$^c$ vector bundle of rank $9-p$. A D-brane $[M,E,\phi]\in\operatorname{K}_\bullet(X)$ is said to *wrap* $W$ if $\dim(M)=p+1$ and $\phi(M)\subset W$. 
The group of charges of Type IIB D$p$-branes ($p$ odd) wrapping $W$ may then be computed as the compactly supported K-theory group $$\begin{aligned} \operatorname{K}^0(\nu_W)&:=&\operatorname{K}^0\big({\mathbb{B}}(\nu_W)\,,\,{\mathbb{S}}(\nu_W)\big) \nonumber \\[4pt]&\cong&\operatorname{K}_{10}(\nu_W) \nonumber \\[4pt] &\cong&\operatorname{K}_{p+1}(W) \ , \label{IIBiso}\end{aligned}$$ where the first isomorphism follows from Poincaré duality and the second from the K-homology Thom isomorphism. Upon identifying the total space of $\nu_W$ with a tubular neighbourhood of $W$ in $X$ with respect to a chosen Riemannian metric on $X$, the group $\operatorname{K}_{10}(\nu_W)$ classifies 9-branes in $X$. The isomorphism (\[IIBiso\]) then asserts that this group coincides with the group of D$p$-branes $[M,E,\phi]$ wrapping $W$. The same calculation carries through for Type IIA D$p$-branes with $p$ even, starting from the pertinent K-theory group $\operatorname{K}^{-1}(\nu_W)$. The original example [@Witten1; @Sen1] concerns the charge group of D$p$-branes in Type IIB string theory on flat space $X={\mathbb{R}}^{10}$ given by $$\operatorname{K}_{p+1}({\mathbb{R}}^{p+1})~\cong~\operatorname{K}_0({\mathbb{R}}^{10})~:=~ \widetilde{\operatorname{K}}_0({\mathbb{S}}^{10}) \ ,$$ where we have used Bott periodicity. To make this relationship more explicit, we can adapt the *Atiyah-Bott-Shapiro (ABS) construction* [@ABS1] to the setting of geometric K-homology. Given a K-cycle $(W,E,f)$ in $X$, the vector bundle modification relation for $F=\nu_W$ reads $$\big[\,\widehat{W}\,,\,H(\nu_W)\otimes\pi^*(E)\,,\,f\circ\pi\big]\= [W,E,f]$$ with $\widehat{W}$ diffeomorphic to $X$. 
Generally, the nowhere vanishing section given by $s:W\rightarrow F\oplus{{1\!\!1}}^{\mathbb{R}}$, $x\mapsto0_x\oplus1$ induces a Gysin homomorphism on K-theory $s_!:\operatorname{K}^\bullet(W)\rightarrow\operatorname{K}^\bullet\big(\,\widehat{W}\,\big)$ with $$s_!(E)\=\big[\pi^*(E)\otimes H(F)\big]$$ by the K-theory Thom isomorphism. Let $W'\cong{\mathbb{B}}(\nu_W)\setminus{\mathbb{S}}(\nu_W)$ be a tubular neighbourhood of $f(W)$ with closure $\overline{W}$, retraction $\rho:W'\rightarrow W$, and twisted spinor bundles ${{\mathcal S}}_E^\pm:={{\mathcal S}}^\pm(\nu_W)\otimes\rho^*(E)\rightarrow\overline{W}$. After a possible K-theoretic stabilization, we can extend the spinor bundles over the complement $X\setminus W'$ to bundles ${{\mathcal S}}_E^\pm\rightarrow X$ with K-theory class [@Witten1; @OS1; @ABS1] $$\big[{{\mathcal S}}_E^+\big]-\big[{{\mathcal S}}_E^-\big]\=s_!(E) \ ,$$ which vanishes over $X\setminus W'$ by Clifford multiplication. Putting everything together finally gives $$\big[X\,,\,{{\mathcal S}}_E^+\,,\,\Id_X\big]-\big[X\,,\,{{\mathcal S}}_E^-\,,\,\Id_X\big] \=\pm\,[W,E,f] \ ,$$ where the sign depends on whether or not the spin$^c$ structures on $\widehat{W}$ and $X$ coincide. This equation is simply the statement of tachyon condensation on the unstable spacetime-filling brane-antibrane system (on the left) to a stable D-brane wrapping $W$ (on the right). Holonomy on D-branes {#sec:2.3} -------------------- In order to cancel certain worldvolume anomalies, it is necessary to introduce Ramond-Ramond flux couplings in the path integral for Type II string theory [@DMW1]. The formalism of geometric K-homology nicely achieves this via a spin$^c$ cobordism invariant as follows. 
Introduce “background” D-branes $(W,E,f)$ as *K-chains* $\big(\,\widetilde{W}\,,\,\widetilde{E}\,,\,\widetilde{f}\,\big)$ with boundary $$\partial\big(\,\widetilde{W}\,,\,\widetilde{E}\,,\,\widetilde{f}\, \big)~:=~\big(\partial\widetilde{W}\,,\,\widetilde{E}\, \big|_{\partial\widetilde{W}}\,,\, \widetilde{f}\,\big|_{\partial\widetilde{W}}\big)\=(W,E,f) \ .$$ By bordism, such branes have trivial K-homology class and so carry no charge. The reduced eta-invariant of a K-chain is defined by $$\Xi\big(\,\widetilde{W}\,,\,\widetilde{E}\,,\,\widetilde{f}\,\big)\= \mbox{$\frac12$}\,\big(\dim\hil_E^{(W)}+\eta(\Dirac_E^{(W)})\big)~\in~ {\mathbb{R}}/{{\mathbb Z}}\ ,$$ where $\hil_E^{(W)}$ is the space of harmonic $E$-valued spinors on $W$, and $\eta\big(\Dirac_E^{(W)}\big)$ is the (regulated) spectral asymmetry of the Dirac operator $\Dirac_E^{(W)}$. This invariant is defined up to compact perturbation of the Dirac operator and hence is ${\mathbb{R}}/{{\mathbb Z}}$-valued. The map $\Xi$ from K-chains to the group ${\mathbb{R}}/{{\mathbb Z}}$ respects disjoint union, direct sum and vector bundle modification, but *not* spin$^c$ cobordism. To rectify this problem, we introduce the holonomy over the given D-brane background with flat Ramond-Ramond flux $\xi=[E_0]-[E_1]\in\operatorname{K}^{-1}(X,{\mathbb{R}}/{{\mathbb Z}})\cong\operatorname{Hom}\big(\operatorname{K}_{\rm odd}(X)\,,\,{\mathbb{R}}/{{\mathbb Z}}\big)$ by $$\Omega\big(\,\widetilde{W}\,,\,\widetilde{\xi}\,,\,\widetilde{f}\, \big)\=\exp\Big[2\pi\ii\Big(\Xi\big(\,\widetilde{W}\,,\, \widetilde{f^*E_0}\,,\,\widetilde{f}\,\big)-\Xi \big(\,\widetilde{W}\,,\, \widetilde{f^*E_1}\,,\,\widetilde{f}\,\big)\Big)\Big] \ .$$ This quantity is the desired spin$^c$ cobordism invariant. Brane stability {#sec:2.4} --------------- We will now illustrate some of the predictive power of the K-homology classification through two novel sets of examples of brane stability which contradict what ordinary homology theory alone would predict. 
The first set consists of trivial K-homology classes $[W,{{1\!\!1}}^{\mathbb{C}},f]=0$ in $\operatorname{K}_\bullet(X)$, even though the worldvolume homology cycle $[W]\neq0$ in $\operatorname{H}_\bullet(X,{{\mathbb Z}})$. The obstructions to extending the homology class $[W]$ to a K-homology class are measured by the Atiyah-Hirzebruch-Whitehead spectral sequence $$\operatorname{E}_{p,q}^2\=\operatorname{H}_p\big(X\,,\,\operatorname{K}_q(\pt)\big)\quad \Longrightarrow \quad\operatorname{K}_{p+q}(X) \ .$$ With respect to a cellular decomposition of the spacetime manifold $X$, for each $p$ the corresponding filtration groups classify D-branes wrapping $W$ on the $p$-skeleton of $X$ with no lower brane charges. The $(r+1)$-th term in the spectral sequence is computed as the homology of the $r$-th term with respect to certain differentials $\dd^r$. Cycles for which $[W]\notin\ker(\dd^r)$ for some $r$ correspond to Freed-Witten anomalous D-branes [@MMS1]. On the other hand, if $[W]\in{\rm im}(\dd^r)$ for some $r$, then the K-homology “lift” of the cycle $[W]$ vanishes and the D-brane is unstable. Cycles contained in the image of $\dd^r$ correspond to D-brane instantons [@MMS1] whose charge is not conserved in time along the trajectories of the worldvolume renormalization group flow. The extension problem for the spectral sequence at each term identifies the lower brane charges carried by stable D-branes. The second set is opposite in character to the first in that now $[W,E,f]\neq0$ in $\operatorname{K}_\bullet(X)$ even though $[W]=0$ in $\operatorname{H}_\bullet(X,{{\mathbb Z}})$. This occurs by the process of flux stabilization [@MMS1]–[@BRS1] on spacetimes which are the total spaces of topologically non-trivial fibre bundles $X\xrightarrow{F}B$. Worldvolume “flux” in this instance corresponds to the characteristic class of the fibration, which provides a conserved charge preventing the D-brane from decaying to the vacuum.
Although $W$ is contractible in $X$, its class may be non-trivial as an element of $\operatorname{H}_\bullet(B,{{\mathbb Z}})$. The obstructions to lifting homology cycles from the base space $B$ are measured by the Leray-Serre spectral sequence $$\operatorname{E}_{p,q}^2\=\operatorname{H}_p\big(B\,,\,\operatorname{K}_q(F)\big) \quad \Longrightarrow \quad \operatorname{K}_{p+q}(X,F) \ .$$ Let us examine the original example of this phenomenon, that of D-branes in the group manifold of $SU(2)$ [@BDS1], in this language. Spacetime in this case is the total space of the Hopf fibration ${\mathbb{S}}^3\xrightarrow{{\mathbb{S}}^1}{\mathbb{S}}^2$, and the spectral sequence computes the K-homology as $\operatorname{K}_i(X,{\mathbb{S}}^1)\cong \operatorname{H}_2({\mathbb{S}}^2,{{\mathbb Z}})={{\mathbb Z}}$. The stable branes are spherical D2-branes, and the stabilizing flux is provided by the first Chern class of the monopole line bundle over ${\mathbb{S}}^2$. This example readily generalizes to the other Hopf fibrations [@RS1], and the K-homology framework nicely extends the examples of refs. [@MMS1; @BRS1] to spaces with less symmetry. D-brane charges {#sec:2.5} --------------- We will now describe the cohomological formula for the charge of a D-brane [@MM1]. The mathematical structure of this formula can be motivated by the following simple observation, which we will generalize later on to certain classes of noncommutative spacetimes. The natural bilinear pairing in cohomology is given by $$(x,y)_{\operatorname{H}}\=\big\langle x\smile y\,,\,[X]\big\rangle \label{cohpairing}$$ for cohomology classes $x,y\in\operatorname{H}^\bullet(X,{{\mathbb Z}})$ in complementary degrees. Upon choosing de Rham representatives $\alpha,\beta$ for $x,y$, this formula corresponds to integration of the product of differential forms $\int_X\,\alpha\wedge\beta$. Nondegeneracy of this pairing is the statement of Poincaré duality in cohomology.
On the other hand, the natural bilinear pairing in K-theory is provided on complex vector bundles $E,F\to X$ by the index of the twisted Dirac operator $$(E,F)_{\operatorname{K}}\=\operatorname{index}\big(\Dirac_{E\otimes F}\big) \ . \label{indexpairing}$$ The ${{\mathbb Z}}_2$-graded Chern character ring isomorphism $$\operatorname{ch}\,:\,\operatorname{K}^\bullet(X)\otimes{{\mathbb Q}}~ \xrightarrow{~\approx~}~\operatorname{H}^\bullet(X,{{\mathbb Q}}) \label{chiso}$$ is not compatible with these two pairings. However, by the Atiyah-Singer index theorem $$\operatorname{index}\big(\Dirac_{E\otimes F}\big) \= \big\langle\Todd(X)\smile\operatorname{ch}(E\otimes F)\,,\,[X]\big\rangle \label{ASindexthm}$$ we get an isometry with the ${{\mathbb Z}}_2$-graded modified Chern character group isomorphism $$\operatorname{ch}~\longrightarrow~\sqrt{\Todd(X)}\,\smile\,\operatorname{ch}\ ,$$ twisted by the square root of the invertible Todd class $\Todd(X)\in\operatorname{H}^{\rm even}(X,{{\mathbb Q}})$ of the tangent bundle $TX$. This almost trivial observation motivates the definition of the *Ramond-Ramond charge* of a D-brane $(W,E,f)$ as [@MM1] $$Q(W,E,f)\=\operatorname{ch}\big(f_!(E)\big)\smile\sqrt{\Todd(X)}~\in~ \operatorname{H}^\bullet(X,{{\mathbb Q}}) \ . \label{RRcharge}$$ In topological string theory, this rational charge vector coincides with the zero mode part of the associated boundary state in the Ramond-Ramond sector. In the D-brane field theory, ${Q}(W,E,f)=f_*\big({D}_{\rm WZ}(W,E,f)\big)$ is the cohomological Gysin image of the *Wess-Zumino class* (for vanishing $B$-field) $$D_{\rm WZ}(W,E,f)\=\operatorname{ch}(E)\smile \sqrt{\frac{\Todd(TW)}{\Todd(\nu_W)}}~\in~ \operatorname{H}^\bullet(W,{{\mathbb Q}}) \ . \label{WZclass}$$ This formula interprets the Ramond-Ramond charge as the anomaly inflow on the D-brane worldvolume $W$. The equivalence of these two formulas follows from the Grothendieck-Riemann-Roch formula $$\operatorname{ch}\big(f_!(E)\big)\smile\Todd(X) \= f_*\big(\operatorname{ch}(E)\smile\Todd(W)\big) \label{GRRthm}$$ together with naturality of the Todd characteristic class. Compatibility with the equivalence relations of geometric K-homology follows easily by direct calculation. In particular, invariance under vector bundle modification is a simple computation showing that the charge of the polarized D-brane $\big(\,\widehat{W}\,,\,s_!(E)\,,\,f\circ\pi\big)$ equals $Q(W,E,f)$.
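To make the index pairing (\[ASindexthm\]) concrete, here is a small symbolic evaluation of our own (an illustration, not taken from the text) for $X={\mathbb{S}}^2$ with $E$ the $n$-th power $L^{\otimes n}$ of the monopole line bundle: since $\operatorname{H}^\bullet({\mathbb{S}}^2,{{\mathbb Q}})={{\mathbb Q}}[x]/(x^2)$ with $\langle x,[{\mathbb{S}}^2]\rangle=1$, one has $\Todd({\mathbb{S}}^2)=1+x$ and $\operatorname{ch}(L^{\otimes n})=1+n\,x$, so the index pairing evaluates to $n+1$.

```python
# Illustrative sketch (our own example): evaluate the Atiyah-Singer index
# pairing on X = S^2 with E = L^n the n-th power of the monopole line bundle.
# H^*(S^2, Q) = Q[x]/(x^2); pairing with [S^2] picks out the coefficient of x.
import sympy as sp

x, n = sp.symbols('x n')

def pair_with_fundamental_class(cls):
    """<cls, [S^2]>: the coefficient of x (classes above x vanish on S^2)."""
    return sp.expand(cls).coeff(x, 1)

todd = 1 + x        # Todd(S^2) = 1 + c_1(TS^2)/2, with c_1(TS^2) = 2x
ch = 1 + n * x      # ch(L^n) = exp(n x), truncated since x^2 = 0
index = pair_with_fundamental_class(todd * ch)
assert sp.simplify(index - (n + 1)) == 0   # index(D_{L^n}) = n + 1
```

The same bookkeeping, with the coefficient extraction playing the role of integration over $X$, is how all of the characteristic class formulas of this section are evaluated in practice.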
D-branes and KK-theory {#sec:3} ====================== By merging the worldsheet and target space descriptions of D-branes, we will now motivate a categorical framework for the classification of D-branes using Kasparov’s KK-theory groups. This will set the stage for a noncommutative description of D-branes in a certain category of separable $C^*$-algebras. We will then explain various important features of the bivariant version of K-theory, and use them for certain physical and mathematical constructions. The material of this section is based on refs. [@BMRS1]–[@BMRS3]. Algebraic characterization of D-branes {#sec:3.1} -------------------------------------- The worldsheet description of a D-brane with worldvolume $W\subset X$ is provided by open strings, which may be defined to be relative maps $(\Sigma,\partial\Sigma)\rightarrow(X,W)$ from an oriented Riemann surface $\Sigma$ with boundary $\partial\Sigma$. In the boundary conformal field theory on $\Sigma={\mathbb{R}}\times[0,1]$, solutions of the Euler-Lagrange equations require the imposition of suitable boundary conditions, which we will label by $a,b,\dots$. These boundary conditions are not arbitrary and compatibility with superconformal invariance severely constrains the possible worldvolumes $W$. For example, in the absence of background $H$-flux, $W$ must be spin$^c$ in order to ensure the cancellation of global worldsheet anomalies [@FW1]. The problem which now arises is that while this is more or less understood at the classical level, there is no generally accepted definition of what is meant by a *quantum* D-brane. Equivalently, it is not known in general how to define consistent boundary conditions after quantization of the boundary conformal field theory. To formulate our conjectural description of this, we will take a look at the generic structure of open string field theory. The basic observation is that the concatenation of open string vertex operators defines algebras and bimodules. 
An $a$-$a$ open string, one with the same boundary condition $a$ at both of its ends, defines a noncommutative algebra $\dalg_a$ of open string fields. The opposite algebra $\dalg_a^\op$, with the same underlying vector space as $\dalg_a$ but with the product reversed, is obtained by reversing the orientation of the open string. On the other hand, an $a$-$b$ open string, with generically distinct boundary conditions $a,b$ at its two ends, defines a $\dalg_a$-$\dalg_b$ bimodule $\ealg_{ab}$, with the rule that open string ends can join only if their boundary labels are the same. The dual bimodule $\ealg_{ab}^\vee=\ealg_{ba}$ is obtained by reversing orientation, and $\ealg_{aa}=\dalg_a$ is defined to be the trivial $\dalg_a$-bimodule on which $\dalg_a$ acts by (left and right) multiplication. We would now like to use these ingredients to define a “category of D-branes” whose objects are the boundary conditions, and whose morphisms $a\to b$ are precisely the bimodules $\ealg_{ab}$. This requires an associative ${\mathbb{C}}$-bilinear composition law $$\ealg_{ab}\times\ealg_{bc}~\longrightarrow~\ealg_{ac} \ .$$ The problem, however, in the way that we have set things up, is that the operator product expansion of the open string fields is not always well-defined. Elements of the open string bimodule $\ealg_{ab}$ are vertex operators $$V_{ab}\,:\,[0,1]~\longrightarrow~\operatorname{End}(\hil_{ab})$$ acting on a separable Hilbert space $\hil_{ab}$. The structure of the vertex operator algebra is encoded in the singular operator product expansion $$V_{ab}(t)\cdot V_{bc}(t'\,)\=\sum_{j=1}^N\,\frac1{(t-t'\,)^{h_j}}~ W_{abc;j}(t,t'\,) \ , \quad t>t' \ ,$$ where $W_{abc;j}:[0,1]\times[0,1]\to\operatorname{End}(\hil_{ac})$ and $h_j\geq0$ are called conformal dimensions. When $h_j>0$, the leading singularities of the operator product expansion do not give an associative algebra in the usual sense. 
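Before turning to the resolution of this problem, it may help to keep in mind a finite-dimensional toy model (ours, purely for orientation; it ignores the operator product subtleties just described): assign to each boundary condition $a$ a hypothetical Chan-Paton multiplicity $N_a$, take $\dalg_a=M_{N_a}({\mathbb{C}})$, and let $\ealg_{ab}$ be the space of $N_a\times N_b$ complex matrices. The composition law $\ealg_{ab}\times\ealg_{bc}\to\ealg_{ac}$ is then just matrix multiplication, which is manifestly ${\mathbb{C}}$-bilinear and associative.

```python
# Toy finite-dimensional model (our own illustration, not from the text) of the
# D-brane category: boundary condition a carries a hypothetical Chan-Paton
# multiplicity N_a, the open string algebra is A_a = M_{N_a}(C), and the a-b
# bimodule E_ab is the space of N_a x N_b matrices; composition of morphisms
# E_ab x E_bc -> E_ac is matrix multiplication.
import numpy as np

N = {'a': 2, 'b': 3, 'c': 1}   # hypothetical multiplicities

def bimodule_element(x, y, seed):
    """A sample element of E_xy = Mat(N_x, N_y)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((N[x], N[y]))

e_ab = bimodule_element('a', 'b', 0)
e_bc = bimodule_element('b', 'c', 1)
e_ca = bimodule_element('c', 'a', 2)

# composition law E_ab x E_bc -> E_ac: boundary labels must match to compose
e_ac = e_ab @ e_bc
assert e_ac.shape == (N['a'], N['c'])

# associativity of the composition law
assert np.allclose((e_ab @ e_bc) @ e_ca, e_ab @ (e_bc @ e_ca))
```

In this model $\ealg_{aa}=\dalg_a$ acts on itself by left and right multiplication, exactly as required above; the difficulty in the full string theory is that the operator product expansion spoils this naive associativity.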
Seiberg-Witten limit {#sec:3.2} -------------------- Seiberg and Witten [@SW1] found a resolution to this problem in the case where spacetime is an $n$-dimensional torus $X={{\mathbb T}}^n$ with a constant $B$-field. They introduced a scaling limit wherein both the $B$-field and the string tension $T$ are scaled to infinity in such a way that their ratio $B/T$ remains finite, while the closed string metric $g$ on ${{\mathbb T}}^n$ is scaled to zero. In this limit the Hilbert space $\hil_a$ of the point particle at an open string endpoint is a module for a noncommutative torus algebra $\dalg_a$, which forms the complete set of observables for boundary conditions of maximal support. The product $\dalg_a\otimes\dalg_b$ acts irreducibly on the $\dalg_a$-$\dalg_b$ bimodule $\ealg_{ab}=\hil_a\otimes\hil_b^\vee$. In this case, the composition law $$V_{ac}(t'\,)\=\lim_{t\to t'}\,V_{ab}(t)\cdot V_{bc}(t'\,)$$ is well-defined since the conformal dimensions scale to zero in the limit as $h_j\sim g/T\to0$. It extends by associativity of the operator product expansion in the limit to a map $$\ealg_{ab}\otimes_{\dalg_b}\ealg_{bc}~\longrightarrow~\ealg_{ac} \ .$$ Furthermore, there are natural identifications of algebras $\dalg_a\cong\ealg_{ab}\otimes_{\dalg_b}\ealg_{ba}$ and $\dalg_b\cong\ealg_{ba}\otimes_{\dalg_a}\ealg_{ab}$. These results all mean that $\ealg_{ab}$ is a Morita equivalence bimodule, reflecting a *T-duality* between the noncommutative tori $\dalg_a$ and $\dalg_b$. KK-theory {#sec:3.3} --------- The construction of Section \[sec:3.2\] above motivates a conjectural framework in which to move both away from the dynamical regime dictated by the Seiberg-Witten limit and into the quantum realm. We will suppose that the appropriate modification consists in replacing $\ealg_{ab}$ by Kasparov bimodules $(\ealg_{ab},F_{ab})$, which generalize Fredholm modules. They coincide with the “trivial” bimodule $(\ealg_{ab},0)$ when $\ealg_{ab}$ is a Morita equivalence bimodule. 
We will not enter into a precise definition of these bimodules, which is somewhat technically involved (see ref. [@BMRS1], for example). As we move our way deeper into our treatment we will become better acquainted with the structures inherent in Kasparov’s theory. Stable homotopy classes of Kasparov bimodules define the ${{\mathbb Z}}_2$-graded KK-theory group $\operatorname{KK}_\bullet(\dalg_a,\dalg_b)$. Classes in this group can be thought of as “generalized” morphisms $\dalg_a\rightarrow\dalg_b$, in a way that we will make more precise as we go along. In particular, if $\phi:\alg\rightarrow\balg$ is a homomorphism of separable $C^*$-algebras, then it determines a canonical class $[\phi]\in\operatorname{KK}_\bullet(\alg,\balg)$ represented by the “Morita-type” bimodule $(\balg,\phi,0)$. The group $\operatorname{KK}_\bullet(\alg,{\mathbb{C}})=\operatorname{K}^\bullet(\alg)$ is the K-homology of the algebra $\alg$, since in this case Kasparov bimodules are the same things as Fredholm modules over $\alg$. On the other hand, the group $\operatorname{KK}_\bullet({\mathbb{C}},\balg)=\operatorname{K}_\bullet(\balg)$ is the K-theory of $\balg$. One of the most powerful aspects of Kasparov’s theory is the existence of a bilinear, associative *composition* or *intersection product* $$\otimes_{\balg}\,:\,\operatorname{KK}_i(\alg,\balg)\times\operatorname{KK}_j(\balg,\calg)~ \longrightarrow~\operatorname{KK}_{i+j}(\alg,\calg) \ .$$ We will not attempt a general definition of the intersection product, which is notoriously difficult to define. Later on we will see how it is defined on specific classes of $C^*$-algebras. 
The product is compatible with the composition of morphisms, in that if $\phi:\alg\rightarrow\balg$ and $\psi:\balg\rightarrow\calg$ are homomorphisms of separable $C^*$-algebras then $$[\phi]\otimes_\balg[\psi]\=[\psi\circ\phi] \ .$$ The intersection product makes $\operatorname{KK}_0(\alg,\alg)$ into a unital ring with unit $1_\alg=[\Id_\alg]$, the class of the identity morphism on $\alg$. It can be used to define Kasparov’s bilinear, associative *exterior product* $$\begin{aligned} \otimes \,:\, \operatorname{KK}_i(\alg_1, \balg_1) \times \operatorname{KK}_j(\alg_2, \balg_2) & \longrightarrow & \operatorname{KK}_{i+j}(\alg_1\otimes \alg_2, \balg_1\otimes \balg_2) \ , \\[4pt] x_1\otimes x_2 &=& (x_1 \otimes 1_{\alg_2} )\otimes_{\balg_1\otimes \alg_2} (1_{\balg_1}\otimes x_2) \ .\end{aligned}$$ This definition also uses *dilation*. If $x=[\phi]\in\operatorname{KK}_j(\alg,\balg)$ is the class of the morphism $\phi:\alg\rightarrow\balg$, then $x\otimes1_\calg=[\phi\otimes\Id_\calg]\in \operatorname{KK}_j(\alg\otimes\calg,\balg\otimes\calg)$ is the class of the morphism $\phi\otimes\Id_\calg:\alg\otimes\calg\rightarrow \balg\otimes\calg$. The KK-theory groups have some nice properties, described in refs. [@Higson1]–[@Meyer1], which enable us to define our *D-brane categories*. There is an additive category whose objects are separable $C^*$-algebras and whose morphisms $\alg\to\balg$ are precisely the classes in $\operatorname{KK}_\bullet(\alg,\balg)$. This category is a *universal* category, in the sense that $\operatorname{KK}_\bullet(-,-)$ can be characterized as the unique bifunctor on the category of separable $C^*$-algebras and $*$-homomorphisms which is homotopy invariant, compatible with stabilization of $C^*$-algebras, and respects split exactness. The composition law in this category is provided by the intersection product. The category is not abelian, but it is *triangulated*, like other categories of relevance in D-brane physics. 
It further admits the structure of a “weak” monoidal category, with multiplication given by the spatial tensor product on objects, the external Kasparov product on morphisms, and with identity the one-dimensional $C^*$-algebra ${\mathbb{C}}$. A diagrammatic calculus in this tensor category is developed in refs. [@BMRS1; @BMRS2]. KK-equivalence {#sec:3.4} -------------- As our first application of the bivariant version of K-theory, we introduce the following notion which will be central to our treatment later on. Any given fixed element $\alpha\in\operatorname{KK}_d(\alg,\balg)$ determines homomorphisms on K-theory and K-homology given by taking intersection products $$\otimes_\alg\alpha\,:\,\operatorname{K}_j(\alg)~\longrightarrow~\operatorname{K}_{j+d}(\balg) \qquad \mbox{and} \qquad \alpha\otimes_\balg\,:\,\operatorname{K}^j(\balg)~\longrightarrow~\operatorname{K}^{j+d}(\alg) \ .$$ If $\alpha$ is *invertible*, i.e., there exists an element $\beta\in\operatorname{KK}_{-d}(\balg,\alg)$ such that $\alpha\otimes_\balg\beta=1_\alg$ and $\beta\otimes_\alg\alpha=1_\balg$, then we write $\beta=:\alpha^{-1}$ and there are isomorphisms $$\operatorname{K}_j(\alg)~\cong~\operatorname{K}_{j+d}(\balg) \qquad \mbox{and} \qquad\operatorname{K}^j(\balg)~\cong~\operatorname{K}^{j+d}(\alg) \ .$$ In this case the algebras $\alg,\balg$ are said to be *KK-equivalent*. From a physical perspective, algebras $\alg,\balg$ are KK-equivalent if they are isomorphic as objects in the D-brane category described at the end of Section \[sec:3.3\] above. Such D-branes have the same K-theory and K-homology. For example, Morita equivalence implies KK-equivalence, since the discussion of Section \[sec:3.2\] above shows that the element $\alpha=\big[(\bun_{ab},0)\big]$ is invertible. However, the converse is not necessarily true. 
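A standard example of the last point, phrased in the notation above (our gloss): Bott periodicity furnishes mutually inverse Bott and dual Dirac classes

```latex
% Bott periodicity as a KK-equivalence (standard fact, our notation):
\beta~\in~\operatorname{KK}_0\big({\mathbb{C}}\,,\,C_0({\mathbb{R}}^2)\big)
\ , \qquad
\alpha~\in~\operatorname{KK}_0\big(C_0({\mathbb{R}}^2)\,,\,{\mathbb{C}}\big)
\ , \qquad
\beta\otimes_{C_0({\mathbb{R}}^2)}\alpha\=1_{{\mathbb{C}}}
\ , \qquad
\alpha\otimes_{{\mathbb{C}}}\beta\=1_{C_0({\mathbb{R}}^2)} \ .
```

Hence ${\mathbb{C}}$ and $C_0({\mathbb{R}}^2)$ are KK-equivalent, recovering $\operatorname{K}_j(C_0({\mathbb{R}}^2))\cong\operatorname{K}_j({\mathbb{C}})$, even though the two algebras are not Morita equivalent: Morita equivalence of commutative $C^*$-algebras forces homeomorphic spectra, and the spectrum of ${\mathbb{C}}$ is a point while that of $C_0({\mathbb{R}}^2)$ is ${\mathbb{R}}^2$.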
On the class of $C^*$-algebras which are KK-equivalent to commutative algebras, one has the universal coefficient theorem for KK-theory given by the exact sequence [@RSch1] $$\begin{aligned} 0 &\longrightarrow&{\operatorname{Ext}_{{\mathbb Z}}}\bigl(\operatorname{K}_{\bullet+1}(\alg)\,,\, \operatorname{K}_\bullet(\balg)\bigr) ~\longrightarrow~ \operatorname{KK}_\bullet\bigl(\alg\,,\, \balg\bigr)~\longrightarrow ~ \nonumber \\ &\longrightarrow& {\operatorname{Hom}_{{\mathbb Z}}}\bigl(\operatorname{K}_\bullet(\alg)\,,\, \operatorname{K}_\bullet(\balg) \bigr)~\longrightarrow~ 0 \ . \label{UCTKK}\end{aligned}$$ We will make extensive use of this exact sequence in the following. Poincaré duality {#sec:3.5} ---------------- The noncommutative version of Poincaré duality was introduced by Connes [@Connes1] and further developed in refs. [@KP1]–[@Tu1]. Our treatment is closest to that of Emerson [@Emerson1]. Let $\alg$ be a separable $C^*$-algebra, and let $\alg^\op$ be its opposite algebra. The opposite algebra is introduced in order to regard $\alg$-bimodules as $(\alg\otimes\alg^\op)$-modules. We say that $\alg$ is a *Poincaré duality (PD) algebra* if there is a *fundamental class* $\Delta\in\operatorname{KK}_d(\alg\otimes\alg^\op,{\mathbb{C}})= \operatorname{K}^d(\alg\otimes\alg^\op)$ with inverse $\Delta^\vee\in\operatorname{KK}_{-d}({\mathbb{C}},\alg\otimes\alg^\op)= \operatorname{K}_{-d}(\alg\otimes\alg^\op)$ such that $$\begin{aligned} \Delta^\vee\otimes_{\alg^\op}\Delta&=&1_\alg\in\operatorname{KK}_0(\alg,\alg) \ , \\[4pt] \Delta^\vee\otimes_{\alg}\Delta&=&(-1)^d~ 1_{\alg^\op}\in\operatorname{KK}_0(\alg^\op,\alg^\op)\end{aligned}$$ for some $d=0,1$. The subtle sign factor in this definition reflects the orientation of the Bott element $\Delta^\vee$. 
This definition determines inverse isomorphisms $$\begin{aligned} \operatorname{K}_i(\alg)&\xrightarrow{\otimes_\alg\Delta}&\operatorname{K}^{i+d}(\alg^\op)= \operatorname{K}^{i+d}(\alg) \ , \\[4pt] \operatorname{K}^i(\alg)=\operatorname{K}^{i}(\alg^\op)& \xrightarrow{\Delta^\vee\otimes_{\alg^\op}}& \operatorname{K}_{i-d}(\alg) \ ,\end{aligned}$$ which is the usual requirement of Poincaré duality. More generally, by replacing the opposite algebra $\alg^\op$ in this definition with an arbitrary separable $C^*$-algebra $\balg$, we get the notion of *PD pairs* $(\alg,\balg)$. Although the class of PD algebras is quite restrictive, PD pairs are rather abundant [@BMRS2]. As a simple example, consider the commutative algebra $\alg=C_0(X)=\alg^\op$ of continuous functions vanishing at infinity on a complete oriented manifold $X$. Let $\balg=C_0(T^*X)$ or $\balg=C_0\big(X\,,\,\Cl(T^*X)\big)$, where $T^*X$ is the cotangent bundle over $X$ and $\Cl(T^*X)$ is the Clifford algebra bundle of $T^*X$. Then $(\alg,\balg)$ is a PD pair, with $\Delta$ given by the Dirac operator on $\Cl(T^*X)$. If in addition $X$ is a spin$^c$ manifold, then $\alg$ is a PD algebra. In this case, the fundamental class $\Delta$ is the Dirac operator $\Dirac$ on the diagonal of $X\times X$, i.e., the image of the Dirac class $[\Dirac]\in\operatorname{K}^\bullet(\alg)$ under the group homomorphism $$m^*\,:\,\operatorname{K}^\bullet(\alg)~\longrightarrow~\operatorname{K}^\bullet(\alg\otimes\alg)$$ induced by the product homomorphism $m:\alg\otimes\alg\rightarrow\alg$, while its inverse $\Delta^\vee$ is the Bott element. Thus in this case the noncommutative version of Poincaré duality agrees with the classical one. We will encounter some purely noncommutative examples later on. See ref. [@BMRS1] for further examples. In general, the moduli space of fundamental classes of an algebra $\alg$ is isomorphic to the group of invertible elements in the ring $\operatorname{KK}_0(\alg,\alg)$ [@BMRS1]. 
When $\alg=C_0(X)$, this space is in general larger than the space of spin$^c$ structures or K-orientations usually considered in the literature. This follows from the universal coefficient theorem (\[UCTKK\]), which shows that the moduli space is an extension of the automorphism group ${\rm Aut}(\operatorname{K}^0(X))$. Similarly, if $\alg$ and $\balg$ are $C^*$-algebras that are KK-equivalent, then the space of all KK-equivalences $\alpha$ is a torsor with associated group the invertible elements of $\operatorname{KK}_0(\alg,\alg)$. K-orientation and Gysin homomorphisms {#sec:3.6} ------------------------------------- We can treat generic K-oriented maps by generalizing a construction due to Connes and Skandalis in the commutative case [@CSk1]. Let $f:\alg\rightarrow\balg$ be a $*$-homomorphism of separable $C^*$-algebras in a suitable category. Then a *K-orientation* for $f$ is a functorial way of associating a class $f!\in\operatorname{KK}_d(\balg,\alg)$ for some $d=0,1$. This element determines a *Gysin “wrong way” homomorphism* on K-theory through $$f_!=\otimes_\balg (f!)\,:\,\operatorname{K}_\bullet(\balg)~\longrightarrow~ \operatorname{K}_{\bullet+d}(\alg) \ .$$ If the $C^*$-algebras $\alg$ and $\balg$ are both PD algebras with fundamental classes $\Delta_\alg\in\operatorname{KK}_{d_\alg}(\alg\otimes\alg^\op,{\mathbb{C}})$ and $\Delta_\balg\in\operatorname{KK}_{d_\balg}(\balg\otimes\balg^\op,{\mathbb{C}})$, respectively, then any morphism $f:\alg\rightarrow\balg$ is K-oriented with K-orientation given by $$f!\=(-1)^{d_\alg}~\Delta_\alg^\vee\otimes_{\alg^\op}[f^\op] \otimes_{\balg^\op}\Delta_\balg$$ and $d=d_\alg-d_\balg$. This construction uses the fact [@BMRS1] that the involution $\alg\to\alg^\op$, $f\mapsto f^\op:\alg^\op\to\balg^\op$ on the stable homotopy category of separable $C^*$-algebras and $*$-homomorphisms passes to the D-brane category. 
Functoriality $$g!\otimes_\balg f!=(g\circ f)!$$ for any other $*$-homomorphism of separable $C^*$-algebras $g:\balg\to\calg$ follows by associativity of the Kasparov intersection product. More general constructions of K-orientations will be encountered later on. The following construction demonstrates that any D-brane $(W,E,f)$ in $X$ determines a canonical KK-theory class $f!\in\operatorname{KK}_d\big(C(W)\,,\,C(X)\big)$. Recall that in this instance the normal bundle $\nu_W=f^*(TX)/TW$ is a spin$^c$ vector bundle. Let $i^W!:=\big[(\ealg,F)\big]\in\operatorname{KK}_d\big(C(W)\,,\,C_0(\nu_W)\big)$ be the invertible element associated to the ABS representative of the Thom class of the zero section $i^W:W\hookrightarrow\nu_W$. Let $[\Phi]\in\operatorname{KK}_0\big(C_0(\nu_W)\,,\,C_0(W'\,)\big)$ be the invertible element induced by the isomorphism $\Phi$ identifying $W'$ with a neighbourhood of $i^W(W)$ in $X$. Let $j!\in\operatorname{KK}_0\big(C_0(W'\,)\,,\,C(X)\big)$ be the class induced by the extension by zero of the open subset $j:W'\hookrightarrow X$. Then a K-orientation for $f$ is given by $$f!\=i^W!\otimes_{C_0(\nu_W)}[\Phi]\otimes_{C_0(W'\,)}j! \ .$$ In this way our notion of K-orientation coincides with the Freed-Witten anomaly cancellation condition [@FW1]. This construction extends to arbitrary smooth proper maps $\phi:M\rightarrow X$, corresponding generally to non-representable D-branes, for which $TM\oplus\phi^*(TX)$ is a spin$^c$ vector bundle over $M$. Cyclic theory {#sec:4} ============= The definition of D-brane charge given in Section \[sec:2.5\] relied crucially on the connection between the topological $\operatorname{K}$-theory of a spacetime $X$ and its cohomology through the rational isomorphism provided by the Chern character (\[chiso\]). In the generic noncommutative settings that we are interested in, we need a more general cohomological framework in which to express the D-brane charge.
The appropriate receptacle for the Chern character in analytic $\operatorname{K}$-theory is the cyclic cohomology of the given (noncommutative) algebra $\alg$ [@Connes1]. Since this material is not widely known in string theory, in this section we will present a fairly detailed overview of the general aspects of cyclic homology and cohomology. Then we will specialize to the specific bivariant cyclic theory that we will need in subsequent sections. This general formulation will provide a nice intrinsic definition of the D-brane charge, suited to our noncommutative situations. Cyclic homology {#sec:4.1} --------------- Let $\alg$ be a unital algebra over ${{\mathbb C}}$. The [*universal differential graded algebra $\Omega^\bullet(\alg)$*]{} is the universal algebra generated by $\alg$ and the symbols $\dd a$, $a\in\alg$ with the following properties: 1. $\dd:\alg\to\Omega^1(\alg)$ is linear; 2. $\dd$ obeys the Leibniz rule $\dd(a\,b)=\dd(a)~b+a~\dd(b)$; 3. $\dd(1)=0$; and 4. $\dd^2=0$. These conditions imply that $\dd$ is a linear derivation, and elements of $\Omega^\bullet(\alg)$ are called [*noncommutative differential forms*]{} on $\alg$, or more precisely on the tensor algebra $T\alg=\bigoplus_{n\geq0}\,\alg^{\otimes n}$ of $\alg$. We define $\Omega^0(\alg)=\alg$.
In degree $n>0$, the space of $n$-forms is the linear span $$\Omega^n(\alg)={\rm Span}_{{\mathbb C}}\big\{a_0~\dd a_1\cdots\dd a_n~ \big|~a_0,a_1,\dots,a_n\in\alg\big\} \ ,$$ which under the isomorphism $a_0~\dd a_1\cdots\dd a_n\leftrightarrow a_0\otimes a_1\otimes\cdots\otimes a_n$ may be presented explicitly as a vector space by $$\Omega^n(\alg)\cong\alg\otimes(\alg/{{\mathbb C}})^{\otimes n} \ .$$ The graded vector space $\Omega^\bullet(\alg)$ then becomes a graded algebra by using the Leibniz rule to define multiplication of forms by $$\begin{aligned} && (a_0~\dd a_1\cdots\dd a_n)\cdot(a_{n+1}~\dd a_{n+2}\cdots\dd a_p) \label{formmultdef} \\ && \qquad \= (-1)^n\,a_0\,a_1~\dd a_2\cdots\dd a_p+\sum_{i=1}^n\,(-1)^{n-i}\,a_0~ \dd a_1\cdots\dd(a_i\,a_{i+1})\cdots\dd a_p \ . \nonumber\end{aligned}$$ Using this definition the operator $\dd$ may be extended to a graded derivation on $\Omega^\bullet(\alg)$. When the algebra $\alg$ is not unital, we apply the above construction to the unitization $\widetilde{\alg}=\alg\oplus{{\mathbb C}}$ of $\alg$, with multiplication given by $$(a,\lambda)\cdot(b,\mu)=(a\,b+\lambda\,b+\mu\,a,\lambda\,\mu) \ .$$ Thus in degree $n>0$ we have $$\Omega^n({\alg})~:=~\Omega^n\big(\,\widetilde{\alg}\,\big)\= \alg^{\otimes(n+1)}\oplus\alg^{\otimes n} \ . \label{Omeganonunital}$$ In degree $0$ we define $\Omega^0({\alg}) = \alg$. The cohomology of the differential $\dd$ on $\Omega^\bullet(\alg)$ is trivial in positive degree and equal to ${{\mathbb C}}$ in degree $0$. To get an interesting homology theory, we need to introduce two other differentials. Let us first define the boundary map $$\bb\,:\,\Omega^n(\alg)~\longrightarrow~\Omega^{n-1}(\alg)$$ by the formula $$\bb (\omega ~\dd a) = (-1)^{|\omega|} \,[\omega, a]$$ where $|\omega|=n-1$ is the degree of the form $\omega\in\Omega^{n-1}(\alg)$. This definition uses the structure of a differential graded algebra on $\Omega^\bullet (\alg)$.
Using the explicit formula (\[formmultdef\]) for the product of two forms and assuming that $\omega = a_0~\dd a_1 \cdots \dd a_{n-1}$, this definition may be rewritten in the form $$\begin{aligned} \bb(a_0\otimes a_1\otimes\cdots\otimes a_n)&=&\sum_{i=0}^{n-1}\,(-1)^i\,a_0\otimes\cdots \otimes a_i\,a_{i+1}\otimes\cdots\otimes a_n\nonumber\\ &&+\,(-1)^n\,a_n\,a_0\otimes a_1\otimes\cdots\otimes a_{n-1} \ . \label{bbdef}\end{aligned}$$ The Karoubi operator is the degree $0$ operator $\kappa: \Omega^n (\alg) \to \Omega^n( \alg)$ defined by $$\kappa (\omega~ \dd a) = (-1)^{|\omega|} ~\dd a~\omega$$ where $\omega \in \Omega^{n-1}(\alg)$. On elementary tensors, this operator $\kappa:\alg^{\otimes(n+1)}\to\alg^{\otimes(n+1)}$ is given explicitly by $$\kappa(a_0\otimes a_1\otimes\cdots\otimes a_n)= (-1)^n\,a_n\otimes a_0\otimes\cdots\otimes a_{n-1}+(-1)^n\, 1\otimes a_n\,a_0\otimes\cdots\otimes a_{n-1} \ .$$ On the image $\dd\Omega^\bullet (\alg)$ of the differential $\dd$, this operator is precisely the generator (with sign) of cyclic permutations. With this in mind we introduce the remaining differential $$\BB=\sum_{i=0}^n\,\kappa^i~\dd\,:\,\Omega^n(\alg)~\longrightarrow~ \Omega^{n+1}(\alg) \ .$$ It is easy to check that the two operators $\bb$ and $\BB$ anticommute and are nilpotent, $$\bb^2\=\bb~\BB+\BB~\bb\=\BB^2\=0 \ .$$ The two differentials $\BB$ and $\bb$ give $\Omega^\bullet (\alg)$ the structure of a *mixed* complex $(\Omega^\bullet( \alg), \bb, \BB)$, which can be organised into a double complex given by the diagram $$\begin{CD} \vdots @. \vdots @. \vdots \\ @V{\bb}VV @V{\bb}VV @V{\bb}VV \\ \Omega^2(\alg) @<{\BB}<< \Omega^1(\alg) @<{\BB}<< \Omega^0(\alg) \\ @V{\bb}VV @V{\bb}VV \\ \Omega^1(\alg) @<{\BB}<< \Omega^0(\alg) \\ @V{\bb}VV \\ \Omega^0(\alg) \end{CD} \label{bicomplex}$$ which in bidegree $(p,q)$ contains $\Omega^{p-q}(\alg)$. The columns in this complex are repeated and we declare all spaces located at $(p,q)$ with $p-q < 0$ or $p<0$ to be trivial. Thus this double complex occupies one octant in the $(p,q)$-plane. There is a canonical isomorphism $S$ which by definition is the identity map sending the space $\Omega^n(\alg)$ located at $(p+1, q+1)$ to itself located at $(p,q)$.
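As a quick consistency check of the formula (\[bbdef\]) and the relation $\bb^2=0$, one can implement $\bb$ on elementary tensors symbolically (a verification sketch of our own, not from the text), storing $a_0\otimes\cdots\otimes a_n$ as a tuple of noncommuting symbols and chains as dictionaries mapping such tuples to integer coefficients:

```python
# Our own verification sketch: the Hochschild boundary b of eq. (bbdef) on
# elementary tensors, with a_0 (x) ... (x) a_n stored as a tuple of
# noncommuting sympy symbols and chains as {tuple: coefficient} dicts.
import sympy as sp

def b(chain):
    """Apply the Hochschild boundary to a linear combination of tensors."""
    out = {}
    def add(key, coeff):
        out[key] = out.get(key, 0) + coeff
    for t, c in chain.items():
        n = len(t) - 1  # the tuple t represents an element of A^{(n+1) factors}
        for i in range(n):
            add(t[:i] + (t[i] * t[i + 1],) + t[i + 2:], (-1) ** i * c)
        add((t[n] * t[0],) + t[1:n], (-1) ** n * c)  # the cyclic term
    return {k: v for k, v in out.items() if v != 0}

a0, a1, a2, a3 = sp.symbols('a0 a1 a2 a3', commutative=False)
# b(a0 (x) a1) = a0*a1 - a1*a0, i.e. the commutator in degree one
assert b({(a0, a1): 1}) == {(a0 * a1,): 1, (a1 * a0,): -1}
# b^2 = 0: on a degree-three tensor all twelve terms cancel in pairs
assert b(b({(a0, a1, a2, a3): 1})) == {}
```

The cancellation uses only associativity of the product, which is why the same computation goes through in any algebra $\alg$.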
The column at $p=0$ is by definition annihilated by $S$. This operator is Connes’ periodicity operator. It follows from its definition that $S$ is of degree $-2$. The [*total complex $({\rm Tot}\,\Omega^\bullet(\alg),\bb+\BB)$*]{} of the bicomplex $(\Omega^\bullet(\alg),\bb,\BB)$ is defined in degree $n$ by the finite sum $${\rm Tot}_n\,\Omega^\bullet(\alg)=\bigoplus_{p\geq0}\,\Omega^{n-2p}(\alg) \ .$$ The [*Hochschild homology*]{} $\operatorname{HH}_\bullet(\alg)$ of the algebra $\alg$ is defined to be the homology of the complex $(\Omega^\bullet(\alg),\bb)$, $$\operatorname{HH}_\bullet(\alg)\=\operatorname{H}_\bullet\big(\Omega^\bullet(\alg)\,,\,\bb\big) \ . \label{HHdef}$$ The [*cyclic homology $\operatorname{HC}_\bullet(\alg)$*]{} of the algebra $\alg$ is defined to be the homology of the total complex $({\rm Tot}\,\Omega^\bullet(\alg),\bb+\BB)$, $$\operatorname{HC}_\bullet(\alg)=\operatorname{H}_\bullet\big({\rm Tot}\,\Omega^\bullet(\alg)\,,\,\bb+\BB\big) \ .$$ If we denote by $I: \Omega^\bullet (\alg) \to {\rm Tot}\, \Omega^\bullet(\alg)$ the inclusion of the first column into the double complex (\[bicomplex\]), then by using the definition of the Connes periodicity operator it is not difficult to deduce the fundamental relation between Hochschild and cyclic homology given by the long exact sequence $$\begin{aligned} \cdots~\longrightarrow~ \operatorname{HH}_{n+2}(\alg)~ \stackrel{I}{\longrightarrow}~ \operatorname{HC}_{n+2}(\alg)~ &\stackrel{S} {\longrightarrow}& ~\operatorname{HC}_{n}(\alg)~ \stackrel{\BB}{\longrightarrow}~ \label{homlongexact} \\ &\stackrel{\BB}{\longrightarrow}&~ \operatorname{HH}_{n+1}(\alg)~ \longrightarrow~\cdots \ . \nonumber\end{aligned}$$ The map $S$ in this sequence is induced by the periodicity operator which gives rise to a surjection $S: \text{Tot}_{n+2}\,\Omega^\bullet(\alg) \to \text{Tot}_{n}\,\Omega^\bullet(\alg)$. Finally, we define the periodic cyclic homology. For this, we need to consider a complex that is a completion, in a certain sense, of the total complex used in the construction of cyclic homology.
Thus we put $$\widehat{\Omega}{}^\bullet(\alg) = \prod_{n\geq 0}\, \Omega^n(\alg) \ .$$ Elements of this space are inhomogeneous forms $(\omega_0, \omega_1, \dots , \omega_n, \dots)$, where $\omega_n\in \Omega^n(\alg)$, with possibly infinitely many non-zero components. We shall regard this space as being ${{\mathbb Z}}_2$-graded with the decomposition into even and odd degree forms given by $$\widehat{\Omega}^{\rm even}(\alg)\=\prod_{n\geq0}\, {\Omega}^{2n}(\alg) \qquad \mbox{and} \qquad \widehat{\Omega}^{\rm odd}(\alg)\=\prod_{n\geq0}\, {\Omega}^{2n+1}(\alg) \ .$$ A typical element of $\widehat{\Omega}^{\text{even}}(\alg)$ is a sequence $(\omega_0, \omega_2, \dots , \omega_{2n}, \dots )$, and similarly for $\widehat{\Omega}^{\text{odd}}(\alg)$. Then the [ *periodic cyclic homology*]{} $\operatorname{HP}_\bullet(\alg)$ of the algebra $\alg$ is defined to be the homology of the ${{\mathbb Z}}_2$-graded complex $$\cdots~{\xrightarrow}{\bb+\BB}~\widehat{\Omega}^{\rm even}(\alg)~ {\xrightarrow}{\bb+\BB}~\widehat{\Omega}^{\rm odd}(\alg)~ {\xrightarrow}{\bb+\BB}~\widehat{\Omega}^{\rm even}(\alg)~ {\xrightarrow}{\bb+\BB}~\cdots \ .$$ The Connes operator $S$ also provides a relation between the cyclic and periodic cyclic homology in the following way. For every $n$, there is a surjection $$T_{2n}\,:\, \widehat{\Omega}^{\text{even}}(\alg) ~\longrightarrow~ \text{Tot}_{2n}\,\Omega^\bullet (\alg)$$ which sends a form $(\omega_0, \omega_2, \dots)$ to its truncation $(\omega_0, \omega_2, \dots, \omega_{2n})$. For various values of $n$ these surjections are compatible with the periodicity operator $S$ in the sense that there is a commutative diagram \[commdiagS\] An even periodic cycle is a sequence of the type described above which is annihilated by the operator $\bb + \BB$, i.e., applying this operator creates the zero chain in $\widehat{\Omega}^{\text{odd}} (\alg)$ as in the diagram $$\begin{CD} \vdots\\ @V{\bb}VV \\ 0 @<{\BB}<< \omega_{4} \\ @. @V{\bb}VV \\ @.
0 @<{\BB}<< \omega_2 \\ @. @. @V{\bb}VV \\ @. @. 0 @<{\BB}<< \omega_0 \\ @. @. @. @V{0}VV \\ @. @. @. ~~ \ 0 \ . \end{CD}$$ The vertical map in degree $0$ is the zero map. The truncation of this cycle in, say, degree $2$ creates an element of $\text{Tot}_2\,\Omega^\bullet (\alg)$ which is a cycle in the *cyclic* complex $$\begin{CD} 0 @<{0}<< \omega_2 \\ @. @V{\bb}VV \\ @. 0 @<{\BB}<< \omega_0 \\ @. @. @V{0}VV \\ @. @. ~~ \ 0 \ . \end{CD}$$ The zero map in the upper left corner appears due to the definition of the differential in the cyclic complex. It kills the leftmost column (where $p=0$), which in this case is the column where $\omega_2$ is located. Thus, for any $n$ the truncation map $T_{2n}$ sends an even periodic cycle to a cyclic $2n$-cycle and so induces a map $T_{2n} : \operatorname{HP}_{\rm even}(\alg) \rightarrow\operatorname{HC}_{2n}(\alg)$. From the diagram (\[commdiagS\]) it follows that these maps are compatible with the periodicity operator $S$ and we obtain a surjection $$\operatorname{HP}_{\rm even}(\alg) ~\longrightarrow~ \lim_{\stackrel{\scriptstyle \longleftarrow}{\scriptstyle S}}\,\operatorname{HC}_{2n}(\alg) \ .$$ There is a complementary map in odd degree whose construction is identical to the one just described. This map is not an isomorphism in general. Its kernel is equal to $\displaystyle{\lim_{\longleftarrow}}{}^1\,\operatorname{HC}_{\bullet+2n+1}(\alg)$, where $\displaystyle{\lim_{\longleftarrow}}{}^1$ is the first derived functor of the inverse limit functor. We will now consider a key example which illustrates the importance of these constructions. Let $\alg=C^\infty(X)$ be the algebra of smooth functions on a smooth paracompact spacetime manifold $X$. Then the action of the boundary map (\[bbdef\]) is trivial and the mixed complex $(\Omega^\bullet(\alg),\bb,\BB)$ reduces to the complexified de Rham complex $(\Omega^\bullet (X),\dd)$, where $\dd$ is the usual de Rham exterior derivative on $X$. 
Equivalently, there is a natural surjection $\mu:(\Omega^\bullet(\alg),\bb,\BB)\to(\Omega^\bullet(X),0,\dd)$ of mixed complexes. The Connes-Pflaum version of the Hochschild-Kostant-Rosenberg theorem asserts that the map $\mu$ is a quasi-isomorphism, i.e., it identifies the Hochschild homology (\[HHdef\]) with the de Rham complex. Explicitly, the map $\mu :\Omega^n(\alg)\to\Omega^n(X)$ is implemented by sending a noncommutative $n$-form to a differential $n$-form as $$\mu(f^0~\dd f^1\cdots\dd f^n)=\mbox{$\frac1{n!}$}\,f^0~\dd f^1\wedge\cdots\wedge\dd f^n$$ for $f^i\in C^\infty(X)$. It follows that the Hochschild homology of the algebra $C^\infty(X)$ gives the de Rham complex, $$\operatorname{HH}_n\bigl(C^\infty(X)\bigr)\cong\Omega^n(X) \ ,$$ which implies that the periodic cyclic homology computes the periodic de Rham cohomology as $$\operatorname{HP}_{\rm even}\bigl(C^\infty(X)\bigr)~\cong~\operatorname{H}_{\rm dR}^{\rm even}(X) \qquad \mbox{and} \qquad \operatorname{HP}_{\rm odd}\bigl(C^\infty(X)\bigr)~\cong~\operatorname{H}_{\rm dR}^{\rm odd}(X) \ . \label{HPHdR}$$ It is in this sense that we may regard cyclic homology as a generalization of de Rham cohomology to other (possibly noncommutative) settings. Cyclic cohomology {#sec:4.2} ----------------- As one would expect, by considering the duals of the chain spaces introduced in Section \[sec:4.1\] above, one obtains the cohomology theories corresponding to the three cyclic-type homology theories defined there. A [*Hochschild $n$-cochain*]{} on the algebra $\alg$ is a linear form on $\Omega^n (\alg)$, or equivalently an $(n+1)$-multilinear functional $\varphi$ on $\alg$ which is simplicially normalized in the sense that $\varphi(a_0, a_1, \dots, a_n) = 0$ if $a_i = 1$ for any $i$ such that $1\leq i \leq n$.
With the collection of all $n$-cochains denoted $C^n(\alg)={\rm Hom}_{\mathbb{C}}(\Omega^n(\alg),{{\mathbb C}})$, we form the [*Hochschild cochain complex*]{} $(C^\bullet(\alg),\bb)$ with coboundary map $$\bb\,:\, C^n(\alg)~\longrightarrow~C^{n+1}(\alg)$$ given by the transpose of the differential $\bb$ as $$\begin{aligned} \bb\varphi(a_0,a_1,\dots,a_{n+1})&=&\sum_{i=0}^n\,(-1)^i\, \varphi(a_0,\dots,a_i\,a_{i+1},\dots,a_{n+1})\nonumber\\ &&+\, (-1)^{n+1}\,\varphi(a_{n+1}\,a_0,a_1,\dots,a_n) \ .\end{aligned}$$ The cohomology of this complex is the [*Hochschild cohomology*]{} $$\operatorname{HH}^\bullet(\alg)=\operatorname{H}^\bullet\big(C^\bullet(\alg)\,,\,\bb\big) \ ,$$ the dual theory to Hochschild homology defined in eq. (\[HHdef\]). Similarly, the operator $\BB$ transposes to the cochain complex $C^\bullet(\alg)$ and the [*cyclic cohomology*]{} $\operatorname{HC}^\bullet(\alg)$ is defined as the cohomology of the complex $((\text{Tot}\,\Omega^\bullet(\alg))^\vee, \bb+\BB)$. The dual of the periodic complex is the complex which in even degree is spanned by *finite* sequences $(\varphi_0, \varphi_2, \dots, \varphi_{2n})$ with $\varphi_i \in C^i(\alg)$, and similarly in odd degree. The [ *periodic cyclic cohomology*]{} $\operatorname{HP}^\bullet(\alg)$ is the cohomology of this complex. The long exact sequence (\[homlongexact\]) relating Hochschild and cyclic homology has an obvious dual sequence that links Hochschild and cyclic cohomology. The transpose of the periodicity operator provides an injection $S:\operatorname{HC}^n(\alg)\to\operatorname{HC}^{n+2}(\alg)$ of cyclic cohomology groups and therefore gives rise to two *inductive* systems of abelian groups, one running through even degrees and the other through odd degrees. 
One has $$\operatorname{HP}^\bullet(\alg) = \lim_{\stackrel{\scriptstyle\longrightarrow}{\scriptstyle S}}\,\operatorname{HC}^{\bullet+2n}(\alg) \ .$$ This formal approach to cyclic cohomology, while very useful, hides two important features of the theory. Firstly, it seems to imply that cyclic cohomology is secondary to cyclic homology. In fact, it turns out that many geometric and analytic situations provide natural examples of cyclic *cocycles* [@Connes1]. Secondly, this approach does not explain why cyclic cohomology is indeed cyclic. For this, we note that a Hochschild $0$-cocycle $\tau\in{\rm Hom}(\alg,{{\mathbb C}})$ on the algebra $\alg$ is a trace, i.e., $\tau(a_0\,a_1)=\tau(a_1\,a_0)$. This tracial property is extended to higher orders via the following notion. Let $\lambda:C^n(\alg)\to C^n(\alg)$ be the operator defined by $$\lambda\varphi(a_0,a_1,\dots,a_n)=(-1)^n\,\varphi(a_n,a_0, \dots,a_{n-1}) \ .$$ Then an $n$-cochain $\varphi\in C^n(\alg)$ is said to be [*cyclic*]{} if it is invariant under the action of the cyclic group, $\lambda\varphi=\varphi$. The set of cyclic $n$-cochains is denoted $C_\lambda^n(\alg)$; it is preserved by the coboundary $\bb$. One can prove that the cohomology of the resulting complex is isomorphic to the cohomology of the complex we have used above to define cyclic cohomology, and so we can alternatively define the [cyclic cohomology]{} $\operatorname{HC}^\bullet(\alg)$ of the algebra $\alg$ as the cohomology of the cyclic cochain complex $(C_\lambda^\bullet(\alg),\bb)$, $$\operatorname{HC}^\bullet\big(\alg\big)= \operatorname{H}^\bullet\big(C^\bullet_\lambda(\alg)\,,\,\bb\big) \ .$$ An important class of cyclic cocycles is obtained as follows. Consider the algebra $\alg = C^\infty(X)$ of smooth functions on a compact oriented manifold $X$ of dimension $d$. Put $$\varphi^{~}_X(f^0,f^1, \dots, f^d) = \mbox{$\frac1{d!}$}\,\int_X\, f^0~\dd f^1\wedge\cdots\wedge\dd f^d \label{varphiX}$$ for $f^i \in \alg$. Then $\varphi^{~}_X$ is a cyclic $d$-cocycle.
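That the coboundary of a cyclic cochain is again cyclic — the fact that makes the cyclic cochain complex well defined — can be spot-checked numerically. The sketch below (an added illustration, not from the original text; the cochain $\varphi$ built from an arbitrary integer matrix $X$ is an ad hoc choice) verifies both the simplicial normalization $\varphi(a_0,1)=0$ and the cyclicity $\lambda(\bb\varphi)=\bb\varphi$ for a cyclic $1$-cochain on the matrix algebra $M_2$, using exact integer arithmetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_mat():
    # integer matrices keep every check exact (no floating point)
    return rng.integers(-5, 6, size=(2, 2))

X = rand_mat()          # ad hoc auxiliary matrix defining the cochain

def phi(a0, a1):
    """A cyclic 1-cochain on M_2: phi(a0, a1) = -phi(a1, a0) holds by
    construction, i.e. lambda.phi = phi with the sign (-1)^1."""
    return int(np.trace(a0 @ X @ a1) - np.trace(a1 @ X @ a0))

def b_phi(a0, a1, a2):
    """Hochschild coboundary of phi (n = 1 case of the transpose formula)."""
    return phi(a0 @ a1, a2) - phi(a0, a1 @ a2) + phi(a2 @ a0, a1)

# Simplicial normalization: phi(a0, 1) = Tr(a0 X) - Tr(X a0) = 0
I2 = np.eye(2, dtype=int)
normalized_ok = all(phi(rand_mat(), I2) == 0 for _ in range(5))

# lambda on 2-cochains: (lambda.psi)(a0,a1,a2) = (+1) psi(a2,a0,a1)
cyclic_ok = all(
    b_phi(a0, a1, a2) == b_phi(a2, a0, a1)
    for a0, a1, a2 in (tuple(rand_mat() for _ in range(3)) for _ in range(20))
)
```

The cyclicity of $\bb\varphi$ here is an exact algebraic identity following from the antisymmetry of $\varphi$, so the check succeeds for every random sample.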
More generally, one can associate in this way a cyclic $(d-k)$-cocycle with any closed $k$-current $C$ on $X$. In particular, the Chern-Simons coupling $\big\langle C\,\smile\,Q(W,E,f)\,,\,[X]\big\rangle$ on a D-brane $(W,E,f)$ is an inhomogeneous cyclic cocycle of definite parity for any closed cochain $C$ associated to a Ramond-Ramond field on $X$. Local cyclic cohomology {#sec:4.3} ----------------------- Thus far we have not considered the possibility that the algebra $\alg$ might be equipped with a topology. A major weakness of cyclic cohomology compared to K-theory is that it depends very sensitively on the class of algebras on which it is defined. For instance, $\alg=C^\infty(X)$ is the commutative, nuclear Fréchet algebra of smooth functions on the spacetime manifold $X$ equipped with its standard semi-norm topology. More generally, we can allow $\alg $ to be a complete multiplicatively convex algebra, i.e., $\alg$ is a topological algebra whose topology is given by a family of submultiplicative semi-norms. In such cases the definition of the algebra $\Omega^\bullet(\alg)$ of noncommutative differential forms will involve a choice of a suitably completed topological tensor product $\overline{\otimes}$. The correct choice is forced by the topology on $\alg$ and the corresponding continuity properties of the multiplication map $m:\alg\otimes\alg\to\alg$. For nuclear Fréchet algebras $\alg$, there is a unique topology which is compatible with the tensor product structure on $\alg\otimes\alg$ [@BP1]. In our later considerations we will often consider the situation in which $\alg=\balg^\infty$ is a suitable smooth subalgebra of a separable $C^*$-algebra $\balg$. Local cyclic cohomology is best suited to deal with these and other classes of algebras, and it moreover has a useful extension to a bivariant functor [@Puschnigg1]. The bivariant cyclic cohomology theories were introduced to provide a target for the Chern character from KK-theory, which we describe in the next section.
The space of cochains in this theory is a certain deformation of the space of maps $\operatorname{Hom}_{\mathbb{C}}(\,\widehat{\Omega}{}^\bullet (\alg), \widehat{\Omega}{}^\bullet{(\balg)})$ with the ${{\mathbb Z}}_2$-grading induced from the spaces of inhomogeneous forms over the algebras $\alg$ and $\balg$. Alternatively, we can define $$\operatorname{Hom}_{\mathbb{C}}\big(\,\widehat{\Omega}{}^\bullet(\alg)\,,\, \widehat{\Omega}{}^\bullet(\balg)\,\big) = \lim_{\stackrel{\scriptstyle\longleftarrow}{\scriptstyle m}}~ \lim_{\stackrel{\scriptstyle\longrightarrow}{\scriptstyle n}}~ \operatorname{Hom}_{\mathbb{C}}\Big(\,\mbox{$\bigoplus\limits_{i\leq n}\,\Omega^i(\alg) \,,\, \bigoplus\limits_{j\leq m}\,\Omega^j(\balg)$}\Big) \ .$$ This is a ${{\mathbb Z}}_2$-graded vector space equipped with a differential $\partial$ that acts on cochains $\varphi$ by the graded commutator $$\partial \varphi = [\varphi, \bb + \BB] \ .$$ The local version of this theory is defined by using a deformation of the tensor algebra called the *$X$-complex*, which is the ${{\mathbb Z}}_2$-graded complex given by $$\xymatrix{ X^\bullet(T\alg)\,:\,\Omega^0(T\alg)=T\alg~ \ar@<0.5ex>[rr]^{\natural\circ\dd} && \ar@<0.5ex>[ll]^{\bb}~\Omega^1(T\alg)_\natural:= \frac{\Omega^1(T\alg)} {\big[T\alg\,,\,\Omega^1(T\alg)\big]} \ .
}$$ Puschnigg’s completion of $X^\bullet(T\alg)$ [@Puschnigg1], $$\xymatrix{ \widehat{X}{}^\bullet(T\alg)\,:\,\widehat{\Omega}^{\rm even}(\alg)~ \ar@<0.5ex>[rr] && \ar@<0.5ex>[ll]~ \widehat{\Omega}^{\rm odd}(\alg) \ , }$$ then defines the ${{\mathbb Z}}_2$-graded *bivariant local cyclic cohomology* $${{\rm HL}}_\bullet(\alg,\balg)\=\operatorname{H}_\bullet\big(\operatorname{Hom}_{\mathbb{C}}(\, \widehat{X}{}^\bullet(T\alg),\widehat{X}{}^\bullet(T\balg)\,)\,,\,\partial\big) \ .$$ The main virtue of Puschnigg’s cyclic theory for our purposes is that it is the one “closest” to Kasparov’s KK-theory, in the sense that it possesses the following properties. It is defined on large classes of topological and bornological algebras, i.e., algebras together with a chosen family of *bounded* subsets closed under forming finite unions and taking subsets, *and* for separable $C^*$-algebras. It defines a bifunctor ${{\rm HL}}_\bullet(-,-)$ which is homotopy invariant, split exact and satisfies excision in each argument. It possesses a bilinear, associative composition product $$\otimes_\balg \,:\, {{\rm HL}}_i(\alg,\balg)\times {{\rm HL}}_j(\balg,\calg) ~ \longrightarrow~ {{\rm HL}}_{i+j}(\alg,\calg) \ .$$ It also carries a bilinear, associative exterior product $$\otimes \,:\,{{\rm HL}}_i(\alg_1,\balg_1)\times {{\rm HL}}_j(\alg_2,\balg_2) ~ \longrightarrow ~ {{\rm HL}}_{i+j}\big(\alg_1 \,\widehat{\otimes}\, \alg_2\,,\, \balg_1 \,\widehat{\otimes}\, \balg_2\big) \ ,$$ defined using the projective tensor product which maps onto the minimal $C^*$-algebraic tensor product on the category of separable $C^*$-algebras. In general, without any extra assumptions, this tensor product differs from the usual spatial tensor product, but at least in the examples we consider later on this problem can always be fixed. Thus in what follows, we will not distinguish between the algebraic tensor product $\otimes$ and its topological completion.
The local cyclic cohomology reduces to other cyclic theories under suitable conditions, such as the periodic cyclic cohomology for non-topological algebras and even Fréchet algebras, Meyer’s analytic theories for bornological algebras [@Meyer2], and Connes’ entire cyclic cohomology for Banach algebras. It thus possesses the same algebraic properties as the usual bivariant cyclic cohomology theories, and in this sense it unifies cyclic homology and cohomology. A particularly useful property which we will make extensive use of is the following. Let $\alg$ be a Banach algebra with the metric approximation property, and let $\alg^\infty$ be a smooth subalgebra of $\alg$. Then the inclusion $\alg^\infty\hookrightarrow\alg$ induces an invertible element of ${{\rm HL}}_0(\alg^\infty,\alg)$. Thus in this case the algebras $\alg^\infty$ and $\alg$ are *HL-equivalent*. Let us consider again the illustrative example of the algebra of functions $\alg=C(X)$ on a compact oriented manifold $X$ with $\dim(X)=d$. In this case the inclusion of the smooth subalgebra $C^\infty(X)\hookrightarrow C(X)$ gives an isomorphism [@Meyer2] ${{\rm HL}}\big(C(X)\big)\cong{{\rm HL}}\big(C^\infty(X)\big)\cong \operatorname{HP}\big(C^\infty(X)\big)$ with the periodic cyclic cohomology. The Puschnigg complex coincides with the periodic complexified de Rham complex $\big(\Omega^\bullet(X)\,,\,\dd\big)$. Using the isomorphism (\[HPHdR\]) we then arrive at the isomorphism of ${{\mathbb Z}}_2$-graded groups $${{\rm HL}}_\bullet\big(C(X)\big)~\cong~\operatorname{H}_{\rm dR}^\bullet(X) \ .$$ The cyclic $d$-cocycle (\[varphiX\]) under this isomorphism induces the orientation fundamental class $\Xi=m^*[\varphi^{~}_X]\in{{\rm HL}}^d\big(C(X)\otimes C(X)\big)$ corresponding to the orientation cycle $[X]$ of the manifold $X$. 
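As a concrete illustration of these isomorphisms (an added example, not from the original text), for the two-torus $X={{\mathbb T}}^2$ one finds

```latex
{{\rm HL}}_{\rm even}\big(C({{\mathbb T}}^2)\big) ~\cong~
  \operatorname{H}_{\rm dR}^{0}({{\mathbb T}}^2)\,\oplus\,\operatorname{H}_{\rm dR}^{2}({{\mathbb T}}^2)
  ~\cong~ {\mathbb{C}}^2 \ ,
\qquad
{{\rm HL}}_{\rm odd}\big(C({{\mathbb T}}^2)\big) ~\cong~
  \operatorname{H}_{\rm dR}^{1}({{\mathbb T}}^2) ~\cong~ {\mathbb{C}}^2 \ ,
```

with the fundamental class $\Xi$ induced, as above, by the cyclic $2$-cocycle $\varphi^{~}_{{{\mathbb T}}^2}$ of eq. (\[varphiX\]).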
D-brane charge on noncommutative spaces {#sec:5} ======================================= In this section we will generalize the Minasian-Moore formula (\[RRcharge\]) for the Ramond-Ramond charge of a D-brane to large classes of separable $C^*$-algebras representing generic noncommutative spacetimes. This will require a few mathematical constructions of independent interest. In particular, we will develop noncommutative versions of the characteristic classes appearing in eq. (\[RRcharge\]) and show how they are related through a generalization of the Grothendieck-Riemann-Roch theorem (\[GRRthm\]). Chern characters {#sec:5.1} ---------------- We will begin by exhibiting the fundamental Chern character maps which link $\operatorname{K}$-theory and periodic cyclic homology, $\operatorname{K}$-homology and periodic cyclic cohomology, and more generally $\operatorname{KK}$-theory and bivariant cyclic cohomology. They provide explicit cyclic cocycles for Fredholm modules, and establish crucial links between duality in $\operatorname{KK}$-theory and in bivariant cyclic cohomology which will be the crux of some of our later constructions. We begin with a description of the Chern character in $\operatorname{K}$-theory. Let $\alg$ be a unital Fréchet algebra over ${{\mathbb C}}$. Acting on the $\operatorname{K}$-theory of the algebra $\alg$, we construct the homomorphism of abelian groups $$\operatorname{ch}_\sharp\,:\,\operatorname{K}_0(\alg)~\longrightarrow~\operatorname{HP}_{\rm even}(\alg) \label{chK0HP0}$$ as follows. Let $[p]\in\operatorname{K}_0(\alg)$ be the Murray-von Neumann equivalence class of an idempotent matrix $p\in{{\mathbb M}}_r(\alg)=\alg\otimes{{\mathbb M}}_r({{\mathbb C}})$, i.e., a projection $p=p^2$.
Then the [*Chern character*]{} assigns to $[p]$ an even class in the periodic cyclic homology of $\alg$ represented by the even periodic cycle $$\operatorname{ch}_\sharp(p)={\:{\rm Tr}\,_}r(p)+\sum_{n\geq1}\,(-1)^n\,\frac{(2n)!}{n!}\,{\:{\rm Tr}\,_}r\bigl( (p-\mbox{$\frac12$})~\dd p^{2n}\bigr)$$ valued in $\Omega^{\rm even}(\alg)$, where ${\:{\rm Tr}\,_}r:{{\mathbb M}}_r(\alg)\to\alg$ is the ordinary $r\times r$ matrix trace. One readily checks that it gives a cycle in the reduced $(\bb,\BB)$ bicomplex of cyclic homology that we described in Section \[sec:4.1\], i.e., $(\bb+\BB)\operatorname{ch}_\sharp(p)=0$. When $\alg=C^\infty(X)$ with $X$ a smooth compact manifold, it coincides with the usual Chern-Weil character ${\:{\rm Tr}\,\exp}(F/2\pi\ii)$ defined in terms of the curvature $F$ of the canonical Grassmann connection of the corresponding complex vector bundle $E\to X$. The Chern map (\[chK0HP0\]) becomes an isomorphism on tensoring with ${{\mathbb C}}$. For applications to the description of D-brane charges in cyclic theory, it is more natural to use cyclic cohomology classes corresponding to elements in $\operatorname{K}$-homology. Let $(\hil,\rho,F)$ be an $(n+1)$-summable even Fredholm module over the algebra $\alg$ with $n$ even. This means that $[F,\rho(a)]\in\mathcal{L}^{n+1}$ for all $a\in\alg$, where $\mathcal{L}^p=\mathcal{L}^p(\hil):= \{T\in\mathcal{K}(\hil)~|~{\:{\rm Tr}\,^}{~}_\hil(T^p)<\infty\}$ is the $p$-th Schatten ideal of compact operators. Then the [ *character*]{} of the Fredholm module is the cyclic $n$-cocycle $\tau^n$ given by $$\tau^n(a_0,a_1,\dots,a_n)={\:{\rm Tr}\,^}{~}_\hil\bigl(\gamma\,\rho(a_0)\, \bigl[F\,,\, \rho(a_1)\bigr]\cdots\bigl[F\,,\,\rho(a_n)\bigr]\bigr) \ ,$$ where $\gamma$ is the grading involution on $\hil$ defining its ${{\mathbb Z}}_2$-grading into $\pm\,1$ eigenspaces of $\gamma$. One checks closure $\bb\tau^n=0$ and cyclicity $\lambda\tau^n=(-1)^n\,\tau^n$.
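As an added illustration (not part of the original text), the lowest component of the cycle condition $(\bb+\BB)\operatorname{ch}_\sharp(p)=0$ can be verified by hand. From $p^2=p$ and the Leibniz rule one has $\dd p~p=\dd p-p~\dd p$ and $(p-\mbox{$\frac12$})\,p=p\,(p-\mbox{$\frac12$})=\mbox{$\frac12$}\,p$, so that

```latex
\bb\bigl((p-\mbox{$\frac12$})~\dd p~\dd p\bigr)
  \= (p-\mbox{$\frac12$})\,p~\dd p \,-\, (p-\mbox{$\frac12$})~\dd p \,+\, p\,(p-\mbox{$\frac12$})~\dd p
  \= \mbox{$\frac12$}~\dd p \ ,
% while \BB acts on the degree zero component simply as \dd, whence
\BB\,{\:{\rm Tr}\,_}r(p) \,-\, \mbox{$\frac{2!}{1!}$}\,\bb\,{\:{\rm Tr}\,_}r\bigl((p-\mbox{$\frac12$})~\dd p~\dd p\bigr)
  \= {\:{\rm Tr}\,_}r(\dd p) - {\:{\rm Tr}\,_}r(\dd p) \= 0 \ .
```

This shows in particular why the $n$-th term of the cycle must carry the alternating sign $(-1)^n$.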
Since $\mathcal{L}^{p_1}\subset\mathcal{L}^{p_2}$ for $p_1\leq p_2$, we can replace $n$ by $n+2k$ with $k$ any integer in this definition, and so only the (even) parity of $n$ is fixed. Thus for any $k\geq0$, one gets a sequence of cyclic cocycles $\tau^{n+2k}$ with the same parity. The cyclic cohomology classes of these cocycles are related by Connes’ periodicity operator $S$ in $\operatorname{HC}^{n+2k+2}(\alg)$, and therefore the sequence $(\tau^{n+2k})_{k\geq0}$ determines a well-defined class $\operatorname{ch}^\sharp(\hil,\rho,F)$ called the [*Chern character*]{} of the even Fredholm module $(\hil,\rho,F)$ in the even periodic cyclic cohomology $\operatorname{HP}^{\rm even}(\alg)$. Thus we get a map $$\operatorname{ch}^\sharp\,:\,\operatorname{K}^0(\alg)~\longrightarrow~\operatorname{HP}^{\rm even}(\alg)$$ which becomes an isomorphism after tensoring over ${{\mathbb C}}$. See ref. [@BMRS1] for an extension of this definition to unbounded and infinite-dimensional Fredholm modules. Our main object of interest is the Chern character in KK-theory. A cohomological functor which complements the bivariant $\operatorname{KK}$-theory is provided by the local bivariant cyclic cohomology ${{\rm HL}}_\bullet(\alg,\balg)$ that we introduced in Section \[sec:4.3\]. Since both $\operatorname{KK}_\bullet(\alg,\balg)$ and ${{\rm HL}}_\bullet(\alg,\balg)$ are homotopy invariant, stable and satisfy excision, the universal property of $\operatorname{KK}$-theory implies that there is a natural bivariant ${{\mathbb Z}}_2$-graded Chern character homomorphism $$\operatorname{ch}\,:\,\operatorname{KK}_\bullet(\alg,\balg)~\longrightarrow~{{\rm HL}}_\bullet(\alg,\balg)$$ which enjoys the following properties: 1. $\operatorname{ch}$ is multiplicative, i.e., if $\alpha\in \operatorname{KK}_i(\alg,\balg)$ and $\beta\in \operatorname{KK}_j(\balg,\calg)$ then $$\operatorname{ch}(\alpha\otimes_\balg\beta) \= \operatorname{ch}(\alpha) \otimes_\balg \operatorname{ch}(\beta) \ ;$$ 2. 
$\operatorname{ch}$ is compatible with the exterior product; and 3. $\operatorname{ch}\big([\phi]_{\operatorname{KK}}\big)=[\phi]_{{{\rm HL}}}$ for any algebra homomorphism $\phi:\alg\rightarrow\balg$. The last property implies that the Chern character sends invertible elements of $\operatorname{KK}$-theory to invertible elements of bivariant cyclic cohomology. In particular, every PD pair for KK-theory is also a PD pair for HL-theory, but not conversely (due to e.g. torsion). However, in the following it will be important to consider distinct fundamental classes $\Xi\neq\operatorname{ch}(\Delta)$ in local cyclic cohomology. If $\alg,\balg$ obey the universal coefficient theorem (\[UCTKK\]) for KK-theory, then there is an isomorphism $${{\rm HL}}_\bullet(\alg,\balg) ~\cong~\operatorname{Hom}_{\mathbb{C}}\big(\operatorname{K}_\bullet(\alg)\otimes_{{{\mathbb Z}}}{\mathbb{C}}\,,\, \operatorname{K}_\bullet(\balg)\otimes_{{\mathbb Z}}{\mathbb{C}}\big) \ .$$ If the $\operatorname{K}$-theory $\operatorname{K}_\bullet(\alg)$ is finitely generated, then this is also equal to $${{\rm HL}}_\bullet(\alg,\balg) ~\cong~\operatorname{KK}_\bullet(\alg,\balg) \otimes_{{\mathbb Z}}{\mathbb{C}}\ .$$ Todd classes ------------ Let $\alg$ be a PD algebra with fundamental K-homology class $\Delta\in\operatorname{K}^d(\alg\otimes\alg^\op)$, and fundamental cyclic cohomology class $\Xi\in{{\rm HL}}^d(\alg\otimes\alg^\op)$. Then we define the *Todd class* of $\alg$ to be the element $${\Todd(\alg)~:=~\Xi^\vee\otimes_{\alg^\op}\operatorname{ch}(\Delta)}~\in~{{\rm HL}}_0(\alg,\alg) \ .$$ The Todd class is invertible with inverse given by $$\Todd(\alg)^{-1}\=(-1)^d~\operatorname{ch}\big(\Delta^\vee\big)\otimes_{\alg^\op}\Xi \ .$$ More generally, one defines the Todd class $\Todd(\alg)$ for PD pairs of algebras $(\alg,\balg)$ by replacing $\alg^\op$ with $\balg$ above [@BMRS1].
The Todd class depends “covariantly” on the choices of fundamental classes in the respective moduli spaces [@BMRS1]. For any other fundamental class $\Delta_1$ for $\operatorname{K}$-theory of $\alg$, one has $ \Xi^\vee\otimes_{\alg^\op}{\rm ch}(\Delta_1)= {\rm ch}(\ell)\otimes_{\alg} {\Todd}(\alg)$ where $\ell = \Delta^\vee\otimes_{\alg^\op}\Delta_1 $ is an invertible element in $\operatorname{KK}_0(\alg, \alg)$. Conversely, if $\ell$ is an invertible element in $\operatorname{KK}_0(\alg, \alg)$, then $\ell \otimes_\alg \Delta$ is a fundamental class for $\operatorname{K}$-theory of $\alg$ for any fundamental class $\Delta$. In particular, if $\alg,\balg$ are KK-equivalent $C^*$-algebras, with the KK-equivalence implemented by an invertible element $\alpha$ in $\operatorname{KK}_\bullet(\alg,\balg)$, then their Todd classes are related through $$\Todd(\balg)\=\operatorname{ch}(\alpha)^{-1}\otimes_\alg\Todd(\alg)\otimes_\alg\operatorname{ch}(\alpha) \ . \label{ToddKKrel}$$ The following example provides the motivation behind this definition. Let $\alg= C(X)$ where $X$ is a compact complex manifold. Then $\alg$ is a PD algebra, with KK-theory fundamental class $\Delta$ provided by the Dolbeault operator $\bar{\partial}$ on $X\times X$, and HL-theory fundamental class $\Xi$ provided by the orientation cycle $[X]$. By the universal coefficient theorem (\[UCTKK\]), one has an isomorphism ${{\rm HL}}_0(\alg,\alg)\cong\operatorname{End}\big(\operatorname{H}^\bullet(X,{{\mathbb Q}})\big)$. Then $\Todd(\alg)=\,\smile\,\Todd(X)$ is cup product with the usual Todd characteristic class $\Todd(X)\in\operatorname{H}^\bullet(X,{{\mathbb Q}})$ of the tangent bundle of $X$. Grothendieck-Riemann-Roch theorem {#sec:5.3} --------------------------------- Let $f:\alg\rightarrow\balg$ be a K-oriented morphism of separable $C^*$-algebras. The Grothendieck-Riemann-Roch formula compares the class $\operatorname{ch}(f_!)$ with the HL-theory orientation class $f_*$ in ${{\rm HL}}_d(\balg,\alg)$. If $\alg$, $\balg$ are PD algebras, then one has $d=d_\alg-d_\balg$ and $$\operatorname{ch}(f_!)\=\Todd(\balg)\otimes_\balg f_*\otimes_\alg\Todd(\alg)^{-1} \ . \label{NCGRR}$$ This formula is proven by expanding out both sides using the various definitions, along with associativity of the Kasparov intersection product [@BMRS1]. It leads to the commutative diagram $$\xymatrix{ \operatorname{K}_\bullet(\balg)~\ar[r]^{\!\!f_!} \ar[d]_{\operatorname{ch}\otimes_\balg\Todd(\balg)} & ~\operatorname{K}_{\bullet+d}(\alg)\ar[d]^{\operatorname{ch}\otimes_\alg\Todd(\alg)} \\ {{\rm HL}}_\bullet(\balg)~\ar[r]_{\!\!\!\!\!f_*} & ~ {{\rm HL}}_{\bullet+d}(\alg) }$$ generalizing eq. (\[GRRthm\]). As an example of the applicability of this formula, suppose that $\alg$ is unital with even degree fundamental class. Then there is a canonical K-oriented morphism $\lambda:{\mathbb{C}}\rightarrow\alg$, $z\mapsto z\cdot1$ which induces a homomorphism on K-theory $\lambda_!:\operatorname{K}_0(\alg)\rightarrow{{\mathbb Z}}$ with $$\lambda_!(\xi) \= \lambda_*\bigl(\operatorname{ch}(\xi)\otimes_\alg\Todd(\alg)\bigr) \label{lambdaK}$$ for $\xi\in\operatorname{K}_0(\alg)$. When $\alg=C(X)$, with $X$ a compact spin$^c$ manifold, then $\lambda_!(\xi)=\operatorname{index}(\Dirac_\xi)$ for $\xi\in\operatorname{K}^0(X)$ and eq. (\[lambdaK\]) is just the Atiyah-Singer index theorem (\[ASindexthm\]). Generally, when $\xi=\alg$ is the trivial rank one module over $\alg$, then $\lambda_!(\xi)$ defines a characteristic numerical invariant of $\alg$, which we may call the *Todd genus* of the algebra $\alg$. Isometric pairing formulas -------------------------- Suppose that $\alg$ is a PD algebra with *symmetric* fundamental classes $\Delta$ and $\Xi$, i.e., $\sigma(\Delta)^\op=\Delta$ in $\operatorname{K}^d(\alg\otimes\alg^\op)$, where $\sigma:\alg\otimes\alg^\op\rightarrow\alg^\op\otimes\alg$ is the flip involution $x\otimes y^\op\mapsto y^\op\otimes x$, and similarly for $\Xi$ in ${{\rm HL}}^d(\alg\otimes\alg^\op)$. In this case we can define a symmetric bilinear pairing on the K-theory of $\alg$ by $$(\alpha,\beta)_{\operatorname{K}}\,:=\,\bigl(\alpha\otimes\beta^\op\bigr)\otimes_{\alg\otimes\alg^\op}\Delta ~\in~ \operatorname{KK}_0({\mathbb{C}},{\mathbb{C}})\={{\mathbb Z}} \label{NCindexpairing}$$ for $\alpha,\beta\in\operatorname{K}_\bullet(\alg)$.
It coincides with the index pairing (\[indexpairing\]) when $\alg=C(X)$, for $X$ a spin$^c$ manifold with fundamental class given by the Dirac operator $\Delta=\Dirac\otimes\Dirac$, as then $$(\alpha,\beta)_{\operatorname{K}} \= \Dirac_\alpha \otimes_{C(X)} \beta \= \operatorname{index}(\Dirac_{\alpha\otimes\beta})$$ by definition of the intersection product on KK-theory. Similarly, one has a symmetric bilinear pairing on local cyclic homology given by $$(x,y)_{{{\rm HL}}}\,:=\,\bigl(x\otimes y^\op\bigr)\otimes_{\alg\otimes\alg^\op}\Xi ~\in~ {{\rm HL}}_0({\mathbb{C}},{\mathbb{C}})\={\mathbb{C}} \ , \label{HLpairing}$$ for $x,y\in{{\rm HL}}_\bullet(\alg)$, generalizing the pairing (\[cohpairing\]). If $\alg$ satisfies the universal coefficient theorem (\[UCTKK\]), then one has an isomorphism ${{\rm HL}}_\bullet(\alg,\alg)\cong\operatorname{End}\big({{\rm HL}}_\bullet(\alg)\big)$. If ${{\rm HL}}_\bullet(\alg)$ is a finite-dimensional vector space, $n:=\dim_{{\mathbb{C}}}\big({{\rm HL}}_\bullet(\alg)\big)<\infty$, then we may use the universal coefficient theorem to identify the Todd class $\Todd(\alg)$ with an invertible matrix in $GL(n,{\mathbb{C}})$. In this case the square root $\sqrt{\Todd(\alg)}$ may be defined using the usual Jordan normal form of linear algebra, and then reinterpreted as a class in ${{\rm HL}}_\bullet(\alg,\alg)$ again by using the universal coefficient theorem. This square root is not unique, but we assume that it is possible to fix a canonical choice. Under these circumstances, we can define the *modified Chern character* $$\operatorname{ch}\otimes_\alg\sqrt{\Todd(\alg)}\,:\,\operatorname{K}_\bullet(\alg)~\longrightarrow~{{\rm HL}}_\bullet(\alg) \ , \label{modch}$$ which is an isometry of the inner products (\[NCindexpairing\]) and (\[HLpairing\]) [@BMRS1; @BMRS2]. Suppose now that $\alg$, $\dalg$ represent noncommutative D-branes with $\alg$ as above, with a given K-oriented morphism $f:\alg\rightarrow\dalg$ and Chan-Paton bundle $\xi\in\operatorname{K}_\bullet(\dalg)$. In this case there is a noncommutative version of the Minasian-Moore formula (\[RRcharge\]) given by $${Q(\dalg,\xi,f)\=\operatorname{ch}\bigl(f_!(\xi)\bigr)\otimes_\alg\sqrt{\Todd(\alg)}}~\in~{{\rm HL}}_\bullet(\alg) \ . \label{NCRRcharge}$$ More generally, consider a D-brane in the noncommutative spacetime $\alg$ described by a Fredholm module over $\alg$ representing a K-homology class $\mu\in\operatorname{K}^\bullet(\alg)$. It has a “dual” charge given by $${Q(\mu)\=\sqrt{\Todd(\alg)}{\,}^{-1}\otimes_\alg\operatorname{ch}(\mu)}~\in~{{\rm HL}}^\bullet(\alg) \ .$$ This vector satisfies the isometry rule [@BMRS1] $$\Xi^\vee\otimes_{\alg\otimes\alg^\op}\big(Q(\mu)\otimes Q(\nu)^\op\big)\=\Delta^\vee\otimes_{\alg\otimes\alg^\op}\big(\mu\otimes\nu^\op\big) \ ,$$ and reproduces the noncommutative Minasian-Moore formula (\[NCRRcharge\]) in the case when $\mu=f_!(\xi)\otimes_\alg\Delta$ is dual to the Chan-Paton bundle $\xi$. Noncommutative D2-branes ======================== In this section we will apply our general formalism to the example of D-branes on noncommutative Riemann surfaces, as defined in refs. [@CHMM1; @Mathai1]. Consider a collection of D2-branes wrapping a compact, oriented Riemann surface $\Sigma_g$ of genus $g\geq1$ with a constant $B$-field. This example generalizes the classic example of D-branes on the noncommutative torus ${{\mathbb T}}_\theta^2$, obtained for $g=1$. The fundamental group of $\Sigma_g$ admits the presentation $$\Gamma_g \= \Bigl\{\mbox{$U_j, V_j\,,\, j=1, \ldots, g~ \Big|~\prod\limits_{j=1}^g\,[U_j, V_j] = 1$}\Bigr\} \ .$$ Its group cohomology is $\operatorname{H}^2\big(\Gamma_g\,,\,U(1)\big)\cong{\mathbb{R}}/{{\mathbb Z}}$, and so for each $\theta\in[0,1)$ there is a unique $U(1)$-valued two-cocycle $\sigma_\theta$ on $\Gamma_g$, representing the holonomy of the $B$-field on $\Sigma_g$.
The reduced twisted group $C^*$-algebra $\alg_\theta:=C_r^*(\Gamma_g,\sigma_\theta)$ is isomorphic to the algebra generated by unitary elements $U_j,V_j$, $j=1,\dots,g$ obeying the single relation $$\prod_{j=1}^g \,[U_j, V_j] \= \exp (2\pi \ii\theta) \ .$$ When $\theta$ is irrational, the degree $0$ K-theory is $\operatorname{K}_0(\alg_\theta)\cong\operatorname{K}^0(\Sigma_g)={{\mathbb Z}}^2$, with generators $e_0=[1]$ and $e_1$ satisfying ${\:{\rm Tr}\,(}e_1)=\theta$, where ${\:{\rm Tr}\,:}C_r^*(\Gamma_g,\sigma_\theta)\rightarrow{\mathbb{C}}$ is the evaluation at the identity element $1_{\Gamma_g}$ of $\Gamma_g$. The degree $1$ K-theory is given by $\operatorname{K}_1(\alg_\theta)\cong\operatorname{K}^1(\Sigma_g)={{\mathbb Z}}^{2g}$, with basis $U_j,V_j$. There is a smooth subalgebra $\alg_\theta^\infty\hookrightarrow\alg_\theta$ such that ${{\rm HL}}_\bullet(\alg_\theta)\cong{{\rm HL}}_\bullet(\alg_\theta^\infty) \cong\operatorname{HP}_\bullet(\alg_\theta^\infty)$ [@BMRS1; @BMRS2]. The algebra $\alg_\theta$ is a PD algebra, with Bott class given by $$\Delta^\vee\=e^{\phantom{\op}}_0\otimes e^\op_1 - e^{\phantom{\op}}_1 \otimes e^\op_0 + \sum_{j=1}^g\,\left(U_j^{\phantom{\op}} \otimes V^\op_j - V_j^{\phantom{\op}}\otimes U^\op_j\right) \ .$$ Let $\mu_\theta :\operatorname{K}^\bullet(\Sigma_g) \xrightarrow{\approx} \operatorname{K}_{\bullet}\big(C^*_r(\Gamma_g, \sigma_\theta)\big)$ be the twisted Kasparov isomorphism, and let $\nu_\theta$ be its analog in periodic cyclic homology. The commutative diagram of isomorphisms $$\xymatrix{ \operatorname{K}^\bullet(\Sigma_g) ~ \ar[r]^{\mu_\theta}\ar[d]_{\operatorname{ch}} & ~ \operatorname{K}_{\bullet}(\alg_\theta) \ar[d]^{\operatorname{ch}_{\Gamma_g}} \\ \operatorname{H}^\bullet(\Sigma_g,{{\mathbb Z}})~ \ar[r]_{\nu_\theta} & ~ {{\rm HL}}_{\bullet} (\alg_\theta) }$$ then serves to show that the Todd class is given by $\Todd(\alg_\theta)=\nu_\theta\big(\Todd(\Sigma_g)\big)$. 
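For $g=1$ and rational $\theta=p/q$ this relation is realised concretely by the familiar clock and shift matrices. The following quick numerical check (an added illustration with ad hoc values $p=2$, $q=5$) verifies the group-commutator relation $U\,V\,U^{-1}V^{-1}={\rm e}^{2\pi\ii\theta}\,1$:

```python
import numpy as np

q, p = 5, 2                          # ad hoc rational theta = p/q
omega = np.exp(2j * np.pi * p / q)   # e^{2 pi i theta}

# Clock and shift: U e_k = omega^k e_k,  V e_k = e_{k+1 mod q}
U = np.diag([omega ** k for k in range(q)])
V = np.roll(np.eye(q), 1, axis=0)

# U V = omega V U, hence the group commutator is a scalar:
comm = U @ V @ np.linalg.inv(U) @ np.linalg.inv(V)   # = omega * identity
```

At irrational $\theta$ no finite-dimensional representation of the relation exists (taking determinants forces ${\rm e}^{2\pi\ii\theta}$ to be a root of unity), which is one way to see that $\alg_\theta$ is then a genuinely infinite-dimensional noncommutative algebra.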
This construction thus leads to the charge vector for a wrapped noncommutative D2-brane $(\dalg,\xi,f)$, with K-oriented morphism $f:\alg_\theta\rightarrow\dalg$ and Chan-Paton bundle $\xi\in\operatorname{K}_\bullet(\dalg)$, defined by $${Q_\theta(\dalg,\xi,f) \= \nu_\theta\Big( \operatorname{ch}\big(\mu_\theta^{-1}\circ f_!(\xi)\big) \smile \sqrt{{\Todd}(\Sigma_g)}~\Big)}\in{{\rm HL}}_\bullet(\alg_\theta) \ .$$ This formula incorporates the contribution from the constant $B$-field in the usual way [@SW1; @Taylor1].

D-branes and ${{\boldsymbol {H} }}$-flux {#sec:7}
========================================

In this section we will consider in some detail the example of D-branes in a compact, even-dimensional oriented manifold $X$ with constant background Neveu-Schwarz $H$-flux. In this case, it is well-known [@Witten1; @BM1] that one should replace spacetime $X$ by a noncommutative $C^*$-algebra $CT(X,H)$, the stable continuous trace $C^*$-algebra with spectrum $X$ and Dixmier-Douady invariant $H$ [@Rosenberg1]. This algebra has the property that it is locally Morita equivalent to spacetime, but not in general globally equivalent to it.

Projective bundles and twisted K-theory {#sec:7.1}
---------------------------------------

We will start by describing twisted K-theory, the appropriate receptacle for the classification of D-brane charge in $H$-flux backgrounds, in the spirit of Atiyah and Segal [@AS1] (glossing over many topological details, as before). Let $\hil$ be a fixed, separable Hilbert space of dimension $\geq1$. We will denote the associated projective space of $\hil$ by ${{\mathbb P}}={{\mathbb P}}(\hil)$. It is compact if and only if $\hil$ is finite-dimensional. Let $PU=PU(\hil)=U(\hil)/U(1)$ be the projective unitary group of $\hil$ equipped with the compact-open topology. A *projective bundle over $X$* is a locally trivial bundle of projective spaces, i.e., a fibre bundle $P\to X$ with fibre ${{\mathbb P}}(\hil)$ and structure group $PU(\hil)$.
An application of the Banach-Steinhaus theorem shows that we may identify projective bundles with principal $PU(\hil)$-bundles (and the pointwise convergence topology on $PU(\hil)$). If $G$ is a topological group, let $G_X$ denote the sheaf of germs of continuous functions $X\to G$, i.e., the sheaf associated to the constant presheaf given by $U\mapsto F(U)=G$. Given a projective bundle $P\to X$ and a sufficiently fine good open cover $\{U_i\}_{i\in I}$ of $X$, the transition functions between trivializations $P|_{U_i}$ can be lifted to bundle isomorphisms $g_{ij}$ on double intersections $U_{ij}=U_i\cap U_j$ which are projectively coherent, i.e., over each of the triple intersections $U_{ijk}=U_i\cap U_j\cap U_k$ the composition $g_{ki}\,g_{jk}\,g_{ij}$ is given as multiplication by a $U(1)$-valued function $f_{ijk}:U_{ijk}\to U(1)$. The collection $\{(U_{ijk},f_{ijk})\}$ defines a $U(1)$-valued two-cocycle called a $B$-field on $X$, which represents a class $B_P$ in the sheaf cohomology group $\operatorname{H}^2(X,U(1)_X)$. On the other hand, the sheaf cohomology $\operatorname{H}^1(X,PU(\hil)_X)$ consists of isomorphism classes of principal $PU(\hil)$-bundles, and we can consider the isomorphism class $[P]\in\operatorname{H}^1(X,PU(\hil)_X)$. There is an isomorphism $\operatorname{H}^1(X,PU(\hil)_X)\xrightarrow{\approx}\operatorname{H}^2(X,U(1)_X)$ provided by the boundary map $[P]\mapsto B_P$. There is also an isomorphism $$\operatorname{H}^2\big(X\,,\,U(1)_X\big)~\xrightarrow{\approx}~ \operatorname{H}^3(X,{{\mathbb Z}}_X)\cong\operatorname{H}^3(X,{{\mathbb Z}}) \ .$$ The image $\delta(P)\in\operatorname{H}^3(X,{{\mathbb Z}})$ of $B_P$ is called the Dixmier-Douady invariant of $P$. When $\delta(P)=[H]$ is represented in $\operatorname{H}^3(X,{\mathbb{R}})$ by a closed three-form $H$ on $X$, called the $H$-flux of the given $B$-field $B_P$, we will write $P=P_H$.
One has $\delta(P)=0$ if and only if the projective bundle $P$ comes from a vector bundle $E\to X$, i.e., $P={{\mathbb P}}(E)$. By Serre’s theorem every torsion element of $\operatorname{H}^3(X,{{\mathbb Z}})$ arises from a finite-dimensional bundle $P$. Explicitly, consider the commutative diagram of exact sequences of groups given by $$\xymatrix{ 0 \ar[r] & \mathbb{Z}_n \ar[r] \ar[d] & SU(n) \ar[r] \ar[d] & PU(n) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & U(1) \ar[r] & U(n) \ar[r] & PU(n) \ar[r] & 0 }$$ \[SUnseq\] where we identify the cyclic group $\mathbb{Z}_n$ with the group of $n$-th roots of unity. Let $P$ be a projective bundle with structure group $PU(n)$, i.e., with fibres $\mathbb{P}(\mathbb{C}^n)$. Then the commutative diagram of long exact sequences of sheaf cohomology groups associated to the commutative diagram (\[SUnseq\]) of groups implies that the element $B_P\in \operatorname{H}^2(X,U(1)_X)$ comes from $\operatorname{H}^2(X,(\mathbb{Z}_n)_X)$, and therefore its order divides $n$. One also has $\delta(P_1\otimes P_2)=\delta(P_1)+\delta(P_2)$ and $\delta(P^\vee\,)=-\delta(P)$. This follows from the commutative diagram $$\xymatrix{ 0 \ar[r] & U(1) \times U(1) \ar[r] \ar[d] & U(\hil_1 ,\hil_2) \ar[r] \ar[d] & PU(\hil_1 ,\hil_2) \ar[r] \ar[d]& 0\\ 0 \ar[r] & U(1) \ar[r] & U(\hil_1 \otimes \hil_2) \ar[r] & PU(\hil_1 \otimes \hil_2) \ar[r] & 0 \ , }$$ and the fact that $P^\vee\otimes P={{\mathbb P}}(E)$ where $E$ is the vector bundle of Hilbert-Schmidt endomorphisms of $P$. Putting everything together, it follows that the cohomology group $\operatorname{H}^3(X,{{\mathbb Z}})$ is isomorphic to the group of stable equivalence classes of principal $PU(\hil)$-bundles $P\to X$ with the operation of tensor product. We are now ready to define the twisted K-theory of the manifold $X$ equipped with a projective bundle $P\to X$, such that $P_x={{\mathbb P}}(\hil)$ for all $x\in X$. We will first give a definition in terms of Fredholm operators, and then provide some equivalent, but more geometric definitions. Let $\hil$ be a ${{\mathbb Z}}_2$-graded Hilbert space.
We define $\operatorname{Fred}^0(\hil)$ to be the space of self-adjoint degree 1 Fredholm operators $T$ on $\hil$ such that $T^2-1\in\cK(\hil)$, together with the subspace topology induced by the embedding $\operatorname{Fred}^0(\hil)\hookrightarrow \balg(\hil)\times\cK(\hil)$ given by $T\mapsto (T,T^2-1)$ where the algebra of bounded linear operators $\balg(\hil)$ is given the compact-open topology and the Banach algebra of compact operators $\cK=\cK(\hil)$ is given the norm topology. Let $P=P_H\to X$ be a projective Hilbert bundle. Then we can construct an associated bundle $\operatorname{Fred}^0(P)$ whose fibres are $\operatorname{Fred}^0(\hil)$. We define the *twisted K-theory group of the pair $(X,P)$* to be $$\operatorname{K}^0(X,H)\=\big[\,\mbox{homotopy classes of sections $s:X\to\operatorname{Fred}^0(P)$}\,\big] \ .$$ \[TwistedKproj\] The group $\operatorname{K}^0(X,H)$ depends functorially on the pair $(X,P_H)$, and an isomorphism of projective bundles $\rho:P\to P'$ induces a group isomorphism $\rho_*:\operatorname{K}^0(X,H)\to\operatorname{K}^0(X,H'\,)$. Addition in $\operatorname{K}^0(X,H)$ is defined by fibrewise direct sum, so that the sum of two elements lies in $\operatorname{K}^0(X,H_2)$ with $[H_2]=\delta(P\otimes{{\mathbb P}}({\mathbb{C}}^2))=\delta(P)=[H]$. Under the isomorphism $\hil\otimes{\mathbb{C}}^2\cong\hil$, there is a projective bundle isomorphism $P\to P\otimes{{\mathbb P}}({\mathbb{C}}^2)$ for any projective bundle $P$ and so $\operatorname{K}^0(X,H_2)$ is canonically isomorphic to $\operatorname{K}^0(X,H)$. When $[H]$ is a non-torsion element of $\operatorname{H}^3(X,{{\mathbb Z}})$, so that $P=P_H$ is an infinite-dimensional bundle of projective spaces, then the index map $\operatorname{K}^0(X,H)\to{{\mathbb Z}}$ is zero, i.e., any section of $\operatorname{Fred}^0(P)$ takes values in the index zero component of $\operatorname{Fred}^0(\hil)$. Let us now describe some other models for twisted K-theory which will be useful in our physical applications later on.
A definition in algebraic K-theory may be given as follows. A bundle of projective spaces $P$ yields a bundle $\operatorname{End}(P)$ of algebras. However, if $\hil$ is an infinite-dimensional Hilbert space, then one has natural isomorphisms $\hil\cong\hil\oplus\hil$ and $$\operatorname{End}(\hil)~\cong~\operatorname{Hom}(\hil\oplus\hil,\hil)~\cong~\operatorname{End}(\hil)\oplus \operatorname{End}(\hil)$$ as left $\operatorname{End}(\hil)$-modules, and so the algebraic K-theory of the algebra $\operatorname{End}(\hil)$ is trivial. Instead, we will work with the Banach algebra $\cK(\hil)$ of compact operators on $\hil$ with the norm topology. Given that the unitary group $U(\hil)$ with the compact-open topology acts continuously on $\cK(\hil)$ by conjugation, to a given projective bundle $P_H$ we can associate a bundle of compact operators $\bun_H\to X$ given by $$\bun_H=P_H\times_{PU}\cK$$ with $\delta(\bun_H)=[H]$. The Banach algebra $\alg_H:=C_0(X,\bun_H)$ of continuous sections of $\bun_H$ vanishing at infinity is the continuous trace $C^*$-algebra $CT(X,H)$ [@Rosenberg1]. Then the twisted K-theory group $\operatorname{K}^\bullet(X,H)$ of $X$ is canonically isomorphic to the algebraic K-theory group $\operatorname{K}_\bullet(\alg_H)$. We will also need a smooth version of this definition. Let $\alg_H^\infty$ be the smooth subalgebra of $\alg_H$ given by the algebra $CT^\infty(X,H)=C^\infty(X,{{\mathcal L}}_{P_H}^1)$, where ${{\mathcal L}}_{P_H}^1=P_H\times_{PU}{{\mathcal L}}^1$. Then the inclusion $CT^\infty(X,H)\hookrightarrow CT(X,H)$ induces an isomorphism $\operatorname{K}_\bullet\big(CT^\infty(X,H)\big)\xrightarrow{\approx}\operatorname{K}_\bullet\big(CT(X,H)\big)$ of algebraic K-theory groups.
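As a consistency check (standard facts; this aside is ours, not from the text), for trivial flux this machinery collapses to ordinary K-theory:

```latex
% H = 0: the bundle of compact operators is trivial,
\bun_0 \= X\times\cK \ , \qquad
\alg_0 \= C_0(X,\bun_0) \;\cong\; C_0(X)\otimes\cK \ ,
% which is Morita equivalent to C_0(X); by stability of algebraic K-theory,
\operatorname{K}_\bullet(\alg_0) \;\cong\;
\operatorname{K}_\bullet\big(C_0(X)\big) \= \operatorname{K}^\bullet(X) \ .
```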
Upon choosing a bundle gerbe connection [@Murray1; @BCMMS1], one has an isomorphism $\operatorname{K}_\bullet\big(CT^\infty(X,H)\big)\cong\operatorname{K}^\bullet(X,H)$ with the twisted K-theory (\[TwistedKproj\]) defined in terms of projective Hilbert bundles $P=P_H$ over $X$. Finally, we propose a general definition based on K-theory with coefficients in a sheaf of rings. It parallels the bundle gerbe approach to twisted K-theory [@BCMMS1]. Let $\balg$ be a Banach algebra over ${\mathbb{C}}$. Let $\ecat(\balg,X)$ be the category of continuous $\balg$-bundles over $X$, and let $C(X,\balg)$ be the sheaf of continuous maps $X\to \balg$. The ring structure in $\balg$ equips $C(X,\balg)$ with the structure of a sheaf of rings over $X$. We can therefore consider left (or right) $C(X,\balg)$-modules, and in particular the category $\lfcat\big(C(X,\balg)\big)$ of locally free $C(X,\balg)$-modules. Using the section functor in the usual way, for $X$ compact there is an equivalence of additive categories $$\ecat(\balg,X)~\xrightarrow{\approx}~\lfcat\big(C(X,\balg)\big) \ .$$ \[elfcatequiv\] Since these are both additive categories, we can apply the Grothendieck functor to each of them and obtain the abelian groups $\operatorname{K}(\lfcat(C(X,\balg)))$ and $\operatorname{K}(\ecat(\balg,X))$. The equivalence of categories (\[elfcatequiv\]) ensures that there is a natural isomorphism of groups $$\operatorname{K}\big(\lfcat\big(C(X,\balg)\big)\big)~\xrightarrow{\approx}~\operatorname{K}\big(\ecat(\balg,X)\big) \ .$$ \[elfGrothiso\] This motivates the following general definition. If $\alg$ is a sheaf of rings over $X$, then we define the *K-theory of $X$ with coefficients in $\alg$* to be the abelian group $$\operatorname{K}(X,\alg):=\operatorname{K}\big(\lfcat(\alg)\big) \ .$$ For example, consider the case $\balg={\mathbb{C}}$. Then $C(X,{\mathbb{C}})$ is just the sheaf of continuous functions $X\to{\mathbb{C}}$, while $\ecat({\mathbb{C}},X)$ is the category of complex vector bundles over $X$.
Using the isomorphism of K-theory groups (\[elfGrothiso\]) we then have $$\operatorname{K}\big(X\,,\,C(X,{\mathbb{C}})\big)~:=~ \operatorname{K}\big(\lfcat\big(C(X,{\mathbb{C}})\big)\big)~\cong~ \operatorname{K}\big(\ecat({\mathbb{C}},X)\big)\= \operatorname{K}^0(X) \ .$$ The definition of twisted K-theory uses another special instance of this general construction. For this, we define an *Azumaya algebra over $X$ of rank $m$* to be a locally trivial algebra bundle over $X$ with fibre isomorphic to the algebra ${{\mathbb M}}_m(\mathbb{C})$ of $m \times m$ complex matrices. An example is the algebra $\operatorname{End}(E)$ of endomorphisms of a complex vector bundle $E \rightarrow X$. We can define an equivalence relation on the set $A(X)$ of Azumaya algebras over $X$ in the following way. Two Azumaya algebras $A$, $A{'}$ are called equivalent if there are vector bundles $E$, $E{'}$ over $X$ such that the algebras $A \otimes \operatorname{End}(E)$, $A{'} \otimes \operatorname{End}(E{'}\,)$ are isomorphic. Then every Azumaya algebra of the form $\operatorname{End}(E)$ is equivalent to the algebra of functions $C(X)$ on $X$. The set of all equivalence classes is a group under the tensor product of algebras, called the *Brauer group of $X$* and denoted $\Br(X)$. By Serre’s theorem there is an isomorphism $$\delta \,:\, \Br(X) ~\xrightarrow{\approx}~ \operatorname{tor}\big(\operatorname{H}^3(X,\mathbb{Z})\big) \ ,$$ where $\operatorname{tor}(\operatorname{H}^3(X,\mathbb{Z}))$ is the torsion subgroup of $\operatorname{H}^3(X,\mathbb{Z})$. For an explicit cocycle description of the Dixmier-Douady invariant $\delta (A)$ for an Azumaya algebra $A$, see ref. [@Kapustin1].
If $A$ is an Azumaya algebra bundle, then the space of continuous sections $C(X,A)$ of $A$ is a ring and we can consider the algebraic K-theory group $\operatorname{K}(A):=\operatorname{K}_0(C(X,A))$ of equivalence classes of projective $C(X,A)$-modules, which depends only on the equivalence class of $A$ in the Brauer group [@DoKa1]. Under the equivalence (\[elfcatequiv\]), we can represent the Brauer group $\Br(X)$ as the set of isomorphism classes of sheaves of Azumaya algebras. Let $\alg$ be a sheaf of Azumaya algebras, and $\lfcat(\alg)$ the category of locally free $\alg$-modules. Then as above there is an isomorphism $$\operatorname{K}\big(X\,,\,C(X,\alg)\big)\cong\operatorname{K}\big({\rm Proj}\big(C(X,\alg)\big)\big) \ ,$$ where ${\rm Proj}(C(X,\alg))$ is the category of finitely-generated projective $C(X,\alg)$-modules. The group on the right-hand side is the group $\operatorname{K}(A)$. For given $[H]\in\operatorname{tor}(\operatorname{H}^3(X,{{\mathbb Z}}))$ and $A\in\Br(X)$ such that $\delta(A)=[H]$, this group can be identified as the twisted K-theory group $\operatorname{K}^0(X,H)$ of $X$ with twisting $A$. This definition is equivalent to the description in terms of bundle gerbe modules, and from this construction it follows that $\operatorname{K}^0(X,H)$ is a subgroup of the ordinary K-theory of $X$. If $\delta(A)=0$, then $A$ is equivalent to $C(X)$ and we have $\operatorname{K}(A):=\operatorname{K}_0(C(X))=\operatorname{K}^0(X)$. The projective $C(X,A)$-modules over a rank $m$ Azumaya algebra $A$ are vector bundles $E \rightarrow X$ with fibre $\mathbb{C}^{n\,m} \cong (\mathbb{C}^m)^{\oplus n}$, which is naturally an ${{\mathbb M}}_m(\mathbb{C})$-module. This is a projective module and all projective $C(X,A)$-modules arise in this way [@Kapustin1]. We will now describe the connection to twisted cohomology, following refs. [@BCMMS1; @MaSt1].
Upon choosing a bundle gerbe connection, one has an isomorphism of ${{\mathbb Z}}_2$-graded cohomology groups $$\operatorname{HP}_\bullet\big(CT^\infty(X,H)\big)~\cong~\operatorname{H}^\bullet(X,H)\=\operatorname{H}^\bullet\big(\Omega^\bullet(X)\,,\,\dd-H\wedge\big)$$ where the right-hand side is the $H$-twisted cohomology of $X$. The Chern-Weil representative, in terms of differential forms on $X$, of the canonical Connes-Chern character $$\operatorname{ch}\,:\,\operatorname{K}_\bullet \big( {CT}^\infty(X, H)\big) ~ \longrightarrow~ \operatorname{HP}_\bullet \big( {CT}^\infty(X, H)\big)$$ then leads to the twisted Chern character $${\operatorname{ch}}_H \,:\, \operatorname{K}^\bullet(X, H) ~\longrightarrow~ \operatorname{H}^\bullet (X, H) \ .$$

Isometric pairing formulas {#sec:7.2}
--------------------------

The Clifford algebra bundle $\Cl(T^*X)$ is an Azumaya algebra over $X$ with Dixmier-Douady invariant $\delta\big(\Cl(T^*X)\big)=w_3(X)$, the third Stiefel-Whitney class of the tangent bundle of $X$ [@Plymen1]. Consider the algebra $$\balg_H~:=~CT\big(X\,,\,w_3(X)-H\big)~\cong~ C_0\big(X\,,\,\bun_{-H}\otimes\Cl(T^*X)\big) \ .$$ Then $(\alg_H,\balg_H)$ is a PD pair with fundamental class $\Delta=\Dirac\otimes\Dirac$ [@BMRS2; @Tu1]. The restriction of the algebra $\alg_H\otimes\balg_H$ to the diagonal of $X\times X$ is isomorphic to the algebra $CT\big(X\,,\,w_3(X)\big)\otimes\cK$, which is Morita equivalent to the algebra of continuous sections $C_0\big(X\,,\,\Cl(T^*X)\big)$.
Under the isomorphism $\operatorname{K}^0\big(X\,,\,w_3(X)\big)\cong\operatorname{K}_0\big(C_0(X,\Cl(T^*X))\big)$, the tensor product of projective bundles defines a bilinear pairing on twisted K-theory groups given by $$\operatorname{K}^\bullet(X,H)\otimes\operatorname{K}^\bullet\big(X\,,\,w_3(X)-H\big)~ \longrightarrow~\operatorname{K}^0\big(X\,,\,w_3(X)\big)~\xrightarrow{\operatorname{index}}~{{\mathbb Z}}\ .$$ On the other hand, since the torsion class $w_3(X)$ is trivial in de Rham cohomology, there is an isomorphism $\operatorname{H}^\bullet\big(X\,,\,w_3(X)\big)\cong\operatorname{H}^\bullet(X,{\mathbb{R}})$ and hence the cup product defines a bilinear pairing on twisted cohomology groups via the mapping $$\operatorname{H}^\bullet(X,H)\otimes\operatorname{H}^\bullet\big(X\,,\,w_3(X)-H\big)~ \longrightarrow~\operatorname{H}^{\rm even}(X,{\mathbb{R}}) \ .$$ The fundamental cyclic cohomology class $\Xi$ of the PD pair $(\alg_H,\balg_H)$ may thus be identified with the orientation cycle $[X]$. In this case the Todd class $\Todd(\alg_H)$ may be identified with the Atiyah-Hirzebruch genus $\widehat{A}(X)$ of the tangent bundle $TX$, and the modified Chern character (\[modch\]) is $\operatorname{ch}_H\wedge\sqrt{\widehat{A}(X)}$. Note that when $X$ is a spin$^c$ manifold, then $w_3(X)=0$ and the algebra $C_0\big(X\,,\,\Cl(T^*X)\big)$ is Morita equivalent to $C(X)$ [@Plymen1]. In this instance $\balg_H=CT(X,-H)=\alg_H^\op$ is the opposite algebra of $\alg_H$, and the restriction of $\alg_H\otimes\balg_H$ to the diagonal of $X\times X$ is stably isomorphic to the algebra of functions $C(X)$. 
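Spelling this out cohomologically (our own sketch, assembled from the modified Chern character $\operatorname{ch}_H\wedge\sqrt{\widehat{A}(X)}$; each twisted class contributes one factor of $\sqrt{\widehat{A}(X)}$):

```latex
% Pairing alpha in K^bullet(X,H) with beta in K^bullet(X, w_3(X)-H):
\operatorname{index}
  \;=\; \int_X \operatorname{ch}_H(\alpha)\wedge\sqrt{\widehat{A}(X)}
        \,\wedge\,
        \operatorname{ch}_{w_3(X)-H}(\beta)\wedge\sqrt{\widehat{A}(X)}
  \;=\; \int_X \operatorname{ch}_H(\alpha)\wedge
        \operatorname{ch}_{w_3(X)-H}(\beta)\wedge\widehat{A}(X) \ .
% For H = 0 and X spin^c this reduces to the usual Atiyah-Singer
% index pairing between K-theory and K-homology.
```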
Twisted K-cycles and Ramond-Ramond charges {#sec:7.3}
------------------------------------------

If spacetime $X$ is a spin manifold, then any D-brane $(W,E,f)$ in $X$ determines a canonical element [@CW1] $$f_!\in\operatorname{KK}_d\big(CT(W,f^*[H]+w_3(\nu_W))\,,\,CT(X,H)\big) \ .$$ Since $w_3(\nu_W)=w_3(W)$ in this case [@Witten1; @FW1], we may identify the D-brane algebra $\dalg=CT\big(W\,,\,f^*[H]+w_3(W)\big)$ and the corresponding Chan-Paton bundle is an element $E\in\operatorname{K}^0\big(W\,,\,f^*[H]+w_3(W)\big)$. There are two particularly interesting special classes of such twisted D-branes. The first class is determined by the usual requirement that the worldvolume $W$ be a spin$^c$ manifold, as in the ordinary Baum-Douglas construction. This instance was first considered in ref. [@MaSi1]. Then $w_3(W)=0$, the algebra $\dalg$ is the restriction of $\alg_H$ to $W$, and $E\in\operatorname{K}^0(W,f^*[H])$. The geometric K-homology equivalence relations are then completely analogous to those of the untwisted case in Section \[sec:2.1\] [@MaSi1]. When the $H$-flux defines a non-torsion element in $\operatorname{H}^3(X,{{\mathbb Z}})$, the Chan-Paton bundle $E$ is a projective bundle of infinite rank, corresponding to an infinite number of wrapped branes on $W$. When $H$ defines an $n$-torsion element, then the $B$-field $B_{P_H}$ incorporates the contribution from the ${{\mathbb Z}}_n$-valued ’t Hooft flux necessary for anomaly cancellation on the finite system of $n$ spacetime-filling branes and antibranes in the $H$-flux background [@Kapustin1]. The second class is in some sense opposite to the first one, and it is more physical in that it is tied to the Freed-Witten anomaly cancellation condition [@FW1] $$f^*[H]+w_3(W)\=0 \ .$$ \[FWcanc\] In this case $E\in\operatorname{K}^0(W)$ and the D-brane algebra $\dalg$ is (stably) commutative.
The mathematical meaning of this limit is that it makes the worldvolume $W$ into a “twisted spin$^c$” manifold, which may be defined precisely as follows. By Kuiper’s theorem, the unitary group $U(\hil)$ of an infinite-dimensional Hilbert space $\hil$ is contractible (both in the norm and compact-open topologies). Thus the projective unitary group $PU(\hil)$ has the homotopy type of an Eilenberg-MacLane space $K(\mathbb{Z},2)$, and its classifying space $BPU(\hil)$ is an Eilenberg-MacLane space $K(\mathbb{Z},3)$. It follows that any element of $\operatorname{H}^3(X,\mathbb{Z})$ corresponds to a map $F : X \rightarrow BPU(\hil)$, and hence to the projective bundle which is the pullback by $F$ of the universal bundle over $BPU(\hil)$. Hence $K(\mathbb{Z},3)$ is a classifying space for the third cohomology, $$\operatorname{H}^3(X,\mathbb{Z}) \cong \big[X\,,\,K(\mathbb{Z},3)\big] \ ,$$ and so we can represent an $H$-flux by a continuous map $H: X \rightarrow K(\mathbb{Z},3)$. Taking a universal $K(\mathbb{Z},2)$-bundle over $K(\mathbb{Z},3)$ and pulling it back through $H$ to $X$, we get a $K(\mathbb{Z},2)$-bundle $P_H$ over $X$. Consider the $K(\mathbb{Z},2)$-bundle $$\begin{aligned} & BU(1) & ~\longrightarrow ~BSpin^c~ \longrightarrow~ BSO \\ & \parallel & \\ & K(\mathbb{Z},2) &\end{aligned}$$ with classifying map $\beta\circ w_2 : BSO \rightarrow BBU(1)=K(\mathbb{Z},3)$, the Bockstein homomorphism of the second Stiefel-Whitney class. The action of $K(\mathbb{Z},2)$ on $BSpin^c$ induces a principal $BSpin^c$-bundle $Q=P_H\times_{K(\mathbb{Z},2)}BSpin^c$, i.e., a sequence of bundles $Q_n=P_H\times_{K(\mathbb{Z},2)}BSpin^c(n)$, with corresponding universal bundles $UQ_n=(P_H\times_{K(\mathbb{Z},2)}ESpin^c(n)) \times_{Spin^c(n)}\mathbb{R}^n$. The homotopy groups of the associated Thom spectrum $${\rm Thom}(UQ)=P_+\,\bigwedge_{K(\mathbb{Z},2)_+}\,MSpin^c$$ are the $H$-twisted spin$^c$ bordism groups of $X$.
Using this one can deduce that a compact manifold $W$ is $H$-twisted $\operatorname{K}$-oriented if it is an oriented manifold with a continuous map $f : W \rightarrow X$ such that the Freed-Witten condition (\[FWcanc\]) holds. We say that a pair $(W,f)$, with $W$ a compact oriented manifold and $f: W \rightarrow X$ a continuous map, is $H$-twisted spin$^c$ if it satisfies this cancellation. A choice of $H$-twisted spin$^c$ structure is a choice of a two-cochain $c$ such that, at the cochain level, $\delta(c)=\beta\circ w_2(W)-f^*[H]$. This follows from the following geometric fact. Let $\alpha : P \rightarrow P$ be an automorphism of a projective bundle $P\to X$ with infinite-dimensional separable fibres. It induces a line bundle $L_{\alpha} \rightarrow X$. For $x \in X$, the non-zero elements of $(L_{\alpha })_x$ are the linear isomorphisms $E_x \rightarrow E_x$ which induce $\alpha |_{P_x}$, where $P_x=\mathbb{P}(E_x)$. Then the assignment $\operatorname{Aut}(P) \rightarrow \operatorname{H}^2(X,\mathbb{Z}), \, \alpha \mapsto [L_{\alpha }]$ identifies the group of connected components $\pi_0(\operatorname{Aut}(P))$ with the group $\operatorname{H}^2(X,\mathbb{Z})$ of isomorphism classes of line bundles over $X$. This follows from the identification of the automorphism group $\operatorname{Aut}(P)$ of the bundle $P$ with the space of sections of the endomorphism bundle $\operatorname{End}(P)$, i.e., the space of maps $X \rightarrow PU(\hil)$, which is an Eilenberg-MacLane space $K(\mathbb{Z},2)$. For a more extensive treatment of these issues, see refs. [@Dou1; @Wang1]. This leads us to the following notion. Let $(W,f)$ be a manifold (not necessarily $H$-twisted spin$^c$). A vector bundle $V \rightarrow W$ is said to be an *$H$-twisted spin$^c$ vector bundle* if $f^*[H]= w_3(V)$. The choice of a specific $H$-twisted spin$^c$ structure on $V$ is made as above by choosing an appropriate two-cochain.
The notion of an $H$-twisted spin$^c$ manifold is just the special case $V=TW$ of this latter one. The analogs of the Baum-Douglas gauge equivalence relations for geometric twisted K-homology may be straightforwardly written down in the obvious way using projective Hilbert bundles instead of vector bundles. In the construction of the unit sphere bundle (\[unitspherebun\]), we assume that $w_3(\,\widehat{W}\,)=\pi^*(f^*[H])$. Then $(\,\widehat{W},f\circ\pi)$ is an $H$-twisted spin$^c$ manifold. The rest of the construction proceeds by using the untwisted Thom class $H(F)\in\operatorname{K}^i(\,\widehat{W}\,)$. See ref. [@Wang1] for the relation to a description involving bundle gerbe modules. There are more general twistings one may consider which are still physically meaningful. Suppose that $[H]\in \mathbb{Z}_n \subset \operatorname{H}^3(X,\mathbb{Z})$ and fix an element $y \in \operatorname{H}^2(X,\mathbb{Z}_n)$. Then we may consider bordism of manifolds $(W,f)$, where the worldvolume $W$ is a compact oriented manifold and $f:W\rightarrow X$ is a continuous map satisfying $$f^*[H]\= w_3(W)+f^*\big(\beta(y)\big) \ ,$$ \[genFWcond\] with $\beta$ the Bockstein homomorphism. The condition (\[genFWcond\]) is the most general form of the Freed-Witten anomaly cancellation condition for a system of $n$ spacetime-filling brane-antibrane pairs [@Kapustin1]. With this more general kind of twisting, one can also consider bordism of manifolds $(W,f)$, where $W$ is a compact spin$^c$ manifold as before and $f:W\rightarrow X$ is a continuous map satisfying $f^*[H]=f^*(\beta(y))$. The equivalences between the various forms of the geometric twisted K-homology group $\operatorname{K}_\bullet(X,H)$ follow from the equivalences among the corresponding twisted K-theories.
In any of these cases, one arrives at the twisted D-brane charge vector $${Q_H(W,E,f)\=\operatorname{ch}_H\big(f_!(E)\big)\wedge\sqrt{\widehat{A}(X)}}\in\operatorname{H}^\bullet(X,H) \ .$$ Only when $[H]$ is a torsion class does the Ramond-Ramond charge correspond to an element of the ordinary (untwisted) cohomology of the spacetime manifold $X$.

Correspondences and T-duality {#sec:8}
=============================

In this final section we shall apply our formalism to a new description of topological open string T-duality [@BMRS1; @BMRS2]. The description is based on the formulation of KK-theory in terms of correspondences [@CSk1; @BW1; @Cuntz1]. Amongst other things, this leads to an explicit construction of the various structures inherent in Kasparov’s bivariant K-theory, and moreover admits a natural noncommutative generalization [@BMRS2].

Correspondences {#sec:8.1}
---------------

Let $X,Y$ be smooth manifolds, and set $\operatorname{KK}_d(X,Y):=\operatorname{KK}_d\big(C_0(X)\,,\,C_0(Y)\big)$. Elements of the group $\operatorname{KK}_d(X,Y)$ can be represented by *correspondences* $$\xymatrix{ &(Z,E)\ar[ld]_f\ar[rd]^g& \\ X & & Y }$$ where $Z$ is a smooth manifold, $E$ is a complex vector bundle over $Z$, the map $f:Z\to X$ is smooth and proper, $g:Z\to Y$ is a smooth K-oriented map, and $d=\dim(Z)-\dim(Y)$. This diagram defines a morphism $$g_!\big(f^*(-)\otimes E\big)\in \operatorname{Hom}\big(\operatorname{K}^\bullet(X)\,,\,\operatorname{K}^{\bullet+d}(Y)\big)$$ implemented by the KK-theory class $[f]\otimes_{C_0(Z)}[[E]]\otimes_{C_0(Z)}(g!)$, where $[[E]]$ is the KK-theory class in $\operatorname{KK}_0(Z,Z)\cong\operatorname{End}\big(\operatorname{K}^\bullet(Z)\big)$ of the vector bundle $E$ defined by tensor product with the K-theory class $[E]$ of $E$ (this ignores the extension term in the universal coefficient theorem (\[UCTKK\])). The collection of all correspondences forms an additive category under disjoint union.
The group $\operatorname{KK}_d(X,Y)$ is then obtained as the quotient space of the set of correspondences by the equivalence relation generated by suitable notions of cobordism, direct sum and vector bundle modification, analogous to those of Section \[sec:2.1\] [@BMRS2]. The correspondence picture of KK-theory gives a somewhat more precise realization of the notion, introduced categorically in Section \[sec:3.3\], of Kasparov bimodules as “generalized” morphisms of $C^*$-algebras. It provides a geometric presentation of the analytic index for families of elliptic operators on $X$ parametrized by $Y$. The limiting case $\operatorname{KK}_d(X,\pt)=\operatorname{K}_d(X)$ is the geometric K-homology of $X$ as described in Section \[sec:2\], since in this case a correspondence is simply a Baum-Douglas K-cycle $(Z,E,f)$ over $X$. On the other hand, the group $\operatorname{KK}_d(\pt,Y)=\operatorname{K}^d(Y)$ is the K-theory of $Y$, obtained via an ABS-type construction of the charge of the D-brane $(Z,E,g)$ in $Y$ using the spin$^c$ structure on the bundle $TZ\oplus g^*(TY)$. One of the great virtues of this formalism is that it gives an explicit description of the intersection product in KK-theory, which as mentioned in Section \[sec:3.3\] is notoriously difficult to define. In the notation above it is a map $$\otimes_M\,:\,\operatorname{KK}(X,M)\times\operatorname{KK}(M,Y)~\longrightarrow~ \operatorname{KK}(X,Y)$$ which sends two correspondences $$\xymatrix{ &(Z_1,E_1)\ar[ld]_f\ar[rd]^{g_M}& & (Z_2,E_2)\ar[ld]_{f_M}\ar[rd]^{g} & \\ X & & M & & Y }$$ to the correspondence $$[Z,E]\=[Z_1,E_1]\otimes_M[Z_2,E_2]$$ with $Z=Z_1\times_M Z_2$ and $E=E_1\boxtimes E_2$. To ensure that the fibred product $Z$ is a smooth manifold, one has to impose the transversality condition $$\dd f_M(T_{z_2} Z_2) + \dd g_M(T_{z_1} Z_1) \= T_{f_M(z_2)} M$$ for all $(z_1,z_2)\in Z_1\times Z_2$. 
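A minimal transverse instance (our own illustration) is $M={\rm pt}$: the right-hand side of the transversality condition is the zero vector space, so the condition holds automatically and the intersection product reduces to the external product:

```latex
% M = pt: T_{f_M(z_2)} M = 0, so transversality is vacuous, and
Z \= Z_1 \times_{\rm pt} Z_2 \= Z_1\times Z_2 \ , \qquad
E \= E_1\boxtimes E_2 \ ,
% recovering the external product
\otimes_{\rm pt} \,:\, \operatorname{KK}(X,{\rm pt})\times
  \operatorname{KK}({\rm pt},Y)
  ~\longrightarrow~ \operatorname{KK}(X,Y) \ ,
% i.e. the pairing of a K-homology cycle on X with a K-theory class of Y.
```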
Such choices can always be straightforwardly made using standard transversality theorems and homotopy invariance of the KK-functor, such that this restricted set of correspondences is in a sense “dense” in the space of all correspondences [@CSk1].

T-duality and KK-equivalence {#sec:8.2}
----------------------------

The correspondence picture is reminiscent of the Fourier-Mukai transform, which is related to T-duality on spacetimes compactified on tori $X=M \times {{\mathbb T}}^n$ in the absence of a background $H$-flux. In this case the T-dual is topologically the same space $M \times \widehat{{\mathbb T}}^n$, and the mechanism implementing the T-duality is given by the smooth analog of the Fourier-Mukai transform [@Hori1]. Let ${{\mathbb T}}^n$ be an $n$-torus, and let $\widehat{{{\mathbb T}}}{}^n\cong\Pic^0({{\mathbb T}}^n)$ be the corresponding dual $n$-torus. Recall that the Poincaré line bundle $\Poin_0\rightarrow{{\mathbb T}}^n\times\widehat{{{\mathbb T}}}{}^n$ is the unique line bundle such that $\Poin_0\big|_{{{\mathbb T}}^n\times\{\,\widehat{t}~\}}\in\Pic^0({{\mathbb T}}^n)$ is the flat line bundle corresponding to $\widehat{t}\in\widehat{{{\mathbb T}}}{}^n$ and whose restriction $\Poin_0\big|_{\{0\}\times{{\widehat{{{\mathbb T}}}{}}}{}^n}$ is trivial. This data defines a diagram $$\xymatrix{ &\big(M\times{{\mathbb T}}^n\times{{\widehat{{{\mathbb T}}}{}}}{}^n\,,\,\Poin\big) \ar[ld]_{p_1}\ar[rd]^{p_2}& \\ M\times{{\mathbb T}}^n & & M\times{{\widehat{{{\mathbb T}}}{}}}{}^n }$$ where $p_1,p_2$ are canonical projections and $\Poin$ is the pullback of the Poincaré line bundle to $M\times{{\mathbb T}}^n\times{{\widehat{{{\mathbb T}}}{}}}{}^n$.
The smooth analog of the Fourier-Mukai transform is the isomorphism of K-theory groups $$T_!\,:\,\operatorname{K}^\bullet\big(M\times{{\mathbb T}}^n\big)~\xrightarrow{\approx}~ \operatorname{K}^{\bullet+n}\big(M\times{{\widehat{{{\mathbb T}}}{}}}{}^n\big)$$ given by $$T_!(-)\=(p_2)_!\big(p_1^*(-)\otimes\Poin\big) \ .$$ We conclude that *topological open string T-duality* is a correspondence. In this case, the correspondence represents an invertible element of KK-theory, i.e., a KK-equivalence. The Fourier-Mukai transform can be rephrased in a satisfactory manner, entirely in terms of noncommutative geometry, as a crossed product algebra ${C}_0(M\times {{\mathbb T}}^n)\rtimes {\mathbb{R}}^n$, where the action of the group ${\mathbb{R}}^n$ on ${C}_0(M\times {{\mathbb T}}^n)$ is just the given action of ${\mathbb{R}}^n$ on ${{\mathbb T}}^n$ by translations and the trivial action on $M$. By Rieffel’s version of the Mackey imprimitivity theorem [@Rieffel1], one sees that the crossed product $C^*$-algebra ${C}_0(M\times {{\mathbb T}}^n)\rtimes {\mathbb{R}}^n$ is Morita equivalent to $$C_0(M)\otimes C^*({\mathbb{R}}^n) \cong C_0\big(M\times{{\widehat{{{\mathbb T}}}{}}}{}^n\big) \ .$$ Thus the T-dual of the $C^*$-algebra ${C}_0(M\times {{\mathbb T}}^n)$ is obtained by taking the crossed product of the algebra with ${\mathbb{R}}^n$. The Connes-Thom isomorphism then defines a *KK-equivalence* $$\alpha~\in~\operatorname{KK}_n\big(M\times{{\mathbb T}}^n\,,\,M\times{{\widehat{{{\mathbb T}}}{}}}{}^n\big)$$ which is just the families Dirac operator. Moreover, Takai duality gives a Morita equivalence $$\big(C_0(M\times{{\mathbb T}}^n)\rtimes{\mathbb{R}}^n\big)\rtimes{\mathbb{R}}^n\sim C_0(M\times{{\mathbb T}}^n) \ ,$$ showing that the T-duality transformation is topologically of order 2. 
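At the level of Chern characters the transform $T_!$ integrates over the torus fibre against the exponentiated Poincaré class; the following is our own sketch for $n=1$, up to orientation conventions, assuming the standard normalization $\operatorname{ch}(\Poin)=e^{\,\dd t\wedge\dd\hat{t}}$ with $t,\hat{t}$ the fibre coordinates:

```latex
\operatorname{ch}\big(T_!(x)\big)
  \= \int_{{{\mathbb T}}} \operatorname{ch}\big(p_1^*(x)\big)\wedge
     e^{\,\dd t\wedge\dd\hat{t}}
  \= \int_{{{\mathbb T}}} \big(x_0 + x_1\,\dd t\big)\wedge
     \big(1 + \dd t\wedge\dd\hat{t}\,\big)
  \= x_1 + x_0\,\dd\hat{t} \ ,
% where x_0, x_1 are (pullbacks of) forms on M: the degree shift by one
% exchanges the rank and the first Chern class along the fibre,
% i.e. D-brane number and winding, as expected of T-duality.
```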
The reason for making this reformulation in terms of noncommutative geometry is that it extends to the case when spacetime $X$ is a principal torus bundle $\pi:E\xrightarrow{{{\mathbb T}}^n}M$ of rank $n$ in the presence of a background $H$-flux. In this instance the T-dual is a crossed product algebra $CT(E,H)\rtimes{\mathbb{R}}^n$, which is generally a bundle of rank $n$ noncommutative tori fibred over $M$ [@MR1]. This requires that $H$ restrict to zero in the cohomology of the torus fibers and that the action of ${\mathbb{R}}^n$ on the continuous trace $C^*$-algebra ${CT}(X, H)$ is a lift of the given action of ${\mathbb{R}}^n$ on $X$. That such a lift exists is a non-trivial result proven in ref. [@MR1]. This crossed product algebra is a noncommutative $C^*$-algebra, but it need not be a continuous trace algebra. In ref. [@GSN1] it was shown, by checking the open string metric, that in some cases these algebras are globally defined, open string versions of T-folds. The correspondence picture in this context appears to nicely describe the doubled torus formalism for T-folds, as we will see below. When $\pi_*[H]=0$, the T-dual algebra is isomorphic to a continuous trace $C^*$-algebra $CT\big(\,\widehat{E}\,,\,\widehat{H}\,)$ and represents a geometrically dual spacetime in the usual sense. Noncommutative correspondences {#sec:8.3} ------------------------------ The discussion at the end of Section \[sec:8.2\] above motivates the following noncommutative generalization of the correspondence picture of Section \[sec:8.1\] above [@BMRS2]. Let $\alg,\balg$ be separable $C^*$-algebras. We will represent elements of $\operatorname{KK}(\alg,\balg)$ by *noncommutative correspondences* $$\xymatrix { & (\calg , \xi) & \\ \alg\ar[ur]^{f} & & \balg\ar[ul]_{g} }$$ where $\calg$ is a separable $C^*$-algebra and $\xi \in \operatorname{KK}(\calg, \calg)$, whereas $f:\alg\to\calg$ and $g:\balg\to\calg$ are homomorphisms with $g$ $\operatorname{K}$-oriented. 
The intersection product gives an element $[f]\otimes_\calg \xi \otimes_\calg (g!) \in\operatorname{KK}(\alg, \balg)$, with associated K-theory morphism $g^!\big(f_*(-) \otimes_\calg \xi\big)\in\operatorname{Hom}\big(\operatorname{K}_\bullet( \alg)\,,\, \operatorname{K}_{\bullet}(\balg)\big)$. Every class in $\operatorname{KK}_d(\alg,\balg)$ comes from a noncommutative correspondence, in fact from one with trivial $\xi=1_\calg$. The representation of the intersection product in this instance uses amalgamated products of $C^*$-algebras [@BMRS2]. Let us consider the class of examples mentioned earlier, focusing for simplicity on the simplest case where spacetime $X$ is a principal circle bundle $\pi:E\xrightarrow{{{\mathbb T}}}M$ in a background $H$-flux. The T-dual is another principal circle bundle $\widehat{\pi}:\widehat{E}\xrightarrow{\widehat{{{\mathbb T}}}}M$ with characteristic class $c_1(\widehat{E})=\pi_*[H]$. The Gysin sequence for $E$ defines the T-dual $H$-flux $[\,\widehat{H}\,]\in \operatorname{H}^3\big(\,\widehat{E}\,,\,{{\mathbb Z}}\big)$ with $c_1(E) = \widehat{\pi}_*[\, \widehat{H}\,]$ and $[H] = [\,\widehat{H}\,]$ in $\operatorname{H}^3\big(E\times_M \widehat{E}\,,\,{{\mathbb Z}}\big)$. This data defines a noncommutative correspondence $$\xymatrix{ & \big(CT(E\times_M \widehat{E},H) \,,\, \xi\big) & \\ CT(E, H)\ar[ur]^{f} & & CT\big(\widehat{E}\,,\, \widehat{H}\,\big)\ar[ul]_{g} }$$ where $\xi$ is an analogue of the Poincaré line bundle. It determines a KK-equivalence $\alpha\in\operatorname{KK}_1\big(CT(E, H)\,,\, CT(\,\widehat{E}, \widehat{H}\,)\big)$. See ref. [@BMRS2] for further examples of noncommutative correspondences. Axiomatic T-duality and D-brane charge {#sec:8.4} -------------------------------------- Inspired by the above results, we now give an axiomatic definition of T-duality in $\operatorname{K}$-theory that any definition of the *T-dual* $T(\alg)$ of a $C^*$-algebra $\alg$ should satisfy. 
These axioms include the requirements that the Ramond-Ramond charges of $\alg$ should be in bijective correspondence with the Ramond-Ramond charges of ${T}(\alg)$, and that T-duality applied twice yields a $C^*$-algebra which is physically equivalent to the $C^*$-algebra that we started out with. For this, we postulate the existence of a suitable category of separable $C^*$-algebras, possibly with extra structure (for example the ${\mathbb{R}}^n$-actions used above). Its objects $\alg$ are called *T-dualizable algebras* and satisfy the following requirements: 1. The map $\alg\mapsto T(\alg)$ from $\alg$ to the [T-dual]{} of $\alg$ is a covariant functor; 2. There is a functorial map $\alg\mapsto\alpha_\alg$, where the invertible element $\alpha_\alg$ defines a KK-equivalence in $\operatorname{KK}\big(\alg\,,\,T(\alg)\big)$; and 3. The algebras $\alg$, $T\big(T(\alg)\big)$ are Morita equivalent, with associated KK-equivalence given by the invertible element $\alpha_\alg\otimes_{T(\alg)}\alpha_{T(\alg)}$. Let us consider a class of examples generalizing those already presented in this section. Let $\alg$ be a $G$-$C^*$-algebra, where $G$ is a locally compact, abelian vector Lie group (basically ${\mathbb{R}}^n$). Then the algebra ${T}(\alg) = \alg\rtimes G$ satisfies the axioms above [@BMRS1], thanks to the Connes-Thom isomorphism and Takai duality (here we tacitly identify $G$ with its Pontrjagin dual $\tilde G$). The assumption made above that the T-dual ${T}(\alg)$ is a $C^*$-algebra is very strong and it is not always satisfied, as seen in ref. [@BHM1]. Yet even in that case, the axioms above are satisfied, provided one also allows more general algebras belonging to a category studied there. There is also an analogous axiomatic definition of T-duality in local cyclic cohomology [@BMRS1], relevant to the duality transformations of Ramond-Ramond fields. 
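A minimal consistency check of axioms 1–3 (an illustration; take $\alg={\mathbb{C}}$ with the trivial ${\mathbb{R}}$-action): the T-dual is $$T({\mathbb{C}})\={\mathbb{C}}\rtimes{\mathbb{R}}\,\cong\,C^*({\mathbb{R}})\,\cong\,C_0({\mathbb{R}}) \ ,$$ and the Connes-Thom isomorphism realizes the expected degree shift $\operatorname{K}_0({\mathbb{C}})\cong\operatorname{K}_1\big(C_0({\mathbb{R}})\big)\cong{{\mathbb Z}}$. Applying $T$ twice gives $\big({\mathbb{C}}\rtimes{\mathbb{R}}\big)\rtimes{\mathbb{R}}\cong{\mathbb{C}}\otimes\mathcal{K}\big(L^2({\mathbb{R}})\big)$ by Takai duality, which is Morita equivalent to ${\mathbb{C}}$, as required by axiom 3. 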
A crucial point about the formulation in terms of bivariant K-theory is that it provides a *refinement* of the usual notion of T-duality. For instance, for a suitable class of algebras the universal coefficient theorem (\[UCTKK\]) expresses the KK-theory group $\operatorname{KK}_\bullet(\alg,\balg)$ as an extension of the group $\operatorname{Hom}_{{\mathbb Z}}\big(\operatorname{K}_\bullet(\alg)\,,\,\operatorname{K}_\bullet(\balg)\big)$ by $\operatorname{Ext}_{{\mathbb Z}}\big(\operatorname{K}_{\bullet+1}(\alg)\,,\,\operatorname{K}_\bullet(\balg)\big)$. The extension group can lead to important torsion effects not present in the usual formulations of T-duality. We close by studying the invariance of the noncommutative D-brane charge vector (\[NCRRcharge\]) under T-duality. As is well known [@Myers1], the T-duality invariance of Ramond-Ramond couplings on D-branes is a subtle issue which requires further conditions to be imposed on the structures involved. The present formalism yields a systematic and general way to establish these criteria. If the D-brane algebra $\dalg$ is a PD algebra, then by the Grothendieck-Riemann-Roch formula (\[NCGRR\]) one has $${Q}(\dalg,\xi,f) \= \operatorname{ch}(\xi) \otimes_\dalg \,\Todd(\dalg)\otimes_\dalg (f*) \otimes_\alg\sqrt{{\Todd}(\alg)}\,^{-1} \ .$$ Suppose that there is a local cyclic cohomology class $\Lambda\in{{\rm HL}}(\dalg,\dalg)$ such that $${(f*)\otimes_\alg\sqrt{{\Todd}(\alg)}\,^{-1}\=\Lambda\otimes_\dalg(f*)} \ .$$ Then there is a noncommutative version of the Wess-Zumino class (\[WZclass\]) in ${{\rm HL}}_\bullet(\dalg)$ given by $${D}_{\rm WZ}(\dalg,\xi,f)\=\operatorname{ch}(\xi) \otimes_\dalg \,\Todd(\dalg)\otimes_\dalg\Lambda \ .$$ Consider a pair of D-branes $(\dalg,\xi,f)$ and $(\dalg',\xi',f'\,)$ which are $\operatorname{KK}$-equivalent, with the equivalence determined by an invertible element $\alpha$ in $\operatorname{KK}(\dalg,\dalg'\,)$ and $\xi'=\xi\otimes_\dalg\alpha$. 
If $${\Lambda'\=\operatorname{ch}(\alpha)^{-1}\otimes_\dalg\Lambda\otimes_\dalg \,\operatorname{ch}(\alpha)}$$ then by eq. (\[ToddKKrel\]) one has ${D}_{\rm WZ}(\dalg',\xi',f'\,)={D}_{\rm WZ}(\dalg,\xi,f)\otimes_\dalg \,\operatorname{ch}(\alpha)$. Composing with a second KK-equivalence, determined by an invertible element $\alpha'\in\operatorname{KK}(\dalg',\dalg''\,)$ relating $(\dalg',\xi',f'\,)$ to a third D-brane $(\dalg'',\xi'',f''\,)$ in the same way, it follows that $${D}_{\rm WZ}(\dalg'',\xi'',f''\,)\={D}_{\rm WZ}(\dalg,\xi,f)\otimes_\dalg \,\operatorname{ch}(\alpha\otimes_{\dalg'}\alpha'\,)$$ in ${{\rm HL}}_\bullet(\dalg''\,)\cong{{\rm HL}}_\bullet(\dalg)$. This formula expresses the desired T-duality covariance under the conditions spelled out above. Acknowledgments {#acknowledgments .unnumbered} --------------- The author would like to thank the organisers and participants of the workshop for the very pleasant scientific and social atmosphere. He would especially like to thank J. Brodzki, V. Mathai, R. Reis, J. Rosenberg and A. Valentino for the enjoyable collaborations and extensive discussions over the last few years, upon which this article is based. This work was supported in part by the Marie Curie Research Training Network Grant [*ForcesUniverse*]{} (contract no. MRTN-CT-2004-005104) from the European Community’s Sixth Framework Programme.
--- abstract: 'In this paper, we present monotonicity results for a function involving the inverse hyperbolic sine. From these, we derive some inequalities for bounding the inverse hyperbolic sine.' address: - 'Research Institute of Mathematical Inequality Theory, Henan Polytechnic University, Jiaozuo City, Henan Province, 454010, China' - 'School of Mathematics and Informatics, Henan Polytechnic University, Jiaozuo City, Henan Province, 454010, China' author: - Feng Qi - 'Bai-Ni Guo' date: - Commenced on 10 March 2009 and completed on 11 March 2009 in Jiaozuo - title: Monotonicity results and bounds for the inverse hyperbolic sine --- [^1] Introduction and main results ============================= In [@Zhu-New-Arc-Hyper Theorem 1.9 and Theorem 1.10], the following inequalities were established: For $0\le x\le r$ and $r>0$, the double inequality $$\label{zhu-arcsinh-ineq-1} \frac{(a+1)x}{a+\sqrt{1+x^2}\,}\le\operatorname{arcsinh}x\le \frac{(b+1)x}{b+\sqrt{1+x^2}\,}$$ holds true if and only if $a\le2$ and $$\label{zhu-cond-1} b\ge\frac{\sqrt{1+r^2}\,\operatorname{arcsinh}r-r}{r-\operatorname{arcsinh}r}.$$ The aim of this paper is to elementarily generalize the inequality  to monotonicity results and to deduce more inequalities. Our results may be stated as the following theorems. \[Arc-Hyperbolic-Sine-thm1\] For $\theta\in\mathbb{R}$, let $$\label{f-theta(x)-dfn} f_\theta(x)=\frac{\bigl(\theta+\sqrt{1+x^2}\,\bigr)\operatorname{arcsinh}x}{x},\quad x>0.$$ 1. When $\theta\le2$, the function $f_\theta(x)$ is strictly increasing; 2. When $\theta>2$, the function $f_\theta(x)$ has a unique minimum. As straightforward consequences of Theorem \[Arc-Hyperbolic-Sine-thm1\], the following inequalities are inferred. \[Arc-Hyperbolic-Sine-thm2\] Let $r>0$. 1. 
For $\theta\le2$, the double inequality $$\label{Arc-Hyperbolic-Sine-ineq1} \frac{(1+\theta)x}{\theta+\sqrt{1+x^2}\,}<\operatorname{arcsinh}x\le \frac{\bigl[\bigl(\theta+\sqrt{1+r^2}\,\bigr)(\operatorname{arcsinh}r)/r\bigr]x}{\theta+\sqrt{1+x^2}\,}$$ holds true on $(0,r]$, where the constants $1+\theta$ and $\frac{\left(\theta+\sqrt{1+r^2}\,\right)\operatorname{arcsinh}r}{r}$ in  are the best possible. 2. For $\theta>2$, the double inequality $$\label{Arc-Hyperbolic-Sine-ineq2} \frac{4\bigl(1-1/{\theta^2}\bigr)x}{\theta+\sqrt{1+x^2}\,}\le \operatorname{arcsinh}x \le \frac{\max\bigl\{1+\theta, \bigl(\theta+\sqrt{1+r^2}\,\bigr)(\operatorname{arcsinh}r)/r\bigr\}x} {\theta+\sqrt{1+x^2}\,}.$$ Replacing $\operatorname{arcsinh}x$ by $x$ in  and  yields $$\begin{gathered} \left.\begin{aligned} \frac{\bigl[\bigl(\theta+\sqrt{1+r^2}\,\bigr)(\operatorname{arcsinh}r)/r\bigr]\sinh x}{\theta+\cosh x}&\\ \frac{\max\bigl\{1+\theta, \bigl(\theta+\sqrt{1+r^2}\,\bigr)(\operatorname{arcsinh}r)/r\bigr\}\sinh x} {\theta+\cosh x}& \end{aligned}\right\}>x\\ >\begin{cases} \dfrac{(1+\theta)\sinh x}{\theta+\cosh x},&\theta\le2\\[0.7em] \dfrac{4\bigl(1-1/{\theta^2}\bigr)\sinh x}{\theta+\cosh x},&\theta>2 \end{cases}\end{gathered}$$ for $x\in(0,\operatorname{arcsinh}r)$. These can be regarded as Oppenheim-type inequalities for the hyperbolic sine and cosine functions. For information on Oppenheim’s double inequality for the sine and cosine functions, please refer to [@Oppeheim-Sin-Cos.tex] and closely related references therein. It is clear that the left-hand side inequality in  recovers the left-hand side inequality in  while the right-hand side inequalities in  and  do not include each other. By an approach similar to that used to prove our theorems in the next section, we can obtain similar monotonicity results and inequalities for the inverse hyperbolic cosine. Proof of theorems ================= Now we prove our theorems elementarily. 
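Before carrying out the proofs, the claimed bounds can be sanity-checked numerically. The following Python snippet (an illustrative check added here; it is not part of the elementary argument) verifies the double inequality of part 1 with the best constants for $r=1$ and $\theta=2$, and the lower bound $4\bigl(1-1/\theta^2\bigr)$ of part 2 for $\theta=3$:

```python
import math

# Illustrative numerical check of the two parts of the theorem.
r = 1.0
s = math.asinh(r)

# Part 1, theta <= 2: (1+theta)x/(theta+sqrt(1+x^2)) < arcsinh x
# <= c_r * x/(theta+sqrt(1+x^2)), the best constant c_r being attained at x = r.
theta = 2.0
c_r = (theta + math.sqrt(1 + r * r)) * s / r
for k in range(50, 1001):                 # grid on [0.05, 1]
    x = r * k / 1000
    w = theta + math.sqrt(1 + x * x)
    assert (1 + theta) * x / w < math.asinh(x) <= c_r * x / w + 1e-12

# Part 2, theta > 2: the minimum of f_theta is at least 4 * (1 - 1/theta^2).
theta = 3.0
f = lambda x: (theta + math.sqrt(1 + x * x)) * math.asinh(x) / x
m = min(f(0.01 * k) for k in range(1, 20001))   # grid on (0, 200]
assert m >= 4 * (1 - 1 / theta ** 2)
```

For $\theta=3$ the grid minimum is about $3.729$ (near $x\approx3.7$), comfortably above the bound $32/9\approx3.556$.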
Direct differentiation yields $$\begin{aligned} f_\theta'(x)&=\frac1{x^2}\biggl(\theta+\frac1{\sqrt{x^2+1}\,}\biggr) \biggl[\frac{x\bigl({\theta}/{\sqrt{x^2+1}\,}+1\bigr)} {\theta+1/{\sqrt{x^2+1}\,}}-\operatorname{arcsinh}x\biggr]\\ &\triangleq\frac1{x^2}\biggl(\theta+\frac1{\sqrt{x^2+1}\,}\biggr)h_{\theta}(x),\\ h_{\theta}'(x)&=\frac{x^2 \bigl(2-\theta^2+\theta\sqrt{x^2+1}\,\bigr)} {\sqrt{x^2+1}\,\bigl(\theta\sqrt{x^2+1}\,+1\bigr)^2}\\ &\triangleq\frac{x^2q_x(\theta)} {\sqrt{x^2+1}\,\bigl(\theta\sqrt{x^2+1}\,+1\bigr)^2}.\end{aligned}$$ The function $q_x(\theta)$ has two zeros $$\theta_1(x)=\frac{\sqrt{1+x^2}\,-\sqrt{9+x^2}\,}2\quad \text{and}\quad \theta_2(x)=\frac{\sqrt{1+x^2}\,+\sqrt{9+x^2}\,}2.$$ They are strictly increasing and have the bounds $-1\le\theta_1(x)<0$ and $\theta_2(x)\ge2$ on $(0,\infty)$. As a result, under the condition $\theta\not\in(-1,0)$, 1. when $\theta\le-1$, the function $q_x(\theta)$ and the derivative $h_{\theta}'(x)$ are negative, and so the function $h_{\theta}(x)$ is strictly decreasing on $(0,\infty)$; 2. when $0\le\theta\le2$, the function $q_x(\theta)$ and the derivative $h_{\theta}'(x)$ are positive, and so the function $h_{\theta}(x)$ is strictly increasing on $(0,\infty)$; 3. when $\theta>2$, the function $q_x(\theta)$ and the derivative $h_{\theta}'(x)$ have a unique zero which is the unique minimum point of $h_{\theta}(x)$. Furthermore, since $\lim_{x\to\infty}h_{\theta}(x)=\infty$ for $\theta\ge0$ and $\lim_{x\to0^+}h_{\theta}(x)=0$, it follows that 1. when $\theta\le-1$, the function $h_{\theta}(x)$ is negative, and so the derivative $f_\theta'(x)$ is positive, that is, the function $f_\theta(x)$ is strictly increasing on $(0,\infty)$; 2. when $0\le\theta\le2$, the function $h_{\theta}(x)$ is positive, and so the derivative $f_\theta'(x)$ is also positive, accordingly, the function $f_\theta(x)$ is strictly increasing on $(0,\infty)$; 3. 
when $\theta>2$, the function $h_{\theta}(x)$ and the derivative $f_\theta'(x)$ have a unique zero which is the unique minimum point of the function $f_\theta(x)$ on $(0,\infty)$. On the other hand, when $\theta\in(-1,0)$, we have $$\begin{aligned} \bigl[x^2f_\theta'(x)\bigr]'&=\frac{x^2\bigl[\sqrt{x^2+1}\, +(\operatorname{arcsinh}x)/x-\theta\bigr]} {(x^2+1)^{3/2}}>0\end{aligned}$$ which means that the function $x^2f_\theta'(x)$ is strictly increasing on $(0,\infty)$. From the limit $\lim_{x\to0^+}\bigl[x^2f_\theta'(x)\bigr]=0$, it is derived that the function $x^2f_\theta'(x)$ is positive. Hence the function $f_\theta(x)$ is strictly increasing on $(0,\infty)$. The proof of Theorem \[Arc-Hyperbolic-Sine-thm1\] is complete. Since $\lim_{x\to0^+}f_\theta(x)=1+\theta$, by Theorem \[Arc-Hyperbolic-Sine-thm1\], it is easy to see that $1+\theta<f_\theta(x)\le f_\theta(r)$ on $(0,r]$ for $\theta\le2$. The inequality  is thus proved. For $\theta>2$, the minimum point $x_0\in(0,\infty)$ satisfies $$\operatorname{arcsinh}x_0=\frac{x_0\bigl({\theta}/{\sqrt{x_0^2+1}\,}+1\bigr)} {\theta+1/{\sqrt{x_0^2+1}\,}}.$$ Therefore, the minimum of the function $f_\theta(x)$ on $(0,\infty)$ equals $$\begin{gathered} \frac{\bigl(\theta+\sqrt{1+x_0^2}\,\bigr)\operatorname{arcsinh}x_0}{x_0} =\frac{x_0\bigl({\theta}/{\sqrt{x_0^2+1}\,}+1\bigr)} {\theta+1/{\sqrt{x_0^2+1}\,}} \cdot \frac{\bigl(\theta+\sqrt{1+x_0^2}\,\bigr)}{x_0}\\ =\frac{\bigl({\theta}/{\sqrt{x_0^2+1}\,}+1\bigr)\bigl(\theta+\sqrt{1+x_0^2}\,\bigr)} {\theta+1/{\sqrt{x_0^2+1}\,}} =\frac{\bigl(\theta+\sqrt{1+x_0^2}\,\bigr)^2} {\theta{\sqrt{x_0^2+1}\,}+1} \ge4\biggl(1-\frac1{\theta^2}\biggr).\end{gathered}$$ From this, it is obtained that $$4\biggl(1-\frac1{\theta^2}\biggr)\le f_\theta(x)\le \max\Bigl\{\lim_{x\to0^+}f_\theta(x),f_\theta(r)\Bigr\}$$ for $x\in(0,r]$, which implies the inequality . The proof of Theorem \[Arc-Hyperbolic-Sine-thm2\] is thus completed. [9]{} F. Qi and B.-N. 
Guo, *A concise proof of Oppenheim’s double inequality relating to the cosine and sine functions*, Available online at <http://arxiv.org/abs/0902.2511>. L. Zhu, *New inequalities of Shafer-Fink type for arc-hyperbolic sine*, J. Inequal. Appl. **2008** (2008), Article ID 368275, 5 pages; Available online at <http://dx.doi.org/10.1155/2008/368275>. [^1]: This paper was typeset using AMS-LaTeX
--- abstract: 'Saving energy is an important issue for cloud providers seeking to reduce the energy cost of a data center. With the increasing popularity of cloud computing, it is time to examine various methods by which energy consumption could be reduced, leading us to green cloud computing. In this paper, our aim is to propose a virtual machine selection algorithm to improve the energy efficiency of a cloud data center. We also present experimental results of the proposed algorithm in a cloud computing based simulation environment. The proposed algorithm dynamically performs the allocation, deallocation, and reallocation of virtual machines to physical servers, depending on the load; the placement of a virtual machine is decided over time using heuristics based on the analysis of historical usage. From the results obtained from the simulation, we have found that our proposed virtual machine selection algorithm reduces the total energy consumption by 19% compared to the existing one. This reduces the energy cost of a cloud data center and also lowers its carbon footprint. Simulation based experimental results show that the proposed heuristics, which are based on resource provisioning algorithms, reduce the energy consumption of the cloud data center and decrease the virtual machine migration rate.' author: - 'Nasrin Akhter$^{1*}$,  Mohamed Othman$^{12*}$  Ranesh Kumar Naha$^3$ ' title: 'Energy-aware virtual machine selection method for cloud data center resource allocation' --- Cloud Computing, Data Center, Virtual Machine, Dynamic Allocation, Energy Efficiency. Introduction ============ The power consumption of distributed and large-scale systems, like grid and cloud data centers, has enormously increased their operational cost and their impact on the environment. Indeed, they need a massive electrical power supply, which is a major concern for various institutions. 
Much of a data center’s power is spent on underutilized servers and on the cooling systems used to cool them. To facilitate cloud services, we need to build a large-scale data center with over a thousand physical nodes, which requires a large amount of electrical power. With the growing demand for cloud computing, energy consumption will increase vastly in the near future. The power efficiency of hardware and proper resource management help to minimize electrical energy costs. A recent study shows that servers usually operate at 10 to 50% of their full capacity, a conclusion drawn from data collected from 5000 operational servers over a half-year period \[1\]. At the same time, even a completely idle server consumes a large fraction of its peak power. A cloud computing environment is built using one or more data centers, and these data centers have many computing resources which we call servers or hosts. Multiple virtual machines can be allocated to every host through virtualization technology, and each virtual machine acts as an individual physical machine with its own OS and system resources. Conceptually, cloud computing users pay for the computing resources they use, and customer dissatisfaction is avoided through a service level agreement (SLA). When CPU utilization exceeds its limit due to an oversubscription agreement, SLA terms could be violated. However, a VM migration before a possible oversubscription is a viable solution to avoid SLA violation \[35\]. On the other hand, energy saving is possible through VM consolidation by identifying underloaded hosts. In both situations, it is necessary to choose VMs for migration, and a new placement is needed for the VMs to be migrated. In this paper, we describe VM placement and selection models and propose a new VM selection method. The proposed method works by analysing RAM and network bandwidth usage. 
It reduces energy consumption in cloud data centers. A simulation was performed under a complex scenario, and its results are presented in this paper. Finally, we analyse the obtained simulation results, which show that our proposed algorithm is more energy efficient than the existing one. Related Work {#Rela_wo} ============ The promising model of utility computing, which delivers various cloud services, was proposed in \[2,3\]. Utility computing provides computation, storage, data access, software, and other services. With the increasing demand for cloud computing, the power consumption of data centers and cooling systems has risen tremendously, and reducing the power consumption of cloud computing infrastructures is a challenging research issue.\ Energy-aware power management of virtualized data centers was studied by Nathuji and Schwan \[4\]. Cloud computing resource provisioning is unpredictable and workloads vary over time. Randomized and non-deterministic online algorithms normally perform better in such a scenario than deterministic algorithms, as discussed by Ben-David et al. \[5\]. The input of such algorithms comes from a distributed model and cannot be modeled using a plain statistical distribution \[6,7\] due to the complexity of real-world settings. Several studies \[6,8,9\] have shown that the resource usage of applications is not easily modelled through a plain probability distribution.\ A new operating system for hosting centers, “Muse”, was proposed by Chase et al. \[10\]; it is an adaptive resource management system which plays a vital role in integrating energy and power management for the hosting center. 
The Muse approach considers CPU resources; saving energy by managing other resources such as network bandwidth, disks, and memory is also possible \[11,12,13\]. Pinheiro et al. \[14\] introduced three “double-threshold VM selection policies” to determine whether VMs can be migrated. The basis of these policies is to set lower and upper utilization thresholds, so that VMs are allocated to hosts depending on these thresholds.\ Beloglazov and Buyya \[15\] proposed an efficient resource management policy through VM consolidation for a cloud-based data center, and they showed an overall operational cost reduction by using their proposed algorithm. Afterward, Beloglazov et al. \[16\] proposed a high-level system architecture for cloud data centers by introducing the green service allocator, along with a VM allocation algorithm using a modified Best Fit Decreasing (BFD) algorithm \[17\]. This modified algorithm works based on the current CPU utilization. Beloglazov and Buyya \[18\] proposed several algorithms for VM allocation; these algorithms find overloaded and underloaded hosts, and VMs from overloaded hosts are migrated to underloaded hosts. They proposed three policies for VM selection, Minimum Migration Time (MMT), Random Choice (RC) and Maximum Correlation (MC); these policies repeatedly select VMs from an overloaded host until the host is no longer considered to be overloaded. The MMT, RC and MC policies were developed based on the idea proposed by Verma et al. \[19\].\ Cao et al. \[20\] proposed a power-saving approach based on a demand forecast for the allocation of VMs, which tries to reduce the total CPU frequency by switching hosts on/off. The authors took two types of resources into consideration: computing cores and memory. Naha et al. 
\[21, 22, 34\] proposed cloud brokering and load balancing methods for the cloud data center; the work was validated further in \[33\], but they did not consider the energy-awareness issue. We proposed an energy-aware VM selection algorithm for the cloud data center in \[23\]. Raycroft et al. \[24\] analyzed the energy consumption of global VM allocation using various real-world policies under a realistic testing scenario. However, the simulation was centered on a single application, and it did not take into consideration the communication among VMs across regions.\ Wu et al. \[25\] proposed a scheduling algorithm using a dynamic voltage frequency scaling (DVFS) technique for the cloud data center. However, as Beloglazov and Buyya \[18\] had shown, DVFS does not improve power consumption during VM consolidation and VM migration. An energy-efficient scheduling of virtual machines (EEVS) algorithm was proposed by Ding et al. \[26\], which reduced the energy consumption of the cloud data center; however, they overlooked VM migration and processor transitions. Wolke et al. \[27\] found that periodic reallocations and combinations of the controller’s placement achieved the highest energy efficiency with predefined service levels. A detailed review on energy-aware resource allocation in cloud data centers was presented in our previous work \[32\]. VM migration and consolidation {#VM_mig_con} ============================== Power consumption of cloud data centers has increased because of underutilized hosts and inefficient resource management. On the other hand, overutilization of a server can cause system failures and increase SLA violations. To reduce overutilization, we may need to migrate VMs from an overutilized host to an underutilized host. Managing resources efficiently is a challenging issue, and VM consolidation is one of the most important solutions for this problem. 
The energy consumption of a data center can be reduced by live VM migration and consolidation. The single VM migration problem ------------------------------- On a single host or physical server, multiple VMs can be allocated. In terms of energy and performance awareness, the dynamic VM consolidation problem can be discretized in time into several frames, where each frame lasts one second. The resource provider pays for the energy consumed by the physical servers. The cost is calculated by multiplying the cost per unit of energy by the time period. The resource usage and resource capacity of a single host are characterised by CPU usage.\ Since VMs experience dynamic workloads during operation, CPU usage varies over time. When the requested CPU performance exceeds the maximum allowed, the host is considered to be oversubscribed. In this case, the established SLA policy between the service provider and consumer will be violated. The service provider will need to pay a penalty for violating the SLA, and the penalty is calculated by multiplying the duration of the SLA violation by the cost per unit of time. To solve the oversubscription problem, a single VM should be migrated to another host. This migration process will decrease the CPU’s utilization and will help to maintain the utilization threshold. However, during the migration another host is in use at the same time, so the energy cost is doubled for that period. Defining when a migration should be initiated is quite challenging, especially when both the energy cost and the SLA violation cost must be minimized.\ ![image](Sin_VM_mig){width="4.5in"} Fig. 1 shows how a single VM migration, using the CPU utilization threshold, helps to prevent a possible system failure before the host is heavily overloaded. The analysis of the cost function and of the optimal offline and online algorithms has been performed by Beloglazov and Buyya \[18\]. 
The cost of the energy consumed by the server is paid by the resource provider and is calculated as $C_pt_p$ ($C_p$ is the power cost per unit of time and $t_p$ is the time period). The resource capacity of a host, which is characterised by the CPU’s performance parameter, is used by the VMs. Due to the dynamic workload, the CPU usage varies over time, and the host is considered to be oversubscribed when the request of the VMs exceeds the CPU’s maximum performance. When oversubscription occurs, the SLA is violated, and the penalty for SLA violation is calculated as $C_vt_v$ ($C_v$ is the SLA violation cost per unit of time and $t_v$ is the SLA violation duration). During the VM migration an extra host is occupied until the migration process is completed; the time needed for migration is $T$, so the total power cost during migration is $2C_pT$. If an SLA violation starts at some point in time $v$ and continues until $m$, then the total SLA violation time $r$ is given by Eq. (1).\ $$\label{first} r=m-v$$ The cost function $C(v,m)$ for SLA violation is given by Eq. (2) for three different cases. $$\label{2nd} \footnotesize C(v,m)=\left\{\begin{array}{l@{\quad}l} (v-m)C_p & \text{if} \ m<v,v-m \geq T, \\ (v-m)C_p+2(m-v+T)C_p+(m-v+T)C_v & \text{if} \ m \leq v,v-m<T, \\ rC_p+(r-m+v)C_p+rC_v & \text{if} \ m>v. \end{array} \right.$$ In the first case, the VM migration starts before the SLA violation $(m < v)$ and completes before the SLA would be violated $(v - m \geq T)$; in this case, the duration of the SLA violation is 0. In the second case, the VM migration starts before the SLA violation but completes only after the violation has begun. Lastly, in the third case, VM migration starts after the SLA violation. 
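A direct transcription of the cost function in Eq. (2) makes the three cases concrete. The sketch below (an illustration in Python; it follows the displayed formula literally, with $m$ the migration start time, $v$ the time at which the SLA violation would begin, and $r = m - v$ in the third case as in Eq. (1)) computes the cost for given unit costs $C_p$, $C_v$ and migration duration $T$:

```python
# Illustrative transcription of the piecewise cost function of Eq. (2).
# m: migration start time, v: time at which the SLA violation would begin,
# T: migration duration, C_p / C_v: power and SLA-violation costs per unit time.
def cost(v, m, T, C_p, C_v):
    if m < v and v - m >= T:
        # case 1: migration completes before the violation, no SLA penalty
        return (v - m) * C_p
    if m <= v and v - m < T:
        # case 2: migration starts before the violation but finishes after it
        return (v - m) * C_p + 2 * (m - v + T) * C_p + (m - v + T) * C_v
    # case 3 (m > v): migration starts after the violation, with r = m - v
    r = m - v
    return r * C_p + (r - m + v) * C_p + r * C_v
```

For example, with $C_p=1$, $C_v=10$ and $T=5$, a migration started at $m=0$ against a violation beginning at $v=10$ falls in the first case and costs $10$, while starting only at $m=12$ falls in the third case and costs $22$.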
According to Beloglazov and Buyya \[18\], “The optimal offline algorithm for the single VM migration problem incurs the cost of $\frac{T}{s}$, and is achieved when $\frac{(v-m)}{T}=1$ and the competitive ratio of the optimal online deterministic algorithm for the single VM migration problem is $2 + s$, and the algorithm is achieved when $m = v$”. Problem of Dynamic VM Consolidation ----------------------------------- The resource utilization of cloud data centers can be improved by dynamic VM consolidation, which can also improve energy efficiency. Dynamic VM consolidation determines when VM reallocation should be initiated for an overloaded host. This reallocation decision influences the resource utilization and the QoS requirements delivered by the system.\ In our work, we deal with the complex problem of dynamic VM consolidation, which requires us to consider multiple VMs and multiple hosts. All VMs experience variable workloads. Every host has a maximum CPU capacity limit, and VMs can be allocated within this limit. Based on heuristics, if the CPU capacity limit is exceeded, then VMs can be migrated through live migration. It is assumed that an idle host is switched off and consumes no power. All functioning hosts are referred to as being in active mode. However, the problem is to decide when VMs should be migrated to minimize power consumption. We will propose algorithms that choose the appropriate VMs for migration to minimize power consumption.\ ![image](VM_cons){width="3in"} Fig. 2 shows how dynamic VM consolidation saves power by consolidating VMs and turning off the idle hosts afterwards. For dynamic VM consolidation, we assume that there are $n$ homogeneous hosts, each with CPU capacity $A_h$, and that the maximum CPU capacity that can be allocated to a VM is $A_v$. Hence, the maximum number of VMs a host can accommodate is $m=\frac{A_h}{A_v}$. 
Therefore, the total number of VMs that can be allocated across all hosts is $mn$. We assume that an idle host consumes no power: since no VM is allocated to it, it is switched off or put into a sleep mode with negligible power consumption. The total cost $C$ for the active hosts is shown in Eq. (3). $$\label{first} C=\sum_{t=t_0}^T \left( C_p\sum_{i=0}^n a_{ti} + C_v \sum_{j=0}^n v_{tj}\right)$$ In Eq. (3), $t_0$ is the initial time and $T$ the total time; $a_{ti}$ indicates whether host $i$ is active at time $t$, and $v_{tj}$ indicates whether host $j$ is experiencing an SLA violation at time $t$. For dynamic VM consolidation, the upper bound on the competitive ratio of the optimal online deterministic algorithm (ALG) relative to the optimal offline algorithm (OPT) is shown in Eq. (4). $$\label{first} \frac{ALG(I)}{OPT(I)} \leq 1+\frac{ms}{2(m+1)}$$ Heuristics for VM Consolidation ------------------------------- The CPU utilization threshold is calculated by analysing historical data on the resource usage of the VMs. This heuristics-based analysis improves the decision making for service allocation. The heuristic algorithm automatically adjusts the utilization threshold using a statistical analysis of the historical data gathered during the VMs' lifetimes. A previous study shows that heuristic algorithms improve energy consumption and service quality \[18\]. VM consolidation using historical CPU utilization continues throughout the simulation, as illustrated in Fig. 3. ![image](Dyn_VM_cons){width="3in"} VMs Placement and Selection in cloud ------------------------------------ Several VMs can run on a single machine, depending on the service requests. Under virtualization, multiple operating systems can run on multiple VMs within a single physical machine. 
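As a sketch, Eqs. (3) and (4) translate into a few lines of Python. The indicator matrices `active` and `violating` are hypothetical inputs of our own devising, one row per time step and one 0/1 entry per host.

```python
def total_cost(C_p, C_v, active, violating):
    """Eq. (3): per time step, pay C_p for each active host and C_v for
    each host experiencing an SLA violation (0/1 indicators per host)."""
    return sum(C_p * sum(a_t) + C_v * sum(v_t)
               for a_t, v_t in zip(active, violating))

def competitive_ratio_bound(m, s):
    """Eq. (4): upper bound on ALG(I)/OPT(I), with m the maximum
    number of VMs per host."""
    return 1 + (m * s) / (2 * (m + 1))
```

For instance, with two time steps, two active hosts then one, and a single violating host in the first step, the total cost is $2C_p + C_v + C_p$.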
Through VM consolidation we can use system resources efficiently, and unused resources can be placed in a lower power state to save energy. Several heuristic methods for VM consolidation have been proposed by Beloglazov and Buyya \[18\]. To perform a VM consolidation it is necessary first to detect an overloaded host, following specific criteria for host overloading. Then one or more VMs need to be migrated to an underloaded host or a new host. Depending on the load, the VMs of several underloaded hosts can be migrated onto one or more hosts; such underloaded hosts can then be put into sleep mode once the VMs running on them have been migrated. Which VM, and what type of VM, should be migrated first is another issue that must be taken into consideration. Finally, the selected VM has to be placed in a new location. Algorithm for resource optimization =================================== In an operational environment, thorough knowledge of future events is not available to an algorithm. Such settings give rise to online problems. An optimization problem in which the input is received in an online manner and the output must also be produced online is called an online problem \[7\]. An algorithm developed for this type of problem is called an online algorithm. To characterise the efficiency and performance of such algorithms, we can apply competitive analysis. Based on knowledge of the online algorithm, competitive analysis constructs the worst possible input: the input, chosen by the outside world, that maximizes the competitive ratio. This input should not be confused with the internal state of the algorithm, such as its internal memory and control. Proposed method for VM Selection ================================ In this section we propose a new policy for VM selection. Our proposed policy finds the appropriate VMs for migration. 
------------------------------------------------------------ **Algorithm 1:** 1 Input: vmList from host Output: selected VM 2 mVms $\leftarrow$ getMigratableVms(host) 3 if mVms is null then 4 $\vert$ return null 5 else 6 $\vert$ vmToMigrate $\leftarrow$ null 7 $\vert$ maxMetric $\leftarrow$ $-1$ 8 $\vert$ foreach vm in mVms do 9 $\vert$ $\vert$ if vm.isInMigration() = false then 10 $\vert$ $\vert$ $\vert$ metric $\leftarrow$ vm.getRam() 11 $\vert$ $\vert$ $\vert$ if metric > maxMetric then 12 $\vert$ $\vert$ $\vert$ $\vert$ maxMetric $\leftarrow$ metric 13 $\vert$ $\vert$ $\vert$ $\vert$ vmToMigrate $\leftarrow$ vm 14 $\vert$ return vmToMigrate ------------------------------------------------------------ : Energy Aware VM Selection (EAVMS).[]{data-label="table_example"} General Description ------------------- Dynamic VM consolidation migrates VMs when a host is considered to be underloaded or overloaded. An underloaded host is placed into sleep mode after all the VMs running on it have been migrated. For an overloaded host, it is first necessary to select the VMs that need to be migrated; the system then needs to find a new placement for all the selected VMs. Our new VM selection policy decides which VM should be migrated first. The algorithm keeps migrating VMs as long as the host is considered to be overloaded. Problem Formulation ------------------- Our Maximum Migration Time (MxMT) policy migrates the VM that takes the maximum time to complete its migration. Several VMs can be allocated to a host. When a host is considered to be overloaded, the MxMT policy chooses the VM that needs the longest time to migrate compared with the other VMs allocated to that host. The VM selected for migration is denoted $v$. The migration time is estimated by dividing the amount of RAM used by the VM by the network bandwidth available to the host $h$. 
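For illustration, the selection loop of Algorithm 1 can be sketched in Python as follows. The `Vm` class and its attribute names are hypothetical stand-ins for readability and are not the CloudSim API.

```python
from dataclasses import dataclass

@dataclass
class Vm:
    ram_in_use: float        # RAM currently used by the VM (e.g. MB)
    in_migration: bool = False

def select_vm(vms, net_bandwidth):
    """Return the migratable VM with the maximum estimated migration time
    (RAM in use divided by available bandwidth), or None if there is none."""
    best, best_time = None, float("-inf")
    for vm in vms:
        if vm.in_migration:
            continue                      # skip VMs already being migrated
        t = vm.ram_in_use / net_bandwidth
        if t > best_time:
            best, best_time = vm, t
    return best
```

Since the bandwidth is the same for every VM on the host, maximizing the migration time is equivalent to maximizing the RAM in use, which is why Algorithm 1 can use `vm.getRam()` directly as the metric.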
The proposed MxMT policy finds the VM which satisfies the following condition: $$\label{First} v \in V_h | \forall_x \in V_h, \frac{RAM_u(v)}{NET_h} \geq \frac{RAM_u(x)}{NET_h},$$ The proposed algorithm checks the condition stated in Eq. (5) for all VMs and finds the VM with the maximum migration time. In Eq. (5), $RAM_u(v)$ is the amount of RAM utilized by VM $v$; similarly, $RAM_u(x)$ is the RAM utilized by VM $x$. $NET_h$ is the spare network bandwidth available to the host $h$. [C[2cm]{}|L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}]{} **Server**&**0%**&**10%**&**20%**&**30%**&**40%**&**50%**\ HP ProLiant G4 & 86 & 89.4 & 92.6 & 96 & 99.5 & 102\ HP ProLiant G5 & 93.7 & 97 & 101 & 105 & 110 & 116\ [C[2cm]{}|L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}L[0.5cm]{}]{} **Server**&**60%**&**70%**&**80%**&**90%**&**100%**\ HP ProLiant G4 & 106 & 108 & 112 & 114 & 117\ HP ProLiant G5 & 121 & 125 & 129 & 133 & 135\ Algorithm for Virtual Machine Selection ======================================= The pseudocode of our proposed Energy Aware VM Selection (EAVMS) algorithm is presented in Table 1. The proposed algorithm selects the VM to migrate from an overloaded host. The key idea of this algorithm is that it selects the VM that requires the maximum time to migrate compared with the migration times of the other VMs allocated to that host. The migration time is estimated by dividing the amount of RAM currently used by the VM by the available network bandwidth. Experimental and Simulation Setup ================================= For our simulation we used the CloudSim 3.0 toolkit \[28\], a modern simulation tool that supports the modelling of cloud data centers with on-demand virtualized resources and application management. We modelled a data center with eight hundred heterogeneous physical machines and over 1000 running VMs in a simulation environment. 
System Power Utilization Model ------------------------------ The hosts' power consumption is defined according to the power consumption of the HP ProLiant G4 and G5 servers: a server consumes from 86 W at 0% CPU utilization up to 135 W at 100% CPU utilization. Power consumption at different levels of utilization for these two servers is shown in Tables 2 and 3. Dual-core servers were chosen for this simulation because it is easy to overload them with a smaller workload. At the same time, dual-core CPUs are adequate for evaluating resource-management algorithms designed for multicore CPU architectures. Network Behaviour Modeling -------------------------- The CloudSim \[28\] simulation framework supports the modeling of realistic networking topologies. The internetworking of cloud entities such as hosts, data centers and connected end users is based on a conceptual networking abstraction model, which relies on a latency matrix instead of modelling actual network devices such as routers and switches. CloudSim is an event-based simulation framework; its event-management engine applies a latency to messages transmitted between cloud entities. The network nodes are stored in a topology description table in the BRITE \[31\] format; these nodes represent all CloudSim entities, including hosts, data centers and cloud brokers. The BRITE information is loaded at every CloudSim initialization and is used to generate the latency matrix. Workload Characterization ------------------------- Workload traces from real systems are suitable for evaluating a simulation. The workload data were taken from PlanetLab \[29\], which is part of the CoMon \[30\] project. The workload traces were collected on ten randomly chosen days between March 2011 and April 2011. 
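The table-driven power model described above can be sketched as a lookup with linear interpolation between the measured 10% utilization steps, in the style of CloudSim's spec-power models. The values below are the HP ProLiant G4 figures from Tables 2 and 3; the function name and structure are our own illustration.

```python
# Power (W) of the HP ProLiant G4 at 0%, 10%, ..., 100% CPU utilization
# (Tables 2 and 3).
G4_POWER = [86, 89.4, 92.6, 96, 99.5, 102, 106, 108, 112, 114, 117]

def power(utilization, table=G4_POWER):
    """Linearly interpolate server power (W) for a CPU utilization in [0, 1]."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    idx = int(utilization * 10)
    if idx == 10:                      # exactly 100% utilization
        return float(table[10])
    frac = utilization * 10 - idx      # position between the two table points
    return table[idx] + (table[idx + 1] - table[idx]) * frac
```

A utilization of 5%, for example, falls halfway between the 0% and 10% entries, giving (86 + 89.4)/2 = 87.7 W.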
According to the workload traces, the CPU utilization is below 50%, and during the simulation the VM assignment was random. Table 4 presents the characteristics of the workload. The CPU utilization data were collected from servers located at more than 500 sites around the world. Thousands of VMs were deployed in the workload, as shown in Table 4, and CPU utilization was measured at 5-minute intervals. [L[2.2cm]{}|C[1.7cm]{}L[1.4cm]{}L[1.4cm]{}C[1.7cm]{}C[1.5cm]{}C[1.7cm]{}]{} **Date**&**Number of VMs**&**Mean**&**St. dev.**&**Quartile 1**&**Median**&**Quartile 3**\ 03/03/2011 & 1052 & 12.31% & 17.09% & 2% & 6% & 15%\ 06/03/2011 & 898 & 11.44% & 16.83% & 2% & 5% & 13%\ 09/03/2011 & 1061 & 10.70% & 15.57% & 2% & 4% & 13%\ 22/03/2011 & 1516 & 9.26% & 12.78% & 2% & 5% & 12%\ 25/03/2011 & 1078 & 10.56% & 14.14% & 2% & 6% & 14%\ 03/04/2011 & 1463 & 12.39% & 16.55% & 2% & 6% & 17%\ 09/04/2011 & 1358 & 11.12% & 15.09% & 2% & 6% & 15%\ 11/04/2011 & 1233 & 11.56% & 15.07% & 2% & 6% & 16%\ 12/04/2011 & 1054 & 11.54% & 15.15% & 2% & 6% & 16%\ 20/04/2011 & 1033 & 10.43% & 15.21% & 2% & 4% & 12%\ Experimental Setup ------------------ Each data center node is modelled with a dual-core CPU; the performance of each core is equivalent to 1860 MIPS for the HP ProLiant ML110 G4 server and 2660 MIPS for the HP ProLiant ML110 G5 server. Each server is modelled with 1 Gbps of network bandwidth. The VM types correspond to Amazon EC2 instance types and are listed in Table 5. The VMs of the data center are deployed with single-core CPUs because the workload data come from single-core VMs. Initially, VMs occupy system resources according to their VM types; during the simulated lifetime, however, they occupy fewer resources, consistent with the workload data, leaving room for dynamic VM consolidation. 
[L[3cm]{}|C[2cm]{}|C[2cm]{}]{} **Instance Type** & **CPU Speed MIPS** & **RAM (GB)**\ High-CPU Medium Instance & 2500 & 0.85\ Extra Large Instance & 2000 & 3.75\ Small Instance & 1000 & 1.7\ Micro Instance & 500 & 0.613\ Simulation Scenario ------------------- The modelled data center is connected to the internet, from which user requests are generated. User requests are generated according to the workload traces and passed to the data center, which processes the requests and consolidates or deconsolidates VMs when necessary. Figure 4 shows the basic simulation scenario for our proposed algorithms. ![image](SiSc.eps){width="60.00000%"} Physical Resources ------------------ We ran the simulation on an arrangement of three physical machines of the same type, each configured with an Intel$^{\circledR}$  Core^TM^ 2 Duo E8400 3.00 GHz processor, 4 GB of RAM and 250 GB of storage, running 32-bit Windows 7. Performance Metrics =================== Performance was measured using the three main performance metrics proposed by Beloglazov and Buyya \[18\]. Energy Consumption ------------------ The total energy consumption is measured as the total energy consumed by the data center while serving the application workload. The unit of energy consumption is the kilowatt-hour (kWh). SLA Violation ------------- The SLA violation (SLAV) percentage is defined as the percentage of SLA violation events relative to the total number of processed time frames. The SLA violation is calculated through the Performance Degradation due to Migration (PDM) and the SLA violation Time per Active Host (SLATAH). Number of VM Migrations ----------------------- The number of VM migrations is the number of migrations initiated by the VM manager during the adaptation of the VM placement. We count the VMs migrated during a simulation with a single day's workload. 
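The SLA metrics above can be sketched as follows, following the definitions of Beloglazov and Buyya \[18\]: SLATAH averages, over hosts, the fraction of active time spent at 100% CPU utilization; PDM averages, over VMs, the performance degradation caused by migrations; and SLAV combines the two as their product. All input names here are illustrative.

```python
def slatah(full_util_time, active_time):
    """SLATAH: mean over hosts of (time at 100% CPU) / (total active time)."""
    return sum(f / a for f, a in zip(full_util_time, active_time)) / len(active_time)

def pdm(degradation, requested):
    """PDM: mean over VMs of (capacity lost to migration) / (capacity requested)."""
    return sum(d / r for d, r in zip(degradation, requested)) / len(requested)

def slav(full_util_time, active_time, degradation, requested):
    """Combined SLA violation metric: SLAV = SLATAH * PDM."""
    return slatah(full_util_time, active_time) * pdm(degradation, requested)
```

For example, two hosts that spend 10 of 100 and 0 of 50 active seconds at full utilization give a SLATAH of 5%.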
In each run of our simulation, we simulate a 24-hour workload. Energy and SLA Violations ------------------------- The objective of the proposed algorithms is to reduce SLA violations and energy consumption. Therefore, the Energy and SLA Violation (ESV) metric in Eq. (6) is used as a combined metric. $$ESV = E \cdot SLAV$$ Simulation Results and Discussions ================================== We evaluated our proposed algorithm in a simulated environment. The simulation was conducted using the CloudSim toolkit with a real-life workload derived from thousands of PlanetLab physical servers located around the world. During the simulation, we combined our proposed algorithm with the best previously proposed algorithm combinations of \[18\]: THR-MMT-1.0, THR-MMT-0.8, IQR-MMT-1.5, MAD-MMT-2.5, LRR-MMT-1.2 and LR-MMT-1.2. ![image](F2EnergyIQRLRR.eps){width="80.00000%"} ![image](F2EnergyMADTHR.eps){width="80.00000%"} ![image](F2EnergyTHR1LR.eps){width="80.00000%"} Combined with our proposed algorithm, the policies are denoted THR-MxMT-1.0, THR-MxMT-0.8, IQR-MxMT-1.5, MAD-MxMT-2.5, LRR-MxMT-1.2 and LR-MxMT-1.2. Simulation results were obtained for all six algorithm combinations. Our proposed algorithm outperforms the prior work. We measured the energy consumption, SLATAH and VM migrations.\ Energy Consumption ------------------ Fig. 5 (a) and (b) show the energy consumption of our proposed MxMT algorithm in combination with IQR and LRR. The proposed IQR-MxMT-1.5 reduced energy consumption by 19% on average compared to IQR-MMT-1.5, while LRR-MxMT-1.2 saves over 18% of energy consumption on average compared to LRR-MMT-1.2.\ The proposed MAD-MxMT-2.5 and THR-MxMT-0.8 reduced energy consumption by over 18% and 19% on average, respectively. The simulation results for MAD-MMT-2.5 and MAD-MxMT-2.5 are shown in Fig. 
6 (a), and the simulation results for THR-MMT-0.8 and THR-MxMT-0.8 are shown in Fig. 6 (b).\ As the simulation result in Fig. 7 (a) shows, our proposed MxMT algorithm reduced energy consumption by over 20% on average compared to the previously proposed algorithm. The proposed LR-MxMT-1.2 policy cut energy consumption by around 18% on average compared to the LR-MMT-1.2 policy, as shown in Fig. 7 (b).\ SLA Time Per Active Host ------------------------ Our experimental results show that the SLA time per active host increased greatly. As shown in Fig. 8 (a), IQR-MxMT-1.5 has an average SLATAH of 43%, while the average was 5% for IQR-MMT-1.5. Similar results were observed for the LRR-MMT-1.2 and LRR-MxMT-1.2 policies, as shown in Fig. 8 (b). For the MAD-MMT-2.5 and MAD-MxMT-2.5 policies, average SLATAH values of 5% and 46%, respectively, were found, as illustrated in Fig. 8 (c). For the THR-MMT-0.8 and THR-MxMT-0.8 policies, average SLATAH values of 5% and 40% were observed, as shown in Fig. 8 (d), again indicating an increase in SLATAH compared to the prior work.\ On average, SLATAH values of 27% and 83% were found for the THR-MMT-1.0 and THR-MxMT-1.0 policies, as illustrated in Fig. 9 (a). Fig. 9 (b) depicts SLATAH for the LR-MMT-1.2 and LR-MxMT-1.2 policies: LR-MMT-1.2 produced an average of 4% SLATAH and LR-MxMT-1.2 an average of 41%.\ Virtual Machine Migration ------------------------- Our experimental results show a large reduction in VM migrations, which is significant for our proposed algorithm. Compared to the prior work, VM migrations decreased by 95%, 94%, 95% and 96% for IQR-MxMT-1.5, LRR-MxMT-1.2, MAD-MxMT-2.5 and THR-MxMT-0.8, respectively. The simulation results are shown in Fig. 10 (a), (b), (c) and (d).\ Similarly, the number of VM migrations decreased for THR-MxMT-1.0 and LR-MxMT-1.2 by 94% and 93%, as illustrated in Fig. 11 (a) and (b). 
The number of VM migrations was reduced because, whenever a host was considered to be overloaded, we chose a heavy VM to migrate first. Conclusion {#cons} ========== From the simulation results presented in this paper, we conclude that our proposed VM selection algorithm is more energy efficient than the prior work. On average, our proposed VM selection algorithm saves 19% of the energy cost. The proposed algorithm also reduces VM migrations by over 94%, although SLATAH increased; this is due to migrating the VM that occupies the most memory. In this paper we presented a simulation-based evaluation of a VM selection algorithm for dynamic VM consolidation. We described our proposed method for VM selection, formulated it, and implemented it as an algorithm. We then tested the proposed algorithm in a simulated cloud data center environment under a real-life workload. Finally, we presented an analysis of the simulation results obtained from the experiments. Acknowledgment {#acknowledgment .unnumbered} ============== This work is supported by the Malaysian Ministry of Education under the Fundamental Research Grant Scheme FRGS/02/01/12/1143/FR. ![image](F2SLATAHIQRLRRMADTHR.eps){width="0.8\paperwidth"} ![image](F2SLATAHTHR1LR.eps){width="80.00000%"} ![image](F2VMMigIQRLRRMADTHR.eps){width="80.00000%"} ![image](F2VMMigTHR1LR.eps){width="80.00000%"} L.A. Barroso, U. Hölzle, The case for energy-proportional computing, Computer, 40 (12) (2007) 33-37. R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation computer systems, 25 (6) (2009) 599-616. M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. 
Stoica, A view of cloud computing, Communications of the ACM, 53 (4) (2010) 50-58. R. Nathuji, K. Schwan, VirtualPower: coordinated power management in virtualized enterprise systems, in: ACM SIGOPS Operating Systems Review, ACM, 2007, pp. 265-278. S. Ben-David, A. Borodin, R. Karp, G. Tardos, A. Wigderson, On the power of randomization in on-line algorithms, Algorithmica, 11 (1) (1994) 2-14. P. Barford, M. Crovella, Generating representative web workloads for network and server performance evaluation, ACM SIGMETRICS Performance Evaluation Review, 26 (1) (1998) 151-160. A. Borodin, R. El-Yaniv, Online computation and competitive analysis, cambridge university press, 2005. D.G. Feitelson, Workload modeling for performance evaluation, in: Performance Evaluation of Complex Systems: Techniques and Tools, Springer, 2002, pp. 114-141. H. Li, Workload dynamics on clusters and grids, The Journal of Supercomputing, 47 (1) (2009) 1-20. J.S. Chase, D.C. Anderson, P.N. Thakar, A.M. Vahdat, R.P. Doyle, Managing energy and server resources in hosting centers, in: ACM SIGOPS Operating Systems Review, ACM, 2001, pp. 103-116. R. Neugebauer, D. McAuley, Energy is just another resource: Energy accounting and energy pricing in the Nemesis OS, in: Hot Topics in Operating Systems, 2001. Proceedings of the Eighth Workshop on, IEEE, 2001, pp. 67-72. D.G. Sullivan, M.I. Seltzer, Isolation with flexibility: A resource management framework for central servers, in: Proceedings of the USENIX Annual Technical Conference, San Diego, CA, 2000, pp. 337–350. B. Verghese, A. Gupta, M. Rosenblum, Performance isolation: sharing and isolation in shared-memory multiprocessors, ACM SIGPLAN Notices, 33 (11) (1998) 181-192. E. Pinheiro, R. Bianchini, E.V. Carrera, T. Heath, Load balancing and unbalancing for power and performance in cluster-based systems, in: Workshop on compilers and operating systems for low power, Barcelona, Spain, 2001, pp. 182-195. A. Beloglazov, R. 
Buyya, Energy efficient resource management in virtualized cloud data centers, in: Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, IEEE Computer Society, 2010, pp. 826-831. A. Beloglazov, J. Abawajy, R. Buyya, Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Future Generation computer systems, 28 (5) (2012) 755-768. C. Panarello, A. Lombardo, G. Schembra, L. Chiaraviglio, M. Mellia, Energy saving and network performance: a trade-off approach, in: Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking, ACM, 2010, pp. 41-50. A. Beloglazov, R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurrency and Computation: Practice and Experience, 24 (13) (2012) 1397-1420. A. Verma, G. Dasgupta, T.K. Nayak, P. De, R. Kothari, Server workload analysis for power minimization using consolidation, in: Proceedings of the 2009 conference on USENIX Annual technical conference, USENIX Association, 2009, pp. 28-28. J. Cao, Y. Wu, M. Li, Energy efficient allocation of virtual machines in cloud computing environments based on demand forecast, in: Advances in Grid and Pervasive Computing, Springer, 2012, pp. 137-151. R.K. Naha, M. Othman, Brokering and Load-Balancing Mechanism in the Cloud-Revisited, IETE Technical Review, 31 (4) (2014) 271-276. R.K. Naha, M. Othman, Optimized Load Balancing for Efficient Resource Provisioning in the Cloud, in: The 2nd IEEE International Symposium on Telecommunication Technologies (ISTT), Lankawi, Malaysia, 2014, pp. 382-285. N. Akhter, M. Othman, Energy Efficient Virtual Machine Provisioning in Cloud Data Centers, in: The 2nd IEEE International Symposium on Telecommunication Technologies (ISTT), Lankawi, Malaysia, 2014, pp. 282-286. P. Raycroft, R. Jansen, M. Jarus, P.R. 
Brenner, Performance bounded energy efficient virtual machine allocation in the global cloud, Sustainable Computing: Informatics and Systems, 4 (1) (2014) 1-9. C.-M. Wu, R.-S. Chang, H.-Y. Chan, A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters, Future Generation computer systems, 37 (2014) 141-147. Y. Ding, X. Qin, L. Liu, T. Wang, Energy efficient scheduling of virtual machines in cloud with deadline constraint, Future Generation computer systems, 40 (2015) 62–74. A. Wolke, B. Tsend-Ayush, C. Pfeiffer, M. Bichler, More than bin packing: Dynamic resource allocation strategies in cloud data centers, Information Systems, 52 (2015) 83-95. R.N. Calheiros, R. Ranjan, A. Beloglazov, C.A. De Rose, R. Buyya, CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Software: Practice and Experience, 41 (1) (2011) 23-50. K. Park, V.S. Pai, CoMon: a mostly-scalable monitoring system for PlanetLab, ACM SIGOPS Operating Systems Review, 40 (1) (2006) 65-74. L. Tang, Y. Chen, F. Li, H. Zhang, J. Li, Empirical study on the evolution of planetlab, in: Networking, 2007. ICN’07. Sixth International Conference on, IEEE, 2007, pp. 64-64. A. Medina, A. Lakhina, I. Matta, J. Byers, BRITE: An approach to universal topology generation, in: Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 2001. Proceedings. Ninth International Symposium on, IEEE, 2001, pp. 346-353. N. Akhter, M. Othman, Energy aware resource allocation of cloud data center: review and open issues, Cluster Computing, 19 (3) (2016) 1163-1182. R. K. Naha, M. Othman, Cost-aware service brokering and performance sentient load balancing algorithms in the cloud, Journal of Network and Computer Applications, 75 (2016) 47-57. R. K. Naha, M. Othman, and N. Akhter. “Evaluation of cloud brokering algorithms in cloud based data center.” Far East J. Electron. Commun 15.2 (2015): 85-98. R. K. 
Naha, M. Othman, and N. Akhter. “Diverse approaches to cloud brokering: innovations and issues.” International Journal of Communication Networks and Distributed Systems 19.1 (2017): 99-120.
--- author: - Ken Freeman title: Structure and Evolution of the Milky Way --- The Thin Disk: Formation and Evolution ====================================== Here are some of the issues related to the formation and evolution of the Galactic thin disk: - Building the thin disk: its exponential radial structure, and the role of mergers. - The star formation history: chemical evolution and continued gas accretion. - Evolutionary processes in the disk: disk heating, radial mixing. - The outer disk: chemical properties and chemical gradients. Many of the basic observational constraints on the properties of the Galactic disk are still uncertain. At this time, we do not have reliable information about the star formation history of the disk. We do not know how the metallicity distribution and the stellar velocity dispersions in the disk have evolved with time. One might have expected that these observational questions would be well understood by now, but this is not yet so. The basic observational problem is the difficulty of measuring ages for individual stars. The younger stars of the Galactic disk show a clear abundance gradient of about 0.07 dex kpc$^{-1}$, outlined nicely by the cepheids [@Luck2006]. In the outer disk, for the older stars, the abundance gradient appears to be even stronger: the abundance gradient (and the gradient in the ratio of alpha-elements to Fe) have flattened with time towards the solar values. A striking feature of the radial abundance gradient in the Galaxy is that it flattens for $R > 12$ kpc at an \[Fe/H\] value of about -0.5 [@Carney2005]. A similar flattening of the abundance gradient is seen in the outer regions of the disk of M31 [@Worthey2005]. The relations between stellar age and the mean metallicity and velocity dispersion are the fundamental observables that constrain the chemical and dynamical evolution of the Galactic thin disk. The age-metallicity relation (AMR) in the solar neighborhood is still uncertain. 
Different authors find different relations, ranging from a relatively steep decrease of metallicity with age from [@Rocha-Pinto2004] to almost no change of mean metallicity with age from [@Nordstrom2004]. Much of the earlier work indicated that a large scatter in metallicity was seen at all ages, which was part of the motivation to invoke large-scale radial mixing of stars within the disk. This mixing, predicted by [@Sellwood2002], is generated by resonances with the spiral pattern, and is able to move stars from one near-circular orbit to another. It would bring stars from the inner and outer disks, with their different mean abundances, into the solar neighborhood. Radial mixing is potentially an important feature of the evolution of the disk. At this stage, it is a theoretical concept, and it is not known how important it is in the Galactic disk. We are not aware of any strong observational evidence at this stage for its existence. More recent results on the AMR (e.g. Wylie de Boer, unpublished) indicate that there is a weak decrease of mean metallicity with age in the Galactic thin disk, but that the spread in metallicity at any age is no more than about 0.10 dex. If this is correct, then radial mixing may not be so important for chemically mixing the Galactic disk. The age-velocity dispersion relation (AVR) is also not well determined observationally. The velocity dispersion of stars appears to increase with age, and this is believed to be due to the interaction of stars with perturbers such as giant molecular clouds and transient spiral structure. But there is a difference of opinion about the duration of this heating. One view is that the stellar velocity dispersion $\sigma$ increases steadily for all time, $\sim t^{0.2-0.5}$, based on [@Wielen1977]’s work using chromospheric ages and kinematics for the McCormick dwarfs. Another view [e.g. 
@Quillen2000], based on the data for subgiants from [@Edvardsson1993], is that the heating takes place for the first $\sim 2$ Gyr, but then saturates when $\sigma \approx 20$ km s$^{-1}$ because the stars of higher velocity dispersion spend most of their orbital time away from the Galactic plane where the sources of heating lie. Data from [@Soubiran2008] support this view. Again, much of the difference in view goes back to the difficulty of measuring stellar ages. Accurate ages from asteroseismology would be very welcome. Accurate ages and distances for a significant sample of red giants would allow us to measure the AMR and AVR out to several kpc from the Sun. This would be a great step forward in understanding the chemical and dynamical evolution of the Galactic disk. The Formation of the Thick Disk =============================== Most spiral galaxies, including our Galaxy, have a second, thicker disk component. For example, the edge-on spiral galaxy NGC 891, which is much like the Milky Way in size and morphology, has a thick disk nicely seen in star counts from HST images [@Mouhcine2010]. Its thick disk has scale height $\sim 1.4$ kpc and scalelength $\sim 4.8$ kpc, much as in our Galaxy. The fraction of baryons in the thick disk is typically about $10$ to $15$ percent in large systems like the Milky Way, but rises to about $50$% in the smaller disk systems [@Yoachim2008]. The Milky Way has a significant thick disk, discovered by [@Gilmore1983]. Its vertical velocity dispersion is about 40 [km s$^{-1}$]{}; its scale height is still uncertain but is probably about $1000$ pc. The surface brightness of the thick disk is about 10% of the thin disk’s, and near the Galactic plane it rotates almost as rapidly as the thin disk. 
Its stars are older than 10 Gyr and are significantly more metal poor than the stars of the thin disk; most of the thick disk stars have \[Fe/H\] values between about $-0.5$ and $-1.0$ and are enhanced in alpha-elements relative to Fe. This is usually interpreted as evidence that the thick disk formed rapidly, on a timescale $\sim 1$ Gyr. From its kinematics and chemical properties, the thick disk appears to be a discrete component, distinct from the thin disk. Current opinion is that the thick disk shows no vertical abundance gradient [e.g. @Gilmore1995; @Ivezic2008]. The old thick disk is a very significant component for studying Galaxy formation, because it presents a kinematically and chemically recognizable relic of the early Galaxy. Secular heating is unlikely to affect its dynamics significantly, because its stars spend most of their time away from the Galactic plane. How do thick disks form? Several mechanisms have been proposed, including: - thick disks are a normal part of early disk settling, and form through energetic early star forming events, e.g. in gas-rich mergers [@Samland2003; @Brook2004] - thick disks are made up of accretion debris [@Abadi2003]. From the mass-metallicity relation for galaxies, the accreted galaxies that built up the thick disk of the Galaxy would need to be more massive than the SMC to get the right mean \[Fe/H\] abundance ($\sim -0.7$). The possible discovery of a counter-rotating thick disk [@Yoachim2008] in an edge-on galaxy would favor this mechanism. - thick disks come from the heating of the thin disk via disruption of its early massive clusters [@Kroupa2002]. The internal energy of large star clusters is enough to thicken the disk. Recent work on the significance of the high redshift clump structures may be relevant to the thick disk problem: the thick disk may originate from the merging of clumps and heating by clumps [e.g. @Bournaud2009]. 
These clumps are believed to form by gravitational instability from turbulent early disks: they appear to generate thick disks with scale heights that are radially approximately uniform, rather than the flared thick disks predicted from minor mergers.

- thick disks come from early partly-formed thin disks, heated by accretion events such as the accretion event which is believed to have brought omega Centauri into the Galaxy [@Bekki2003]. In this picture, thin disk formation began early, at $z = 2$ to $3$. The partly formed thin disk is partly disrupted during the active merger epoch, which heats it into the thick disk observed now. The rest of the gas then gradually settles to form the present thin disk, a process which continues to the present day.

- a recent suggestion is that stars on more energetic orbits migrate out from the inner galaxy to form a thick disk at larger radii where the potential gradient is weaker [@Schonrich2009].

How can we test between these possibilities for thick disk formation? [@Sales2009] looked at the expected orbital eccentricity distribution for thick disk stars in different formation scenarios. Their four scenarios are:

- a gas-rich merger: the thick disk stars are born in-situ

- the thick disk stars come in from outside via accretion

- the early thin disk is heated by accretion of a massive satellite

- the thick disk is formed as stars from the inner disk migrate out to larger radii.

Preliminary results from the observed orbital eccentricity distribution for thick disk stars may favor the gas-rich merger picture [@Wilson2011]. This is a potentially powerful approach for testing ideas about the origin of the thick disk. Because it depends on the orbital properties of the thick disk sample, firm control of selection effects is needed in the identification of which stars belong to the thick disk. Kinematical criteria for choosing the thick disk sample are clearly not ideal.
To summarize this section on the thick disk: Thick disks are very common in disk galaxies. In our Galaxy, the thick disk is old, and is kinematically and chemically distinct from the thin disk. It is important now to identify what the thick disk represents in the galaxy formation process. The orbital eccentricity distribution of the thick disk stars will provide some guidance. Chemical tagging will show if the thick disk formed as a small number of very large aggregates, or if it has a significant contribution from accreted galaxies. This is one of the goals for the upcoming AAT/HERMES survey: see section 5.

The Galactic Stellar Halo
=========================

The stars of the Galactic halo have \[Fe/H\] abundances mostly less than $-1.0$. Their kinematics are very different from those of the rotating thick and thin disks: the mean rotation of the stellar halo is close to zero, and it is supported against gravity primarily by its velocity dispersion. It is now widely believed that much of the stellar halo comes from the debris of small accreted satellites [@Searle1978]. There remains a possibility that a component of the halo formed dissipationally during the Galaxy formation process [@Eggen1962; @Samland2003]. Halo-building accretion events continue to the present time: the disrupting Sgr dwarf is an example in our Galaxy, and the faint disrupting system around NGC 5907 is another example of such an event [@Martinez-Delgado2010]. The metallicity distribution function (MDF) of the major surviving satellites around the Milky Way is not like the MDF of the stellar halo [e.g. @Venn2008], but the satellite MDFs may have been more similar long ago. We note that the fainter satellites are more metal-poor and are consistent with the Milky Way halo in their \[$\alpha$/Fe\] behaviour. Is there a halo component that formed dissipationally early in the Galactic formation process?
[@Hartwick1987] showed that the metal-poor RR Lyrae stars delineate a two-component halo, with a flattened inner component and a spherical outer component. [@Carollo2010] identified a two-component halo and the thick disk in a sample of 17,000 SDSS stars, mostly with \[Fe/H\] $< -0.5$. They described the kinematics well with these three components:\
Thick disk: ($\bar{V}, \sigma$, \[Fe/H\]) = (182, 51, -0.7)\
Inner halo: ($\bar{V}, \sigma$, \[Fe/H\]) = (7, 95, -1.6)\
Outer halo: ($\bar{V}, \sigma$, \[Fe/H\]) = (-80, 180, -2.2)\
Here \[Fe/H\] is the mean abundance for the component, and $\bar{V}$ and $\sigma$ are its mean rotation velocity relative to a non-rotating frame and its velocity dispersion, in [km s$^{-1}$]{}. The outer halo appears to have retrograde mean rotation. As we look at subsamples at greater distances from the Galactic plane, we see that the thick disk dies away and the retrograde outer halo takes over from the inner halo. With the above kinematic parameters, the equilibrium of the inner halo is a bit hard to understand. It may not yet be in equilibrium. From comparison with simulations, [@Zolotov2009] argue that the inner halo has a partly dissipational origin, while the outer halo is made up from debris of faint metal-poor accreted satellites. Recently [@Nissen2010] studied a sample of 78 halo stars with \[Fe/H\] $>-1.6$ and find that they show a variety of \[$\alpha$/Fe\] enhancement. Their sample shows high and low \[$\alpha$/Fe\] groups, and the low \[$\alpha$/Fe\] stars are mostly in high energy retrograde orbits. The high \[$\alpha$/Fe\] stars could be ancient halo stars born in situ and possibly heated by satellite encounters. The low-alpha stars may be accreted from dwarf galaxies. How much of the halo comes from accreted structures?
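The three-component kinematic decomposition quoted above lends itself to a simple numerical illustration. The sketch below assigns a star a rough component membership from its rotation velocity alone, assuming one-dimensional Gaussians with equal prior weights; both are simplifications of our own (the cited analysis is multi-dimensional and the components are certainly not equally populated), and only the $(\bar{V}, \sigma)$ values come from the text.

```python
import math

# (mean rotation V, velocity dispersion sigma) in km/s,
# from the three-component decomposition quoted above
COMPONENTS = {
    "thick disk": (182.0, 51.0),
    "inner halo": (7.0, 95.0),
    "outer halo": (-80.0, 180.0),
}

def membership(v, components=COMPONENTS):
    """Relative probability of each component for a star with rotation
    velocity v, assuming 1-D Gaussians and equal prior weights."""
    like = {name: math.exp(-0.5 * ((v - mu) / sig) ** 2) / sig
            for name, (mu, sig) in components.items()}
    total = sum(like.values())
    return {name: l / total for name, l in like.items()}

# A rapidly rotating star is almost certainly thick disk; a strongly
# retrograde one is almost certainly outer halo.
fast = membership(180.0)
retro = membership(-200.0)
```

Even this crude one-dimensional version reproduces the qualitative point of the text: prograde rotation near 180 km s$^{-1}$ picks out the thick disk, while strongly retrograde motion picks out the outer halo.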
An ACS study by [@Ibata2009] of the halo of NGC 891 (a nearby edge-on galaxy like the Milky Way) shows a spatially lumpy metallicity distribution, indicating that its halo is made up largely of accreted structures which have not yet mixed away. This is consistent with simulations of stellar halos by [@Font2008], [@Gilbert2009] and [@Cooper2010]. To summarize this section on the Galactic stellar halo: the stellar halo is probably made up mainly of the debris of small accreted galaxies, although there may be an inner component which formed dissipatively.

The Galactic Bar/Bulge
======================

The boxy appearance of the Galactic bulge is typical of galactic bars seen edge-on. These bar/bulges are very common: about 2/3 of spiral galaxies show some kind of central bar structure in the infra-red. Where do these bar/bulges come from? Bars can arise naturally from the instabilities of the disk. A rotating disk is often unstable to forming a flat bar structure at its center. This flat bar in turn is often unstable to vertical buckling, which generates the boxy appearance. This kind of bar/bulge is not generated by mergers but follows simply from the dynamics of a flat rotating disk of stars. The maximum vertical extent of boxy or peanut-shaped bulges occurs near the radius of the vertical and horizontal Lindblad resonances, i.e. where $$\Omega_b = \Omega - \kappa/2 = \Omega - \nu_z/2.$$ Here $\Omega$ is the circular angular velocity, $\Omega_b$ is the pattern speed of the bar, $\kappa$ is the epicyclic frequency and $\nu_z$ is the vertical frequency of oscillation. We note that the frequencies $\kappa$ and particularly $\nu_z$ depend on the amplitude of the oscillation. Stars in this zone oscillate on 3D orbits which support the peanut shape.
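The resonance condition above can be made concrete for the simplest case of a flat rotation curve, where $\Omega = v_c/R$ and $\kappa = \sqrt{2}\,\Omega$, so $\Omega - \kappa/2 = \Omega\,(1 - 1/\sqrt{2})$. The numbers below ($v_c = 220$ km s$^{-1}$, $\Omega_b = 50$ km s$^{-1}$ kpc$^{-1}$) are illustrative assumptions of ours, not values from the text:

```python
import math

V_C = 220.0      # km/s: flat rotation curve amplitude (illustrative)
OMEGA_B = 50.0   # km/s/kpc: assumed bar pattern speed (illustrative)

def omega(r_kpc):
    """Circular angular velocity for a flat rotation curve."""
    return V_C / r_kpc

def kappa(r_kpc):
    """Epicyclic frequency; kappa = sqrt(2) * Omega for a flat curve."""
    return math.sqrt(2.0) * omega(r_kpc)

def resonance_radius(omega_b=OMEGA_B):
    """Solve Omega_b = Omega - kappa/2.  For a flat curve the right-hand
    side is Omega * (1 - 1/sqrt(2)), so R = V_C * (1 - 1/sqrt(2)) / Omega_b."""
    return V_C * (1.0 - 1.0 / math.sqrt(2.0)) / omega_b

r_res = resonance_radius()   # about 1.3 kpc with these numbers
```

With these assumed values the resonance sits at roughly 1.3 kpc, a plausible inner-Galaxy scale for the peanut; the point of the sketch is only that the resonance radius follows directly from $v_c$ and $\Omega_b$ once the rotation curve shape is fixed.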
We can test whether the Galactic bulge formed through this kind of bar-buckling instability of the inner disk, by comparing the structure and kinematics of the bulge with those of N-body simulations that generate a boxy/bar bulge [e.g. @Athanassoula2005]. The simulations show an exponential structure and near-cylindrical rotation: do these simulations match the properties of the Galactic bar/bulge? The stars of the Galactic bulge appear to be old and enhanced in $\alpha$-elements. This implies a rapid history of star formation. If the bar formed from the inner disk, then it would be interesting to know whether the bulge stars and the stars of the adjacent disk have similar chemical properties. This is not yet clear. There do appear to be similarities in the $\alpha$-element properties between the bulge and the thick disk in the solar neighborhood. The bar-forming and bar-buckling process takes 2-3 Gyr to act after the disk settles. In the bar-buckling instability scenario, the bulge [*structure*]{} is probably younger than the bulge [*stars*]{}, which were originally part of the inner disk. The alpha-enrichment of the bulge and thick disk comes from the rapid chemical evolution which took place in the inner disk before the instability acted. In this scenario, the stars of the bulge and adjacent disk should have similar ages: accurate asteroseismology ages for giants of the bulge and inner disk would be a very useful test of the scenario. We are doing a survey of about 28,000 clump giants in the Galactic bulge and the adjacent disk, to measure the chemical properties (Fe, Mg, Ca, Ti, Al, O) of stars in the bulge and adjacent disk: are they similar, as we would expect if the bar/bulge grew out of the disk? We use the AAOmega fiber spectrometer on the AAT to acquire medium-resolution spectra of about 350 stars at a time, at a resolution $R \sim 12,000$.
The central regions of our Galaxy are not only the location of the bulge and inner disk, but also include the central regions of the Galactic stellar halo. Recent simulations [e.g. @Diemand2005; @Moore2006; @Brook2007] indicate that the [*metal-free*]{} (population III) stars formed until redshift $z \sim 4$, in chemically isolated subsystems far away from the largest progenitor. If these stars survive, they are spread throughout the Galactic halo. If they are not found, then it would be likely that their lifetimes are less than a Hubble time, which in turn implies a truncated IMF. On the other hand, the [*oldest*]{} stars form in the early rare high density peaks that lie near the highest density peak of the final system. They are not necessarily the most metal-poor stars in the Galaxy. Now, these oldest stars are predicted to lie in the central bulge region of the Galaxy. Accurate asteroseismology ages for metal-poor stars in the inner Galaxy would provide a great way to tell if they are the oldest stars or just stars of the inner Galactic halo. This test would require a $\sim 10$% precision in age. Our data so far indicate that the rotation of the Galactic bulge is close to cylindrical [see also @Howard2009]. Detailed analysis will be needed to see if there is any evidence for a small classical merger-generated bulge component, in addition to the boxy/peanut bar/bulge which probably formed from the disk. We also see a more slowly rotating metal-poor component in the bulge region. The problem now is to identify the [*first*]{} stars from among the expected metal-poor stars of the inner halo.

Galactic Archaeology
====================

The goals of Galactic Archaeology are to find signatures or fossils from the epoch of Galaxy assembly, to give us insight into the processes that took place as the Galaxy formed. A major goal is to identify observationally how important mergers and accretion events were in building up the Galactic disk, bulge and halo of the Milky Way.
CDM simulations predict a high level of merger activity which conflicts with some observed properties of disk galaxies, particularly with the relatively common nature of large galaxies like ours with small bulges [e.g. @Kormendy2010]. The aim is to reconstruct the star-forming aggregates and accreted galaxies that built up the disk, bulge, and halo of the Galaxy. Some of these dispersed aggregates can still be recognized kinematically as stellar moving groups. For others, the dynamical information was lost through heating and mixing processes, but their debris can still be recognized by their chemical signatures (chemical tagging). We would like to find groups of stars, now dispersed, that were associated at birth either

- because they were born together and therefore have almost identical chemical abundances over all elements [e.g. @deSilva2009], or

- because they came from a common accreted galaxy and have abundance patterns that are clearly distinguished from those of the Galactic disk.

The galactic disk shows kinematical substructure in the solar neighborhood: groups of stars moving together, usually called moving stellar groups. Some are associated with dynamical resonances (e.g. the Hercules group): in such groups, we do not expect to see chemical homogeneity or age homogeneity [e.g. @Bensby2007]. Others are the debris of star-forming aggregates in the disk (e.g. the HR 1614 group and Wolf 630 group). They are chemically homogeneous, and such groups could be useful for reconstructing the history of the galactic disk. Yet others may be debris of infalling objects, as seen in CDM simulations [e.g. @Abadi2003]. The stars of the HR 1614 group appear to be the relic of a dispersed star-forming event. These stars have an age of about 2 Gyr and \[Fe/H\] $= +0.2$, and they are scattered all around us. This group has not lost its dynamical identity despite its age.
[@deSilva2007] measured accurate differential abundances for many elements in HR 1614 stars, and found a very small spread in abundances. This is encouraging for recovering dispersed star forming events by chemical tagging. Chemical studies of the old disk stars in the Galaxy can help to identify disk stars which came in from outside in disrupting satellites, and also those that are the debris of dispersed star-forming aggregates like the HR 1614 group [@Freeman2002]. The chemical properties of surviving satellites (the dwarf spheroidal galaxies) vary from satellite to satellite, but are different in detail from the overall chemical properties of the disk stars. We can think of a chemical space of abundances of elements: O, Na, Mg, Al, Ca, Mn, Fe, Cu, Sr, Ba, Eu for example. Not all of these elements vary independently. The dimensionality of this chemical space is probably between about 7 and 9. Most disk stars inhabit a sub-region of this space. Stars that come from dispersed star clusters represent a very small volume in this space. Stars which came in from satellites may have a distribution in this space that is different enough to stand out from the rest of the disk stars. With this chemical tagging approach, we hope to detect or put observational limits on the satellite accretion history of the galactic disk. Chemical studies of the old disk stars in the Galaxy can identify disk stars that are the debris of common dispersed star-forming aggregates. Chemical tagging will work if

- stars form in large aggregates, which is believed to be true

- aggregates are chemically homogeneous

- aggregates have unique chemical signatures defined by several elements or element groups which do not vary in lockstep from one aggregate to another.
We need sufficient spread in abundances from aggregate to aggregate so that chemical signatures can be distinguished with the achievable accuracy ($\sim 0.05$ dex differentially). de Silva’s work on open clusters was aimed at testing the last two conditions: they appear to be true. See [@deSilva2009] for more on chemical tagging. We should stress here that chemical tagging is not just assigning stars chemically to a particular population, like the thin disk, thick disk or halo. Chemical tagging is intended to assign stars chemically to substructure which is no longer detectable kinematically. We are planning a large chemical tagging survey of about a million stars, using the new HERMES multi-object spectrometer on the AAT. The goal is to reconstruct the dispersed star-forming aggregates that built up the disk, thick disk and halo within about 5 kpc of the Sun. HERMES is a new high resolution multi-object spectrometer on the AAT. Its spectral resolution is about 28,000, with a high resolution mode with $R = 50,000$. It is fed by 400 fibers over a 2-degree field, and has 4 non-contiguous wavelength bands covering a total of about 1000 Å. The four wavelength bands were chosen to include measurable lines of the elements needed for chemical tagging. HERMES is scheduled for first light in late 2012. The HERMES chemical tagging survey will include stars brighter than $V = 14$ and has a strong synergy with Gaia: for the dwarf stars in the HERMES sample, the accurate ($1$%) parallaxes and proper motions will be invaluable for more detailed studies. The fractional contribution of the different Galactic components to the HERMES sample will be about $78$% thin disk stars, $17$% thick disk stars and about $5$% halo stars. About $70$% of the stars will be dwarfs within about 1000 pc and $30$% giants within about 5 kpc. About $9$% of the thick disk stars and about $14$% of the thin disk stars pass within our 1 kpc dwarf horizon.
Assume that all of their formation aggregates are now azimuthally mixed right around the Galaxy, so that all of their formation sites are represented within our horizon. Simulations [@Bland-Hawthorn2004] show that a complete random sample of about a million stars with $V < 14$ would allow detection of about 20 thick disk dwarfs from each of about 4500 star formation sites, and about 10 thin disk dwarfs from each of about 35,000 star formation sites. These estimates depend on the adopted mass spectrum of the formation sites. In combination with Gaia, HERMES will give the distribution of stars in the multi-dimensional {position, velocity, chemical} space, and isochrone ages for about 200,000 stars with $V < 14$. We would be interested to explore further what the HERMES survey can contribute to asteroseismology. Some authors have argued that the thick disk may have formed from the debris of the huge and short-lived star formation clumps observed in disk galaxies at high redshift [e.g. @Bournaud2009; @Genzel2011]. If this is correct, then only a small number of these huge building blocks would have been involved in the assembly of the thick disk, and their debris should be very easy to identify via chemical tagging techniques. Chemical tagging in the inner regions of the Galactic disk will be of particular interest. We expect about 200,000 survey giants in the inner region of the Galaxy. The surviving old ($> 1$ Gyr) open clusters are all in the outer Galaxy, beyond a radius of 8 kpc. Young open clusters are seen in the inner Galaxy, but do not appear to survive the disruptive effects of the tidal field and giant molecular clouds in the inner regions. We expect to find the debris of many broken open and globular clusters in the inner disk. These will be good for chemical tagging recovery using the HERMES giants. The radial extent of the dispersal of individual broken clusters will provide an acute test of radial mixing theory within the disk.
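The detection estimates quoted earlier in this section reduce to simple products, which the sketch below makes explicit; the per-site yields and site counts are the simulation figures from the text, and the component fractions are those quoted for the million-star survey sample.

```python
n_survey = 1_000_000            # planned V < 14 sample size
f_thick, f_thin = 0.17, 0.78    # component fractions quoted for the sample

# Simulation-based detection estimates quoted in the text:
thick_sites, thick_per_site = 4_500, 20
thin_sites, thin_per_site = 35_000, 10

thick_dwarfs_tagged = thick_sites * thick_per_site   # 90,000 thick disk dwarfs
thin_dwarfs_tagged = thin_sites * thin_per_site      # 350,000 thin disk dwarfs

# Sanity check: the tagged dwarfs fit within each component's share
assert thick_dwarfs_tagged <= f_thick * n_survey     # 90,000 <= 170,000
assert thin_dwarfs_tagged <= f_thin * n_survey       # 350,000 <= 780,000
```

The arithmetic shows that the tagged dwarfs would represent a substantial fraction of each component's contribution to the sample, which is why the survey size of a million stars matters.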
Another opportunity comes from the Na/O anomaly, which is unique to globular clusters, and may help to identify the debris of disrupted globular clusters.
---
abstract: 'We prove that if a finite group $G$ has a representation with fixity $f$, then it acts freely and homologically trivially on a finite CW-complex homotopy equivalent to a product of $f+1$ spheres. This shows, in particular, that every finite group acts freely and homologically trivially on some finite CW-complex homotopy equivalent to a product of spheres.'
address: 'Department of Mathematics, Bilkent University, Ankara, 06800, Turkey.'
author:
- 'Özgün Ünlü and Ergün Yalçın'
title: Constructing homologically trivial actions on products of spheres
---

[^1] [^2]

Introduction {#section:Introduction}
============

It is known that every finite group acts freely on a product of spheres ${\mathbb S}^{n_1}\times \cdots \times {\mathbb S}^{n_k}$ for some $n_1, n_2, \dots, n_k$. This follows from a construction given in [@oliver page 547] which is attributed to J. Tornehave by B. Oliver. The construction is based on the simple idea that one can permute the spheres in a product to get smaller isotropy. More specifically, for a finite group $G$, one defines the $G$-space $X$ as a product of $G$-spaces $\operatorname{Map}_{{\langle}g {\rangle}} (G, {\mathbb S}(\rho_g))$ over all elements $g\in G$, where ${\mathbb S}(\rho_g)$ denotes the unit sphere of a nontrivial one-dimensional complex representation $\rho_g: {\langle}g {\rangle}\to {\mathbb C}$. Note that because of the way $X$ is constructed, $G$ acts freely on $X$, but the induced action of $G$ on the homology of $X$ is not a trivial action in general. It is interesting to ask if there exists a similar construction so that the induced action on homology is trivial. The following was stated as a problem by J. Davis in the Problem Session of the Banff 2005 conference on Homotopy Theory and Group Actions: \[ques:jdavis\] Does every finite group act freely on some product of spheres with trivial action on homology?
To find a free, homologically trivial action of a finite group $G$ on a product of spheres, one may try to take a family of $G$-spheres ${\mathbb S}^{n_i}$ for $i=1,\dots, k$ and let $G$ act on the product ${\mathbb S}^{n_1}\times \cdots \times {\mathbb S}^{n_k}$ by diagonal action. An action which is obtained in this way is called a [*product action*]{} in general and a [*linear action*]{} if each $G$-sphere in the product is a unit sphere of a representation. By construction, product actions are homologically trivial if the action on each sphere is homologically trivial but not every finite group has a free product action on a product of spheres. For example, it is known that the alternating group $A_4$ cannot act freely on a product of spheres by a product action. This follows from a result of Oliver [@oliver Theorem 1] which says that $A_4$ does not act freely on any product of equal dimensional spheres with trivial action on homology. On the other hand, it can be shown that $A_4$ acts freely on ${\mathbb S}^2 \times {\mathbb S}^3$ with trivial action on homology. So, one cannot answer the above question affirmatively by considering only the product actions. The situation with $A_4$ is not an exceptional case. For example, if one searches through the characters of a finite group $G$ and tries to see when $G$ has a family of characters $\chi _1, \dots, \chi _k$ such that $G$ acts freely on ${\mathbb S}(\chi _1) \times \cdots \times {\mathbb S}(\chi _k )$, then one notices that not many groups have such a family of characters. In fact, Urmie Ray [@ray] showed that finite groups which have free linear actions are very rare: If a finite group $G$ has a free linear action on a product of spheres, then all nonabelian simple sections of $G$ are isomorphic to $A_5$ or $A_6$. In this paper we attack the problem stated above using some more recent construction methods that were developed to study the rank conjecture. 
In particular, we use some ideas from our earlier papers [@unlu-yalcin1] and [@unlu-yalcin2] where we constructed free actions on products of spheres for finite groups which have representations with small fixity and for $p$-groups with small rank. The fixity of a representation $V$ of a finite group $G$ over a field $F$ is defined as the maximum of the dimensions $\dim_{F} V^g $ over all nonidentity elements $g \in G$. If $\rho : G \to U(n)$ is a faithful complex representation with fixity $f$, then $G$ acts freely on the space $X=U(n)/U(n-f-1)$. For small values of $f$, one can modify the space $X$ to obtain a free action on a product of $f+1$ spheres (see [@adem-davis-unlu] and [@unlu-yalcin1]). In this paper we improve this result to all values of fixity for actions on finite CW-complexes homotopy equivalent to a product of spheres. \[thm:main\] Let $G$ be a finite group. If $G$ has a faithful complex representation with fixity $f$, then $G$ acts freely on a finite complex $X$ homotopy equivalent to a product of $f+1$ spheres with trivial action on homology. We prove this theorem using a recursive method for constructing free actions. This method involves the construction of a $G$-equivariant spherical fibration $p : E \to X$ over a given finite $G$-CW-complex $X$ in each step. We require that the total space $E$ is also homotopy equivalent to a finite $G$-CW-complex and that the $G$-action on $E$ has smaller isotropy than the $G$-action on $X$. Once they are constructed, by taking fiber joins these fibrations can be replaced by $G$-fibrations which are non-equivariantly homotopy equivalent to trivial fibrations. This gives a $G$-action on a finite CW-complex $Y$ homotopy equivalent to $X \times {\mathbb S}^N$ for some $N>0$. Using this method, after some steps, one gets a free action on a product of spheres. This method was first developed by Connolly and Prassidis [@connolly-prassidis] and later used also in [@adem-smith] and [@unlu-thesis].
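The fixity of a complex representation can be computed directly from matrices, since $\dim_{{\mathbb C}} V^g$ is the nullity of $\rho(g) - I$ (the identity is excluded, since $V^1 = V$). The sketch below is our own illustrative example, not from the paper: it computes the fixity of the standard 2-dimensional faithful representation of the quaternion group $Q_8$, which is $0$, consistent with $Q_8$ acting freely on a product of $f+1 = 1$ spheres, namely ${\mathbb S}^3$.

```python
import numpy as np

# Generators of a 2-dimensional faithful complex representation of
# Q8 = {±1, ±i, ±j, ±k} (illustrative example, not from the paper)
i_mat = np.array([[1j, 0], [0, -1j]])
j_mat = np.array([[0, 1], [-1, 0]], dtype=complex)

def group_closure(gens):
    """Close a list of matrices under multiplication (fine for tiny groups)."""
    elems = [np.eye(2, dtype=complex)] + list(gens)
    grew = True
    while grew:
        grew = False
        for g in list(elems):
            for h in list(elems):
                p = g @ h
                if not any(np.allclose(p, e) for e in elems):
                    elems.append(p)
                    grew = True
    return elems

def fixity(elems, tol=1e-9):
    """Max over nonidentity g of dim V^g, where V^g = ker(rho(g) - I)."""
    n = elems[0].shape[0]
    f = 0
    for g in elems:
        if np.allclose(g, np.eye(n)):
            continue
        f = max(f, n - np.linalg.matrix_rank(g - np.eye(n), tol=tol))
    return f

Q8 = group_closure([i_mat, j_mat])   # all 8 elements of Q8
f = fixity(Q8)                       # 0: no nonidentity element fixes a vector
```

Every nonidentity element of $Q_8$ in this representation has eigenvalues $\{-1,-1\}$ or $\{\pm i\}$, so no eigenvalue equals $1$ and the fixity is $0$.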
Since every finite group has a faithful complex representation, every finite group has a complex representation with fixity $f$ for some positive integer $f$. Hence, as a corollary of Theorem \[thm:main\] we obtain an affirmative answer to Question \[ques:jdavis\] in the homotopy category. \[cor:main\] Every finite group acts freely and homologically trivially on some finite CW-complex $X$ homotopy equivalent to a product of spheres. The paper is organized as follows: In Section \[sect:EquivariantFibrations\], we introduce $G$-fibrations and discuss the effects of taking fiber joins of $G$-fibrations. In Section \[sect:Federer\], we discuss the equivariant Federer spectral sequence introduced by Møller [@moller] and, using it, we give another proof for a theorem by M. Klaus [@klaus] (see Theorem \[thm:finiteness of homotopy\]). In Section \[sect:construction\], we introduce our main construction method, and finally in Section \[sect:mainthm\], we prove Theorem \[thm:main\].

$G$-fibrations {#sect:EquivariantFibrations}
==============

In this section, we first give some preliminaries on $G$-fibrations and then prove some lemmas on fiber joins of $G$-fibrations. For more details on this material we refer the reader to [@lueck] and [@waner2]. Some of this material also appears in [@connolly-prassidis], [@guclukanthesis], [@klaus], and [@unlu-thesis]. \[defn:Gfibration\] A $G$-fibration is a $G$-map $p: E \to B$ which satisfies the following homotopy lifting property for every $G$-space $X$: Given a commuting diagram of $G$-maps $$\xymatrix{ X \times \{ 0\} \ar[d] \ar[r]^-{h} & E \ar[d]^{p} \\ X \times I \ar[r]^-{H} & B,}\\$$ there exists a $G$-map $\widetilde H: X\times I \to E$ such that $\widetilde H |_{X\times \{0\}}=h$ and $p \circ \widetilde H =H$. Given a $G$-fibration $p:E\to B$ over $B$, the isotropy subgroup $G_b\leq G$ of a point $b\in B$ acts on the fiber space $F_b:=p^{-1} (b)$. So, $F_b$ is a $G_b$-space.
Let us denote the set of isotropy subgroups of the $G$-action on $B$ by $\operatorname{Iso}(B)$. Let $\{ F_H \}$ denote a family of $H$-spaces over all $H \in \operatorname{Iso}(B)$. If for every $b \in B$, the fiber space $F_b$ is $G_b$-homotopy equivalent to $F_{G_b}$, then $p:E \to B$ is said to have [*fiber type*]{} $\{ F_H \}$. Note that in general a $G$-fibration does not have to have a fiber type, i.e., for $b_1, b_2 \in B$ with $G_{b_1}=G_{b_2}=H$, it may happen that $F_{b_1}$ and $F_{b_2}$ are not $H$-homotopy equivalent. But throughout the paper we only consider $G$-fibrations which have a fiber type. Observe that if $p: E \to B$ is a $G$-fibration such that $B^H$ is path connected for every $H \in \operatorname{Iso}(B)$, then $p$ has a fiber type since for every $b_1, b_2\in B^H$, the fiber spaces $F_{b_1}$ and $F_{b_2}$ are $H$-homotopy equivalent by a standard argument in homotopy theory. In our applications the $G$-fibrations that we construct will often satisfy this connectedness property. If $p : E \to B$ is a $G$-fibration with fiber type $\{ F_H \}$ such that for all $H \in \operatorname{Iso}(B)$ the fixed point space $B^H$ is connected, then the family $\{ F_H\}$ satisfies a certain compatibility condition. To see this, let $H , K \in \operatorname{Iso}(B)$ such that $K^g \leq H$ for some $g \in G$. Then, we have $gB^H \subseteq B^K$, so by the connectedness of $B^K$, we obtain that for every $b \in B^H$, the $K$-space $gF_b$ is $K$-homotopy equivalent to $F_K$. This means that the $K$-spaces $\operatorname{Res}_ K g^* F_H$ and $F_K$ are $K$-homotopy equivalent for all $H\in \operatorname{Iso}(B)$ where $\operatorname{Res}_K g^* F_H$ is the space $F_H$ which is considered as a $K$-space through the map $K \to H$ defined by $k \to g^{-1} kg$. \[def:compatiblefamily\] Let ${\mathcal H}$ be a family of subgroups of $G$ closed under conjugation. 
A family of $H$-spaces $\{F_H\}$ over all $H \in {\mathcal H}$ is called a [*compatible family*]{} of $H$-spaces if for every $H, K \in {\mathcal H}$ with $K^g \leq H$ for some $g \in G$, the $K$-space $\operatorname{Res}_K g^* F_H$ is $K$-homotopy equivalent to $F_K$, where $\operatorname{Res}_K g^* F_H$ is the space $F_H$ considered as a $K$-space through the map $K \to H$ defined by conjugation $k \to g^{-1} k g$. The main aim of this section is to introduce some tools for the construction of $G$-fibrations with fiber type $\{ F_H\}$ for a given compatible family $\{ F_H\}$. We first introduce some more terminology: Given two $G$-fibrations $p_1:E_1 \to B$ and $p_2: E_2 \to B$ over the same $G$-space $B$, a $G$-map $f: E_1\to E_2$ is called a fiber preserving map if it satisfies $p_2\circ f =p_1$. Two fiber preserving $G$-maps $f, f': E_1\to E_2$ are said to be $G$-fiber homotopic if there is a $G$-map $H: E_1\times I \to E_2$ which is fiber preserving at each $t\in I$ such that $H(x, 0)=f(x)$ and $H(x,1)=f'(x)$ for all $x\in E_1$. We say two $G$-fibrations $p_1 :E_1 \to B$ and $p_2: E_2 \to B$ are $G$-fiber homotopy equivalent if there are fiber preserving $G$-maps $f_1:E_1\to E_2$ and $f_2: E_2 \to E_1$ such that $f_1\circ f_2$ and $f_2 \circ f_1$ are $G$-fiber homotopic to identity maps. For an $H$-space $F_H$, let $\operatorname{Aut}_H (F_H)$ denote the topological monoid of self $H$-homotopy equivalences of $F_H$. Note that $\operatorname{Aut}_H (F_H)$ is not a connected space in general, but it is easy to show that all its components have the same homotopy type. When we need to choose a component, we often take the connected component which includes the identity map. We denote this component by $\operatorname{Aut}^I _H (F_H)$.
Since $\operatorname{Aut}_H (F_H)$ is a monoid, the usual construction of classifying spaces for monoids applies, and we get a universal fibration $E\operatorname{Aut}_H (F_H) \to B\operatorname{Aut}_H (F_H)$ with fiber $\operatorname{Aut}_H (F_H)$. From this one also obtains a fibration $$F_H \to E_H \to B\operatorname{Aut}_H (F_H)$$ where $E_H =E\operatorname{Aut}_H (F_H) \times _{\operatorname{Aut}_H (F_H)} F_H$. This is actually an $H$-fibration with trivial $H$-action on the base space. It turns out that this fibration is a universal fibration for all $H$-equivariant fibrations with trivial action on the base space. \[thm:classification\] Let $H$ be a finite group, $F_H$ be a finite $H$-CW-complex, and $B$ be a CW-complex with trivial $H$-action. Then, there is a one-to-one correspondence between $H$-fiber homotopy classes of $H$-fibrations over $B$ with fiber $F_H$ and the set of homotopy classes of maps $B \to B\operatorname{Aut}_H (F_H )$. The correspondence is given by taking the pullback of the universal $H$-fibration described above via the map $f:B \to B\operatorname{Aut}_H(F_H)$. This theorem is proved in [@guclukanthesis] in full detail. The proof is based on the proof of Stasheff’s theorem on the classification of Hurewicz fibrations [@stasheff]. More general versions of this theorem also appear in [@french] and [@waner2]. Also note that, as in the case of orientable vector bundle theory, we can give an orientable version of the classification of $H$-fibrations over a trivial $H$-space $B$. If $p:E \to B$ is an $H$-fibration with fiber $F_H$ over a trivial $H$-space, then there is a natural group homomorphism $\chi: \pi_1 (B)\to \pi_0 (\operatorname{Aut}_H (F_H))=\mathcal{E}_H (F_H)$ where $\mathcal{E}_H(F_H)$ denotes the group of homotopy classes of self $H$-homotopy equivalences of $F_H$. If this homomorphism is trivial, then we call the $H$-fibration $p$ a [*homotopy orientable*]{} $H$-fibration. 
This notion of orientability is stronger than the usual notion of orientable fibration, where one only requires the action on the homology of $F_H$ to be trivial (see [@ehrlich]). Note that an $H$-fibration is homotopy orientable if and only if its classifying map $f: B \to B\operatorname{Aut}_H (F_H)$ lifts to a map $\tilde f : B \to B \operatorname{Aut}^{I} _H (F_H )$. Also note that homotopy orientable fibrations are classified (as homotopy orientable fibrations) by the homotopy classes of maps $[B , B\operatorname{Aut}_H ^I (F_H)]$. We will be using these facts later in the proof of Lemma \[lem:jointrivial\]. In the rest of this section we focus on the fiber join construction performed on $G$-fibrations. We use fiber joins to kill obstructions that occur in the construction of $G$-fibrations. The fiber join of two $G$-fibrations is defined in the following way: Let $p_1: E_1\to B$ and $p_2: E_2\to B$ be two $G$-fibrations. We define the fiber product $E_1\times _B E_2$ together with maps $E_1\times _B E_2\to E_i$ for $i=1,2$ by the following pullback diagram $$\xymatrix{ E_1\times _B E_2 \ar[r]\ar[d] & E_{1} \ar[d]^{p_{1}} \\ E_2 \ar[r]^{p_2} & B . }$$ Then the $G$-space $E_1*_B E_2$ is defined as the homotopy pushout of the following diagram $$\xymatrix{ E_1\times _B E_2 \ar[r]\ar[d] & E_{1} \ar[d] \\ E_2 \ar[r] & E_1*_B E_2. }$$ By the universal property of homotopy pushouts we get a $G$-fibration $$p_1*p_2:E_1*_B E_2\to B$$ called the [*fiber join of $p_1$ and $p_2$*]{}. Iterating this construction, we obtain a $G$-fibration $$\underset{k}{\ast} \, p: \underbrace{E*_B E*_B \dots *_B E}_{k\text{-many}}\to B$$ which we call the $k$-fold fiber join of $p$ with itself. Note that if $p$ is a $G$-fibration with fiber type $\{F_H\}$, then the fiber type of the $k$-fold join $\ast _k p$ is $\{\ast _k F_H\}$. If $B$ has trivial $H$-action, then the $k$-fold join is classified by a map $B \to B \operatorname{Aut}_H (\ast _k F_H)$.
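To see what the fiber type of a fiber join looks like in the simplest case (a standard fact, recorded here for illustration): the join of two spheres satisfies $${\mathbb S}^{a} \ast {\mathbb S}^{b} \cong {\mathbb S}^{a+b+1},$$ so if $p$ is a spherical fibration with fiber ${\mathbb S}^{d-1}$, then the $k$-fold fiber join $\ast_k \, p$ has fiber $$\underbrace{{\mathbb S}^{d-1} \ast \cdots \ast {\mathbb S}^{d-1}}_{k\text{-many}} \cong {\mathbb S}^{kd-1}.$$ This is consistent with the identification $\ast_k\, {\mathbb S}(V_H) \simeq_H {\mathbb S}(V_H^{\oplus k})$ used below.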
We would like to explain this map in terms of the classifying map of $p$. For this, observe that there is a monoid homomorphism $$\varphi : \operatorname{Aut}_H (F_H) \times \cdots \times \operatorname{Aut}_H (F_H) \to \operatorname{Aut}_H (\ast_k F_H)$$ defined by $$\varphi (a_1,\dots , a_k) (x_1 t_1, \dots ,x_k t_k)=(a_1(x_1) t_1 , \dots , a_k (x_k) t_k )$$ for every $x_1, \dots, x_k \in F_H$ and $t_1, \dots , t_k \in [0,1]$ with $\sum_i t_i=1$. We have the following lemma: \[lem:fiberjoins\] Let $H$ be a finite group, $F_H$ be a finite $H$-CW-complex, and $B$ be a CW-complex with trivial $H$-action. If $p: E\to B$ is an $H$-fibration with fiber type $F_H$ whose classifying map is $f: B \to B\operatorname{Aut}_H (F_H)$, then the classifying map of the $H$-fibration $\ast _k \, p$ is given by the composition $$\xymatrix@C=3pc{B \ar[r]^-{f\times \cdots \times f} & B\operatorname{Aut}_H(F_H)\times \cdots \times B\operatorname{Aut}_H (F_H) \ar[r]^-{B\varphi} & B \operatorname{Aut}_H (\ast _k F_H )}$$ where $B\varphi$ is the map induced from the monoid homomorphism $\varphi$ defined above. Let $A=\prod _{i=1}^k \operatorname{Aut}_H (F_H)$. By standard properties of homotopy pushout diagrams, we observe that the fibration $\ast _k \, p$ is the pullback fibration of the fibration $$q: EA \times _A (\ast _k F_H ) \to BA$$ via the map $\prod _i f: B \to BA$. Note that $A$ acts on $\ast_k F_H$ via the monoid homomorphism $\varphi$, so the classifying map of $q$ is $B\varphi$. This completes the proof. A special case of a $G$-fibration is a $G$-fiber bundle over a $G$-CW-complex. More specifically, if $\xi : E\to B$ is a complex $G$-vector bundle over a $G$-CW-complex $B$, then the sphere bundle $p: S(E) \to B$ of this vector bundle is a spherical $G$-fibration. Note that for every $b \in B$, the fiber space $p^{-1} (b)$ is a $G_b$-space which is homeomorphic to ${\mathbb S}(V_{G_b})$ where $V_{G_b}$ denotes the vector space $\xi ^{-1}(b)$ with the induced $G_b$-action.
Note that when $B^H$ is path connected for all $H \in \operatorname{Iso}(B)$, the family of complex representations $\{ V_H\}$ defined over all $H \in \operatorname{Iso}(B)$ is a compatible family. The compatibility of a family of representations is defined in the following way: A family of representations $\alpha_H :H \to U(n)$ over $H \in {\mathcal H}$ is called a [*compatible family of representations*]{} if for every map $c_g: K\to H$ induced by conjugation with $g\in G$, there exists a $\gamma \in U(n) $ such that the following diagram commutes $$\xymatrix{K \ar[d]^{c_g} \ar[r]^-{\alpha _K} & U(n) \ar[d]^{ c_{\gamma }} \\ H \ar[r]^-{\alpha _H} & U(n) .}$$ Note that if $F_H$ is an $H$-space which is $H$-homotopy equivalent to ${\mathbb S}(V_H)$ for some compatible family of complex $H$-representations $\{V_H\}$, then $\{ F_H \}$ is a compatible family of $H$-spaces. So, the sphere bundle $p: S(E) \to B$ of a $G$-vector bundle is a spherical $G$-fibration with fiber type $\{ {\mathbb S}(V_H) \}$. Note also that for every $k\geq 1$, the fiber join $\ast _k {\mathbb S}(V_H)$ is $H$-homotopy equivalent to the $H$-space ${\mathbb S}(V_H ^{\oplus k})$ where $V_H ^{\oplus k}$ denotes the $k$-fold direct sum of $V_H$. In Section \[sect:construction\], we construct $G$-fibrations with fiber types of the form $\{ {\mathbb S}(V_H)\}$. The following result is used in those constructions. \[lem:joinswap\] Let $H$ be a finite group, $F_H$ be an $H$-space which is $H$-homotopy equivalent to ${\mathbb S}(V_H)$ for some complex $H$-representation $V_H$. Let $\gamma , \gamma ^1 : \operatorname{Aut}_H (F_H)\to \operatorname{Aut}_H (\ast _k F_H)$ be maps defined by $\gamma (a)= \varphi (a,a, \dots, a)$ and $\gamma ^1 (a)= \varphi (a, {\mathrm{id}}, \dots, {\mathrm{id}})$, respectively.
Then, the induced group homomorphisms $\gamma _*$ and $\gamma ^1 _*$ on homotopy groups $\pi_q (\operatorname{Aut}_H (F_H)) \to \pi _q (\operatorname{Aut}_H (\ast _k F_H)) $ satisfy the relation $\gamma _*=k \gamma^1 _*$. Let $\gamma ^i: \operatorname{Aut}_H (F_H)\to \operatorname{Aut}_H (\ast _k F_H )$ be the map defined by $$\gamma ^i (a)=\varphi ({\mathrm{id}}, \dots, a, \dots, {\mathrm{id}})$$ where $a$ is in the $i$-th coordinate. We have $\gamma=\gamma ^1 \gamma ^2 \cdots \gamma ^k$ under the product induced by the product in the monoid $A$. Since the group operation on $\pi _q (\operatorname{Aut}_H (\ast _k F_H))$ coming from the monoid structure on $\operatorname{Aut}_H (\ast _k F_H)$ coincides with the usual group structure on homotopy groups, we have $\gamma _*= \gamma ^1 _* +\cdots + \gamma ^k _*$. So, to complete the proof, it is enough to show that $\gamma ^i$ and $\gamma ^j$ are homotopic for every $i,j \in \{ 1, \dots, k\}$. Since $F_H$ is $H$-homotopy equivalent to ${\mathbb S}(V_H)$, it is enough to prove this for ${\mathbb S}(V_H)$. Note that in this case we have $\gamma ^i=T(i,j) \gamma ^j$ where $T(i,j): V_H^{\oplus k} \to V_H^{\oplus k}$ is a linear transformation which swaps the $j$-th summand with the $i$-th summand. Since unitary groups are connected, there is a path between $T(i,j)$ and the identity. Using this path, we can define a homotopy between $\gamma ^i$ and $\gamma ^j$. For more general $H$-spaces $F_H$, there exists a swap map $$S(i,j): \ast _k F_H \to \ast _k F_H,$$ which swaps the $i$-th and $j$-th coordinates, similar to the linear transformation $T(i,j)$ in the proof of Lemma \[lem:joinswap\]. If $F_H$ is a free $H$-space homotopy equivalent to an odd dimensional sphere, then $S(i,j)$ will be homotopic to the identity map. If the $H$-action on $F_H$ is not free, then the swap map $S(i,j)$ is not homotopic to the identity in general even when $F_H$ is homotopy equivalent to an odd dimensional sphere.
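One way to make the path from $T(i,j)$ to the identity explicit, and to see that it can be chosen $H$-equivariantly (a sketch of a standard argument, not spelled out in the text): restrict attention to the two relevant summands and decompose $V_H \oplus V_H$ into the diagonal subspace $\Delta = \{(v,v)\}$ and the antidiagonal subspace $\Delta^- = \{(v,-v)\}$. The swap acts as $+1$ on $\Delta$ and as $-1$ on $\Delta^-$, so $$A(t) = {\mathrm{id}}_{\Delta} \oplus e^{i\pi t}\, {\mathrm{id}}_{\Delta^-}, \qquad t \in [0,1],$$ is a path of unitary maps from the identity to the swap. Since $\Delta$ and $\Delta^-$ are $H$-invariant and each $A(t)$ acts as a scalar on each of them, every $A(t)$ is $H$-equivariant, so the induced homotopy between $\gamma^i$ and $\gamma^j$ is through $H$-maps.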
On the other hand, if $F_H$ is a homotopy representation with the property that all fixed point spheres are odd dimensional, then under certain conditions on $H$ or on the dimension function of $F_H$, one can prove that $S(i,j)$ is homotopic to the identity (see Proposition 20.12 in [@lueck]). We end this section with the following observation. \[lem:jointrivial\] Let $H$ be a finite group and $p: E \to {\mathbb S}^n $ be an $H$-fibration over the trivial $H$-space ${\mathbb S}^n$ where $n\geq 2$. Suppose that the fiber type $F_H$ of $p$ is $H$-homotopy equivalent to ${\mathbb S}(V_H)$ for some complex $H$-representation $V_H$. If $\pi _{n-1} ( \operatorname{Aut}_H (F_H ))$ is a finite group of order $N$, then $\ast _N p$ is $H$-fiber homotopy equivalent to the trivial fibration. By Theorem \[thm:classification\], the $H$-fibration $p$ is classified by the homotopy class of a map $f: {\mathbb S}^n \to B\operatorname{Aut}_H (F_H)$. Since $n\geq 2$, this map lifts to a map $\widetilde f : S^n \to B\operatorname{Aut}^{I}_H (F_H)$, so we can assume that the $H$-fibration $p$ is a homotopy orientable fibration. Since $B\operatorname{Aut}^I _H (F_H)$ is simply connected, we have $$[S^n, B\operatorname{Aut}^I _H (F_H )]\cong \pi _n (B\operatorname{Aut}^I _H (F_H))\cong \pi_{n-1} (\operatorname{Aut}_H (F_H)).$$ So, $p$ is classified by a homotopy class $\alpha\in \pi _{n-1} (\operatorname{Aut}_H (F_H))$. By a slightly modified version of Lemma \[lem:fiberjoins\], it is easy to see that the fiber join $\ast _N p$ is classified by $\gamma _* (\alpha)$ where $\gamma: \operatorname{Aut}_H (F_H) \to \operatorname{Aut}_H ( \ast _N F_H )$ is the map defined by $\gamma (a)=\varphi (a,\dots, a)$. By Lemma \[lem:joinswap\], we have $$\gamma _* (\alpha)=N\gamma _* ^1 (\alpha)=\gamma _* ^1 (N \alpha )=0.$$ So, $\ast _N p$ is $H$-fiber homotopy equivalent to the trivial fibration.
Equivariant Federer spectral sequence {#sect:Federer} ===================================== The main purpose of this section is to prove the following theorem which is due to M. Klaus [@klaus]. We give a different proof here using the equivariant Federer spectral sequence which was introduced by Møller in [@moller]. \[thm:finiteness of homotopy\] Let $G$ be a finite group and $V$ be a complex representation of $G$. Then, for every $n>0$, there is an $m\geq 1$ such that $\pi_n (\operatorname{Aut}_G ({\mathbb S}(V ^{\oplus k})))$ is finite for all $k \geq m$. Before the proof, we first introduce the standard definitions about Bredon cohomology and local coefficient systems that we are going to use to describe the equivariant Federer spectral sequence. For more details on Bredon cohomology we refer the reader to [@bredon]. Let $X$ be a topological space. A [*local coefficient system*]{} of $X$ is a functor $L:\Pi (X)\to {\mathcal Ab}$ where $\Pi (X)$ denotes the fundamental groupoid of $X$ and ${\mathcal Ab}$ denotes the category of abelian groups. Let ${\mathcal L}$ denote the category whose objects are pairs $(X,L)$ where $X$ is a topological space and $L$ is a local coefficient system of $X$ and a morphism from $(X,L)$ to $(Y,M)$ is a pair $(f,\varphi )$ where $f:X\to Y$ is a continuous function and $\varphi $ is a natural transformation from $L$ to $M\circ f_*$. Here $f_*$ denotes the functor $f_*:\Pi (X)\to\Pi (Y)$ induced by $f$ sending $x$ to $f(x)$ and $\gamma $ to $f\circ \gamma $. Let $G$ be a finite group and $\operatorname{Or}_{G}$ denote the [*orbit category*]{} of $G$ whose objects are orbits $G/H$ where $H$ is a subgroup of $G$. The morphisms of $\operatorname{Or}_{G}$ from $G/H$ to $G/K$ are $G$-maps between them where we consider the left cosets $G/H$ and $G/K$ as left $G$-sets. We denote the morphism from $G/H$ to $G/K$ which sends $H$ to $aK$ by $\hat{a}$.
\[defn:eqlocalcoefsystem\] Let ${\mathcal Top}$ denote the category of topological spaces and continuous maps. Let $X$ be a $G$-space. We define a contravariant functor $\Phi(X):\operatorname{Or}_{G}\to {\mathcal Top}$ which sends $G/H$ to $X^H$ and $\hat{a}$ to $a:X^K\to X^H$. A $G$-[*equivariant local coefficient system*]{} on $X$ is a contravariant functor $\underline{\, L} :\operatorname{Or}_{G}\to {\mathcal L}$ such that $F\circ \underline{\, L} =\Phi(X)$ where $F:{\mathcal L}\to {\mathcal Top}$ is the forgetful functor which sends $(X,L)$ to $X$ and $(f,\varphi )$ to $f$. We will use the following notation: $\underline{\, L}(G/H)=(X^H , L(G/H))$. Let $\underline{L}$ be a $G$-equivariant local coefficient system of a finite $G$-CW-complex $X$. Recall that for a coefficient system $L$ on $X$, the group of $n$-cochains $\Gamma ^n(X; L)$ is defined as the group of all functions $c$ which assign to each $n$-cell $\sigma $ in $X$ with characteristic map $h_{\sigma}:\Delta ^n\to X$ an element $c(\sigma )$ in $L(z_{\sigma })$ where $\Delta ^n$ is considered as the convex hull of a linearly independent subset $\{e_0,e_1,\dots, e_n\}$ of ${\mathbb R}^{n+1}$ and $z_{\sigma }=h_{\sigma}(e_0)$. The coboundary operator $\delta:\Gamma ^{n-1}(X;L)\to \Gamma ^{n}(X;L)$ is defined as follows: For $c$ in $\Gamma ^{n-1}(X;L)$ and $\sigma $ an $n$-cell in $X$, we have $$(-1)^n(\delta c)(\sigma )=L(\gamma _\sigma )^{-1}c(\partial _0\sigma )+\sum _{i=1}^{n} (-1)^{i}c(\partial _i\sigma )\in L(z_{\sigma })$$ where $\partial _i\sigma $ is the $(n-1)$-cell with characteristic map $h_{\sigma}\circ d^n_i$ and $\gamma _\sigma (t)=h_{\sigma}((1-t)e_1+te_0)$. Here $d^n_i:\Delta ^{n-1}\to \Delta ^n$ is the affine map sending $e_j$ to $e_j$ when $j<i$ and to $e_{j+1}$ otherwise.
Now for the $G$-equivariant coefficient system $\underline{L}$, we define $\Gamma ^n_G(X;\underline{L} )$ as the group of elements in the direct sum $$\underline{\, c}=(c(G/H)) \in \bigoplus _{H\leq G}\Gamma ^n(X^H;L(G/H) )$$ which satisfy the following condition: $$c(G/H)(a\sigma )=\underline{\, L}(\hat{a})(z_{\sigma })(\, c(G/K)(\sigma )\, ) \text{\ \ in } L(G/H)(az_{\sigma })$$ for all $\sigma \in X^K$ and $a\in G$ with $a^{-1}Ha \leq K$. We can define a coboundary operator $\underline{\delta} :\Gamma ^{n}_G(X,\underline{L})\to \Gamma ^{n+1}_G(X,\underline{L})$ as the direct sum of the ordinary coboundary operators $$\underline{\, \delta }=\bigoplus _{H\leq G} \delta _H:\bigoplus _{H\leq G}\Gamma ^n(X^H ;L(G/H) )\to\bigoplus _{H\leq G}\Gamma ^{n+1}(X^H ; L(G/H) ).$$ Since the ordinary coboundary operator is a natural transformation from $\Gamma^n$ to $\Gamma^{n+1}$ considered as functors from ${\mathcal L}$ to ${\mathcal Ab}$, we get $\underline{\, \delta }(\Gamma ^n_G(X;\underline{\, L} ))\subseteq \Gamma ^{n+1}_G(X;\underline{\, L} )$. The above definition easily generalizes to relative $G$-CW-complexes. The $n$-th cohomology of a relative $G$-CW-complex $(X,A)$ with $G$-equivariant local coefficients $\underline{\, L}$ is defined as follows: $$H ^n_G(X,A;\underline{\, L} )=H^n\left(\Gamma ^*_G(X,A;\underline{\, L} ),\underline{\, \delta }\right).$$ We will be using the Bredon cohomology with a particular local coefficient system that comes from a $G$-fibration. We now introduce this coefficient system. Let $p:E\to B$ be a $G$-fibration. Then $p^H:E^H\to B^H$ is a fibration for all $H\leq G$. Assume that for all $H\leq G$ and for all $b\in B$, the space $p^{-1}(b)^H$ is a path connected simple space.
Associated to the fibration $p$, there is a $G$-equivariant local coefficient system on the $G$-space $B$ defined as the functor $$\pi _n ({\mathcal F}): \operatorname{Or}_{G}\to {\mathcal L}$$ which sends $G/H$ to $(B^H, \pi _n ({\mathcal F}^H))$ and $\hat{a}$ to $(a,a_*)$ where $\pi _n ({\mathcal F}^H):\Pi(B^H)\to {\mathcal Ab}$ is the functor which sends $b \in B^H$ to $\pi _n (p^{-1}(b)^H)$ and sends a path $\gamma $ with $\gamma(0)=c$ and $\gamma(1)=b$ to a homomorphism from $\pi _n (p^{-1}(b)^H)$ to $\pi _n (p^{-1}(c)^H)$ which is induced by a map admissible over $\gamma $ (see [@whitehead page 185]). Let $(X,A)$ be a finite $G$-CW-complex, $p:E\to B$ be a $G$-fibration, and $u:X\to E$ be a $G$-equivariant map. As above, assume that for all $H\leq G$ and for all $b\in B$, the space $p^{-1}(b)^H$ is a path connected simple space, and let $\pi _q ({\mathcal F})$ be the $G$-equivariant local coefficient system on $B$ which was introduced above. By abuse of notation, we will again write $\pi _q ({\mathcal F})$ for the $G$-equivariant local coefficient system on $X$ induced from $\pi _q ({\mathcal F})$ via the map $p\circ u$. Let $F_u(X,A;E,B)^G$ denote the space of all equivariant maps $v:X\to E$ such that $v|_A=u|_A$ and $p\circ v=p\circ u$, equipped with the compact-open topology. \[thm:equivFedererSS\] There is a spectral sequence with $E^2$-term $$E^2_{pq}=H^{-p}_G(X,A;\pi _q ({\mathcal F}))$$ for $p+q\geq 0$ and $E^2_{pq}=0$ otherwise, converging to $\pi _{p+q}(F_u(X,A;E,B)^G,u)$ when $p+q>0$. The spectral sequence above is called the equivariant Federer spectral sequence since it is the equivariant version of a spectral sequence introduced by Federer [@federer]. We will be using this spectral sequence for the following special case: Let $X$ be a finite $G$-CW-complex such that $X^H$ is a path connected simple space for all $H \leq G$. Take $A=\emptyset $, $E=X$, $B=*$, $p:E\to B$ to be the constant map, and $u:X\to E$ to be the identity map.
Then $F_{{\mathrm{id}}} (X, \emptyset ; X , * )^G$ will be homotopy equivalent to the identity component of $\operatorname{Aut}_G(X)$. Since all the components of $\operatorname{Aut}_G(X)$ have the same homotopy type, we have $$\pi _n (\operatorname{Aut}_G(X)){\cong}\pi _n (F_{{\mathrm{id}}} (X, \emptyset; X, *)^G)$$ for all $n>0$. So, we can use the equivariant Federer spectral sequence to calculate the homotopy groups of $\operatorname{Aut}_G (X)$. Also note that in the situation we consider, the local coefficient system is constant on orbits. Bredon cohomology with coefficients in a $G$-equivariant local coefficient system has an alternative description when the coefficient system is constant on $G$-orbits. This description involves modules over the orbit category, which we now define. Let $G$ be a finite group and let $\Gamma $ denote the orbit category $\operatorname{Or}_G$. A contravariant functor from $\operatorname{Or}_G$ to the category of abelian groups ${\mathcal Ab}$ is called a ${\mathbb Z}{\Gamma}$-module. Morphisms between ${\mathbb Z}{\Gamma}$-modules are given by natural transformations. Given a $G$-CW-complex $X$, we define a chain complex of ${\mathbb Z}{\Gamma}$-modules by taking $C_n(X^?)$ as the functor $\operatorname{Or}_G\to {\mathcal Ab}$ which sends $G/H$ to the $n$-th cellular chains $C_n(X^H)$ and sends $\hat{a}:G/H\to G/K$ to the group homomorphism $a_*:C_n(X^K)\to C_n(X^H)$. Let $\underline{L}$ be a $G$-equivariant local coefficient system of $X$. Suppose that there exists a ${\mathbb Z}{\Gamma}$-module $M$ such that $L(G/H)(x)=M(G/H)$ and $L(G/H)(\gamma )={\mathrm{id}}_{M(G/H)}$ for all $x\in X^H$ and all paths $\gamma $ in $X^H$. Then we have $$\Gamma ^n_G(X;\underline{L} )\cong \operatorname{Hom}_{{\mathbb Z}{\Gamma}}(C_n(X^?),M)$$ where the isomorphism is given by sending $\underline{\, c}=(c(G/H))$ in $\Gamma ^n_G(X; \underline{L} )$ to the homomorphism $\alpha : C_n(X^?)
\to M$ defined by $\alpha (G/H)(\sigma )= c(G/H)(\sigma )$ for all $H\leq G$. The boundary maps at each $H$ are compatible with respect to inclusions and conjugations, so they combine together to give a ${\mathbb Z}{\Gamma}$-module map ${\partial}: C_n (X^?) \to C_{n-1} (X^{?})$ for every $n$. Using these boundary maps, we obtain a cochain complex of abelian groups $$C^n (X, M) =\operatorname{Hom}_{{\mathbb Z}{\Gamma}}(C_n(X^?),M) \cong \bigoplus _{[\sigma ]\in {\mathcal I}_n}M(G/G_{\sigma })$$ where ${\mathcal I}_n$ is a set of $G$-orbits of $n$-cells in $X$. Note that the last isomorphism comes from the standard properties of free ${\mathbb Z}{\Gamma}$-modules (see [@lueck Sec. 9]). The cohomology of this cochain complex is denoted by $H^n _G (X; M)$ and we have an isomorphism $H^n _G (X; \underline{L}) \cong H^n _G (X; M)$ for all $n\geq 0$ when $\underline{L}$ is a $G$-equivariant local coefficient system on $X$ and $M$ is a ${\mathbb Z}{\Gamma}$-module such that $M(G/H)=\underline L (G/H)(x)$ for all $H \leq G$ and $x \in X^H$. In our situation, this gives an isomorphism $$H ^n _G(X; \pi _q ({\mathcal F}) )\cong H ^n _G(X; \pi _q (X^? ) )$$ where $\pi _q (X^?)$ is the ${\mathbb Z}{\Gamma}$-module $\pi_q(X^?):\operatorname{Or}_G\to {\mathcal Ab}$ which sends $G/H$ to $\pi _q(X^H)$ and sends $\hat{a}:G/H\to G/K$ to $a_*:\pi _q (X^K)\to \pi _q (X^H)$. Also note that on the cochain level, we have $$C^n (X, \pi _q(X^?))=\operatorname{Hom}_{{\mathbb Z}{\Gamma}}(C_n(X^?),\pi _q(X^?))\cong \bigoplus _{[\sigma ]\in {\mathcal I}_n}\pi _q(X^{G_\sigma }).$$ So, we have an explicit description of the $E_{pq}^2$-terms of the equivariant Federer spectral sequence. Now we are ready to prove the main theorem of this section. Let $G$ be a finite group and $X$ be a finite $G$-CW-complex which is $G$-homotopy equivalent to ${\mathbb S}(V)$ for some complex representation $V$ of $G$.
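For orientation (a standard special case, not stated explicitly in the text): when $G$ is the trivial group, the orbit category has a single object, a ${\mathbb Z}{\Gamma}$-module is just an abelian group, and the spectral sequence of Theorem \[thm:equivFedererSS\] with $A=\emptyset$, $E=X$, $B=*$ reduces to Federer's original spectral sequence $$E^2_{pq}=H^{-p}(X;\pi _q (X)) \Longrightarrow \pi _{p+q}(\operatorname{Aut}(X))$$ for a path connected simple finite complex $X$, computing the homotopy groups of the space of self-homotopy equivalences of $X$.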
In fact, we only need $X$ to be a $G$-homotopy representation with odd dimensional fixed point spheres for our arguments to work (see [@lueck pg. 392] for a definition of homotopy representation). Let $n$ be a fixed positive integer. We want to show that there is an $m\geq 1$ such that $\pi_n (\operatorname{Aut}_G (\ast _k X))$ is finite for all $k \geq m$. Let $X_k=*_k X$ denote the $k$-fold join of $X$. By Theorem \[thm:equivFedererSS\], there is a spectral sequence with $$E^2_{pq}=H^{-p}_G(X_k;\pi _q (X_k^?))$$ for $p+q\geq 0$ and $E^2_{pq}=0$ otherwise, converging to $\pi _{p+q}(\operatorname{Aut}_G(X_k))$ when $p+q>0$. Since $X_k$ is finite dimensional, to show that $\pi _n (\operatorname{Aut}_G (X_k))$ is finite it is enough to show that $H^{-p}_G(X_k;\pi _q (X_k^?))$ is finite for every pair $(p, q)$ with $p+q = n$. Since these cohomology groups are finitely generated, they are finite if and only if they vanish rationally, so it is enough to show that there is an $m \geq 1$ such that for all $k\geq m$, the cohomology group $H^{q-n}_G(X_k;\pi _q (X_k ^?)\otimes {\mathbb Q})$ is zero for all $q \geq n$. Let $\{ n_1, n_2, \dots, n_s\}$ be the set of all distinct dimensions of fixed subspaces $V^H$ over all subgroups $H \leq G$. Assume that $n_1< n_2 < \dots < n_s$. Note that the fixed point spheres $X_k^H$ have dimensions $\{kn_i-1 \ |\ i=1,\dots, s\}.$ Since the homotopy groups $\pi _i (S^{2j-1}) $ of an odd dimensional sphere are all finite except when $i=2j-1$, we have $\pi _ q (X_k^?) \otimes {\mathbb Q}=0$ for all $q$ which is not equal to $kn_i-1 $ for some $i$. If $q=kn_i-1$ for some $i$, then we have $$H_G ^{q-n} (X_k ; \pi _q (X_k^{?} )\otimes {\mathbb Q})=H_G ^{kn_i-n-1 } (X_k; M_i)$$ where $M_i$ is the ${\mathbb Z}{\Gamma}$-module such that $M_i (H)={\mathbb Q}$ for all subgroups $H \leq G$ satisfying $\dim V^H=n_i$, and $M_i(H)=0$ otherwise. To complete the proof we need to show that this cohomology group is zero for all $i\in \{ 1, \dots, s\}$.
Note that there is a well-known first quadrant spectral sequence with $E_2$-term $$E_2 ^{pq}=\operatorname{Ext}_{{\mathbb Z}{\Gamma}} ^p (H_q (X_k^?), M_i )$$ which converges to $H_G ^{p+q} (X_k; M_i)$ (see [@unlu-yalcin2 Prop. 3.3]). Since the coefficient module $M_i$ takes only the values ${\mathbb Q}$ and $0$, we can replace $H_p(X_k^?)$ with $H_p (X_k^? ; {\mathbb Q})$ and take the ext-groups over ${\mathbb Q}{\Gamma}$. Note that the ${\mathbb Q}{\Gamma}$-module $H_{p} (X_k^?; {\mathbb Q})$ is zero in all dimensions except when $p=kn_i-1$ for some $i$. Let $N_i$ denote the ${\mathbb Q}{\Gamma}$-module $H_{kn_i-1} (X_k^?; {\mathbb Q})$ for all $i=1,\dots ,s$. To prove that $H_G ^{kn_i -n-1} (X_k; M_i)=0$ for all $i$, it is enough to show that the ext-group $$\operatorname{Ext}_{{\mathbb Q}{\Gamma}} ^{k(n_i-n_j)-n} ( N_j, M_i)$$ is zero for all $j\leq i-1$. Let $l_j $ denote the length of the ${\mathbb Q}{\Gamma}$-module $N_j$ for every $j$ (see [@lueck pg. 325] for a definition). Then, by [@lueck prop. 17.31], the above ext-group is zero if $k(n_i-n_j)-n \geq l_j$. Let $l=\max_j \{l_j\}$. Then, for $k\geq n+l$, the above inequality holds for every $j\leq i-1$. This completes the proof. Construction of spherical $G$-fibrations {#sect:construction} ======================================== We start by proving a proposition which is an important tool for constructing $G$-fibrations. In different forms, this proposition also appears in [@connolly-prassidis], [@klaus], and [@unlu-thesis]. Here we give a proof of it for completeness since it is the main ingredient in the proof of Theorem \[thm:main\]. \[pro:mainconsttool\] Let $G$ be a finite group, $B$ be a $G$-CW-complex, and let $\{V_H\}$ be a compatible family of complex representations over all $H \in \operatorname{Iso}(B)$. Let $q_n : E_n \to B^{(n)} $, $n\geq 2$, be a $G$-fibration with fiber type $\{ {\mathbb S}(V_H )\}$ where $B^{(n)}$ denotes the $n$-skeleton of $B$.
Then there is an integer $k\geq 1$ and a $G$-fibration $q_{n+1}:E_{n+1}\to B^{(n+1)}$ such that the restriction of $q_{n+1}$ to $B^{(n)}$ is $G$-fiber homotopy equivalent to $\ast _k q_n$. In particular, the fiber type of $q_{n+1}$ is $\{ {\mathbb S}(V_H ^{\oplus k} )\}$. By the definition of $G$-CW-complexes, there exists a pushout diagram $$\label{eqn:pushout} \vcenter{\xymatrix{\displaystyle \coprod _{i\in I_{n+1}} G/H_i\times {\mathbb S}^{n} \ar[r]^-{\coprod f_i}\ar[d] & B^{(n)} \ar[d] \\ \displaystyle \coprod _{i\in I_{n+1}}G/H_i\times {\mathbb D}^{n+1} \ar[r]^-{\coprod g_i} & B^{(n+1)}}}$$where $I_{n+1}$ is an indexing set of orbits of $(n+1)$-cells in $B$. For each $i\in I_{n+1}$, let $q_{n,i}$ denote the $G$-fibration obtained by the following pullback diagram $$\xymatrix{ E_{n, i} \ar[r]\ar[d]^{q_{n,i}} & E_n \ar[d]^{q_n} \\G/H_i\times {\mathbb S}^{n} \ar[r]^-{f_i}& B^{(n)}\ . }$$ Restricting $q_{n,i}$ to the sphere ${\mathbb S}^n$ in $G/H_i \times {\mathbb S}^n$ which is fixed by $H_i$, we obtain an $H_i$-fibration $q_{n,i} |_{{\mathbb S}^n} : q_{n,i} ^{-1} ({\mathbb S}^n ) \to {\mathbb S}^n $ such that the $H_i$-action on the base space is trivial. By Theorem \[thm:classification\] and by the argument in the proof of Lemma \[lem:jointrivial\], such a fibration is classified by a homotopy class $\alpha_i \in \pi _{n-1} (\operatorname{Aut}_{H_i} ({\mathbb S}(V_{H_i}) ))$. By Theorem \[thm:finiteness of homotopy\], for each $H \in \operatorname{Iso}(B)$, there is an $m_{H} \geq 1$ such that $\pi _{n-1} (\operatorname{Aut}_{H} ({\mathbb S}(V_{H} ^{\oplus k} )))$ is finite for all $k \geq m_H$. Let $m=\max \{m_H \, |\, H \in \operatorname{Iso}(B)\}$. Then the group $$\pi _{n-1} (\operatorname{Aut}_{H} ({\mathbb S}(V_{H} ^{\oplus m } )))$$ has finite order, say $d_H$, for all $H \in \operatorname{Iso}(B)$. Let $d=\prod _H d_H$.
By Lemma \[lem:jointrivial\], the $H_i$-fibration $\ast _{dm} (q_{n,i} |_{{\mathbb S}^n })$ is $H_i$-fiber homotopy equivalent to the trivial fibration for all $i \in I_{n+1}$. This implies that the $G$-fibration $p$ obtained by the following pullback diagram $$\xymatrix{ W \ar[r]^{f}\ar[d]^{p} & \ast _{dm} E_n \ar[d]^{\underset{dm}{\ast}\, q_n} \\ \displaystyle \coprod _{i\in I_{n+1}}G/H_i\times {\mathbb S}^{n}\ar[r]^-{\coprod f_i} & B^{(n)} }$$ is $G$-fiber homotopy equivalent to the trivial fibration. Let $$\varphi : \coprod _{i \in I_{n+1}} G \times _{H_i} {\mathbb S}(V_{H_i} ^{\oplus dm})\times {\mathbb S}^n \to W$$ be a $G$-fiber homotopy equivalence between the trivial fibration and $p$. We can use $\varphi$ to glue in the cone of the trivial fibration and obtain a quasifibration $$\xymatrix{\left( \displaystyle \coprod _{i\in I_{n+1}}G\times _{H_i} {\mathbb S}(V_{H_i} ^{\oplus dm})\times {\mathbb D}^{n+1} \right) \cup _{f \circ \varphi}\left( \underset{dm}{*}E_n\right) \ar[rr] & & B^{(n+1)} }.$$ There is a construction called gammafication that converts a quasifibration to a fibration, and this construction also works for $G$-quasifibrations (see [@waner1 pg. 375]). Applying gammafication to the above $G$-quasifibration, we obtain a $G$-equivariant spherical fibration $q_{n+1}:E_{n+1}\to B^{(n+1)}$ whose fiber type is $\{ {\mathbb S}(V_H ^{\oplus dm})\}$. Another possible way of completing the final step of the above construction is to attach trivial $G$-fibrations over $(n+1)$-cells to the space $\ast _{dm} E_n$ using $G$-tubes (see [@guclukanpaper Theorem 3.1]). When these $G$-tubes are used, one does not need the gammafication construction since one directly gets $G$-fibrations. This method is explained in detail in [@guclukanthesis] and [@guclukanpaper]. As a corollary of Proposition \[pro:mainconsttool\], we obtain the following, which is also proved in [@klaus] as Proposition 2.7. Let $G$ be a finite group and $B$ be a finite dimensional $G$-CW-complex.
Let $\{ V_H\}$ be a compatible family of complex representations over all $H \in \operatorname{Iso}(B)$. Then there exists an integer $k \geq 1$ and a $G$-equivariant spherical fibration $q:E\to B$ such that the fiber type of $q$ is $\{ {\mathbb S}(V_H ^{\oplus k} ) \}$. Let $${\bf A}=(\rho_H) \in \lim _{\underset{H \in {\mathcal H}}{\longleftarrow}} \operatorname{Rep}(H, U(n))$$ where ${\mathcal H}=\operatorname{Iso}(B)$ and $\rho_H$ is a representation of $H$ corresponding to $V_H$ for every $H \in {\mathcal H}$. Let $q: E_{{\mathcal H}} (G, {\bf A} )\to B _{{\mathcal H}} (G, {\bf A})$ denote the universal $G$-equivariant vector bundle with fiber type ${\bf A}$ (see [@unlu-yalcin2 def. 2.4]). Since $B_{{\mathcal H}} (G, {\bf A} )^H=BC_{U(n)} (\rho_H)$ is simply connected for all $H \in {\mathcal H}$, by standard obstruction theory there is a $G$-map $B^{(2)}\to B_{{\mathcal H}} (G, {\bf A})$ (see the proof of Theorem 4.3 in [@unlu-yalcin2] for details). Pulling back the universal $G$-equivariant bundle via this map, we obtain a $G$-equivariant vector bundle over $B^{(2)}$. The sphere bundle of this bundle is a spherical $G$-fibration over $B^{(2)}$ with fiber type $\{{\mathbb S}(V_H )\}$. Now the result follows from the repeated application of Proposition \[pro:mainconsttool\]. We often want the total space of a $G$-fibration to be $G$-homotopy equivalent to a finite $G$-CW-complex. The following theorem gives a very useful criterion for this condition: \[pro:finiteness\] Let $G$ be a finite group, $B$ be a finite $G$-CW-complex, and $p : E\to B$ be a $G$-fibration with fiber type $\{ F_H \}$. If $F_H$ is $H$-homotopy equivalent to a finite $H$-CW-complex for every $H \in \operatorname{Iso}(B)$, then $E$ is $G$-homotopy equivalent to a finite $G$-CW-complex. We will prove this theorem by induction over the skeletons of $B$. We already know that $p^{-1}(B^{(0)})$ is $G$-homotopy equivalent to a finite $G$-CW-complex.
Now assume that $p^{-1}(B^{(n)})$ is $G$-homotopy equivalent to a finite $G$-CW-complex $Z$ for some $n\geq 0$. We want to show that $p^{-1}(B^{(n+1)})$ is $G$-homotopy equivalent to a finite $G$-CW-complex. The pushout diagram given in $(1)$ induces a diagram of $G$-spaces $$\label{eqn:pushout2} \vcenter{\xymatrix{f^*(E_n) \ar[r]^-{\overline f}\ar[d]^{\overline j} & E_n \ar[d]^{\overline J} \\ g^* (E_{n+1}) \ar[r]^-{\overline g} & E_{n+1}}}$$ where the spaces in the diagram are the total spaces of the fibrations obtained by taking pullbacks of the fibration $q_{n+1}: E_{n+1} \to B^{(n+1)}$ via the maps $f$, $g$, $j$, and $J$. Here $f=\coprod f_i$, $g=\coprod g_i$, $$j: \coprod _{i\in I_{n+1}} G/H_i\times {\mathbb S}^{n}\to \coprod _{i\in I_{n+1}} G/H_i\times {\mathbb D}^{n+1}$$ is the disjoint union of inclusion maps, and $J: B^{(n)}\to B^{(n+1)}$ is the inclusion map. Since the inclusion map ${\mathbb S}^n \to {\mathbb D}^{n+1}$ is a cofibration, the $G$-map $j$ is a $G$-cofibration. So, by [@lueck Lemma 1.26], the diagram is a pushout diagram and $\overline j$ is a $G$-cofibration. Since ${\mathbb D}^{n+1}$ is contractible, there is a $G$-fiber homotopy equivalence $$\displaystyle \coprod _{i\in I_{n+1}} G\times_{H_i} F_{H_i} \times{\mathbb D}^{n+1} \maprt{\gamma } g^* (E_{n+1}).$$ This gives a commutative diagram of the following form $$\xymatrix{E_{n}& & f^* (E_n) \ar[ll]_{\overline f} \ar[rr]^{\overline j} & & g^* (E_{n+1}) \\ E_n \ar[u]_{{\mathrm{id}}} & & \displaystyle \coprod _{i\in I_{n+1}} G\times_{H_i} F_{H_i} \times {\mathbb S}^{n}\ar[u]_-{\gamma '} \ar[rr]^{{\mathrm{id}}\times j} \ar[ll]_-{\overline f \circ \gamma '} & & \displaystyle \coprod _{i\in I_{n+1}} G\times_{H_i} F_{H_i} \times {\mathbb D}^{n+1} \ar[u]_-{\gamma}}$$ where $\gamma '$ is the restriction of $\gamma$ to the boundary spheres. Such a restriction makes sense since $\gamma$ is a fiber homotopy equivalence.
Now, since both ${\mathrm{id}}\times j$ and $\overline j$ are $G$-cofibrations, by [@lueck Lemma 2.13], the $G$-space $E_{n+1}$, which is the pushout of the diagram in the first line, is $G$-homotopy equivalent to the pushout of the diagram in the second line. To find a further homotopy equivalence, note that by the induction assumption $E_n$ is $G$-homotopy equivalent to a finite $G$-CW-complex $Z$. So, using a similar diagram as above, we can conclude that $E_{n+1}$ is $G$-homotopy equivalent to the pushout of a diagram of the following form $$\xymatrix{Z & & \displaystyle \coprod _{i\in I_{n+1}} G\times_{H_i} F_{H_i} \times {\mathbb S}^{n} \ar[rr]^{{\mathrm{id}}\times j} \ar[ll]_-{\varphi} & & \displaystyle \coprod _{i\in I_{n+1}} G\times_{H_i} F_{H_i} \times {\mathbb D}^{n+1} }.$$ Now, we can replace the map $\varphi$ with a cellular one (up to homotopy) and conclude that $E_{n+1}$ is $G$-homotopy equivalent to a finite $G$-CW-complex since the spaces $Z$ and $F_{H_i}$ for all $i\in I_{n+1}$ are finite $G$-CW-complexes. This completes the $n$-th stage of our induction. Since $B$ is a finite $G$-CW-complex, the induction will stop after finitely many steps. So, the proof is complete. Proof of the main theorem {#sect:mainthm} ========================= Now, we are ready to prove the main theorem of the paper. First we introduce some notation and recall some basic facts about Stiefel manifolds. For more details, we refer the reader to [@unlu-yalcin1]. Let $F$ denote the field of real numbers ${\mathbb R}$, complex numbers ${\mathbb C}$, or quaternions ${\mathbb H}$. For a real number the conjugation is defined by $\overline x =x$, for a complex number $x=a+ib$ by $\overline x=a-ib$, and for a quaternion $x=a+ib+jc+kd$ by $\overline x=a-ib-jc-kd$.
On the vector space $F^n$, we can define an inner product $(v,w)$ by taking $$(v,w)=v_1 \overline w_1+ v_2 \overline w_2 + \cdots + v_n \overline w_n.$$ The Stiefel manifold $V_k (F ^n )$ is defined as the subspace of $F^{nk}$ formed by the $k$-tuples of vectors $(v_1, v_2, \dots, v_k)$ such that $v_i \in F^n$ and for every pair $(i,j)$, we have $(v_i , v_j )=1$ if $i=j$ and zero otherwise. There is a sequence of fiber bundles $$V_n (F^n)\to \cdots \to V_{k+1} (F^n ) \maprt{q_k} V_k (F^n) \to \cdots \to V_2 (F^n) \maprt{q_1} V_1 (F^n )$$ where the map $q_k: V_{k+1} (F^n ) \to V_{k} (F^n) $ is defined by $q_k(v_1, \dots, v_{k+1} )=(v_1,\dots, v_k )$ and the fiber of $q_k$ is $V_1 (F^{n-k})={\mathbb S}^{c(n-k)-1}$ where $c=\dim _{{\mathbb R}} F $ (see Theorem 3.8 and Corollary 3.9 in Chapter 8 of [@husemoller]). Note that the sphere bundle $q_k: V_{k+1}(F^n) \to V_k (F^n )$ is the sphere bundle of the vector bundle $\overline q_k: \overline V_{k+1} (F^n ) \to V_k (F^n)$ where $\overline V_{k+1} (F^n)$ is the space formed by $(k+1)$-tuples $(v_1, \dots, v_{k+1})$ satisfying $(v_1,\dots, v_k)\in V_k (F^n)$ and $(v_i, v_{k+1})=0$ for all $i=1, \dots, k$. Note that if a finite group $G$ has a representation $W$ over a field $F$, then the inner product above can be replaced by a $G$-invariant one and the Stiefel manifolds have natural $G$-actions. Moreover, the sphere bundles given above become $G$-equivariant bundles. If the representation $W$ has fixity $f$, then we have $\dim _{F} W^g \leq f$ for all nontrivial $g \in G$. This means that if $W$ is a faithful representation, i.e., $f \leq \dim _F W-1$, then the $G$-action on $V_{f+1} (W)$ is free. We will be using this observation in the proof of the main theorem. Also observe that if $G$ has a complex representation with fixity $f$, then by tensoring it with ${\mathbb H}$ over ${\mathbb C}$, we obtain a symplectic representation with the same fixity.
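As a concrete sanity check of these definitions (a purely illustrative numerical aside, not part of the original argument; here $F={\mathbb C}$, so $c=2$ and the fiber of $q_k$ is ${\mathbb S}^{2(n-k)-1}$), the orthonormality condition defining $V_k(F^n)$ and the projection $q_k$ can be expressed as:

```python
import numpy as np

def in_stiefel(vectors, tol=1e-10):
    """Check whether a k-tuple of vectors in C^n forms an orthonormal
    k-frame, i.e., lies in the Stiefel manifold V_k(C^n)."""
    V = np.asarray(vectors)               # shape (k, n)
    gram = V @ V.conj().T                 # the inner products (v_i, v_j)
    return np.allclose(gram, np.eye(V.shape[0]), atol=tol)

def q(frame):
    """The bundle projection q_k: V_{k+1}(C^n) -> V_k(C^n) simply
    forgets the last vector of the frame."""
    return frame[:-1]

# The columns of any unitary matrix are orthonormal, so the first two
# columns of a random unitary give a point of V_2(C^3).
U, _ = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))
frame = U.T[:2]
assert in_stiefel(frame)        # frame lies in V_2(C^3)
assert in_stiefel(q(frame))     # its image under q_1 lies in V_1(C^3)
```

Any $(k+1)$-frame projects under $q_k$ to a $k$-frame, mirroring the tower of fiber bundles above.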
So to prove Theorem \[thm:main\], it is enough to prove the following: \[thm:mainspversion\] Let $\rho: G\to Sp(n)$ be a faithful symplectic representation with fixity $f$. Then there exists a finite $G$-CW-complex $X$ homotopy equivalent to a product of $f+1$ spheres such that the $G$-action on the homology of $X$ is trivial. Let $W$ denote the ${\mathbb H}$-space corresponding to the representation $\rho$. Define $X_1=V_1(W)$. We will construct finite $G$-CW-complexes $X_2, X_3,\dots,X_{f+1}$ recursively. For all $i$, the $G$-CW-complex $X_i$ will be homotopy equivalent to a product of $i$ spheres and will satisfy the following property: If $H\in \operatorname{Iso}(X_i)$, then $V_i(W)^H \neq \emptyset$ where $\operatorname{Iso}(X_i)$ denotes the set of isotropy subgroups of $X_i$. Assume that $X_i$ is constructed for some $i\geq 1$. Note that for every $H \leq G$, the fixed point set $V_i(W)^H$ is either empty or simply connected. Since for every $H \in \operatorname{Iso}(X_i)$ we have $V_i(W)^H \neq \emptyset$, by standard equivariant obstruction theory there exists a $G$-map $f: X^{(2)}_i \to V_i(W)$. By pulling back the $G$-bundle $q_i : V_{i+1} (W) \to V_i (W)$ via $f$, we obtain a $G$-equivariant sphere bundle over $X_i^{(2)}$ with fiber type $\{ S(V_H )\}$. Note that this is a compatible family defined over all $H \in \operatorname{Iso}(X_i)$ where $V_H$ is the $H$-space $(\overline q_i) ^{-1} (b)$ for some $b \in V_i (W)^H$. Now, applying Proposition \[pro:mainconsttool\] repeatedly to this $G$-fibration, we obtain a spherical $G$-fibration $E_i \to X_i$ with fiber type $\{ S(V_H ^{\oplus k})\}$ for some $k \geq 1$. By taking further fiber joins, we can assume that $E_i$ is a trivial fibration non-equivariantly and the action on the homology of the total space is trivial. This is shown below in Lemma \[lem:homtrivial\]. Now, by Proposition \[pro:finiteness\], $E_i$ is $G$-homotopy equivalent to a finite $G$-CW-complex $Y$.
Hence we can take $X_{i+1}$ as $Y$ and continue the induction until we reach $X_{f+1}$. At this stage, $(V_{f+1} (W))^H \neq \emptyset$ implies $H=\{1\}$, so we can conclude that the $G$-action on $X_{f+1}$ is free. \[lem:homtrivial\] Let $p: E \to B$ be a $G$-fibration over a finite $G$-CW-complex $B$ and $n$ be a positive integer. Suppose that $p$ has fiber type $\{ F_H \}$ such that $F_H $ is homotopy equivalent to the sphere ${\mathbb S}^n$ for all $H \in \operatorname{Iso}(B)$. Then, there is an integer $k\geq 1$ such that $\ast _k p : \ast _k E \to B$ is non-equivariantly homotopy equivalent to the trivial fibration. Moreover, if the $G$-action on the cohomology of $B$ is trivial, then we can choose $k$ large enough so that the $G$-action on the cohomology of $E$ is also trivial. The first part of the lemma is well-known and it follows from the fact that the homotopy groups of $\operatorname{Aut}(S^n)$ are finite. For the second part, observe that since the resulting fibration is homotopy equivalent to the trivial fibration, it is in particular an orientable fibration, i.e., the $\pi _1 (B)$-action on the homology of $F$ is trivial. So, there exists a consistent choice of generators for $H^n (F_b )\cong {\mathbb Z}$ for all $b \in B$. Note that this gives a $G$-action on the cohomology of the fibers $H^n (F_b)$ which is defined by $$g^* : H^n (F_b) \to H^n (F_{gb})\cong H^n (F_b)$$ where the isomorphism on the right comes from the identifications of generators that we have chosen. Observe that this action in general can be nontrivial since a generator $u$ can go to $-u$, but if we take the fiber join of $p$ with itself, then we can assume that this action is trivial for all $b \in B$.
For an orientable spherical fibration there is a Serre spectral sequence $$E_2 ^{p,q} =H^p (B, H^q (F )) \Longrightarrow H^{p+q} (E).$$ Note that for a $G$-fibration, all the terms in this spectral sequence will be ${\mathbb Z}G$-modules and the differentials will be ${\mathbb Z}G$-module homomorphisms since every $g \in G$ induces a fiber-preserving continuous map $g: E \to E$. In our case, we have a two-line spectral sequence and it is easy to see that by choosing $k$ large enough, we can assume that $N=\dim (\ast _k {\mathbb S}^n) \geq \dim B$, and hence we can conclude that $H^i (E) \cong H^i (B)$ for $i<N$ and $H^i (E) \cong H^{i-N} (B, H^N( F) )$ for $i \geq N$. Since $G$ acts trivially on $H^* (B)$, we have a trivial action on $H^*(E)$ if the $G$-action on $H^{i-N} (B, H^N(F))$ is trivial for all $i\geq N$. Note that the Serre spectral sequence has a product structure, so the action on $H^* (B, H^N (F))$ is trivial if the $G$-action on $H^0 (B, H^N (F))$ is trivial. Note that $H^0 (B, H^N (F))$ is the kernel of the map $$d^1 : \operatorname{Hom}_{{\mathbb Z}} (C_0 (B), H^N (F)) \to \operatorname{Hom}_{{\mathbb Z}} (C_1 (B), H^N (F))$$ and as a ${\mathbb Z}G$-module $$\operatorname{Hom}_{{\mathbb Z}} (C_i (B), H^N (F)) \cong \bigoplus \limits _{\sigma \in I_i} \operatorname{Hom}_{{\mathbb Z}} (\operatorname{Ind}_{G_{\sigma}} ^G {\mathbb Z}_{\sigma}, H^N (F_{\sigma} ))$$ for $i=0,1$ where the $G$-action on $H^N(F_{\sigma})$ is the one described above. Since we assumed that this action is trivial, we can conclude that $H^0 (B, H^N (F) )\cong H^0 (B)$ as ${\mathbb Z}G$-modules, and hence $H^0 (B, H^{N} (F))$ is a trivial ${\mathbb Z}G$-module. This completes the proof of the lemma. [10]{} A. Adem, J. F. Davis, and Ö. Ünlü, *Fixity and free group actions on products of spheres*, Comment. Math. Helv. **79** (2004), 758–778. A. Adem and J. H. Smith, *Periodic complexes and group actions*, Ann. of Math. (2) **154** (2001), 407–435. G.
Bredon, *Equivariant cohomology theories*, Lecture Notes in Mathematics 34, Springer-Verlag, 1967. F. Connolly and S. Prassidis, *Groups which act freely on ${\mathbb R}^m \times {\mathbb S}^{n-1}$*, Topology **28** (1989), 133–148. K. Ehrlich, *The obstruction to the finiteness of the total space of a fibration*, Michigan Math. J. **28** (1981), 19–38. H. Federer, *A study of function spaces by spectral sequences*, Trans. Amer. Math. Soc. **82** (1956), 340–361. C. French, *The equivariant J-homomorphism*, Homology Homotopy Appl. **5** (2003), 161–212. A. Güçlükan İlhan, *Obstructions for constructing $G$-equivariant fibrations*, Ph.D. thesis (2011). A. Güçlükan İlhan, *Obstructions for constructing equivariant fibrations*, preprint, arXiv:1110.3880. D. Husemoller, *Fibre Bundles*, Third edition, Graduate Texts in Mathematics 20, Springer-Verlag, New York, 1994. M. Klaus, *Constructing free actions of p-groups on products of spheres*, Algebr. Geom. Topol. **11** (2011), 3065–3084. W. Lück, *Transformation groups and algebraic $K$-theory*, Lecture Notes in Mathematics, vol. 1408, Springer-Verlag, Berlin, 1989, Mathematica Gottingensis. J. M. Møller, *On equivariant function spaces*, Pacific J. Math. **142** (1990), no. 1. R. Oliver, *Free compact group actions on products of spheres*, in: Algebraic Topology: Aarhus, Denmark 1978, Lecture Notes in Mathematics 763, Springer-Verlag, Berlin, 1979, pp. 539–548. U. Ray, *Free linear actions of finite groups on products of spheres*, J. Algebra **147** (1992), 456–490. J. Stasheff, *A classification theorem for fiber spaces*, Topology **2** (1963), 239–246. Ö. Ünlü, *Constructions of free group actions on products of spheres*, Ph.D. thesis (2004). Ö. Ünlü and E. Yalçin, *Quasilinear actions on products of spheres*, Bull. London Math. Soc. **42** (2010), 981–990. Ö. Ünlü and E. Yalçin, *Fusion systems and constructing free actions on products of spheres*, Math. Z. **270** (2012), 939–959. S.
Waner, *Equivariant fibrations and transfer*, Trans. Amer. Math. Soc. **258** (1980), 369–384. S. Waner, *Equivariant classifying spaces and fibrations*, Trans. Amer. Math. Soc. **258** (1980), 385–405. G. W. Whitehead, *Elements of homotopy theory*, Graduate Texts in Mathematics 61, Springer, 1978. [^1]: 2010 [*Mathematics Subject Classification.*]{} Primary: 57S25; Secondary: 55R91. [^2]: Both authors are partially supported by TÜBİTAK-TBAG/110T712.
--- abstract: 'Emotions widely affect human decision-making. This fact is taken into account by affective computing with the goal of tailoring decision support to the emotional states of individuals. However, the accurate recognition of emotions within narrative documents presents a challenging undertaking due to the complexity and ambiguity of language. Performance improvements can be achieved through deep learning; yet, as demonstrated in this paper, the specific nature of this task requires the customization of recurrent neural networks with regard to bidirectional processing, dropout layers as a means of regularization, and weighted loss functions. In addition, we propose *sent2affect*, a tailored form of transfer learning for affective computing: here the network is pre-trained for a different task (i.e. sentiment analysis), while the output layer is subsequently tuned to the task of emotion recognition. The resulting performance is evaluated in a holistic setting across 6 benchmark datasets, where we find that both recurrent neural networks and transfer learning consistently outperform traditional machine learning. Altogether, the findings have considerable implications for the use of affective computing.' address: - 'ETH Zurich, Weinbergstr. 56/58, 8092 Zurich, Switzerland' - 'National Institute of Informatics, 2-1-2 Hitotsubashi, 101-8430 Tokyo, Japan' author: - Bernhard Kratzwald - Suzana Ilić - Mathias Kraus - Stefan Feuerriegel - Helmut Prendinger title: 'Deep learning for affective computing: text-based emotion recognition in decision support' --- Affective computing, Emotion recognition, Deep learning, Natural language processing, Text mining, Transfer learning Introduction ============ Emotions drive the ubiquitous decision-making of humans in their everyday lives [@Oatley2011; @Greene2002; @Schwarz2000].
Furthermore, emotional states can implicitly affect human communication, attention, and the personal ability to memorize information [@Derakshan2010; @Dolan2002]. While the recognition and interpretation of emotional states often come naturally to humans, these tasks pose severe challenges to computational routines [[e.g.]{}, @Poria2017; @Tausczik2010a]. As such, the term *affective computing* refers to techniques for detecting, recognizing, and predicting human emotions ([e.g.]{}, joy, anger, sadness, trust, surprise, anticipation) with the goal of adapting computational systems to these states [@Picard1997]. The resulting computer systems are not only capable of exhibiting empathy [@Picard1995] but can also provide decision support tailored to the emotional state of individuals. Emotional information is conveyed through a multiplicity of physical and physiological characteristics. Examples of such indicators include vital signs such as heart rate, muscle activity, or sweat production on the surface of the skin [[e.g.]{}, @Lux2015; @Tao2005]. A different stream of research tries to infer emotions from the content and its mode of communication. These approaches to affective computing are primarily categorized by the modality of the message, [i.e.]{}, whether it takes the form of speech, gesture, or written information [@Calvo2010]. In this terminology, affective computing can comprise both unimodal and multimodal analyses. For instance, videos allow for the recognition of facial expressions and vocal tone [@Chen2017; @ElAyadi2011; @Shan2009]. The focus of this work is on the unimodal analysis of written materials in English. This choice reflects the prominence of textual materials as a widespread basis for decision-making [@Hogenboom2016]. Illustrative examples are as follows (a detailed review is given later in ).
For instance, affective language serves as a proxy for emotional closeness and can thus be used to measure the strength of interpersonal ties in social networks [@Marsden2012]. Similarly, marketing utilizes the recognition of emotional states in order to predict the purchase intentions of customers [@Ang2000], satisfaction with services [@Greaves2013], and even to measure the overall brand reputation [@Al-Hajjar2015]. In a related context, decision support can leverage affective signals in financial materials in order to suggest trading decisions [@Gilbert2010] or forecast the economic climate [@Nyman2015]. Furthermore, affect can also improve processes and decision-making in the provision of healthcare [@Spiro.2016] or education [@Rodriguez2012]. Previous research on affective computing has merely utilized methods from traditional machine learning, while recent advances from the field of deep learning – namely, recurrent neural networks and transfer learning – have been widely overlooked. However, their use promises further improvements. In fact, techniques from deep learning have become prominent in various decision support activities involving sequential data [[e.g.]{}, @Evermann2017] and especially linguistic materials [[e.g.]{}, @Kraus2017; @Mahmoudi.2018], where deep learning was able to enhance the performance when deriving decisions from unstructured data. One of the inherent advantages of deep learning is that it can successfully model highly non-linear relationships. This work draws upon existing solution techniques from the realm of deep learning [@Kraus2017] that were originally developed for a problem domain different from our research objective. First and foremost, we extend existing techniques from the discipline of deep learning to the task of text-based emotion recognition in order to expand the body of knowledge. Following @Kraus2017, we also utilize long short-term memory networks (LSTMs) that can make predictions based on running texts of varying lengths.
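A recurring practical complication with emotion labels is their skewed class distribution. A standard remedy is to scale the training loss by per-class weights so that rare emotions are not drowned out by a majority class. A minimal numpy sketch of this idea follows; the inverse-frequency weighting scheme is an illustrative choice, not necessarily the exact formula used in our experiments:

```python
import numpy as np

def class_weights(labels):
    """Inverse-frequency weights: rare emotion classes receive larger
    weights, so a classifier cannot win by a majority-class vote."""
    counts = np.bincount(labels)
    return counts.sum() / (len(counts) * counts.astype(float))

def weighted_cross_entropy(probs, labels, weights):
    """Cross-entropy where each sample is scaled by the weight of its
    true class; rows of `probs` hold predicted class probabilities."""
    per_sample = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights[labels] * per_sample))

# Imbalanced toy labels: class 0 dominates, as frequent emotions do.
y = np.array([0, 0, 0, 0, 0, 0, 1, 2])
w = class_weights(y)   # rare classes 1 and 2 receive larger weights
```

In a deep-learning framework, the same idea typically appears as a class-weight argument to the loss function rather than a hand-written loop.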
However, affective computing differs substantially from related tasks due to the high number of often imbalanced target labels. Thus, this task requires both customized network architectures and procedures. Hence, its applicability is only made possible through the several methodological innovations that we summarize in the following. In order to handle class imbalances in affective computing, we propose the following modifications beyond @Kraus2017: (i) bidirectional processing of the text, (ii) dropout layers as a means of regularization, and (iii) a weighted loss function. The latter becomes especially critical due to the imbalanced distribution of labels. In fact, without the weighted loss function, the network ends up resembling merely a majority class vote. We further propose an extension of transfer learning called *sent2affect*. That is, the network is first trained on the basis of sentiment analysis and, after exchanging the output layer, is then tuned to the task of emotion recognition. To the best of our knowledge, this presents a novel strategy for better affective computing as the inductive knowledge transfer is not merely based on a different *dataset*, but a different *task*. Even though affective computing has gained great traction over the past several years [@Ribeiro2016], there is a scarcity of widely-accepted datasets for text-based emotion recognition that can be used for benchmarking and that facilitate fair comparisons. A relatively small, but more common, dataset was provided by SemEval-2007 and consists of annotated news headlines [@Strapparava2007]. A significantly larger, but underutilized, corpus is composed of affect-labeled literary tales [@Alm2008]. Our literature review notes considerable differences across datasets that vary in their linguistic style, domain, affective dimensions, and the structure of the outcome variable. 
With regard to the latter, the majority of datasets involve a classification task in which exactly one affective category is assigned to a document, while others request a numerical score across multiple dimensions, [i.e.]{}, a regression task. Hence, it is a by-product of this research to contribute a holistic comparison that benchmarks different methods across datasets used in prior research. For this purpose, we conducted an extensive search for affect-labeled datasets that serves as the foundation for our computational experiments. As a result, we find that deep learning consistently outperforms the baselines from traditional machine learning. In fact, we observe performance improvements of up to in F1-score as part of classification tasks and in mean squared error as part of regression tasks. The findings of this work have direct implications for management, practice, and research. As such, various application areas of decision support – such as customer support, marketing, or recommender systems – can be improved considerably through the use of affective computing. Similarly, all systems with human-computer interactions ([e.g.]{}, chatbots and personal assistants) could further benefit from emotion recognition and a deeper understanding of empathy. In fact, emotion detection could significantly impact and refine all use cases in which sentiment analysis ([i.e.]{}, only positive/negative polarity) has already proved to be a valuable approach, since these lend themselves to a more fine-grained analysis and decision-making beyond only one dimension. In academia, text-based emotion recognition supports the cognitive and social sciences as a new approach to measuring and interpreting individual and collective emotional states. The rest of this paper is structured as follows. reviews earlier works on text-based emotion recognition, including the underlying affect theories, datasets used for benchmarking, and computational approaches.
This reveals a research gap with regard to both deep neural networks and transfer learning within the field of affective computing. As a remedy, introduces our methods rooted in deep learning, which are then evaluated in . Based on our findings, we detail implications for both research and management in , while concludes. Background {#sec:background} ========== We specifically point out that the terms “sentiment analysis” and “affective computing” are often used interchangeably [@Munezero2014]. However, comprehensive surveys [@Pang2006; @Yadollahi2017] recognize clear differences that distinguish each concept: sentiment analysis measures the subjective polarity towards entities in terms of only two dimensions, namely, positivity and negativity. Conversely, affective computing concerns the identification of explicit emotional states and, hence, this approach is also referred to as emotion recognition. The choice of emotional dimensions depends on the underlying affect theory and involves a wide range of mental states such as happiness, anger, sadness, or fear. For reasons of clarity, we strictly distinguish between the aforementioned concepts in our terminology. Accordingly, this section first provides an overview of prevalent emotion models as specified by affect theories and, based on their dimensions, reviews computational methods for inferring affective information from natural language. This gives rise to a variety of use cases, which are detailed subsequently. Affect theory {#sec:affecttheory} ------------- In the field of psychology, there is no consensus regarding a universal classification of emotions [@Frijda1988; @Izard2009], as physiological arousal in the proposed theories varies with causes, cognitive appraisal processes, and context. Yet a conventional approach is to distinguish emotions based on how the underlying constructs are defined. 
On the one hand, emotions can be defined as a set of discrete states with mutually-exclusive meanings, while, on the other hand, emotions can also be characterized by a combination of numerical dimensions, each associated with a rating of intensity. The categorization into either a discrete set or a combination of intensity labels yields later benefits with regard to computational implementation, as it directly aids in formalizing the different machine learning models. Categorical emotion models involve a variety of prevalent examples, including the so-called basic emotions. These introduce a discrete set of emotions with innate and universal characteristics [@tomkins1962; @Izard1992]. One of the first attempts by @Ekman1987 to classify emotions led to the categorization of six discrete items labeled as basic: namely, anger, disgust, fear, happiness, sadness, and surprise. The model was later extended by @averill1980theories to include trust and anticipation, resulting in eight basic emotions. An alternative categorization by Tomkins [@tomkins1962; @tomkins1963] classifies nine primary affects into positive (enjoyment, interest), neutral (surprise), and negative (anger, disgust, dissmell, distress, fear, shame) expressions. Dimensional models of emotion locate constructs in a two- or multi-dimensional space [@Poria2017]. Here the assumption of disjunct categories is relaxed such that the magnitude along each dimension can be measured separately [@Russell1980], yielding continuous intensity scores. Different variants have been proposed, out of which we summarize an illustrative subset in the following. One of the earliest examples is Russell’s circumplex model [@Russell1980], consisting of bivariate classifications into valence and arousal. Depending on the strength of each component, certain regions in the two-dimensional space are given explicit interpretations (such as tense, aroused, excited) according to 28 emotional states. 
The Wheel of Emotions is an extension of the circumplex model whereby eight primary emotion dimensions are represented as four pairs of opposites: joy versus sadness, anger versus fear, trust versus disgust, and surprise versus anticipation [@Plutchik2001]. Recent approaches introduce complex hybrid emotion models, such as the Hourglass of Emotions [@Cambria2012], which represents affective states through both discrete categories and four independent, but concomitant, affective dimensions. However, neither the Wheel of Emotions nor the Hourglass of Emotions has yet found its way into common datasets for affective computing. Datasets for benchmarking {#sec:datasets} ------------------------- provides a holistic overview of datasets used for text-based affective computing. These datasets exhibit fundamentally different characteristics and challenges, as they vary in size, domain, linguistic style, and underlying affect theory. We summarize key observations in the following. In terms of text source, the datasets refer to tasks that utilize narrative materials from classic literature [@Alm2008], while others are based on traditional media [@Strapparava2007], and even Twitter or Facebook posts [@Preotiuc-Pietro2016]. Social media, in particular, tends to be informal and subject to variable levels of veracity, especially in comparison with more formal linguistic sources such as newspaper headlines. Similar variations become apparent in terms of where the annotations originate from. For instance, emotion labels can rely upon self-reporting of emotional experiences [@Wallbott.1986] or stem from ex post labeling efforts via crowdsourcing [@Mohammad2015]. The majority of datasets were annotated based on categorical emotion models, thereby defining a discrete set of labels. The chosen emotions largely follow suggestions from the different affect theories and predominantly focus on basic emotions (or subsets thereof) due to their prevalence.
Even though the number and choice of emotions differ, one can identify four emotions that are especially common as they appear in almost all categorical models: anger, joy (happiness), fear, and sadness. Some emotions occur more often than others in the usual routines of humans [@Plutchik2001; @Ekman1987] and one thus obtains datasets [[e.g.]{}, @Strapparava2007; @Mohammad2015] wherein the relative frequency of emotions is highly unbalanced. This imposes additional computational challenges as classifiers tend to overlook infrequent classes. In contrast, dimensional models of emotions appear less frequently. Only one dataset, composed of newspaper headlines [@Strapparava2007], provides a score for each of the six emotion categories. From a methodological point of view, this categorization into dimension-based models requires different prediction models. While categorical models refer to machine learning with single-label classification tasks in the sense that we identify the appropriate item based on a discrete label, dimensional models allow for regression tasks in the sense that we predict a score for every item and emotion. Computational methods --------------------- The automatic recognition of text-based emotions relies upon different computational techniques that comprise lexicon-based methods and machine learning. Due to the wealth of approaches, we can only summarize the predominant streams of research in the following and refer to @Calvo2010 [@Poria2017] for detailed methodological surveys. ### Lexicon-based methods Lexicon-based approaches utilize pre-defined lists of terms that are categorized according to different affect dimensions [@Mohammad2012]. On the one hand, these lexicons are often compiled manually, a fact which can later be exploited for keyword matching. For instance, the Harvard IV dictionary (inside the General Inquirer software) and LIWC provide such lists with classification by domain experts [@Tausczik2010a].
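At its core, the keyword-matching step reduces to a lookup against such a wordlist. The toy lexicon below is illustrative only and is not taken from any of these resources:

```python
from collections import Counter

# Toy affect lexicon (illustrative entries only; a real system would
# load a resource such as the NRC Word-Emotion Association lexicon).
LEXICON = {
    "furious": "anger", "rage": "anger",
    "delighted": "joy", "wonderful": "joy",
    "terrified": "fear", "grief": "sadness",
}

def lexicon_emotions(text):
    """Count affect categories via simple keyword matching."""
    tokens = text.lower().split()
    return Counter(LEXICON[t] for t in tokens if t in LEXICON)

scores = lexicon_emotions("She was delighted and the day was wonderful")
```

Dictionaries such as Harvard IV (via the General Inquirer) and LIWC plug their expert-curated wordlists into exactly this matching step.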
These were not specifically designed for affective computing, but still include psychological dimensions ([e.g.]{}, pleasure, arousal and emotion in the case of Harvard IV; anxiety, anger, and sadness for LIWC). The NRC Word-Emotion Association lexicon was derived analogously but with the help of crowdsourcing rather than involving experts from the field of psychology research [@Mohammad2013]. The latter dictionary includes 10 granular categories such as anticipation, trust, and anger. In order to overcome the need for manual dictionary creation, heuristics have been proposed to construct affect-related wordlists. Common examples include the WordNet-Affect dictionary, which starts with a set of seed words labeled as affect and then assigns scores to all other words based on their proximity to the seed words [@Strapparava2004a]. However, the resulting affect dictionary includes only general categories of mood- or emotion-related words, rather than further distinguishing the type of emotion. More recent methods operate, for instance, via mixture models [@Bandhakavi2017], fuzzy clustering [@Poria2014], or by incorporating word embeddings [@Li2017]. The precision of dictionaries can further be improved by embedding these in linguistic rules that adjust for the surrounding context. Dictionary-based approaches are generally known for their straightforward use and out-of-the-box functionality. However, manual labeling is error-prone, costly, and inflexible as it impedes domain customization. Conversely, the vocabulary from the heuristics is limited to a narrow set of dimensions that were selected a priori and, as a result, this procedure has difficulties when generalizing to other emotions [[cf.]{} @Agrawal2012]. ### Machine learning Machine learning can infer decision rules for recognizing emotions based on a corpus of training samples with explicit labels [@Danisman2008; @Chaffar2011]. 
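As a minimal sketch of the feature-extraction step in this supervised setup — bag-of-words with tf-idf weighting, the representation also used by the baselines discussed below — consider the following; the whitespace tokenization and the unsmoothed idf convention are illustrative simplifications:

```python
import math
from collections import Counter

def tfidf(docs):
    """Bag-of-words with tf-idf weighting: term frequency multiplied
    by inverse document frequency (conventions vary across tools)."""
    N = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    vocab = sorted(df)
    idf = {w: math.log(N / df[w]) for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, vectors
```

The resulting vectors would then be fed to a supervised classifier such as a support vector machine or random forest; in practice, scikit-learn's `TfidfVectorizer` provides a production-grade version of this step.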
This can overcome the aforementioned limitations of lexicon-based methods concerning scalability and domain customization. Moreover, it can also learn implicit signals of emotions, since findings from a comprehensive, comparative study suggest that affect is rarely communicated through emotionally-charged lexical cues but rather via implicit expressions [@Balahur2012a]. Previous research has experimented with different models for inferring affect from narrative materials. Examples include methods that explicitly exploit the flexibility of machine learning, such as random forests [[e.g.]{}, @Gordeev2016] and support vector machines [[e.g.]{}, @Chatzakou2017; @Danisman2008], both of which have commonly been deployed in the literature. Studies have shown that random forests tend to compute faster, while support vector machines yield superior performance [@Chatzakou2017]. These classifiers are occasionally restricted to the subset of affect cues from emotion lexicons [@Bandhakavi2017]. However, the more common approach relies upon general linguistic features, [i.e.]{}, bag-of-words with subsequent tf-idf weighting [@Alm2005; @Strapparava2007]. Consistent with these works, we later draw upon machine learning models ([i.e.]{}, random forest and support vector machine) together with tf-idf features as our baseline. ### Deep learning In the following, we discuss the few attempts at applying deep learning to affective computing, but find that actual performance evaluations are scarce. The approach in @Gordeev2016 predicts aggression expressed through natural language using convolutional neural networks with a sliding window and subsequent max-pooling. However, this approach is subject to several limitations as the network is designed to handle only a single dimension ([i.e.]{}, aggression) and it is thus unclear how it generalizes across multi-class predictions or even regression tasks that appear in dimensional emotion models.
Even though the approach utilizes a network, its network architecture can only handle texts of predefined size, analogous to traditional machine learning. In this respect, it differs from recurrent networks, which iterate over sequences and thus can handle texts of arbitrary size. The work in @Felbo2017a utilizes an LSTM that is pretrained with tweets based on the appearance of emoticons; however, this work does not report a comparison of their LSTM against a baseline from traditional machine learning. A different approach [@Gupta2017] utilizes a custom LSTM architecture in order to assign emotion labels to complete conversations in social media. However, this approach is tailored to the specific characteristics and emotions of this type of conversational-style data. In addition, the conclusion from their numerical experiments cannot be generalized to affective computing, since the authors labeled their dataset through a heuristic procedure and then reconstructed this heuristic with their classifier. Closest to our approach are experiments that include an LSTM for intensity estimation of emotions [@Goel.2017; @Lakomkin.2017; @Meisheri.2017; @Zhang.2017], but the results are limited to regression tasks where the presence of specific affective dimensions is given a priori. Up to this point, the potential performance gains from using recurrent neural networks as the state of the art in deep learning have not yet been studied in relation to text-based emotion recognition. This fact was also noted in a recent literature survey [@Poria2017]. Transfer learning ----------------- Transfer learning is a technique whereby knowledge from a source domain is leveraged in order to improve performance in a (possibly different) target domain. It is often used to overcome the constraints of limited training data, as well as for tasks that are sensitive to overfitting [@Pan.2010]. 
A straightforward approach to transferring knowledge in natural language applications is to draw upon pretrained word embeddings [@Kraus2017]. This approach merely requires an additional dataset without labels as it operates in unsupervised fashion. However, it only facilitates the representation of words and fails to help learning parameters inside the neural network. More complex strategies can even utilize labels and perform transfer learning from a source to a target dataset. The underlying transfer can occur either concurrently or sequentially: - The former trains two networks concurrently on both the source and the target task with shared parameters. For instance, one network learns to translate sentences, while the other recognizes named entities [@Mou.2016]. This is known to help the network concentrate on a shared understanding and, in practice, puts emphasis on more abstract relationships. - The latter sequential procedure first trains a network on a source dataset and, in a second step, applies the network to the target dataset in order to fine-tune the network parameters [[e.g.]{}, @Kratzwald.2018]. This is often accompanied by minor modifications to network architectures ([e.g.]{}, by replacing the prediction layer). While such an approach seems intriguing, it is impeded by the heterogeneous nature of baseline datasets for emotion recognition. However, natural language applications often lack suitable source datasets [@Mou.2016]. As a remedy, we propose sent2affect: that is, we employ not only a different *dataset* but also a different *task* (namely, sentiment analysis). To the best of our knowledge, this presents the first work on affective computing that attempts to accomplish an inductive knowledge transfer across tasks. Methods {#sec:methods} ======= This section presents our methods for inferring emotional states from narrative contents. 
We first summarize our baselines from traditional machine learning and deep learning, while the inherent nature of affective computing requires us to come up with multiple innovations concerning the network architecture. Our proposed advances are detailed in . Finally, we detail our novel approach to transfer learning, called sent2affect, whereby knowledge from the related task of sentiment analysis is applied to emotion recognition. illustrates this pipeline. ![Illustrative pipeline for inferring affective states from narrative materials. This can either happen through (i) traditional machine learning with feature engineering or, as proposed in this work, (ii) deep recurrent neural networks, optionally in conjunction with our proposed sent2affect transfer learning.[]{data-label="fig:pipeline"}](deeplearning.pdf){width="100.00000%"} Benchmarks ---------- ### Baselines from traditional machine learning Traditional machine learning can only learn from a fixed-size vector of features and, for this purpose, features are commonly built upon bag-of-words representations. The frequencies are further weighted by the tf-idf scheme in order to measure the relative importance of terms to a document within a corpus. Mathematically, the measure of term importance is obtained by computing the product of the term frequency and the inverse document frequency. This approach serves as a widely-accepted benchmark against which algorithms for natural language processing are evaluated. The aforementioned features are then fed into the actual predictive models from traditional machine learning. Here we chose two approaches for both classification and regression as our baseline models: namely, random forest and support vector machine ([i.e.]{}, a support vector regression for predicting numerical scores). These are known for their superior performance in previous studies [[e.g.]{}, @Chatzakou2017]. 
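To make the weighting scheme concrete, the product of term frequency and inverse document frequency can be sketched in a few lines of plain Python. This is a deliberately simplified variant (raw counts, unsmoothed idf); library implementations such as scikit-learn's `TfidfVectorizer` additionally apply smoothing and normalization, and the function and variable names below are ours for illustration only:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights for a list of tokenized documents.

    tf  : raw term count within a document
    idf : log(N / df), where N is the number of documents and
          df the number of documents containing the term
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["sad", "angry", "news"],
        ["happy", "news"],
        ["sad", "sad", "story"]]
w = tfidf(docs)
# "news" occurs in 2 of 3 documents, so its idf is log(3/2);
# "angry" occurs in only 1 document and thus carries more weight.
```

Each resulting dictionary is one fixed-size feature vector (over the corpus vocabulary) of the kind the baseline classifiers are trained on.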
Moreover, both approaches entail high flexibility when modeling non-linear relationships and demonstrate high accuracy even in settings where the number of potential features exceeds the number of observations. ### Baselines from na[ï]{}ve deep learning Deep learning has triggered a paradigm shift in machine learning [@Kraus.2018] since it has yielded unprecedented performance results, especially for natural language processing. The theoretical argument for this is that recurrent neural networks from deep learning can iterate over the individual words of a sequence with arbitrary length. Here the input directly consists of words $x_1, \ldots, x_N$ and thus circumvents the need for feature engineering ([e.g.]{}, creating bag-of-words with tf-idf) as used in traditional machine learning. As a result, recurrent neural networks store a lower-dimensional representation of the input sequence that encodes the whole document and can even maintain the actual word order with long-ranging semantics [@Kraus.2018]. For this reason, recurrent neural networks differ from traditional machine learning, which can only adapt to short texts due to the use of $n$-grams. We draw upon @Kraus2017 as the basis for our deep neural network architecture. This basic model consists of three layers: (a) an embedding layer that maps words in one-hot encoding onto low-dimensional vectors, (b) a recurrent layer to pass information on between words, and (c) a final dense layer for making the actual prediction. All three layers are described in detail in the online appendix. We experimented with this approach, but found that its performance is almost identical to a majority class vote. Therefore, we refrain from reporting the exact results; instead, we focus on the following improvements. 
Proposed deep neural networks for affective computing {#sec:new_deep_learning} ----------------------------------------------------- Using the aforementioned deep learning architectures is non-trivial for the following reasons. First, they are not suited to the small datasets from affective computing and typically lead to severe overfitting. Hence, we propose the use of a dropout layer as a form of regularization. Second, our task involves complex, open-domain language, which benefits further from bidirectional processing. Third, severe class imbalances are addressed by a weighted loss function. This loss function treats each class equally in order to avoid biases towards certain classes. Altogether, these extensions were necessary for using deep learning in our research setting. ### Dropout layer Deep neural networks can easily consist of up to millions of free parameters and, consequently, these models run the risk of overfitting. This is especially problematic when the training data is scarce. As a remedy, the weights in the network are regularized by randomly dropping out a certain share of neurons in order to improve the generalizability of the network. This prevents the neurons from co-adapting too much during training [@Srivastava.2014]. We use dropout within the recurrent layer; that is, we randomly drop out connections between the recurrent LSTM cells. Dropout is disabled during test time ([i.e.]{}, all neurons are used) in order to leverage the full predictive power of the learned parameters (cf. the online appendix for a detailed specification). Furthermore, we apply dropout between the output of the recurrent layer and the input to the prediction layer. ### Bidirectional processing To further improve the predictive performance of the base model, we draw upon so-called bidirectional recurrent layers, which have shown success in various other domains. That is, we use not only one but two LSTM layers to read the text. 
While one layer processes the text from left to right, a second one processes the text from right to left. More formally, let $h_1$ denote the hidden state of the LSTM network that processes the input in the forward direction and $h_2$ the hidden state of the LSTM that reads the text backwards. We then use the concatenation of both hidden states, [i.e.]{}, $[h_1, h_2]$, as input for the final prediction layer. Thus we are able to cover long- and short-term dependencies in both directions. We later abbreviate the bidirectional LSTM as BiLSTM and additionally run separate experiments for comparing the performance across the LSTM and BiLSTM. ### Weighted loss functions for unbalanced data Affective computing commonly involves multiple, highly imbalanced target labels. Using a na[ï]{}ve loss function in this case would optimize towards the majority class and thus result in a performance similar to a majority vote. Such problems are typically addressed by over- or undersampling, yet these approaches yielded only marginal improvements in our experiments. As an alternative, we suggest the use of a weighted loss function. This multiplies the error of each data point with a weight that is inversely proportional to the size of the corresponding class. Assume a training sample $x_i$ with ground-truth label $y_i$, and $p_{ik}$ denoting the output of the prediction layer, [i.e.]{}, the probability of $x_i$ belonging to class $k$. Then the weighted loss for $x_i$ is calculated via $$\mathcal{L}(x_i, \theta) = -w_{i} \sum_{k=1}^{K} \mathds{1}_{y_i=k} \log p_{ik}$$ with $\mathds{1}$ denoting the indicator function. The weight $w_{i}$ for input $x_i$ depends solely on its ground truth label $y_i$ and, similar to @King.2001, is calculated as $$w_{i} = \frac{N}{K \sum_{j} \mathds{1}_{y_j=y_i}} ,$$ where $K$ denotes the total number of classes and $N$ the number of samples. 
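The weighting scheme above can be illustrated with a minimal, self-contained sketch. The toy labels and function names are ours and not part of the original implementation; in practice the weights would be passed to the framework's loss function:

```python
import math
from collections import Counter

def class_weights(labels):
    """w_i = N / (K * n_{y_i}): inverse class-frequency weights,
    so that each class contributes equally to the total loss."""
    n, counts = len(labels), Counter(labels)
    k = len(counts)
    return [n / (k * counts[y]) for y in labels]

def weighted_nll(prob_true_class, weight):
    """Weighted negative log-likelihood for a single sample; only the
    indicator-selected term (the true class) survives the sum."""
    return -weight * math.log(prob_true_class)

labels = ["joy", "joy", "joy", "anger"]  # imbalanced toy labels
w = class_weights(labels)
# the rare "anger" sample receives weight 4 / (2 * 1) = 2.0,
# while each "joy" sample receives 4 / (2 * 3) = 2/3
```

Because the rare class is up-weighted, misclassifying it is penalized as heavily as misclassifying the majority class, which counteracts the majority-vote behavior described above.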
Sent2affect approach to transfer learning across tasks ------------------------------------------------------ Due to the large number of degrees-of-freedom, training deep neural networks is often associated with challenges ([e.g.]{}, overfitting, ineffective generalization). In practice, this is countered with large datasets in order to prevent overfitting and, hence, a different strategy is often applied when handling smaller datasets such as those in our experiments. Here the idea is to implement transfer learning, [i.e.]{}, the inductive transfer of knowledge from a different, yet related, task to the problem under investigation. In our case, we develop a novel approach, sent2affect, as detailed in the following. The choice of the source task is non-trivial, and it is mainly tasks of a semantically similar nature that make the learned representations transferable. For this purpose, we suggest the use of sentiment analysis as a related task, since it shares a certain similarity in the sense that positive and negative polarity is inferred from linguistic materials; however, sentiment analysis differs from affective computing, as it does not address affective dimensions or emotional states. The relatedness between both tasks enables the network to infer similar representations for both. Formally, our approach to transfer learning optimizes the weights of a neural network for a target task $\mathcal{T}$ and dataset $\mathcal{D_T}$ based on a different, yet related, source task $\mathcal{S}$ with dataset $\mathcal{D_S}$. After optimizing the parameters of our network for $\mathcal{S}$ on $\mathcal{D_S}$, we replace the task-specific prediction layer of the network to yield predictions for our target task $\mathcal{T}$. Then, we utilize the estimated parameters as initial values for further optimization with the help of the actual dataset $\mathcal{D_T}$ [@Pan.2010]. The pseudocode of the overall process is stated in . 
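The replace-and-fine-tune procedure can be sketched abstractly. The dictionary-of-parameter-blocks representation below is a deliberately simplified stand-in for an actual recurrent network (all names are ours); the point is that the embedding and recurrent parameters estimated on the sentiment task are carried over unchanged, while only the task-specific prediction layer is re-initialized:

```python
import random

def init_network(output_dim, hidden_dim=64):
    """Toy stand-in for a recurrent network: each entry represents one
    layer's parameter block (here just a randomly-initialized list)."""
    return {
        "embedding": [random.random() for _ in range(hidden_dim)],
        "recurrent": [random.random() for _ in range(hidden_dim)],
        "dense": [random.random() for _ in range(output_dim)],
    }

def sent2affect_transfer(source_net, target_output_dim):
    """Keep the embedding/recurrent parameters learned on the source
    (sentiment) task; re-initialize only the prediction layer so it
    matches the dimensionality of the target (emotion) task."""
    target_net = dict(source_net)  # shallow copy: shared layers are reused
    target_net["dense"] = [random.random() for _ in range(target_output_dim)]
    return target_net

sentiment_net = init_network(output_dim=2)  # positive vs. negative polarity
emotion_net = sent2affect_transfer(sentiment_net, target_output_dim=8)
# shared layers carry over; only the prediction layer starts from scratch
```

In the actual experiments, both the transferred and the new parameters are subsequently fine-tuned on $\mathcal{D_T}$, with the transferred weights serving as the initialization.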
In our experiments, we utilize a large-scale, public dataset[^1] as a basis for knowledge induction. This dataset finds widespread application in sentiment analysis and includes about samples labeled according to positive or negative sentiment. We then optimize the deep neural network with the goal of predicting the underlying sentiment scores. The resulting coefficients of the network are further trained with an actual dataset from affective computing. Here the differences in the data type of the prediction outcome ([i.e.]{}, computing a positivity/negativity score versus affective dimensions) are handled by removing the dense layer and, instead, appending a new prediction layer that targets the new output. As a result, the majority of weights benefits from transfer learning, while only the neurons in the prediction layer are trained after a random initialization. The intuition of this approach is as follows: deep neural networks generally contain multiple layers, where layers closer to the final prediction layer are supposed to encode the original input at a higher level of abstraction. Given training data $\mathcal{D_T}$ for the affective computing task $\mathcal{T}$ and additional corpus $\mathcal{D_S}$ for sentiment analysis $\mathcal{S}$ $m \gets $ Initialize recurrent neural network ([i.e.]{}, consisting of recurrent layer $f$, dense layer $\psi$, …) $m \gets $ Estimate parameters w.r.t. $\mathcal{S}$ using $\mathcal{D_S}$ $\psi \gets $ Replace dense layer with randomly-initialized dense layer according to the dimensions of $\mathcal{T}$ $\psi \gets $ Fine-tune $\psi$ w.r.t. $\mathcal{T}$ using $\mathcal{D_T}$ **return** Recurrent neural network $m$ Model estimation ---------------- Consistent with previous research [@manning1999foundations], we tokenize each document, convert all characters to lower-case, and remove punctuation, numbers, and stop words. 
Moreover, we perform stemming, which maps inflected words onto a base form; [e.g.]{}, **running** and **runs** are both mapped onto **run**. We conducted all pre-processing operations to yield bag-of-words representations by using the natural language toolkit NLTK. For those datasets with no designated test set, we introduced a random $80/20$ split into training and test data. For the random forest classifier, we manually optimized over the number of trees, the maximum number of features for every split, and the maximum depth. For the support vector classifier, we conducted an extensive grid-search over the hyperparameters following @hsu2003practical. In detail, we experimented with linear, radial basis function, and sigmoid kernels, optimizing the cost $C$ over $2^{-5}, 2^{-3},\ldots,2^{15}$ and the radius parameter $\gamma$ over $2^{-15},2^{-13},\ldots,2^3$. For unbalanced datasets, we weighted the loss function by class frequency in order to prevent models from predicting the majority classes only. We used different deep learning models. Depending on the specification, we used pre-trained GloVe[^2] embeddings or randomly-initialized embeddings (which are learned jointly during the training phase). The models were trained using the Adam optimizer, whereby the process was stopped once we noted an increase in the validation error. For reasons of reproducibility, we report the performance metrics averaged over 10 independent runs. Evaluation {#sec:evaluation} ========== This section reports our computational experiments evaluating the improvements gained by using deep neural networks (and especially transfer learning) for affective computing. Here we draw upon all datasets from and, according to the type of the underlying affect theory, we divide the performance measurements into classification and regression tasks. 
Classification according to categorical emotion models ------------------------------------------------------ We begin with classification tasks according to categorical emotion models, where the objective is to predict the predominant emotion(s). We follow previous literature [[e.g.]{}, @Chatzakou2017; @Danisman2008] and analogously choose two baselines prevalent in traditional machine learning: namely, the random forest classifier and the support vector machine. Both are fed with bag-of-words with tf-idf weighting, whereas the proposed deep neural networks circumvent the need for feature engineering. Here we compare variants that extend the LSTM[^3] with bidirectional encodings and pretrained word embeddings. The resulting performance is listed in , where we account for unbalanced distributions of labels by using the weight-averaged F1-score. The F1-score for a single class is given by the harmonic mean of precision and recall, [i.e.]{}, $$\text{F1} = 2 \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} .$$ In addition, we report sensitivity and specificity scores. The sensitivity of a single class equals the recall, while the specificity measures the fraction of actual negatives that are correctly identified. Similar to the F1-score, we calculate both independently for each class, [i.e.]{}, $$\text{sensitivity} = \mathit{TP}/(\mathit{TP}+\mathit{FN}) \qquad \text{and} \qquad \text{specificity} = \mathit{TN}/(\mathit{TN}+\mathit{FP}),$$ where the number of true positives and true negatives is denoted by $\mathit{TP}$ and $\mathit{TN}$, and the number of false positives and false negatives is denoted by $\mathit{FP}$ and $\mathit{FN}$. For the final scores, we average over all classes weighted by the class size. Our results consistently reveal superior performance through the use of deep learning. We observe that, regardless of the architecture, models with pre-trained GloVe embeddings outperform their counterparts with randomly-initialized word embeddings. 
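For illustration, the per-class scores defined above and their class-size-weighted average can be computed as follows. This is a minimal sketch with toy labels; the names are ours, and library implementations (e.g., scikit-learn's weighted averaging) would be used in practice:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred, cls):
    """F1, sensitivity (= recall), and specificity for one class,
    treating `cls` as the positive label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # = sensitivity
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return f1, recall, specificity

def weighted_f1(y_true, y_pred):
    """Average the per-class F1-scores, weighted by class size."""
    sizes, n = Counter(y_true), len(y_true)
    return sum(sizes[c] / n * per_class_metrics(y_true, y_pred, c)[0]
               for c in sizes)

y_true = ["joy", "joy", "anger", "fear"]
y_pred = ["joy", "anger", "anger", "fear"]
```

Weighting by class size keeps the aggregate score interpretable under the imbalanced label distributions discussed above.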
In fact, the use of pre-trained word embeddings yields performance improvements over the best baseline in $9$ out of $10$ experiments. An explanation stems from the fact that embeddings which have not been pre-trained result in considerably more degrees-of-freedom and thus a greater chance of overfitting. Our initial expectations are met as the imposed dropout layers and loss-weighting successfully diminish the problem of overfitting. Furthermore, our imposed architectural enhancements surpass the performance of previous deep learning architectures, such as that proposed by [@Kraus2017]. As such, the bidirectional recurrent layers outperform the variant with a unidirectional layer in four out of five experiments, yielding the only architecture that consistently outperforms the traditional baseline on all datasets, with improvements between and across the datasets. We experimented with the naïve network from @Kraus2017, but it failed in three out of five datasets resulting in merely predicting the majority class; hence, we omitted the results. The performance gains from our proposed architectural improvements are a result of the class imbalance and the language noise of the source. For instance, the highest relative improvement over traditional machine learning is achieved in the case of the dataset of headlines [@Strapparava2007], constructed of four equally-sized classes and proper English. On the other hand, the dataset of election tweets [@Mohammad2015], which is composed of highly unbalanced classes and considerable language noise, yields the lowest improvement. reports sensitivity and specificity scores as an additional robustness check. The results confirm our findings, [i.e.]{}, we witness the largest performance improvements for datasets with less noise. For the election tweet dataset [@Mohammad2015], the best bidirectional LSTM model achieves a sensitivity of $56.9$, while the best baseline achieves a slightly better score of $57.1$. 
We can significantly strengthen our results for this challenging dataset by applying transfer learning, as reported in \[sec:results\_transfer\_learning\]. Regression according to dimensional affect models ------------------------------------------------- Depending on the affect theory, one can also model emotional categories according to dimensional ratings and, as a result, this is implemented as a regression task, where the intensity of emotional states is predicted. We choose the same baselines as in the previous experiments and compare them to deep neural networks. All models are evaluated based on the mean squared error (MSE). reports our results. These show a consistent improvement of up to as a result of using deep learning as compared to traditional machine learning. Similar to the classification task, our findings identify the BiLSTM with pre-trained word embeddings as the superior method in all seven experiments. We further note that the BiLSTM appears to outperform the unidirectional LSTM in all experiments. The relative performance increases vary between the different affective dimensions. Transfer learning via sent2affect {#sec:results_transfer_learning} --------------------------------- The previous experiments revealed consistent improvements through the use of deep learning; however, several benchmark datasets entail only a fairly small set of samples, which could impede the training of deep neural networks. For instance, the dataset for inferring emotions from election tweets [@Mohammad2015] comprises only 1,646 samples for training. A potential remedy is utilizing large-scale datasets from other tasks and then transferring knowledge to affective computing. More precisely, we now experiment with the potential performance improvements to be gained by additionally applying our transfer learning approach. 
By transferring network parameters from sentiment analysis to affective computing, we benefit from the considerably larger datasets that are used in sentiment analysis, since the sentiment dataset consists of about tweets that are associated with positive and negative labels. compares our transfer learning approach against two baselines: (i) a na[ï]{}ve BiLSTM and (ii) the transfer learning approach of @Kraus2017, where only GloVe word-embeddings are pre-trained. We choose the election tweets [@Mohammad2015] and general tweets [@SemEval2018Task1] datasets to demonstrate how we can transfer the knowledge from thousands of sentiment-labeled tweets to the task of emotion recognition. Furthermore, na[ï]{}ve deep learning alone yields an inferior performance. While the BiLSTM with pre-trained word embeddings has previously represented the best-performing architecture, we still observe that transfer learning yields additional improvements. These amount to for the election tweets and for the general tweets. Evidently, transfer learning can successfully benefit from the large-scale dataset for sentiment analysis and, as a result, optimizes the neuron weights such that they encode a more generalizable representation of emotion-laden materials. Discussion {#sec:discussion} ========== Comparison ---------- Our series of experiments reveals considerable and consistent performance improvements over default implementations of deep learning through the use of our customized networks. This points towards the need to customize deep neural networks according to the unique characteristics of the underlying task. In this paper, we refrained from evaluating performance on the basis of a single dataset and, instead, performed a holistic analysis, demonstrating that our customized networks outperformed the baselines in all experiments by up to . 
Interestingly, our proposed modifications, such as those concerning regularization, were even able to learn the underlying relationships from the rather small datasets of merely 1,000 observations. However, we observe an overall pattern whereby the performance improvements tend to be higher when there is less language noise. In addition, we observe further improvement through the use of word embeddings, as these reduce the high-dimensional one-hot term vectors to lower-dimensional spaces. In the majority of experiments, the superior results stem from using a bidirectional LSTM as compared to a simple LSTM. We note that all network architectures, not only traditional machine learning, required extensive training in order to ensure that the embeddings and the dropout layer functioned well together. Finally, the task of emotion recognition in affective computing is related to sentiment analysis, which infers a positive/negative polarity from linguistic materials. Hence, it is interesting to study whether one can further improve performance through an inductive transfer of knowledge from a different task (rather than a different dataset), despite the distinct objective, linguistic style, and annotation scheme. As a result, our sent2affect implementation of transfer learning establishes additional improvements of up to . Deep-learning-based affective computing for decision support in social media ------------------------------------------------------------------------- As a proof of concept, we utilize our bidirectional LSTM to support the notoriously difficult task of classifying news into factual and non-factual. This demonstrates how affective computing can eventually facilitate decision support for social media platforms seeking to recognize and prevent the spread of fake news. We utilize the dataset of [@Shu.2017] and predict whether a news item is factual. 
The prediction model is given by a logistic regression that is fed with the output of our affect prediction layer. Our approach achieves an accuracy of when using the affective dimensions of the headlines and when using separate affective dimensions of both headlines and content. This almost matches the reported baseline performance from prior research [@Rubin.2015], where a content-based classifier was used to detect fabricated news items. However, we refrain from tailoring the classifier towards certain linguistic devices or individual stories. Instead, our approach ensures generalizability by identifying highly polarizing language as part of its decision support. Further use cases of deep-learning-based affective computing for better decision support {#sec:applications} ---------------------------------------------------------------------------------------- Text-based affective computing drives decision support in a variety of application areas in which understanding the emotional state of individuals is crucial. provides an overview of interesting examples from research, as well as actual use cases from businesses. This table highlights areas where decision support could potentially be improved through the use of our deep-learning-based models for affective computing. It is evident that affective computing facilitates decision-making in all operational areas of businesses, such as management, marketing, and finance. For instance, firms can infer the perceived emotion of customers from online product reviews and base managerial decisions on this data in order to support product development [@Ullah2016] and advertising [@Ang2000]. In a financial context, emotional media content has been identified as a driver in the decision-making of investors [@Prollochs2016], which can thus serve as a decision rule for stock investments [@Gilbert2010]. 
Beyond that, deep learning for emotion recognition could also facilitate public decision support with respect to politics and even education, as well as healthcare for individuals. For instance, affective computing can infer emotion concerning personal health conditions [@Anderson2011; @Desmet2013; @Greaves2013; @VanDerZanden2014] and during learning processes [@Rodriguez2012]. Notably, all of the prior references engage in affect-aware decision-making, but have not yet evaluated the use of deep learning. Implications for management and practice ---------------------------------------- Even though deep learning has gained considerable traction lately, its use cases outside of academia remain scarce. A possible reason is located in the complexity of operationalizing deep neural networks. While recurrent architectures have previously been applied to sentiment analysis, the task of emotion recognition requires several modifications in order to obtain a better-than-random performance. This specifically applies to the proposed bidirectional processing of texts, regularization, and loss functions that can handle highly imbalanced datasets. As a direct recommendation for use cases of affective computing, we propose a shift towards customized network architectures, even for fairly small datasets of around 1,000 training samples, as in our case. Altogether, this highlights the need for a thorough understanding by practitioners of the available tools in order to benefit from deep learning. Affective computing for linguistic materials yields new opportunities for business models and consumer-centered services [@Li.2011; @Doucet.2012; @Dai.2015; @Yin.2014]. Detecting and subsequently responding to the emotional states of users, customers, patients, and employees has the potential to significantly accelerate and improve management processes and optimize human-computer interactions. 
Here text remains a critical form of communication, while attempts have also been made to apply affective computing to speech or other multimodal input [@Calvo2010], including visual data [@Chen2017; @ElAyadi2011; @Shan2009]. Management should assess potential use cases in critical areas of operations from their own organizations. Our overview in provides illustrative examples, while further applications are likely to arise with recent methodological innovations. Implications for research ------------------------- The process of improving the performance of affective computing would benefit considerably from a rigorous suite of baseline datasets. In the status quo, a variety of datasets with distinct goals and purposes is commonly used for benchmarking methodological innovations for affective computing. For instance, our literature survey identified four different strategies for annotating, including simple labels, multi-class labels, and numerical scores. Moreover, the set of affective dimensions varied between two ([i.e.]{}, valence, arousal without explicitly naming emotions) and a set of 8 emotions ([e.g.]{}, anger, disgust, surprise). However, this directly links to challenges concerning comparability and generalizability. In this sense, a network architecture that has been found effective for one annotation scheme might not work out for other datasets. On top of that, different labels prohibit transfer learning and thus impede performance. We therefore suggest a standardized approach to annotations. According to our literature review, datasets for affective computing vary in size from 1,000 instances to 7,902, and yet all of them remain fairly small when compared to other applications of deep learning. As a result, this is known to limit the performance of bidirectional LSTMs and other deep neural network architectures, which generally require large-scale datasets. 
For instance, datasets for sentiment analysis, such as the one used for our transfer learning approach, consist of up to labeled samples. Future research should thus aim at creating larger datasets in order to enable the effective exploitation of deep learning. Conclusion {#sec:conclusion} ========== Affective computing allows one to infer individual and collective emotional states from textual data and thus offers an anthropomorphic path for the provision of decision support. Even though deep learning has yielded considerable performance improvements for a variety of tasks in natural language processing, na[ï]{}ve network architectures struggle with the task of emotion recognition. As a remedy, several modifications are presented in this paper: namely, bidirectional processing, dropout regularization, and weighted loss functions in order to cope with imbalances in the datasets. Our computational experiments span categorical and dimensional emotion models, which require tailored algorithmic implementations involving, [e.g.]{}, multi-class classification, as well as regression tasks and transfer learning. Our results show that pre-trained bidirectional LSTMs consistently outperform the baseline models from traditional machine learning. The performance improvements can even range up to in F1-score for classification and in MSE for regression. We propose sent2affect, a customized strategy of transfer learning that draws upon the different task of sentiment analysis (as opposed to different datasets, as is usually the case), which is responsible for further performance improvements of between and . Acknowledgements {#acknowledgements .unnumbered} ================ The authors gratefully acknowledge the financial support for Suzana Ilić from Prof. Kotaro Nakayama and Prof. Yutaka Matsuo, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan. 
[^1]: Kaggle: Twitter sentiment analysis, retrieved from <https://www.kaggle.com/c/twitter-sentiment-analysis2>, March 21, 2018. [^2]: The pre-trained word embeddings can be retrieved from <http://nlp.stanford.edu/data/glove.6B.zip>. [^3]: We use the acronym LSTM when referring to the unidirectional model. Whenever we refer to the bidirectional LSTM model, we use the explicit designation BiLSTM.
--- author: - 'Ying-Chun Wei' - 'Cheng-Min Zhang' - 'Yong-Heng Zhao' - 'Qiu-He Peng' - 'Xin-Ji Wu' - 'A-Li Luo [^1]' title: The meaning inferred from the spin period distribution of normal pulsars --- Introduction ============ Pulsars, a class of charming compact objects in the sky, are generally known to be born in supernova explosions. Exploring them provides an opportunity to understand fundamental physics and astrophysics (Cordes 2004). Many questions about pulsars deserve study. In this paper we concentrate on one attractive problem: how the number of pulsars is distributed in spin period. In section 2 we introduce the statistical method currently in common use for the spin periods of normal pulsars; in section 3 we point out a problem with this method and introduce the concept of “generations" of normal pulsars; in section 4 we introduce a new statistical method for the spin period distribution of normal pulsars; in section 5 we draw conclusions. Usual Statistical Method on the Spin Periods of Normal Pulsars at Present ========================================================================= ![The normal pulsar number distribution according to the log values of spin periods. The data are from the ATNF pulsar catalogue for pulsars with spin periods larger than 30 ms. See the internet Web: http://www.atnf.csiro.au/research/pulsars/psrcat (Manchester et al. 2005).[]{data-label="fig:taud"}](f1.eps){width="8cm"} As is well known, pulsar spin periods span roughly three orders of magnitude, from milliseconds to seconds. It is therefore customary to compile statistics for the log values of the spin periods (Manchester, Hobbs, Teoh & Hobbs 2005; Manchester 2009), or directly under a log coordinate for the spin periods (Manchester 2009), because the log function transforms the different scales of spin periods to a common scale. In Fig. 
\[fig:taud\] we illustrate this method for the normal pulsars whose spin periods exceed 30 ms (Lyne & Smith 2006) in the ATNF pulsar catalogue (Manchester et al. 2005). The distribution is fitted very well by the following Gaussian: $$y=y_0 + \frac{A}{w\sqrt{\pi/2}}e^{-2\left(\frac{x-x_c}{w}\right)^2},$$ where $y_0=11.741\pm3.963$, $A=243.163\pm8.562$, $w=0.678\pm0.020$, $x_c=-0.225\pm0.008$ are the fitting parameters. The coefficient of determination (COD) is 0.99438. The smaller the relative errors and the closer the COD is to 1, the better the fit. Apart from $y_0$, the relative errors of the other parameters are all less than $5\%$. It therefore seems safe to say that the normal pulsars are distributed Gaussianly in the log values of their spin periods. Concept of “Generations" of Pulsars =================================== We should not forget, however, that these normal pulsars were not born at the same epoch. In fact, they are born at a rate of roughly $1\sim3$ per century (Vranesevic et al. 2004; Faucher-Giguère & Kaspi 2006), a figure that may be up to five times larger given the uncertainties, and together they form a continuous current of pulsars in the Galaxy. At the head of this current, the newborn pulsars have a spin period distribution that is nowadays estimated to be about 0.1-0.5 s for up to $40\%$ of all pulsars (Vranesevic et al. 2004; Lorimer 2006). We may therefore guess that the spin period distribution of newborn normal pulsars has a pulse form, such as a Gaussian, with its peak at 0.1-0.5 s. Certainly, this pulse cannot extend arbitrarily close to 0, because there is a smallest rotation period below which a pulsar would break up; by equating the centrifugal force to gravity at the equator, it can be estimated to be as small as 1.5 ms (Lyne & Smith 2006). 
In the initial part of the pulsar current, which can be imagined as the “initial generation" of pulsars, the spin period distribution should resemble that of newborn pulsars. Because radiation carries away rotational energy (Gold 1968, 1969; Pacini 1968), pulsars gradually spin down, and the pulse-like spin period distribution of the initial generation drifts toward longer periods along the spin period axis. Along the way, some pulsars may cease to emit radio pulses owing to the extinction of the polar cascade process (Chen & Ruderman 1993; Hibschman, Johann & Arons 2001), and the shape of the pulse distribution may also change, as illustrated in Fig. \[fig:taud2\]. ![The sketch map of different pulsar generations at different spin periods.[]{data-label="fig:taud2"}](f2.eps){width="8cm"} Let us now focus on the bin sizes in Fig. \[fig:taud\]. The peak in Fig. \[fig:taud\] is at $10^{-0.225}\approx 0.6$ s, close to the assumed 0.1-0.5 s scale of the distribution peak for the initial generation of pulsars. We may therefore guess that there are not many generations of pulsars to the left of the peak in Fig. \[fig:taud\]. Moreover, the bins to the left of the peak are so narrow that each cannot even contain one pulsar generation; to some extent they reflect the inner structure of the spin period pulse of the first few generations. By contrast, the bins to the right of the peak in Fig. \[fig:taud\] are generally wide enough to contain several generations each. Thus the statistics to the right of the peak in Fig. \[fig:taud\] reflect how the pulsar numbers of different generations vary with spin period. It is now clear that the bin sizes in Fig. 
\[fig:taud\] are suited to two different tasks with different physical meanings: one is the distribution of spin periods within the pulse form of the initial generations; the other is the distribution of pulsar numbers among the different generations. The log transformation mixes the scales of these two distinct questions and consequently conflates them. As a result, the physical meaning of the Gaussian distribution in Fig. \[fig:taud\] is very limited. New Statistical Method on the Spin Periods of Normal Pulsars ============================================================ Since the observed pulsars include various generations, we can study the distribution of pulsar number among the generations, i.e., the distribution of pulsar number with spin period rather than with the log values of spin period. Fig. \[fig:taud3\] shows the result. It follows the exponential decay below with a remarkably high COD of 0.99992: $$y = A_1~e^{-x/t_1} + c,$$ where $A_1=2408.115\pm15.750$, $t_1=0.738\pm0.006$ s, $c=3.614\pm1.203$ in Fig. \[fig:taud3\]. The fit in Fig. \[fig:taud3\] is clearly better than that in Fig. \[fig:taud\]. ![The pulsar number distribution in different pulsar generations. The samples are the same as in Fig. \[fig:taud\].[]{data-label="fig:taud3"}](f3.eps){width="8cm"} We know that for pulsars, the greater the age, the longer the spin period and the weaker the radiative ability. Fig. \[fig:taud3\] tells us that the number of active pulsars gradually decreases with increasing spin period; it also implies that the number of active pulsars drops with time. If the spin period were linearly related to age, the number of active pulsars would also drop exponentially with time. However, the spin period does not have a simple relationship with time (Lyne & Smith 2006; Camenzind 2007; Gonthier, Van Guilder & Harding 2004; Contopoulos & Spitkovsky 2006). 
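As a numerical illustration of this procedure, the exponential model can be fitted with a standard least-squares routine, and the fitted e-folding scale $t_1$ then fixes a half-decay period via $P_{1/2}=t_1\ln 2$. The sketch below is illustrative only: it generates synthetic counts from the quoted best-fit curve rather than binning the actual ATNF catalogue.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, A1, t1, c):
    """Exponential decay used for the fit: y = A1*exp(-x/t1) + c."""
    return A1 * np.exp(-x / t1) + c

# Synthetic counts built from the quoted best-fit values (illustrative only)
x = np.linspace(0.05, 5.0, 40)                       # spin period bin centres (s)
rng = np.random.default_rng(0)
y = exp_model(x, 2408.115, 0.738, 3.614) + rng.normal(0, 5.0, x.size)

popt, _ = curve_fit(exp_model, x, y, p0=[2000.0, 0.5, 0.0])
A1, t1, c = popt
P_half = t1 * np.log(2)      # half-decay spin period, analogous to a half-life
```

With the quoted $t_1=0.738\pm0.006$ s this gives $P_{1/2}\approx0.51$ s, consistent within the fit uncertainties with the half-decay value quoted in the text.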
Despite this, we can still introduce a half-decay spin period for pulsars, analogous to the half-life of radioactive elements: its value inferred from Fig. \[fig:taud3\] is about $0.516\pm0.004$ s. Conclusion ========== Pulsars and radioactive elements belong to very different scales of the universe and have very different internal mechanisms, yet they obey similar decay laws: their activities drop exponentially, albeit in different variables, one in spin period and the other in time. Fig. \[fig:taud3\] does not take into account observational selection effects; the most important of these is luminosity, because the radiative capability of pulsars declines with age, and this effect is very difficult to correct for. Moreover, pulsars are high velocity objects (Vranesevic et al. 2004; Arzoumanian, Chernoff & Cordes 2002; Hobbs, Lorimer, Lyne & Kramer 2005), which would alter the number of pulsars observed in a given region of the sky, so accounting for the various effects would introduce large uncertainties. In spite of all this, we still believe that the law disclosed in Fig. \[fig:taud3\] is real, partly from the philosophical consideration that different scales in the universe might abide by similar laws. We thank G.J. Qiao, K.F. Wu, X.Y. Chen, H.B. Zhang, L.D. Zhang and J.J. Zhou for helpful discussions. This research has been supported by the NSF of China (No.10778611, No.10773017, No.10973021 and No. 10573026) and the National Basic Research Program of China (No. 2009CB824800). The authors express sincere thanks for the critical comments. 
Arzoumanian, Z., Chernoff, D.F., Cordes, J.M.: 2002, ApJ 568, 289 Camenzind, M.: 2007, Compact Objects in Astrophysics 284 (Springer Press) Chen, Kaiyou, Ruderman, M.: 1993, ApJ 408, 179 Contopoulos, I., Spitkovsky, A.: 2006, ApJ 643, 1139 Cordes, J.M.: 2004, New Astronomy Reviews 48, 1413 Faucher-Giguère, Claude-André, Kaspi, V.M.: 2006, ApJ 643, 332 Gold, T.: 1968, Nature 218, 731 Gold, T.: 1969, Nature 221, 25 Gonthier, P.L., Van Guilder, R., Harding, A.K.: 2004, ApJ 604, 775 Hibschman, Johann A., Arons, J.: 2001, ApJ 546, 382 Hobbs, G., Lorimer, D.R., Lyne, A.G., Kramer, M.: 2005, MNRAS 360, 974 Lorimer, D.R., et al.: 2006, MNRAS 372, 777 Lyne, A.G., Smith, F.G.: 2006, Pulsar Astronomy (Cambridge: Cambridge University Press) Manchester, R.N.: 2009, in Neutron Stars and Pulsars (ed Becker, W.) 19 (Springer Press) Manchester, R.N., Hobbs, G.B., Teoh, A., Hobbs, M.: 2005, AJ 129, 1993 Pacini, F.: 1968, Nature 219, 145 Vranesevic, N., et al.: 2004, ApJ 617, L139 [^1]: Corresponding author: zhangcm@bao.ac.cn; ycwei@bao.ac.cn
--- abstract: 'We consider the metastable dynamics of a flattened dipolar condensate. We develop an analytic model that quantifies the energy barrier to the system undergoing local collapse to form a density spike. We also develop a stochastic Gross-Pitaevskii equation (SGPE) theory for a flattened dipolar condensate, which we use to perform finite temperature simulations verifying the local collapse scenario. We predict that local collapses play a significant role in the regime where rotons are predicted to exist, and will be an important consideration for experiments looking to detect these excitations.' author: - 'E. B. Linscott' - 'P. B. Blakie' title: Thermally activated local collapse of a flattened dipolar condensate --- Introduction ============ Tremendous recent progress with trapping and cooling highly magnetic atoms has enabled the production of dipolar Bose-Einstein condensates (BECs) [@Griesmaier2005a; @Beaufils2008a; @Mingwu2011a; @Aikawa2012a]. In these condensates the atoms interact via an appreciable magnetic dipole-dipole interaction (DDI) that is both long-ranged and anisotropic, opening up a number of new many-body phenomena for exploration [@Baranov2008; @Lahaye_RepProgPhys_2009]. A flattened dipolar condensate is produced by applying tight external confinement along one direction, and can be used to stabilize the system against the attractive component of the dipolar interaction [@Koch2008a; @Muller2011a]. Novel predictions for dipolar condensates in this regime include density oscillating ground states [@Ronen2007a; @Lu2010a; @Asad-uz-Zaman2010a; @Martin2012a], roton-like excitations [@Santos2003a; @Ronen2007a; @Nath2010a; @Hufnagl2011a; @Blakie2012a; @Corson2013a; @JonaLasinio2013; @Bisset2013a; @Bisset2013b; @Fedorov2014a], modified collective and superfluid properties [@Wilson2010a; @Ticknor2011a; @Bismut2012a], and stable 2D bright solitons [@Pedri2005a]. 
Many of these predictions require having a condensate in the dipole-dominated regime, i.e. where the DDI is stronger than the short ranged contact interaction. Theoretical studies of this regime have mainly focussed on the elementary excitation spectrum, which can be calculated using Bogoliubov theory. However, density fluctuations in this regime can be large [@Blakie2013a; @Bisset2013a; @Baillie2014a] and recent work has shown that Bogoliubov theory may be quite limited in applicability, particularly at finite temperature [@Boudjemaa2013a]. To date, experiments in the flattened system have focused on quantifying the stability boundary [@Koch2008a; @Muller2011a], which can be explored by reducing the contact interaction (using Feshbach resonances) until the condensate becomes unstable. Theoretical work suggests that as the condensate crosses the stability boundary it undergoes a local collapse, in which it breaks up into a set of sharp density peaks [@Bohn2009a; @Wilson2009a] (also see [@Parker2009a]). In this paper we show that a dipolar BEC is metastable against local collapses even far from the stability boundary. To do this we develop an analytic model in which we consider sharp density spikes (i.e. a local collapse) forming on top of a condensate. This enables us to quantify the energy barrier to collapse. We then introduce a finite temperature dynamical model for the system by extending the SGPE formalism [@cfieldRev2008] to include DDIs. Our simulations with the SGPE demonstrate thermally activated local collapse events and support our density spike model. Our results indicate that metastability effects will be an important consideration for experiments aiming to verify the array of predictions that have been made for dipolar condensates in the flattened regime, such as the emergence of roton-like excitations. 
Model {#Sec:model} ===== Uniform ground state {#S:BGGPE} -------------------- We consider a dipolar BEC that is harmonically confined along the $z$ direction and unconfined in the radial plane. The condensate wave function $\psi_0$ satisfies the non-local Gross-Pitaevskii equation (GPE) $$\mu\psi_0({\mathbf{r}})=\left[h_{\mathrm{sp}}+\int d{\mathbf{r}}'U({\mathbf{r}}-{\mathbf{r}}')|\psi_0({\mathbf{r}}')|^2\right]\psi_0({\mathbf{r}}),\label{e:fullGPE}$$ where $\mu$ is the chemical potential and $$h_{\mathrm{sp}}=-\frac{\hbar^2\nabla^2}{2m}+\frac{m\omega_z^2z^2}{2},$$ is the single particle Hamiltonian, with $\omega_z$ being the axial trap frequency and $m$ the atomic mass. The atoms we consider are taken to have an appreciable magnetic dipole moment $\mu_m$ polarized along the $z$-axis by an external magnetic field. In this case the associated interaction potential is $ {U_{\mathrm{dd}}}({\mathbf{r}})=\frac{3g_d}{4\pi}{[1-3(\hat{\mathbf{z}}\cdot\hat{\mathbf{r}})^2]}/{r^3},$ where $g_d=\mu_0\mu_m^2/3$ is the DDI coupling constant and $\hat{\mathbf{r}}=\mathbf{r}/|\mathbf{r}|$. The particles can also interact by a short ranged contact interaction with coupling constant $g_s=4\pi a_s\hbar^2/m$, where $a_s$ is the scattering length, so that the full interaction is $U({\mathbf{r}})=g_s\delta({\mathbf{r}})+U_{\mathrm{dd}}({\mathbf{r}})$ (e.g. see [@Yi2000a; @Yi2001a; @Lahaye_RepProgPhys_2009]). The condensate solution to Eq. (\[e:fullGPE\]) takes the form $\psi_0({\mathbf{r}})=\sqrt{n_0}\chi_\sigma(z)$, where $n_0$ is the areal density, and $\chi_\sigma$ is a normalized axial mode. Here we approximate $\chi_\sigma$ as a Gaussian of the form $$\chi_\sigma(z)=\frac{1}{\pi^{1/4}\sqrt{\sigma l_z}}e^{-z^2/2\sigma^2 l_z^2},$$ with length scale $l_z=\sqrt{\hbar/m\omega_z}$. 
We treat $\sigma$ as a variational parameter to be determined by minimizing the energy functional $$\begin{aligned} E[\psi]&=\int\!d{\mathbf{r}}\,\psi^*({\mathbf{r}})\left[ h_{\mathrm{sp}}+\frac{1}{2}\int d{\mathbf{r}}'U({\mathbf{r}}-{\mathbf{r}}')|\psi({\mathbf{r}}')|^2\right]\psi({\mathbf{r}}),\label{e:Efun}\end{aligned}$$ which, upon substituting the Gaussian ansatz, gives $$E_\sigma=n_0A\hbar\omega_z\left[\frac{1}{4\sigma^2}+\frac{\sigma^2}{4}+\frac{\nu_s+2\nu_d}{2\sqrt{2\pi}\sigma}\right].\label{e:Esigma}$$ Here $A$ is the area of the system and we have introduced $\nu_s=n_0g_s/\hbar\omega_zl_z$ and $\nu_d=n_0g_d/\hbar\omega_zl_z$ as the dimensionless contact and DDI parameters, respectively. For $|\nu_s+2\nu_d|\ll1$ the minimum value of $\sigma$ approaches $1$, i.e. the quasi-2D regime [@Fischer2006a]. In general the variational Gaussian approach we use here has been shown to provide an accurate description even for large interaction parameter values [@Baillie2014b]. Using the value of $\sigma$ that minimizes Eq. (\[e:Esigma\]), the condensate chemical potential \[c.f. Eq. (\[e:fullGPE\])\] is given by $$\mu_\sigma = \hbar\omega_z\left[\frac{1}{4\sigma^2}+\frac{\sigma^2}{4}+\frac{\nu_s+2\nu_d}{\sqrt{2\pi}\sigma}\right].\label{e:Musigma}$$ Density spike model {#S:peakmodel} ------------------- We want to consider the energetics of the system forming density spikes on top of the flat condensate ground state. To do this we propose a variational ansatz for a condensate with a Gaussian density spike of the form $$\psi_s({\mathbf{r}})=\sqrt{n_0}\chi_\sigma(z)+\sqrt{n_0}\beta\frac{\exp\left[-\frac{1}{2}\left(\frac{z^2}{\sigma_z^2l_z^2}+\frac{\rho^2}{\sigma_\rho^2l_z^2}\right)\right]}{\pi^{3/4}\sigma_\rho \sqrt{\sigma_z l_z}},\label{e:model}$$ where $\bm{\rho}=(x,y)$ is the in-plane coordinate and the last term describes the spike in terms of dimensionless height $\beta$ and width parameters $\{\sigma_\rho,\sigma_z\}$ (see Fig. \[fig:pimple\]). 
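Both the condensate width $\sigma$ and the chemical potential $\mu_\sigma$ entering the spike energy are fixed by the one-dimensional minimization of $E_\sigma$ described above. A minimal numerical sketch (in units $\hbar\omega_z = l_z = 1$, per particle, i.e. dropping the constant prefactor $n_0 A$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def E_sigma(sigma, nu_eff):
    """Variational energy per particle of Eq. (E_sigma), with nu_eff = nu_s + 2*nu_d."""
    return 0.25 / sigma**2 + 0.25 * sigma**2 + nu_eff / (2 * np.sqrt(2 * np.pi) * sigma)

def optimal_sigma(nu_eff):
    """Axial width that minimizes the variational energy."""
    return minimize_scalar(E_sigma, bounds=(0.05, 20.0), args=(nu_eff,),
                           method="bounded").x

def mu_sigma(nu_eff):
    """Condensate chemical potential of Eq. (Musigma) at the optimal width."""
    s = optimal_sigma(nu_eff)
    return 0.25 / s**2 + 0.25 * s**2 + nu_eff / (np.sqrt(2 * np.pi) * s)
```

For $|\nu_s+2\nu_d|\ll1$ the minimizer approaches $\sigma=1$, recovering the quasi-2D limit noted above; a net repulsive interaction ($\nu_{\rm eff}>0$) broadens the axial profile, while a net attractive one compresses it.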
![(colour online) Visualisation of the density spike ansatz \[see Eq. (\[e:model\])\] illustrating the parameters used, with $c_w=(\pi^{3/4}\sigma_\rho\sqrt{\sigma_zl_z})^{-1}$. []{data-label="fig:pimple"}](Fig1.pdf){width="3.30in"} We consider a large system, so that a single spike has negligible effect on the condensate itself. Consequently, we take the condensate variational parameter $\sigma$ to be determined by minimizing Eq. (\[e:Esigma\]) irrespective of the peak (and hence $\sigma$ is a function of $\nu_s+2\nu_d$ only). The energy associated with forming a peak on top of a condensate background is then evaluated by substituting (\[e:model\]) in Eq. (\[e:Efun\]), which yields $$\begin{aligned} E_s\equiv& E[\psi_s]-E[\psi_0]-\mu_\sigma N_s,\nonumber \\ =&n_0 l_z^2 \hbar \omega_z \left\{2\sqrt{2 \pi}\beta \sigma_\rho \left(\frac{\sigma \sigma_z}{\sigma^2+\sigma_z^2}\right)^{3/2}\left(\sigma \sigma_z + \frac{1}{\sigma \sigma_z} \right)+\frac{\beta^2}{2}\left(\frac{\sigma_z^2}{2} + \frac{1}{2\sigma_z^2} + \frac{1}{\sigma_\rho^2}\right)\right. 
-\mu_\sigma\left(4\sqrt{2\pi}\beta\sigma_\rho\sqrt{\frac{\sigma\sigma_z}{\sigma^2+\sigma_z^2}}+\beta^2\right)\nonumber \\ &\hspace{1.5cm}+\frac{4\beta \sigma_\rho (\nu_s+2\nu_d)}{\sqrt{\frac{3}{2}\sigma\sigma_z+\frac{1}{2}\sigma^3/\sigma_z}} +\frac{3 \beta^2}{\sqrt{\pi(\sigma^2+\sigma_z^2)}}\left(\nu_s+ \frac{2}{3}\nu_d\left[1+f\left(\frac{\sqrt{\sigma^2+\sigma_z^2}}{\sigma}\frac{\sigma_\rho}{\sigma_z}\right)\right]\right)\nonumber \\ &\left.\hspace{2cm} +\frac{4\beta^3}{3\pi\sigma_\rho\sqrt{\frac{3}{2}\sigma \sigma_z+\frac{1}{2}\sigma_z^3/\sigma}}\left[\nu_s+ \nu_df\left(\sqrt{\frac{\sigma^2+\sigma_z^2}{\sigma^2+\frac{1}{3}\sigma_z^2}}\frac{\sigma_\rho}{\sigma_z}\right)\right] +\frac{\beta^4}{2(2\pi)^{3/2}\sigma_z \sigma_\rho^2}\left(\nu_s +\nu_d f(\sigma_\rho/\sigma_z)\right)\right\}\end{aligned}$$ where $$\begin{aligned} f(\kappa) &\equiv \frac{2\kappa^2 + 1}{\kappa^2 - 1}-\frac{3\kappa^2\arctan\left(\sqrt{\kappa^2 - 1}\right)}{\left(\kappa^2 - 1\right)^{3/2}}\end{aligned}$$ is a monotonically increasing function of $\kappa$ with $f(0)=-1$ and $f(\infty)=2$ [@Giovanazzi2003a]. The term $\mu_\sigma N_s$ accounts for the energy liberated by removing atoms from the condensate to form the spike, where the number of atoms in the spike is $$\begin{aligned} N_s&\equiv\int d{\mathbf{r}}(|\psi_s|^2-|\psi_0|^2)\nonumber \\ &= n_0 l_z^2\beta\!\left(4\sqrt{2\pi} \sqrt{\frac{\sigma \sigma_z \sigma_\rho^2}{\sigma^2+\sigma_z^2}}+\beta\right)\!.\end{aligned}$$ Some examples of the spike energy $E_s(\beta,\sigma_\rho,\sigma_z)$ are presented in Fig. \[fig:esurf\]. For $\nu_s>\nu_d$ \[Fig. \[fig:esurf\](a)\] the dipolar condensate is stable, in that the energy cost of forming a density spike is positive and increases with increasing $\beta$. In contrast for the dipole dominant regime $\nu_d>\nu_s$ \[Fig. \[fig:esurf\](b)\] the condensate is metastable: the energy can be lowered by the formation of a dense narrow spike. 
However, spikes of intermediate densities still cost energy, presenting a barrier to the formation of a high density spike. We note that our formalism will be invalid for an extremely dense spike, but is adequate for quantifying the properties of the energy barrier and the system’s passage over it. ![(colour online) Spike formation energy surface $E_s(\beta,\sigma_\rho,\sigma_z)$. Results shown as a function of $\{\sigma_\rho,\beta\}$ for (a) stable regime $\nu_d<\nu_s$ , with $\nu_d = 0.75$, $\nu_s = 1$ and (b) metastable regime $\nu_d>\nu_s$, with $\nu_d = 1.4$, $\nu_s = -0.3$. In (a) we set $\sigma_z = \sigma = 1.22$ for simplicity. In (b), we choose $\sigma_z = 1.35$, which minimizes the activation energy $E_A$. (c) Spike energy crossing the saddle of the energy surface along path shown in (b). Activation energy $E_A$ and the value of $\beta$ at the activation point ($\beta_A$) are indicated.[]{data-label="fig:esurf"}](Fig2.pdf){width="3.2in"} In Fig. \[fig:esurf\](b) we indicate a path along which a high density peak might form. This path crosses the energy barrier at its lowest point, with the value of the energy along this path shown in Fig. \[fig:esurf\](c). We define the minimum height of the energy barrier \[at the saddle point of the function $E_s(\beta,\sigma_\rho,\sigma_z)$\] as the *activation energy* $E_A$, and label the associated value of $\beta$ at this point as $\beta_A$, corresponding to a peak areal density of $$n_A = n_0\left(1+\frac{2\beta_A}{\pi^{1/2}\sigma_\rho}\sqrt{\frac{2 \sigma \sigma_z}{\sigma^2+\sigma_z^2}}+\frac{\beta_A^2}{\pi \sigma_\rho^2}\right).$$ ![(colour online) Phase diagram and metastable energy barrier. The stable, metastable regimes (which includes the roton regime), and regions of instability are indicated. Contours indicate values of the energy barrier ${E}_A$ in units of $n_0l_z^2\hbar\omega_z$. 
[]{data-label="fig:EA"}](Fig3.pdf){width="3.40in"} The activation energy varies as a function of the dimensionless interaction parameters $\nu_s$ and $\nu_d$, and contours of this are shown in Fig. \[fig:EA\]. For reference we have placed these contours on top of a stability diagram for the system, obtained by examining the behaviour of the condensate quasiparticles as a function of their in-plane wave vector $k_\rho$ (see [@Santos2003a; @Blakie2012a; @Baillie2014b] for additional discussion of these regimes). Notably a number of stable and unstable regions can be identified by the quasiparticle spectrum: In the *phonon instability* region a long wavelength ($k_{\rho}\to0$) quasiparticle becomes dynamically unstable (i.e. its energy becomes imaginary). In the *roton instability* region a short wavelength quasiparticle (i.e. $k_{\rho}\sim1/l_z$) is dynamically unstable. The *metastable region* occurs when interactions are dipole-dominated $\nu_d>\nu_s$ and all the quasiparticles have real positive energies. It is denoted as metastable because, as quantified by our model, the condensate is nevertheless able to lower its energy by forming density spikes, even though this is not revealed in the quasiparticle spectrum. The *roton* region is part of the metastable region, and occurs when the dispersion relation has a roton-like feature i.e. a local minimum at non-zero $k_{\rho}$. The results of Fig. \[fig:EA\] indicate that in the regime where rotons occur the activation energy $E_A$ is typically quite low, so that we would expect density spikes to form via thermal activation or tunneling. The results also show that in the roton regime and for larger values of $\nu_s$, the activation energy increases. We note that for $\nu_d=-\frac{1}{2}\nu_s$ (i.e. the upper boundary of the phonon instability region) the effective long wavelength interaction \[c.f last term in Eq. (\[e:Esigma\])\] is zero, and $E_A$ approaches 0. 
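The regions of this diagram can be classified directly from the sign of the squared Bogoliubov energies. The sketch below assumes the standard quasi-2D dispersion $\epsilon_k^2 = \frac{\hbar^2k^2}{2m}\left[\frac{\hbar^2k^2}{2m}+2n_0\tilde{U}_{\mathrm{2D}}(k)\right]$ with a Gaussian axial profile of width $l_z$ (i.e. $\sigma=1$), in units $\hbar\omega_z = l_z = 1$; the scaled complementary error function `erfcx(Q) = exp(Q^2)*erfc(Q)` avoids overflow at large $k$:

```python
import numpy as np
from scipy.special import erfcx

def n0_U2D(k, nu_s, nu_d):
    """n0 * U_2D(k) in units of hbar*omega_z for a Gaussian axial profile (sigma = 1)."""
    Q = k / np.sqrt(2)
    return (nu_s + nu_d * (2.0 - 3.0 * np.sqrt(np.pi) * Q * erfcx(Q))) / np.sqrt(2 * np.pi)

def eps2(k, nu_s, nu_d):
    """Squared Bogoliubov energy; a negative value signals dynamical instability."""
    ek = 0.5 * k**2                      # free-particle energy, l_z = 1
    return ek * (ek + 2.0 * n0_U2D(k, nu_s, nu_d))

def classify(nu_s, nu_d, kmax=10.0, nk=2000):
    """Label a (nu_s, nu_d) point by scanning the spectrum on a k grid."""
    k = np.linspace(1e-4, kmax, nk)
    e2 = eps2(k, nu_s, nu_d)
    if np.any(e2 < 0):
        # long-wavelength instability shows up at the smallest k
        return "phonon unstable" if e2[0] < 0 else "roton unstable"
    return "metastable" if nu_d > nu_s else "stable"
```

For example, `classify(-0.301, 1.404)` (the interaction parameters used for the SGPE simulations later in the text) returns `"metastable"`: the spectrum is everywhere real and positive even though the interactions are dipole-dominated.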
For the case $\nu_d<-\frac{1}{2}\nu_s$ the effective long wavelength interaction is attractive and the condensate unstable to a long-wavelength phonon collapse. It is worth noting that within this regime it has been predicted that stable bright solitons should exist (e.g. see [@Pedri2005a]). SGPE simulations ================ To verify and explore the local instability predicted by our Gaussian ansatz, we now proceed to consider a finite temperature dynamical description of a planar dipolar condensate, based on the SGPE formalism. SGPE theory for planar dipolar BEC ---------------------------------- The SGPE formalism treats the thermal dynamics of the low energy modes of a partially condensed Bose field. Essentially the formalism provides a classical field (i.e. Gross-Pitaevskii-like evolution) for the low energy modes, with additional damping and noise terms to describe the coupling to high energy (non-classical) modes of the system (e.g. see [@Stoof2001a; @Gardiner2003a; @Bradley2008a; @cfieldRev2008; @Proukakis2008a]). The SGPE evolution of this system is given by $$\begin{aligned} d\Psi = \mathcal{P}\!\left\{-\frac{(i+\gamma)}{\hbar}(\mathcal{L}-\mu)\Psi\,dt+\!\sqrt{2\gamma k_BT/\hbar}\,dW({{\bm{\rho}}})\!\right\}\!,\label{e:SGPE}\end{aligned}$$ where $\Psi=\Psi({{\bm{\rho}}})$ is the quasi-2D classical field for the system, with ${{\bm{\rho}}}=(x,y)$, $$\begin{aligned} \mathcal{L}\Psi &= -\frac{\hbar^2\nabla_{{{\bm{\rho}}}}^2}{2m}\Psi+\mathcal{F}^{-1}_{{{\bm{\rho}}}}\left\{\tilde{U}_{\mathrm{2D}}({\mathbf{k}}_{\rho})\mathcal{F}_{{{\bm{\rho}}}}\{|\Psi({{\bm{\rho}}})|^2\}\right\}\Psi,\label{e:LGPE}\end{aligned}$$ is the effective 2D Gross-Pitaevskii operator and $\mathcal{F}_{{{\bm{\rho}}}}$ is the in-plane Fourier transform. 
To obtain this form we have integrated out the $z$-dimension, resulting in the effective 2D interaction potential in $k_\rho$-space $$\begin{aligned} \tilde{U}_{\mathrm{2D}}({\mathbf{k}}_{\rho}) &\equiv \int dk_z \tilde{U}({\mathbf{k}}) \mathcal{F}_z\left\{|\chi_\sigma(z)|^2\right\}, \\ &=\frac{1}{\sqrt{2\pi} l_z}\left[g_s+g_d(2-3\sqrt{\pi}Qe^{Q^2}\mathrm{erfc}\,Q)\right]\end{aligned}$$ where $Q=k_\rho l_z/\sqrt{2}$. The stochastic term $dW$ is a complex Gaussian noise satisfying $\langle dW\rangle=\langle dW^2\rangle=0$, $\langle dW({{\bm{\rho}}})dW^*({{\bm{\rho}}}')\rangle=\delta({{\bm{\rho}}}-{{\bm{\rho}}}')dt$. In Eq. (\[e:SGPE\]) a projector $\mathcal{P}$ appears which is used to restrict the evolution to the low energy appreciably occupied modes of the field. Because we consider a uniform planar system this is implemented as a radially symmetric cutoff $k_{\mathrm{cut}}$ in wave-vector space, i.e. the low energy region evolved is restricted to parts of $\Psi$ with $|{\mathbf{k}}_\rho|<k_{\mathrm{cut}}$. The parameter $\gamma$ describes the coupling to high energy modes (treated as a reservoir at temperature $T$ and chemical potential $\mu$) that have been eliminated from $\Psi$ by the projector. For the case of contact interactions $\gamma\sim (a_s/\lambda_{\mathrm{dB}})^2$, where $\lambda_{\mathrm{dB}}=h/\sqrt{2\pi mk_BT}$ [@Rooney2012a]. A detailed microscopic derivation of the SGPE theory along the lines of [@Gardiner2003a] has not been performed for the case of a planar dipolar gas, however the theory is phenomenologically justified for our purposes of studying dynamics near equilibrium: the SGPE theory is a Langevin equation that provides a grand-canonical classical field description of the low energy modes of the field, with the damping (being the term in (\[e:SGPE\]) proportional to $\gamma$) and noise (the term proportional to $\sqrt{\gamma}$) being related through the fluctuation dissipation theorem[^1]. 
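A minimal time-stepping sketch of the SGPE on a periodic grid is given below. It uses a plain projected Euler-Maruyama step (not the semi-implicit algorithm used for the actual simulations), with $\hbar=m=1$; the arrays `k2`, `Uk` and the projector `mask` are assumed precomputed on the FFT grid, and `dV` is the grid cell area:

```python
import numpy as np

def sgpe_step(psi, dt, mu, gamma, kBT, k2, Uk, mask, dV, rng):
    """One projected Euler-Maruyama step of the SGPE (hbar = m = 1).
    k2: |k_rho|^2 on the FFT grid; Uk: U_2D(k); mask: |k_rho| < k_cut projector."""
    kin = np.fft.ifft2(0.5 * k2 * np.fft.fft2(psi))               # kinetic term
    pot = np.fft.ifft2(Uk * np.fft.fft2(np.abs(psi) ** 2)) * psi  # DDI convolution
    Lpsi = kin + pot
    # complex Gaussian noise with <dW(r) dW*(r')> = delta(r - r') dt on the grid
    xi = rng.standard_normal(psi.shape) + 1j * rng.standard_normal(psi.shape)
    dW = xi * np.sqrt(dt / (2 * dV))
    dpsi = -(1j + gamma) * (Lpsi - mu * psi) * dt + np.sqrt(2 * gamma * kBT) * dW
    return psi + np.fft.ifft2(mask * np.fft.fft2(dpsi))           # apply projector P

# Sanity check: a uniform condensate with mu = n0 * U_2D(0) and gamma = 0
# is a stationary solution of the deterministic part of the equation.
N = 16
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0)
KX, KY = np.meshgrid(k, k, indexing="ij")
psi0 = np.ones((N, N), dtype=complex)
out = sgpe_step(psi0, 1e-3, 0.5, 0.0, 0.3, KX**2 + KY**2,
                np.full((N, N), 0.5), np.ones((N, N), bool), 1.0,
                np.random.default_rng(1))
```

FFT-based evaluation of the interaction term implements the convolution in Eq. (LGPE), and the `mask` restricts the update to the low energy region $|\mathbf{k}_\rho|<k_{\mathrm{cut}}$, as required by the projector $\mathcal{P}$.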
In formulating the SGPE theory for the planar system we have made the quasi-2D approximation, so that all motion in the $z$-direction is frozen in the harmonic oscillator ground state. Simulations ----------- ### Uniform simulation scheme We perform our simulations of Eq. (\[e:SGPE\]) on a square domain of area $A=L\times L$, where $L$ is the side length, and subject to periodic boundary conditions. The classical field can therefore be represented effectively in a plane wave basis, $$\Psi({{\bm{\rho}}},t)=\sum_{{\mathbf{k}}_\rho} c_{{\mathbf{k}}_\rho}(t)\frac{e^{i {{\mathbf{k}}_\rho} \cdot {{\bm{\rho}}}}}{\sqrt{A}},$$ where the in-plane wave vectors are ${{\mathbf{k}}_\rho}=2\pi(n_x,n_y)/L$, $n_x,n_y\in\mathbb{Z}$, and the $c_{{\mathbf{k}}_\rho}$ are complex time-dependent amplitudes. The numerical scheme used to simulate the SGPE is the 2D version of the fast Fourier transform-based algorithm discussed in Sec. III of Ref. [@Blakie2008a], with an additional step introduced to evaluate the convolution involving the ${\mathbf{k}}$-dependent interaction \[see Eq. (\[e:LGPE\])\]. ### Initial condition For our initial condition we sample a randomized state constructed from a condensate and Bogoliubov quasiparticles according to $$\begin{aligned} \Psi({{\bm{\rho}}},0)=\sqrt{n_0}\!+\!\sum_{{\mathbf{k}}_\rho} \left(u_{{\mathbf{k}}_\rho} \alpha_{{\mathbf{k}}_\rho} - v_{-{\mathbf{k}}_\rho} \alpha_{-{\mathbf{k}}_\rho}^*\right)\frac{e^{i {{\mathbf{k}}_\rho} \cdot {{\bm{\rho}}}}}{\sqrt{A}} ,\end{aligned}$$ where $\alpha_{{\mathbf{k}}_\rho}= \sqrt{\frac{k_B T}{2 \epsilon_{{\mathbf{k}}_\rho}}}(u_r+iu_i)$, with $u_r$ and $u_i$ being normally distributed random numbers generated for every ${{\mathbf{k}}_\rho}$. 
In the above expression we have introduced the Bogoliubov quasiparticle energy $\epsilon_{{\mathbf{k}}_\rho}$ and amplitudes $\{u_{{\mathbf{k}}_\rho},v_{{\mathbf{k}}_\rho}\}$, which are $$\begin{aligned} \epsilon_{{\mathbf{k}}_\rho}&=\sqrt{\frac{\hbar^2k_\rho^2}{2m}\left[\frac{\hbar^2k_\rho^2}{2m}+2n_0\tilde{U}_{\mathrm{2D}}({{\mathbf{k}}_\rho})\right]},\\ u_{{\mathbf{k}}_\rho}&=\sqrt{\frac{1}{2}\left(\frac{\frac{\hbar^2{k_\rho}^2}{2m}+ n_0\tilde{U}_{\mathrm{2D}}({{\mathbf{k}}_\rho})}{\epsilon_{{\mathbf{k}}_\rho}}+1\right)},\\ v_{{\mathbf{k}}_\rho}&=\sqrt{\frac{1}{2}\left(\frac{\frac{\hbar^2{k_\rho}^2}{2m}+ n_0\tilde{U}_{\mathrm{2D}}({{\mathbf{k}}_\rho})}{\epsilon_{{\mathbf{k}}_\rho}}-1\right)}\mathrm{sign}\left[\tilde{U}_{\mathrm{2D}}({{\mathbf{k}}_\rho})\right]. \end{aligned}$$ This choice of initial state ensures that every quasiparticle mode is occupied according to the classical limit of the Bose-Einstein distribution, and we find that it changes little when allowed to equilibrate via the SGPE. ### Simulation parameters For the simulations we present we take $L=80\,l_z$ and use a cutoff momentum of $k_{\mathrm{cut}}=\sqrt{10}/l_z$. For this choice $5097$ plane wave modes are retained in the classical region for which the dynamics are simulated. We focus on the case of a condensate of density $n_0=4/l_z^2$, with interaction parameters $\nu_s=-0.301$, $\nu_d=1.404$, which is in the metastable regime, with $E_A=3.28\,\hbar\omega_z$, $\beta_A = 1.54$. The SGPE simulations are performed using reservoir parameters $\mu=\hbar \omega_z$ and temperatures in the range 0.2 to 0.45 $\hbar \omega_z/k_B$. We find that the condensate fraction of the field $\Psi$ varies from about $0.95$ at $T=0.2\hbar\omega_z/k_B$ to $0.88$ at $T=0.45\hbar\omega_z/k_B$. The results we present are for the case of $\gamma=0.1$. SGPE results ------------ ### Observed dynamics An example of the density profile during a typical SGPE evolution is shown in Fig. \[fig:sim\_data\](a).
The noisy density pattern reveals the fluctuating thermal modes in the low energy region, and is similar to the typical results of SGPE evolution in the case of contact interactions (e.g. see Fig. 2 of [@Davis2002a]). However, for this dipolar simulation in the metastable regime, we eventually find that a density spike emerges \[see Fig. \[fig:sim\_data\](b)\], which persists in the field. It is useful to define the instantaneous peak density of the field $$n_{\mathrm{peak}}(t)=\max_{\boldsymbol{\rho}}\left\{|\Psi(\boldsymbol{\rho},t)|^2\right\},$$ i.e. as the maximum density occurring at any grid point. In Fig. \[fig:sim\_data\](c) we quantify the behaviour of $n_{\mathrm{peak}}$ in the evolution leading up to the density spike forming: this formation is clearly revealed by the sudden onset of rapid growth of $n_{\mathrm{peak}}$ at $t\approx45/\omega_z$. To put these values of peak density into context, in Fig. \[fig:sim\_data\](d) we show the probability density function for values of density occurring in the field. This is obtained by making a histogram of the density values occurring at every grid point using the field sampled at a discrete set of times prior to the collapse. This density distribution reveals that the most likely density is $\sim4/l_z^2 = n_0$. The thermal fluctuations in the field give rise to the spread in the distribution function around the most likely value, and we emphasize that the spike formation proceeds through values that are out in the tails of this distribution \[as indicated in Fig. \[fig:sim\_data\](d)\]. The time it takes for a spike to form is stochastic and can vary significantly between different SGPE simulations for identical parameters. Spike formation times tend to get shorter the closer the system is to the roton instability boundary and as the temperature increases. Once formed, the spikes grow rapidly as shown in Fig. \[fig:sim\_data\](c).
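Both diagnostics described above (the peak density $n_{\mathrm{peak}}$ and the pooled density histogram of Fig. \[fig:sim\_data\](c,d)) are simple to compute from sampled fields; a minimal sketch in NumPy (function names are our choices):

```python
import numpy as np

def peak_density(psi):
    """n_peak = max over the grid of |Psi|^2 at one instant."""
    return np.max(np.abs(psi) ** 2)

def density_pdf(field_samples, bins=100):
    """Probability density function of |Psi|^2, pooled over every grid
    point of fields sampled at a discrete set of times."""
    dens = np.concatenate([np.abs(f).ravel() ** 2 for f in field_samples])
    pdf, edges = np.histogram(dens, bins=bins, density=True)
    return pdf, edges
```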
Overall these qualitative observations are consistent with the spikes occurring as a thermally activated crossing of the energy barrier, consistent with our simple model of Sec. \[Sec:model\]. ![ (color online) Field density and a typical spike formation event. The field density $|\Psi|^2$ is shown (a) at $t=15/\omega_z$ (prior to spike formation) and (b) at $t=44/\omega_z$ (during spike formation). The red circle indicates the spike location. (c) The peak density in the system during the simulations, revealing the sudden formation of a spike at $t\approx45/\omega_z$. The red crosses indicate the two times corresponding to the fields plotted in (a) and (b). (d) The distribution of densities across the simulation cell prior to collapse. The red arrow indicates $n_A = 22.3/l_z^2$. The simulation parameters were $T = 0.2\,\hbar\omega_z/k_B$, $\nu_s=-0.301$, and $\nu_d=1.404$. []{data-label="fig:sim_data"}](Fig4.pdf){width="3.40in"} ### Characterizing spike formation It is evident, particularly from Fig. \[fig:sim\_data\](c) and (d), that spike formation is due to fluctuations in density to large values. We aim to measure the correlations between a peak density of some value occurring in the field and a spike forming. To do this we calculate the probability that a spike forms within a time interval of $\delta t=5/\omega_z$ after a value of $n_{\mathrm{peak}}$ occurs in the field. We take $|\Psi|^2 > 30/l_z^2$ as an unambiguous measure of a spike having formed in the system, as this density was only ever observed to occur once a spike had formed and was growing rapidly. The probability that a spike forms was then calculated using 36 trajectories of the SGPE for the parameters of Fig. \[fig:sim\_data\] with the results shown in Fig. \[fig:np\]. These indicate that if a density fluctuates to a value exceeding $\sim 16/l_z^2$ then a spike is likely to form.
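The conditional probability just described can be estimated directly from recorded $n_{\mathrm{peak}}(t)$ trajectories; a sketch of the estimator (function name, argument layout, and the step-based window are our choices):

```python
import numpy as np

def spike_probability(npeak_trajs, thresholds, dt_steps, spike_level=30.0):
    """Estimate P(spike within delta_t | n_peak crosses a threshold).

    npeak_trajs : list of 1D arrays of n_peak(t) from independent runs.
    dt_steps    : the window delta_t expressed in time steps.
    A spike is declared when n_peak exceeds spike_level (30/l_z^2, the
    unambiguous criterion used in the text).
    """
    probs = []
    for thr in thresholds:
        hits, events = 0, 0
        for traj in npeak_trajs:
            for i in np.flatnonzero(traj >= thr):
                events += 1
                if np.any(traj[i:i + dt_steps + 1] >= spike_level):
                    hits += 1
        probs.append(hits / events if events else np.nan)
    return np.array(probs)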
This is lower than, but comparable to, the density at the activation point ($n_A = 22.3/l_z^2$) as predicted by our Gaussian model[^2]. We also note that the typical widths of the observed spikes in the SGPE simulations are in quantitative agreement with the value of $\sigma_\rho$ predicted by the model at the activation point. ![(color online) The probability that a density spike forms within a time interval of $\delta t=5/\omega_z$ after a particular peak density $n_{\mathrm{peak}}$ occurs in the simulation. Calculations for $T=0.2\,\hbar\omega_z/k_B$.[]{data-label="fig:np"}](Fig5.pdf){width="3.20in"} Finally, we consider the influence of temperature on the rate at which spikes form. We define the mean spike formation time $\bar{t}_s$ to be the average evolution time until a spike forms, and calculate it by averaging the individual spike formation times obtained from 10 – 20 SGPE simulations for each parameter set. We present results for the temperature dependence of $\bar{t}_s$ in Fig. \[fig:tvT\] for two sets of interaction parameters, and for a range of temperatures. These results demonstrate that the mean spike formation time scales as $\bar{t}_s\sim\exp(c\hbar \omega_z/k_BT)$, which corresponds to Arrhenius’ scaling with temperature (e.g. see [@GardinerStochMethods]), where we take $c$ to be a fit parameter. The fits to the SGPE results give $c = 1.25 \pm 0.09$ and $4.1 \pm 0.4$. For comparison, the Gaussian model predicts activation energies of $E_A=3.28\hbar\omega_z$ and $E_A=5.51\hbar\omega_z$ respectively. Thus we see that as the metastable energy barrier increases, the rate of spike formation decreases. We have not systematically studied the effect of changing $\gamma$, but in simulations where we reduced $\gamma$ by two orders of magnitude[^3] we found that the mean peak formation time was changed by about a factor of 2.
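The Arrhenius scaling is extracted by a straight-line fit of $\log\bar{t}_s$ against $1/T$; a minimal sketch (function name ours):

```python
import numpy as np

def arrhenius_fit(T, t_bar):
    """Fit t_bar = A * exp(c / T) by a straight line in (1/T, log t_bar).

    T is in units of hbar*omega_z/kB, so the slope c plays the role of an
    activation energy in units of hbar*omega_z.
    """
    slope, intercept = np.polyfit(1.0 / np.asarray(T),
                                  np.log(np.asarray(t_bar)), 1)
    return slope, np.exp(intercept)
```

On synthetic data generated with a known $c$ the fit recovers the slope exactly, which is a quick sanity check before applying it to noisy simulation averages.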
![ (color online) Temperature-dependence of the mean peak formation time $\bar t_s$, plotted here for two different sets of interaction parameters: (circles) $\nu_s=-0.301$, $\nu_d=1.404$ (as in earlier results), and (triangles) $\nu_s= -0.201$, $\nu_d= 1.354$. The linear fits have slopes of $1.25 \pm 0.09$ and $4.1 \pm 0.4$. []{data-label="fig:tvT"}](Fig6.pdf){width="3.2in"} Conclusion and Outlook ====================== In this paper we have considered the energetics and finite temperature dynamics of a flattened dipolar condensate. By developing an analytic model we show that it is energetically favorable for density spikes to form in this system in the metastable dipole-dominated regime, and we have characterized the energy barrier to formation as a function of the interaction parameters. Notably, our results predict that the role of local density spikes will be important in the regime where rotons are predicted to exist in the elementary excitation spectrum. Developing the SGPE theory for this system, we have shown that thermal fluctuations can nucleate density spikes, and that their properties are consistent with our analytic model. The density spikes we discuss here realize a local collapse scenario [@Bohn2009a], whereby atoms far away from the spike remain unaffected (c.f. global collapse for condensates with attractive contact interactions [@Donley2001a]). Our theory here has only considered the formation dynamics of the spike, and does not provide a consistent model of the spike after it forms (and having passed beyond the energy barrier). It is likely that the atoms within the spike will be lost by three-body recombination (increased significantly due to the high density in the spike), and will lead to heating in the system. 
Because the number of atoms in a given spike is a small fraction of the system, the development of a single spike will not necessarily be detrimental to the condensate, and many such local collapses may be required to heat the condensate. Qualitatively, such a scenario seems consistent with the experiments of Koch *et al.* [@Koch2008a]. For example, in Fig. 2 of [@Koch2008a] a continuous decrease in the condensate number was observed as the stability boundary was approached. Indeed, this suggests that condensate lifetime measurements would be a possible avenue for experiments to investigate the energy barrier to local collapse in the dipole-dominated regime. It is useful to put the parameters of our calculations into the context of current experiments. The case considered in Fig. \[fig:sim\_data\] corresponds to the central region of a $55\times10^3$ atom $^{164}$Dy condensate in a 3D harmonic trap with frequencies of $(f_\rho,f_z)=(15,10^3)$ Hz, and scattering length $a_s=-28\,a_0$, where $a_0$ is the Bohr radius. Translating the results of Fig. \[fig:tvT\] for this case (i.e. the filled circle results) gives that at a temperature of $10\,$nK the mean spike formation time $\bar t_s$ will be $\sim 6\,$ms, decreasing to $0.2\,$ms at $25\,$nK. That said, we emphasize that a precise model of the experimental regime will require accounting for the effects of radial trapping. An important extension of the work in this paper will be to develop a more detailed analytic theory of the collapse dynamics. For example, the stochastic Lagrangian approach used in Ref. [@Duine2001a] could be extended to the dipolar case. Acknowledgments: {#acknowledgments .unnumbered} ================ We thank D. Baillie for his assistance, and A. S. Bradley for useful discussions. Support by the Marsden Fund of New Zealand (contract number UOO1220) is gratefully acknowledged.
References {#references .unnumbered}
==========

- doi:10.1103/PhysRevLett.94.160401
- doi:10.1103/PhysRevA.77.061601
- doi:10.1103/PhysRevLett.107.190401
- doi:10.1103/PhysRevLett.108.210401
- doi:10.1016/j.physrep.2008.04.007
- http://stacks.iop.org/0034-4885/72/126401
- doi:10.1038/nphys887
- doi:10.1103/PhysRevA.84.053601
- doi:10.1103/PhysRevLett.98.030406
- doi:10.1103/PhysRevA.82.023622
- http://stacks.iop.org/1367-2630/12/i=6/a=065022
- doi:10.1103/PhysRevA.86.053623
- doi:10.1103/PhysRevLett.90.250403
- doi:10.1103/PhysRevA.81.033626
- doi:10.1103/PhysRevLett.107.065303
- doi:10.1103/PhysRevA.86.021604
- doi:10.1103/PhysRevA.87.051605
- doi:10.1103/PhysRevA.88.013619
- doi:10.1103/PhysRevLett.110.265302
- doi:10.1103/PhysRevA.88.043606
- doi:10.1103/PhysRevLett.104.094501
- doi:10.1103/PhysRevLett.106.065301
- doi:10.1103/PhysRevLett.109.155302
- doi:10.1103/PhysRevLett.95.200404
- doi:10.1103/PhysRevA.88.013638
- doi:10.1103/PhysRevA.87.025601
- doi:10.1134/S1054660X09040021
- doi:10.1103/PhysRevA.80.023614
- doi:10.1103/PhysRevA.79.013617
- doi:10.1103/PhysRevA.61.041604
- doi:10.1103/PhysRevA.63.053607
- doi:10.1103/PhysRevA.73.031602
- http://stacks.iop.org/1464-4266/5/i=2/a=381
- doi:10.1023/A:1017519118408
- doi:10.1103/PhysRevA.77.033616
- doi:10.1103/PhysRevA.86.053634
- doi:10.1103/PhysRevE.78.026704
- doi:10.1103/PhysRevA.87.043620
- doi:10.1103/PhysRevA.65.013603

[^1]: It is worth noting that equilibrium properties are independent of $\gamma$.

[^2]: This is the model discussed in Sec. \[Sec:model\], but with $\sigma =\sigma_z=1$, consistent with the quasi-2D restriction of the SGPE model.

[^3]: In this small $\gamma$ limit the theory reduces to the so-called projected-GPE theory or classical field method (see [@Pawowsk2013a]), providing a micro-canonical description of the low energy system modes.
--- abstract: 'Recent work has shown that convolutional neural networks (CNNs) can be used to estimate optical flow with high quality and fast runtime. This makes them preferable for real-world applications. However, such networks require very large training datasets. Engineering the training data is difficult and/or laborious. This paper shows how to augment a network trained on an existing synthetic dataset with large amounts of additional unlabelled data. In particular, we introduce a selection mechanism to assemble from multiple estimates a joint optical flow field, which outperforms that of all input methods. The latter can be used as proxy-ground-truth to train a network on real-world data and to adapt it to specific domains of interest. Our experimental results show that the performance of networks improves considerably, both in cross-domain and in domain-specific scenarios. As a consequence, we obtain state-of-the-art results on the KITTI benchmarks.' author: - 'Osama Makansi^\*^' - 'Eddy Ilg^\*^' - Thomas Brox bibliography: - 'egbib.bib' title: | FusionNet and AugmentedFlowNet:\ Selective Proxy Ground Truth\ for Training on Unlabeled Images --- Introduction ============ Like all deep learning applications that follow the supervised learning paradigm, the success of learning optical flow estimation stands and falls with the availability and quality of training data. In case of optical flow, the creation of ground-truth annotation on real images is extremely tedious and virtually impossible on large datasets. For this reason, state-of-the-art networks for optical flow estimation, such as FlowNet 2.0 [@flownet2] and PWC-Net [@PWCnet] have been trained on synthetically rendered images. These networks tend to generalize comparatively well to real images – in contrast to semantic tasks, such as object detection or semantic segmentation.
This is because correspondence estimation is different from recognition and does not depend so much on the content of the images. In fact, optical flow estimation is possible without any learning, thus not requiring any training data. There is a long history of unsupervised optical flow methods that implement the concept of correspondence. These classical methods perform equally well as a state-of-the-art optical flow network, yet with significantly higher runtimes. The advantage of learning comes in when correspondences cannot be established easily and priors are needed to make decisions. Typical examples are areas in the image that have homogeneous color (aperture problem) or areas that are occluded in the other image. Works from the pre-learning era used handcrafted regularizers [@schunck; @meminperez] and corresponding optimization heuristics to hallucinate optical flow in these areas. Learning such priors is much more elegant and also more successful: networks tend to outperform these classical techniques especially in occluded areas. However, such learning of priors is no longer independent of the image content: while basic hallucination strategies for occluded regions can be estimated from synthetic data, the hallucinated content should ideally depend on the objects in the scene. Thus, there is a domain gap between synthesized training images and real images, just like in semantic tasks. Real images are required for training. Multiple strategies have been proposed to integrate real images into the training procedure. These span from using the same unsupervised training loss for the network as is used in variational methods [@ahmadi; @unflow], over multi-task learning with an auxiliary task that allows learning from unlabelled images [@SZB17], to training on pseudo-ground-truth obtained from running an (unsupervised) variational method [@guided_flow_17]. This paper comprises two contributions. 
First, we present an assessment network that learns to predict the error for each of a set of flow fields generated with various optical flow estimation techniques. Then, a fused optical flow field can be trivially obtained by selecting for each pixel the flow vector with the smallest predicted error. We show that this assessment network, which we call FusionNet, combines the advantages of a potentially large set of techniques and avoids their limitations. As a consequence, FusionNet yields results that exceed the performance of all methods that produced its input. Independently of how the state of the art will improve in the future, FusionNet can always benefit from these improvements. However, this comes at the cost of very large runtimes, since a whole set of partially slow methods must be run on the test image for the assessment network to assemble the final flow field. This is a show-stopper for most optical flow applications. Thus, as a second contribution, we augment a FlowNet by training it on the flow obtained with the assessment network, which now serves as proxy ground-truth. This shifts the large runtimes to the training phase, while the final network is as fast as a regular FlowNet at test time. Training data can be generated on all sorts of unlabeled videos, which allows the augmented FlowNet to learn priors from real images. This yields the currently best accuracy-runtime trade-off and enables the specialization to target domains directly on real images without tedious modeling of synthetic scenes in such domains. We showcase this with state-of-the-art results on the KITTI benchmarks. Related Work ============ **Traditional optical flow estimation.** Optical flow estimation goes back to the works of Lucas&Kanade [@lucaskanade] and Horn&Schunck [@schunck]. Both rely on a brightness constancy term combined with a local or global smoothness assumption.
Especially the variational approach of Horn&Schunck was extended by many successive works [@meminperez; @Bro04a; @pocktvl1]. While variational methods are very precise in small displacement cases, they have deficits in case of large displacements. This was taken into account by Brox et al. [@ldof], who mixed the variational method with a simple nearest-neighbor matching of local descriptors. DeepMatching [@deepmatching] elaborated on the matching, and EpicFlow [@epicflow] improved the variational refinement. FlowFields [@flowfields] builds upon EpicFlow and elaborates on the matching using a random search strategy. The present state of the art is defined by DCFlow [@dcflow] and MRFlow [@Wulff:CVPR:2017]. The accuracy of these techniques is very high and on-par or even higher than with learning based techniques. However, the combinatorial search in state-of-the-art methods leads to quite large runtimes that do not allow for interactive frame rates. **Optical flow with supervised learning.** End-to-end learning of optical flow was pioneered by the work of Dosovitskiy et al. [@flownet], which presented the two network architectures FlowNetS and FlowNetC. The former is purely convolutional, while the latter includes an explicit correlation. The networks were trained on a simplistic dataset made from Flickr and chair images to which affine transformations were applied (FlyingChairs). Mayer et al. [@dispnet] introduced a more sophisticated 3D dataset (FlyingThings3D). Ilg et al. [@flownet2] presented a stack of networks termed FlowNet 2.0 with high accuracy and fast runtime. Ranjan et al. [@spynet] presented a network architecture that contains a spatial pyramid and runs even faster than FlowNet 2.0, but at the cost of accuracy. Sun et al. [@PWCnet] extended this idea by introducing correlations at the different pyramid levels. Their network termed PWC-Net currently achieves state-of-the-art results. 
Other methods combine feature learning with traditional methods: FlowFieldsCNN [@FlowFieldsCNN] uses an improved hinge embedding loss to train a Siamese architecture for feature extraction, which is then used in combination with FlowFields. PatchBatch [@PatchBatch] shows that CNN features can even be improved to a level on which plain nearest-neighbor matching performs well. DeepDiscreteFlow [@DeepDiscreteFlow] combines a local network with a context network and discrete optimization. **Optical flow with unsupervised learning components.** Ahmadi et al. [@ahmadi] proposed an unsupervised learning approach by using the brightness constancy loss from variational approaches to train a CNN. In principle, their approach replaces the Gauss-Newton step in variational optimization with back-propagation on a network representation. While coming from a fully unsupervised approach, the resulting flow fields are inferior to those of unsupervised variational techniques. Meister et al. [@unflow] proposed an additional unsupervised loss based on forward-backward consistency to train the network termed UnFlow in a completely unsupervised manner. Several other methods introduce unsupervised losses in addition to supervised training on synthetic data. Yu et al. [@backtobasics] and Ren et al. [@dstflow] use the loss from variational approaches to refine the decoder stages of a pre-trained FlowNet. Lai et al. [@NIPS2017_6639] use a GAN approach to distinguish the optical flow estimated by the generator network from ground-truth optical flow. Sedaghat et al. [@SZB17] proposed the self-supervised auxiliary task of next frame prediction as additional loss. Like the above-mentioned works, this allows them to improve FlowNet on real-world data. The guided optical flow proposed by Zhu et al. [@guided_flow_17] uses the flow computed by a traditional, unsupervised method as proxy ground-truth to train a network in the usual supervised manner. 
The final network is limited by the performance of the traditional method that provided the proxy ground-truth. In contrast, we anticipate this drawback by training on flow fields produced by multiple different methods and locally selecting the best. This way, the final network can yield better results than any single method that produced the training data. **Optical flow fusion.** The principle to locally select the best flow vector from a set of flow fields has been implemented outside the scope of deep learning. Lempitsky et al. [@lempitskyCVPR08] proposed a combinatorial optimization approach to combine flow fields from multiple methods based on a smoothness loss. Their approach was also used in MDPFlow [@mdpflow], which locally combined multiple hypotheses from coarser pyramid levels and nearest neighbor matching. In contrast to these approaches, our selection among flow vectors is based on a deep network that learns to predict directly the optical flow error rather than just selecting based on smoothness priors. FusionNet ========= ![ Overview of the FusionNet principle. Given the input images, the optical flow is estimated with various existing methods. Each method’s optical flow estimate is used to warp the second image. The two input images, the warped image, and the flow are fed into the proposed assessment network, which is trained on predicting the error of each flow field. Finally the flow fields are merged by locally choosing the flow vector with the minimum predicted error. \[fig:fusion\_net\] ](figures/FusionNet){width="75.00000%"} We assume that various optical flow estimation methods have different strengths and weaknesses. This does not exclude that these methods may have also many difficulties in common. However, as long as there are differences, we want to exploit these differences to choose from the method that works best on a particular problem. 
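The per-pixel selection step at the heart of the fusion is straightforward once an error map is available for each input method; a minimal NumPy sketch (the function name and the (N, 2, H, W) array layout are our choices, not from the paper):

```python
import numpy as np

def fuse_flows(flows, pred_errors):
    """Winner-take-all fusion: keep, per pixel, the flow vector of the
    method with the smallest predicted error.

    flows       : (N, 2, H, W) candidate flow fields from N methods.
    pred_errors : (N, H, W) per-pixel errors from the assessment network.
    Returns the fused (2, H, W) flow field.
    """
    best = np.argmin(pred_errors, axis=0)      # (H, W): winning method index
    fused = np.empty_like(flows[0])
    for i in range(flows.shape[0]):
        sel = best == i
        fused[:, sel] = flows[i][:, sel]
    return fused
```

Because the selection is purely local, adding a new input method only requires stacking one more flow field and one more error map.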
To this end, we propose an assessment network that predicts the errors of the optical flow estimated by a set of existing methods, as shown in Figure \[fig:fusion\_net\], and is trained on synthetic data with available ground-truth optical flow. At first glance, this training on synthetic images looks like we are back at square one. However, the task of assessment is different from the task of flow estimation itself. First, we benefit from the information contained in the various input flow fields. Second, the assessment task may generalize more easily to other domains than the task of optical flow estimation, since it must only find ways to predict errors rather than predicting the flow field itself. The assessment network uses a typical encoder-decoder architecture with skip-connections; the architecture details are as in FlowNetS [@flownet2]. It takes the two input images into account together with the flow estimate and the second image warped by that flow. The error map predicted by the assessment network is used to optimally combine the estimated flow fields. We refer to the complete setup, as shown in Figure \[fig:fusion\_net\], as FusionNet. We investigate two different loss functions: an L1 loss and a hinge loss. L1 Loss ------- For training the assessment network with an L1 loss, we let the network directly estimate the pixel-wise endpoint error. Let the estimated optical flow at a certain pixel be denoted $w=(u,v)$ with the x- and the y-components $u$ and $v$. The ground-truth endpoint error $e_\mathrm{gt}$ for a pixel location is: $$e_\mathrm{gt} = \sqrt{(u - u_{\mathrm{gt}})^2+(v - v_{\mathrm{gt}})^2}.$$ Let $e$ be the error predicted by the assessment network. To improve on the predicted error, we apply back-propagation with the L1 loss: $$\mathcal{L}_1(e) = |e_\mathrm{gt} - e|.$$ In principle, one could train a separate, specialized network for each input method to be assessed.
However, since we want to improve on the generalization of the assessment network, we use the same network for assessing all input flows, and rather sample the mini-batches during training from the different methods. More training details are provided in Section \[training\_details\]. Hinge Loss ---------- Directly applying an L1 loss on the error makes the network estimate the error for each method. However, for the fusion we only need to know the input methods with the lowest error. That means, the L1 loss potentially solves a harder problem than necessary to reach the actual goal[^1]. A related problem to picking the input with the smallest error is the one of designing a distance metric to match patches. This metric only needs to reflect the ranking, e.g. “A is closer to B than A is to C” [@NIPS2003_2366; @weinberger2009distance]. Many feature learning algorithms use this as a triplet loss [@dcflow; @wang01; @tripletnet; @fastnet; @wohlhart01]. With the same motivation, we use the well-known multi-class hinge loss [@multihinge; @multisvm] $$\label{joint_ass_function} \mathcal{L}_{\rm Margin}(e_1,...,e_N) = \sum_{i\neq j} \max(0,m+e_j-e_i),$$ where $j$ is the index of the method with the lowest error according to the ground-truth, and $m$ is the minimum margin between the best estimate and the other estimates. If the predicted best error corresponds to the true index $j$ and all other errors are at least $m$ larger than $e_j$, this loss will be zero. Otherwise, each error that is above the allowed margin will contribute to the loss. Since the network is allowed to rescale the errors, we can set $m=1$ without loss of generality. Note that the errors predicted by the network with this loss do no longer correspond to the L1 error but may be rescaled. The rescaling factor may even be different for each pixel. Obviously, the hinge loss implies joint training of the assessment network while giving all $N$ methods as input. 
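Both training losses are simple to express on dense error maps; the sketch below (NumPy, function names and the (N, H, W) layout are our choices) computes the ground-truth endpoint error, the L1 assessment loss, and the multi-class hinge loss above with margin $m=1$:

```python
import numpy as np

def epe_map(flow, flow_gt):
    """Ground-truth per-pixel endpoint error e_gt; flows are (2, H, W)."""
    return np.sqrt(np.sum((flow - flow_gt) ** 2, axis=0))

def l1_assessment_loss(pred_err, flow, flow_gt):
    """L1 assessment loss |e_gt - e|, averaged over the image."""
    return np.mean(np.abs(epe_map(flow, flow_gt) - pred_err))

def multiclass_hinge_loss(pred_errs, gt_errs, m=1.0):
    """Multi-class hinge loss: for each pixel, j is the method with the
    lowest ground-truth error, and every other predicted error must
    exceed pred_errs[j] by at least the margin m.

    pred_errs, gt_errs : (N, H, W) predicted and ground-truth errors.
    """
    j = np.argmin(gt_errs, axis=0)                        # (H, W) true best
    e_j = np.take_along_axis(pred_errs, j[None], axis=0)  # (1, H, W)
    hinge = np.maximum(0.0, m + e_j - pred_errs)          # i == j term is m
    return np.mean(hinge.sum(axis=0) - m)                 # drop i == j term
```

The hinge loss is zero exactly when, at every pixel, the prediction for the true best method undercuts all others by the margin, which matches the selection-only objective described in the text.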
![ Using our FusionNet to augment a FlowNet: FlowNet and FusionNet are trained on labeled data. Subsequently, FusionNet is used to augment FlowNet with large amounts of unlabeled data. \[fig:domain\_transfer\] ](figures/DataDomains){width="35.00000%"} Augmented FlowNet ================= Given the FusionNet from the last section, we can apply it to any unlabelled data to estimate high-quality optical flow. However, running FusionNet is very costly, since it requires running the various, partially very slow optical flow estimation methods. In order to have fast optical flow estimation at test time, we use the optical flow fields estimated with FusionNet as proxy ground-truth in order to finetune a FlowNet, for instance, to optimize it for a specific domain or to make it run better on general real-world videos. The principle is quite straightforward and illustrated in Figure \[fig:domain\_transfer\]. Experiments =========== We evaluated the concept on the common optical flow benchmarks, where we can quantify the improvements by the fusion and by the augmentation of FlowNet directly. In addition, we demonstrate the effect of the augmentation in a motion segmentation context. Training Details {#training_details} ---------------- For training the assessment network, we followed the same training schedule as proposed in Ilg et al. [@flownet2] for training FlowNet, i.e., we first train on FlyingChairs for 1.2m iterations and subsequently on FlyingThings3D for 500k iterations. The augmented FlowNet is initialized with a FlowNet trained on the same schedule. We also applied the same data augmentation mechanism, i.e., a set of spatial and color transformations. The networks were implemented using the Caffe framework. The code will be made publicly available upon publication. Datasets -------- We used the two publicly available synthetic datasets FlyingChairs[@flownet] and FlyingThings3D[@flownet2] to train the assessment network and the initial FlowNet before augmentation. 
These are the two datasets for which labeled training data is available. For the unsupervised fine-tuning, we use various unlabeled datasets that we grouped to two domains: animation movies and driving. **Animation movies.** We collected several animation movies from the Blender project[@blender] and used them for unsupervised training. For such animation movies there is the potential option to derive ground-truth optical flow, as shown in Butler et al. [@Butler:ECCV:2012] and Mayer et al. [@MIFDB16], but we did not use this option here and rather used just the unlabeled videos for training. For the evaluation in this domain we used the official Sintel benchmark dataset [@Butler:ECCV:2012]. **Driving.** Driving scenes are a popular application domain for optical flow. Thus, we selected them to make a second evaluation domain. For unsupervised training, we took approximately 100k frames from the Frankfurt part of the publicly available Cityscapes dataset [@Cordts2016Cityscapes]. For the evaluation in this domain, we used the two publicly available KITTI2012 [@Geiger2012CVPR] and KITTI2015 [@Menze2015CVPR] benchmark datasets. **Motion Segmentation.** For indirect evaluation of the optical flow on a motion segmentation task, we used approximately 32k frames from the UdG-MS19 and UdG-MS20 datasets [@udg] for unsupervised training. We evaluated the motion segmentation on the FBMS benchmark dataset [@OB14b]. FusionNet --------- We evaluated the FusionNet with the following optical flow estimation techniques as input: LDOF [@ldof], DeepFlow [@deepmatching], EpicFlow [@epicflow], FlowFields [@flowfields], and FlowNet2 [@flownet2]. There are some very recent methods with even better performance, such as DCFlow [@dcflow], PWC-Net [@PWCnet], and MR-Flow[@Wulff:CVPR:2017], but their code was not operational in time to include them for the experiments. A nice property of FusionNet is that new methods can be integrated trivially at any time to improve results further. 
Table \[tab:fusiontable\] compares FusionNet to the state of the art on the common benchmark datasets. FusionNet consistently outperforms each of the techniques that have been provided as input, which demonstrates that the assessment network is able to locally select the best optical flow vectors. As a consequence, this brings it close to the most recent state of the art, and it would most likely outperform it if these methods were also included for selection. Table \[tab:fusiontable\] also reports the results when selecting the flow vectors based on the ground-truth error. This oracle fusion is the lower bound that FusionNet can achieve with the respective optical flow fields given as input. The predicted error when trained with an L1 loss is shown in Figure \[fig:qualitative\_single\_ass\]. One can observe that it matches the ground-truth error quite well. Thus, in hard cases, if one of the input methods is able to estimate the motion successfully, FusionNet is able to select the best estimate. The predicted error for the margin loss is not directly interpretable due to the local scaling. However, the resulting final optical flow field is as good as the one obtained with the L1 loss. Comparing the L1 loss against the margin loss quantitatively, we found no significant difference. Augmented FlowNet ----------------- While FusionNet yields excellent optical flow that combines the best from all available methods, it requires 84 seconds per frame. In contrast, FlowNet 2.0 runs at 8 frames per second [^2]. In this section we test how far we can transfer the good results from FusionNet to a FlowNet, thus inheriting also the runtime of the latter. Table \[tab:AugmentedFlowNetC\] first shows the influence of the choice of the proxy ground-truth when fine-tuning a basic FlowNetC. Augmenting the FlowNet with an optical flow field that is superior to the baseline improves results, whereas inferior flow fields can decrease the performance. 
When using just a single proxy method, there is the dilemma of which method to choose. Feeding a random mixture of samples from various methods during fine-tuning (Rand. Mix) does not yield the best of all involved methods, but approximately their average. In contrast, the use of FusionNet resolves the dilemma. We also distinguish between augmentation for a specific domain and generic augmentation. In the first case, we augment the FlowNet only on data from the respective domain, i.e., animation movies in case of Sintel and driving videos in case of KITTI; in the second case, data from both domains is used for finetuning. Specialization to a certain domain is one of the big advantages of learning-based optical flow methods, and a particular advantage for those methods that do not require supervision in that domain, as in our case. Table \[tab:AugmentedFlowNetC\] shows that domain-specific augmentation improves results considerably on KITTI, which is a very special scenario. The error is almost cut in half. However, the generic augmentation is also not much worse, as it also benefits from the training data from the special domain, even though it is now mixed with data from another domain. Apparently, the network can automatically figure out at test time which domain the input is from, and applies the appropriate priors from that domain. Table \[tab:AugmentedFlowNetStacks\] extends the augmentation to a stacked FlowNet and compares it to UnFlow [@unflow]. UnFlow uses an unsupervised loss, thus it can be specialized conveniently to any domain. The table shows results for UnFlow trained on CityScapes or the unlabeled data from KITTI, which outperform the supervised baseline that was trained on synthetic data outside this domain. For better comparison to our strategy, we also report results for a semi-supervised version of UnFlow, i.e., it is initialized with a FlowNet trained on synthetic data before the unsupervised training starts. 
Results show that the domain adaptation with the augmented FlowNet is clearly superior to that of UnFlow[^3]. As we already observed in Table \[tab:AugmentedFlowNetC\], there is no significant difference between domain-specific training and training on a joint set of domains. This is also true for the stacked network. Table \[tab:Benchmark\_results\] compares the stacked augmented FlowNet to the state of the art. On the KITTI benchmarks, the augmented FlowNet sets the new state of the art after being finetuned also with the ground truth from the KITTI training set. But also the generic version, which has not been finetuned with ground-truth data, yields very good results. The direct comparison to FlowNet2 quantifies the improvement on stacked networks due to the augmentation. Interestingly, the stacked augmented FlowNet often even outperforms the FusionNet proxy. This is due to the finetuning with ground truth in the case of the domain-specific network. Sometimes the generic network is also better than FusionNet, but not consistently. Fig. \[fig:qualitative\_augmented\_FlowNet\_Sintel\] shows some qualitative examples of the augmentation by the proxy, yet with the smaller, non-stacked network. Finally, we also evaluated the augmented FlowNet in an application scenario. We augmented the FlowNet on data from UdG-MS19 and UdG-MS20 [@udg] and fed its optical flow into the motion segmentation approach from Keuper et al. [@KB15b]. The motion segmentation performance was evaluated on the FBMS benchmark [@OB14b]. Table \[tab:augmentedFlowNet\_motion\_seg\] shows that the adaptation to real images clearly helps a small FlowNetC to improve motion segmentation results. For the larger, stacked network, we do not see a significant improvement due to the augmentation. We attribute this to a saturation effect: the optical flow of FlowNet2 is already very good, such that other parts of the motion segmentation pipeline dominate the final result. 
Conclusion ========== In this paper, we have presented two contributions: (1) We have presented a way to assemble a high-quality flow field from a set of input flow fields computed in an unsupervised manner using existing optical flow estimation methods. This has been achieved by training an assessment network that learns to predict the errors of the input techniques. (2) We have shown that finetuning a FlowNet on such high-quality flow fields allows for unsupervised adaptation of the network to a specific domain. With this strategy, we obtained state-of-the-art results on the KITTI benchmarks. Moreover, we have shown that this strategy is more successful at domain adaptation than a fully unsupervised approach that does not make use of any synthetic data. Acknowledgements {#acknowledgements .unnumbered} ================ We acknowledge funding by the German Research Foundation (grant ) and the EU Horizon 2020 project Trimbot2020. [^1]: That said, the prediction of the error could be valuable in its own right for a series of other purposes not discussed in this paper, for instance, uncertainty estimation. [^2]: FlowNet2 runtime is reported on an Nvidia GTX1080 GPU, while the classical methods run on the CPU. [^3]: UnFlow does not require any supervision, which makes it biologically more plausible. From the engineering perspective, however, this is irrelevant.
--- abstract: 'We show that the moduli stacks of semistable sheaves on smooth projective varieties are, analytic locally on their coarse moduli spaces, described in terms of representations of the associated Ext-quivers with convergent relations. When the underlying variety is a Calabi-Yau 3-fold, our result describes the above moduli stacks as critical loci, analytic locally on the coarse moduli spaces. The results in this paper will be applied to the wall-crossing formula of Gopakumar-Vafa invariants defined by Maulik and the author.' author: - Yukinobu Toda title: 'Moduli stacks of semistable sheaves and representations of Ext-quivers' --- Introduction ============ Motivation ---------- The purpose of this paper is to give descriptions of moduli stacks of semistable sheaves on smooth projective varieties in terms of quivers with (formal but convergent) relations, analytic locally on their coarse moduli spaces. The relevant quiver is the Ext-quiver associated to the simple collection of coherent sheaves determined by a polystable sheaf corresponding to a point of the coarse moduli space. The main results have probably been folklore among experts on moduli of sheaves (at least on formal neighborhoods at closed points of the coarse moduli space), but we could not find a reference, so our purpose is to give precise statements and details of the proofs. The main results in this paper will be used in the companion paper [@TodGV] in the proof of the wall-crossing formula of Gopakumar-Vafa invariants introduced by Maulik and the author [@MT]. Results ------- Let $X$ be a smooth projective variety over $\mathbb{C}$ and $\omega$ an ample divisor on it. Let ${\mathcal{M}}_{\omega}$ be the moduli stack of $\omega$-Gieseker semistable sheaves on $X$, and $M_{\omega}$ the coarse moduli space of $S$-equivalence classes of them. 
There is a natural morphism $$\begin{aligned} p_M \colon {\mathcal{M}}_{\omega} \to M_{\omega}\end{aligned}$$ sending a semistable sheaf to its $S$-equivalence class. A closed point of $M_{\omega}$ corresponds to a polystable sheaf, i.e. a direct sum $$\begin{aligned} \label{intro:polyE} E=\bigoplus_{i=1}^k V_i \otimes E_i\end{aligned}$$ where $E_1, \ldots, E_k$ are mutually non-isomorphic $\omega$-Gieseker stable sheaves with the same reduced Hilbert polynomials. The *Ext-quiver* $Q$ associated to the collection $(E_1, \ldots, E_k)$ is defined by the quiver whose vertex set is $\{1, \ldots, k\}$ and the number of arrows from $i$ to $j$ is the dimension of ${\mathop{\rm Ext}\nolimits}^1(E_i, E_j)$. We denote by ${\mathcal{M}}_{Q}$ the moduli stack of finite dimensional $Q$-representations with dimension vector $(\dim V_i)_{1\le i\le k}$, and $M_{Q}$ the coarse moduli space of semi-simple $Q$-representations with dimension vector as above. We have the natural morphism $$\begin{aligned} p_Q \colon {\mathcal{M}}_{Q} \to M_Q\end{aligned}$$ sending a $Q$-representation to its semi-simplification. There is a point $0 \in M_Q$ represented by the semi-simple $Q$-representation $\oplus_{i=1}^k V_i \otimes S_i$, where $S_i$ is a simple $Q$-representation corresponding to the vertex $i$. The following is the main result in this paper. \[intro:thm1\]*(Theorem \[thm:precise\])* For $p \in M_{\omega}$ represented by a polystable sheaf (\[intro:polyE\]), let $Q$ be the ${\mathop{\rm Ext}\nolimits}$-quiver associated to $(E_1, \ldots, E_k)$. Then there exist analytic open neighborhoods $p \in U \subset M_{\omega}$, $0 \in V \subset M_{Q}$, a closed analytic substack ${\mathcal{Z}}\subset p_Q^{-1}(V)$ with the natural morphism to its coarse moduli space $p_Q \colon {\mathcal{Z}}\to Z$ and the commutative isomorphisms $$\begin{aligned} \xymatrix{ {\mathcal{Z}}\ar[d]_-{p_Q} \ar[r]^-{\cong} & p_M^{-1}(U) \ar[d]^{p_M} \\ Z \ar[r]^-{\cong} & U. 
}\end{aligned}$$ Indeed, we can define the (formal but convergent) relation $I$ of the Ext-quiver $Q$, using the minimal $A_{\infty}$-structure of the dg-category generated by $(E_1, \ldots, E_k)$. The convergence of $I$ will be proved by generalizing the gauge theory arguments of [@MR1950958; @JuTu] for deformations of vector bundles to the case of resolutions of coherent sheaves by complexes of vector bundles. The substack ${\mathcal{Z}}\subset p_Q^{-1}(V)$ is then defined to be the stack of $Q$-representations satisfying the relation $I$. When $X$ is a smooth projective Calabi-Yau (CY) 3-fold, we can take the relation $I$ to be the derivation of a convergent super-potential of the quiver $Q$. So we have the following corollary of Theorem \[intro:thm1\]: \[intro:thm2\]*(Corollary \[cor:CY3\])* In the situation of Theorem \[intro:thm1\], suppose that $X$ is a smooth projective CY 3-fold. Then there is a morphism of complex analytic stacks $W \colon p_Q^{-1}(V) \to \mathbb{C}$ such that $$\begin{aligned} {\mathcal{Z}}=\{dW=0\} \stackrel{\cong}{\to} p_M^{-1}(U). \end{aligned}$$ A result similar to Corollary \[cor:CY3\] was already proved in [@JS; @BBBJ], where the stack ${\mathcal{M}}_{\omega}$ is described as a critical locus locally on ${\mathcal{M}}_{\omega}$. Our description is more global, as we describe the stack ${\mathcal{M}}_{\omega}$ as a critical locus on the preimage of an open subset of the coarse moduli space $M_{\omega}$. The result of Corollary \[cor:CY3\] is also compatible with the $d$-critical structure introduced by Joyce [@JoyceD]. By [@PTVV], the stack ${\mathcal{M}}_{\omega}$ is a truncation of a derived scheme with a $(-1)$-shifted symplectic structure. Using this fact, it is proved in [@BBBJ] that the stack ${\mathcal{M}}_{\omega}$ has a canonical $d$-critical structure. 
From the construction of $W$ in Corollary \[intro:thm2\], the data $(p_M^{-1}(U), p_Q^{-1}(V), W)$ is shown to give a $d$-critical chart of the $d$-critical stack ${\mathcal{M}}_{\omega}$ (see [@TodGV Appendix A]). In the case of moduli spaces of one dimensional sheaves, we also investigate the wall-crossing phenomena of these moduli spaces with respect to the twisted stability. Let $A(X)_{\mathbb{C}}$ be the complexified ample cone of $X$ and take an element $$\begin{aligned} \sigma=B+i\omega \in A(X)_{\mathbb{C}}.\end{aligned}$$ Let $M_{\sigma}$ be the coarse moduli space of one dimensional $B$-twisted $\omega$-semistable sheaves on $X$. We will see that the result of Theorem \[intro:thm1\] also applies to the moduli space $M_{\sigma}$ of twisted semistable sheaves. If we take $\sigma^{+} \in A(X)_{\mathbb{C}}$ to be sufficiently close to $\sigma$, we have the natural projective morphism $$\begin{aligned} \label{intro:mor:wall} q_M \colon M_{\sigma^{+}} \to M_{\sigma}.\end{aligned}$$ *(Theorem \[thm:onedim\])*\[intro:thm:onedim\] For $p\in M_{\sigma}$, let an open subset $p \in U \subset M_{\sigma}$, a quiver $Q$, and an analytic space $Z$ be as in Theorem \[intro:thm1\]. Then there is a stability condition $\xi$ on the category of $Q$-representations such that we have the commutative diagram of isomorphisms $$\begin{aligned} \label{intro:dia:onedim} \xymatrix{ Z_{\xi} \ar[r]^-{\cong} \ar[d] & q_M^{-1}(U) \ar[d]^-{q_M} \\ Z \ar[r]^-{\cong} & U. }\end{aligned}$$ Here $Z_{\xi}$ is the coarse moduli space of $\xi$-semistable $Q$-representations satisfying the relation $I$. When $X$ is a K3 surface, the morphism (\[intro:mor:wall\]) was studied by Arbarello-Saccà [@Sacca]. In this case, they showed that the morphism (\[intro:mor:wall\]) is, analytic locally on $M_{\sigma}$, described as a symplectic resolution of singularities of Nakajima quiver varieties via variation of stability conditions of representations of quivers. 
One can check that the result of Theorem \[intro:thm:onedim\] gives the same description of the morphism (\[intro:mor:wall\]) as in [@Sacca], if we know the formality of the dg-algebra ${\mathop{{\mathbf{R}}\mathrm{Hom}}\nolimits}(E, E)$ for a polystable sheaf $[E] \in M_{\sigma}$. The results of Corollary \[intro:thm2\] and Theorem \[intro:thm:onedim\] will be used in [@TodGV] to show the wall-crossing formula of (generalization of) Gopakumar-Vafa (GV) invariants introduced by Maulik and the author [@MT]. The idea is, roughly speaking, as follows. In [@TodGV], we construct some perverse sheaves $\phi_{M_{\sigma^{+}}}$, $\phi_{M_{\sigma}}$ on the moduli spaces $M_{\sigma^{+}}$, $M_{\sigma}$ in (\[intro:mor:wall\]) respectively, following the analogy of BPS sheaves introduced by Davison-Meinhardt [@DaMe]. It turns out that there is a natural morphism $$\begin{aligned} \label{nat:isom} \phi_{M_{\sigma}} \to {\mathbf{R}}q_{M\ast} \phi_{M_{\sigma^{+}}}\end{aligned}$$ and we want to show that the above morphism is an isomorphism. The results of Corollary \[intro:thm2\] and Theorem \[intro:thm:onedim\] enable us to reduce the problem to the case of quivers with convergent super-potentials. In the case of quivers with super-potentials, a similar question was addressed and solved in [@DaMe], and we can use the results and arguments in *loc.cit.* to show that (\[nat:isom\]) is an isomorphism. In a similar way, using the result of Corollary \[intro:thm2\] it should be possible to reduce several problems in Donaldson-Thomas (DT) theory on CY 3-folds to the case of representations of quivers with convergent super-potentials, which is easier in many cases. For example, it has recently been announced by Davison-Meinhardt that the integrality conjecture of generalized DT invariants [@JS; @K-S] on CY 3-folds can be proved using the result of Corollary \[intro:thm2\]. Plan of the paper ----------------- The organization of this paper is as follows. 
In Section \[sec:quiver\], we introduce the notion of quivers with convergent relations and construct the moduli spaces of their representations. In Section \[sec:moduli\], we fix some notation on the moduli spaces of semistable sheaves and state the precise form of Theorem \[intro:thm1\]. In Section \[sec:deform\], we describe deformation theory of coherent sheaves in terms of minimal $A_{\infty}$-structures. In Section \[sec:thm\], we complete the proof of Theorem \[intro:thm1\]. In Section \[sec:NC\], we recall NC deformation theory and relate it to the result of Theorem \[intro:thm1\]. In Section \[sec:one\], we prove Theorem \[intro:thm:onedim\]. Acknowledgements ---------------- The author is grateful to Ben Davison and Davesh Maulik for many useful discussions, to Bingyu Xia for a comment on analytic Hilbert quotients, and to the referee for the careful check of the paper with several comments. The author is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and a Grant-in-Aid for Scientific Research grant (No. 26287002) from MEXT, Japan. Quivers with convergent relations {#sec:quiver} ================================= In this section, we recall some basic notions on quivers, their representations and moduli spaces. We also introduce the concept of convergent relations of quivers, and moduli spaces of quiver representations satisfying such relations. Representations of quivers -------------------------- Recall that a *quiver* $Q$ consists of data $$\begin{aligned} Q=(V(Q), E(Q), s, t)\end{aligned}$$ where $V(Q), E(Q)$ are finite sets and $s, t$ are maps $$\begin{aligned} s, t \colon E(Q) \to V(Q).\end{aligned}$$ The set $V(Q)$ is the set of vertices and $E(Q)$ is the set of edges. For $e \in E(Q)$, $s(e)$ is the source of $e$ and $t(e)$ is the target of $e$. 
For $i, j \in V(Q)$, we use the following notation $$\begin{aligned} \label{Eab} E_{i, j} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{e \in E(Q) : s(e)=i, t(e)=j\}\end{aligned}$$ i.e. $E_{i, j}$ is the set of edges from $i$ to $j$. A *$Q$-representation* consists of data $$\begin{aligned} \label{rep:Q} \mathbb{V}=\{ (V_i, u_e) : \ i \in V(Q), \ e \in E(Q), \ u_e \colon V_{s(e)} \to V_{t(e)}\}\end{aligned}$$ where $V_i$ is a finite dimensional $\mathbb{C}$-vector space and $u_e$ is a linear map. For a $Q$-representation (\[rep:Q\]), the vector $$\begin{aligned} \label{m:vect} \vec{m}=(m_i)_{i \in V(Q)}, \ m_i=\dim V_i\end{aligned}$$ is called the *dimension vector*. Given a dimension vector (\[m:vect\]), let $V_i$ be a $\mathbb{C}$-vector space with dimension $m_i$. Let us set $$\begin{aligned} G {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\prod_{i \in V(Q)} {\mathop{\rm GL}\nolimits}(V_i), \ \mathrm{Rep}_Q(\vec{m}) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\prod_{e \in E(Q)} {\mathop{\rm Hom}\nolimits}(V_{s(e)}, V_{t(e)}).\end{aligned}$$ The algebraic group $G$ acts on $\mathrm{Rep}_Q(\vec{m})$ by $$\begin{aligned} \label{G:act} g \cdot u=\{g_{t(e)}^{-1} \circ u_e \circ g_{s(e)}\}_{e\in E(Q)}\end{aligned}$$ for $g=(g_i)_{i \in V(Q)} \in G$ and $u=(u_e)_{e\in E(Q)}$. A $Q$-representation with dimension vector $\vec{m}$ is determined by a point in $\mathrm{Rep}_Q(\vec{m})$ up to $G$-action. The moduli stack of $Q$-representations with dimension vector $\vec{m}$ is given by the quotient stack $$\begin{aligned} {\mathcal{M}}_{Q}(\vec{m}) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\left[ \mathrm{Rep}_Q(\vec{m})/G \right]. \end{aligned}$$ It has the coarse moduli space, given by $$\begin{aligned} \label{mor:coarse} p_Q \colon {\mathcal{M}}_{Q}(\vec{m}) \to M_{Q}(\vec{m}) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathrm{Rep}_Q(\vec{m}) {/\!\!/}G. 
\end{aligned}$$ Here in general, if a reductive algebraic group $G$ acts on an affine scheme $Y={\mathop{\rm Spec}\nolimits}R$, then its affine GIT quotient is given by $$\begin{aligned} Y{/\!\!/}G {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}{\mathop{\rm Spec}\nolimits}R^G. \end{aligned}$$ Two points $x_1, x_2 \in Y$ are mapped to the same point in $Y {/\!\!/}G$ iff their $G$-orbit closures intersect, i.e. $$\begin{aligned} \overline{G \cdot x_1} \cap \overline{G \cdot x_2} \neq \emptyset.\end{aligned}$$ In the case of the $G$-action on $\mathrm{Rep}_{Q}(\vec{m})$, the above condition is also equivalent to the condition that the corresponding $Q$-representations have isomorphic semi-simplifications. The quotient space $M_Q(\vec{m})$ parametrizes semi-simple $Q$-representations with dimension vector $\vec{m}$, and the map (\[mor:coarse\]) sends a $Q$-representation to its semi-simplification (see [@MR2004218 Section 5], [@MR1315461 Section 3] for details). For $i \in V(Q)$, let $S_i$ be the simple $Q$-representation corresponding to the vertex $i$, i.e. it is the unique $Q$-representation with dimension vector $m_i=1$ and $m_j =0$ for $j\neq i$. The point $0 \in \mathrm{Rep}_Q(\vec{m})$ and its image $0 \in M_Q(\vec{m})$ under the map (\[mor:coarse\]) correspond to the semi-simple $Q$-representation $\oplus_{i\in V(Q)}V_i \otimes S_i$. A $Q$-representation (\[rep:Q\]) is called *nilpotent* if any sufficiently large number of compositions of the linear maps $u_e$ becomes zero. It is easy to see that a $Q$-representation is nilpotent iff it is an iterated extension of the simple objects $\{S_i\}_{i \in V(Q)}$. In particular, the fiber $$\begin{aligned} p_Q^{-1}(0) \subset {\mathcal{M}}_Q(\vec{m})\end{aligned}$$ of the morphism (\[mor:coarse\]) consists of nilpotent $Q$-representations with dimension vector $\vec{m}$. 
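As a concrete, purely illustrative sketch of these definitions (the code below is ours and not part of the paper): a $Q$-representation is just a matrix $u_e$ for each edge, the $G$-action is the conjugation (\[G:act\]), and nilpotency can be tested on the single block endomorphism of $\oplus_i V_i$ assembled from all edge maps, since the blocks of its $n$-th power are exactly the sums of compositions along paths of length $n$.

```python
import numpy as np

def act(g, quiver, u):
    """The G-action (g . u)_e = g_{t(e)}^{-1} u_e g_{s(e)}.
    quiver maps an edge name to its (source, target) pair."""
    return {e: np.linalg.inv(g[t]) @ u[e] @ g[s]
            for e, (s, t) in quiver.items()}

def is_nilpotent(quiver, u, dims):
    """A Q-representation is nilpotent iff all long enough path
    compositions vanish, i.e. iff the block matrix U on the total
    space (of dimension n = sum of dims) satisfies U^n = 0."""
    verts = sorted(dims)
    offs, n = {}, 0
    for i in verts:
        offs[i], n = n, n + dims[i]
    U = np.zeros((n, n))
    for e, (s, t) in quiver.items():
        U[offs[t]:offs[t] + dims[t], offs[s]:offs[s] + dims[s]] += u[e]
    return np.allclose(np.linalg.matrix_power(U, n), 0.0)
```

For instance, for the quiver with two vertices and edges $a \colon 0 \to 1$, $b \colon 1 \to 0$, a representation is nilpotent precisely when the composition $u_b \circ u_a$ is a nilpotent endomorphism of $V_0$.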
Quivers with convergent relations {#subsec:conv} --------------------------------- Recall that a *path* of a quiver $Q$ is a composition of edges in $Q$ $$\begin{aligned} e_1 e_2 \ldots e_n, \ e_i \in E(Q), \ t(e_i)=s(e_{i+1}). \end{aligned}$$ The number $n$ above is called the *length* of the path. The *path algebra* of a quiver $Q$ is a $\mathbb{C}$-vector space spanned by paths in $Q$: $$\begin{aligned} \mathbb{C}[Q] {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\bigoplus_{n\ge 0} \bigoplus_{e_1, \ldots, e_n \in E(Q), t(e_i)=s(e_{i+1})} \mathbb{C} \cdot e_1 e_2 \ldots e_n.\end{aligned}$$ Here a path of length zero is a trivial path at each vertex of $Q$, and the product on $\mathbb{C}[Q]$ is defined by composition of paths. By taking the completion of $\mathbb{C}[Q]$ with respect to the length of the path, we obtain the formal path algebra: $$\begin{aligned} \mathbb{C}{[\![}Q {]\!]}{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\prod_{n\ge 0} \bigoplus_{e_1, \ldots, e_n \in E(Q), t(e_i)=s(e_{i+1})} \mathbb{C} \cdot e_1 e_2 \ldots e_n.\end{aligned}$$ Note that an element $f \in \mathbb{C}{[\![}Q {]\!]}$ is written as $$\begin{aligned} \label{f:element} f=\sum_{n\ge 0, \{1, \ldots, n+1\} \stackrel{\psi}{\to} V(Q)} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} a_{\psi, e_{\bullet}} \cdot e_1 e_2\ldots e_{n}. \end{aligned}$$ Here $a_{\psi, e_{\bullet}} \in \mathbb{C}$, $e_{\bullet}=(e_1, \ldots, e_n)$ and $E_{\psi(i), \psi(i+1)}$ is defined as in (\[Eab\]). The above element $f$ lies in $\mathbb{C}[Q]$ iff $a_{\psi, e_{\bullet}}=0$ for $n\gg 0$. We define the subalgebra $$\begin{aligned} \mathbb{C}\{ Q\} \subset \mathbb{C}{[\![}Q {]\!]}\end{aligned}$$ to be elements (\[f:element\]) such that $\lvert a_{\psi, e_{\bullet}} \rvert <C^n$ for some constant $C>0$ which is independent of $n$. Note that $\mathbb{C}\{Q\}$ contains $\mathbb{C}[Q]$ as a subalgebra. 
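To make the evaluation of such elements concrete: an element of the path algebra acts on a representation by composing the edge maps along each of its paths and summing with coefficients, which is exactly the function $f(a, b, \vec{m})$ of the next paragraph, restricted here to finitely many paths. The following is our own illustrative sketch (in numpy, for a finite element of $\mathbb{C}[Q]$, so no convergence issue arises); all names are hypothetical.

```python
import numpy as np

def evaluate(quiver, u, f, dims):
    """Evaluate a finite path-algebra element on a representation.

    quiver: edge name -> (source, target)
    u:      edge name -> matrix u_e of shape dims[target] x dims[source]
    f:      path (tuple of composable edge names e_1, ..., e_n,
            applied left to right) -> scalar coefficient
    Returns a dict indexed by (a, b) holding the Hom(V_a, V_b)-valued
    sums of  coeff * u_{e_n} o ... o u_{e_2} o u_{e_1}  over all paths
    from a to b appearing in f.
    """
    out = {}
    for path, coeff in f.items():
        a = quiver[path[0]][0]      # source vertex of the path
        b = quiver[path[-1]][1]     # target vertex of the path
        m = np.eye(dims[a])
        for e in path:
            m = u[e] @ m            # apply u_{e_1} first
        out[(a, b)] = out.get((a, b), np.zeros((dims[b], dims[a]))) \
            + coeff * m
    return out
```

For a convergent element of $\mathbb{C}\{Q\}$ one would additionally truncate at a path length where the bound $\lvert a_{\psi, e_{\bullet}} \rvert < C^n$ makes the tail negligible on a small enough neighborhood of $0$.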
For an element $f \in \mathbb{C}\{Q\}$, we write it as (\[f:element\]) and consider the following ${\mathop{\rm Hom}\nolimits}(V_a, V_b)$-valued formal function of $u=(u_e)_{e \in E(Q)} \in \mathrm{Rep}_Q(\vec{m})$ $$\begin{aligned} \label{Vab} &f(a, b, \vec{m}){\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\\ &\notag \sum_{\begin{subarray}{c} n\ge 0, \psi \colon \{1, \ldots, n+1\} \to V(Q), \\ \psi(1)=a, \psi(n+1)=b \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} a_{\psi, e_{\bullet}} \cdot u_{e_n} \circ \cdots \circ u_{e_2} \circ u_{e_1}. \end{aligned}$$ By the definition of $\mathbb{C}\{Q\}$, the above ${\mathop{\rm Hom}\nolimits}(V_a, V_b)$-valued formal function on $\mathrm{Rep}_Q(\vec{m})$ has a positive radius of convergence. So there is an analytic open neighborhood $$\begin{aligned} \label{open:V} 0 \in {\mathcal{U}}\subset \mathrm{Rep}_Q(\vec{m})\end{aligned}$$ such that the function (\[Vab\]) converges absolutely on it and determines the complex analytic map $$\begin{aligned} f(a, b, \vec{m}) \colon {\mathcal{U}}\to {\mathop{\rm Hom}\nolimits}(V_a, V_b). \end{aligned}$$ In particular, the equations $f(a, b, \vec{m})=0$ for all $a, b \in V(Q)$ determine a closed complex analytic subspace of ${\mathcal{U}}$. Saturated open subsets ---------------------- We will extend the arguments in the previous subsection to the preimage of an open subset in $\mathrm{Rep}_Q(\vec{m}) {/\!\!/}G$. Before doing this, we prepare some general definitions and lemmas for the action of a reductive algebraic group on affine schemes or analytic spaces. \[def:saturated\] Let $G$ be a reductive group acting on an affine algebraic $\mathbb{C}$-scheme $Y$. Then an analytic open set $U \subset Y$ is called saturated if for any $x \in U$, the orbit closure $\overline{G \cdot x} \subset Y$ is contained in $U$. Note that a saturated open subset is in particular $G$-invariant. 
Let $$\begin{aligned} \label{quot:Y} \pi_Y \colon Y \to Y{/\!\!/}G\end{aligned}$$ be the quotient map and $V \subset Y {/\!\!/}G$ be an analytic open subset. Then $\pi_Y^{-1}(V)$ is obviously saturated. Indeed, the converse is also true. In order to see this, we recall the following fact on the topology of affine GIT quotient $Y{/\!\!/}G$. *([@MR819554; @MR1040861])*\[thm:KN\] In the situation of Definition \[def:saturated\], let $K \subset G$ be a maximal compact subgroup of $G$. Then there is a $K$-invariant closed subset $S \subset Y$ in analytic topology, called *Kempf-Ness set*, satisfying the following: for any $x \in S$ the $G$-orbit $G \cdot x$ is closed in $Y$ and the inclusion $S \subset Y$ induces the homeomorphism $$\begin{aligned} \label{induced} \iota \colon S/K \stackrel{\cong}{\to} Y{/\!\!/}G.\end{aligned}$$ Here the topology of $S/K$ is a quotient topology induced from the analytic topology of $S$, and that of $Y{/\!\!/}G$ is the analytic topology. In particular, the analytic topology of $Y{/\!\!/}G$ is the quotient topology induced from the analytic topology of $Y$. The following lemma follows from the above theorem: \[lem:saturated\] In the situation of Definition \[def:saturated\], an analytic open subset $U \subset Y$ is saturated iff there is an analytic open set $V\subset Y{/\!\!/}G$ such that $U=\pi_Y^{-1}(V)$ where $\pi_Y \colon Y \to Y{/\!\!/}G$ is the quotient morphism. For $x \in U$ and $y \in Y$, suppose that $\pi_Y(x)=\pi_Y(y)$, i.e. $\overline{G \cdot x}$ and $\overline{G \cdot y}$ intersect. Since $U$ is saturated, we have $\overline{G \cdot x} \subset U$. Then we have $\overline{G \cdot y} \cap U \neq \emptyset$, and since $U$ is open there is $g \in G$ such that $g \cdot y \in U$. Therefore we have $y \in U$. This implies that there is a subset $V \subset Y{/\!\!/}G$ such that $U=\pi_Y^{-1}(V)$. By Theorem \[thm:KN\], the subset $V$ is analytic open, hence the lemma holds. We also have the following lemma. 
\[lem:saturated2\] In the situation of Definition \[def:saturated\], let $y \in Y$ be a $G$-fixed point and $U \subset Y$ a $G$-invariant analytic open subset with $y \in U$. Then there is an analytic open subset $U' \subset Y$, which is saturated and satisfies $y \in U' \subset U$. Let $S \subset Y$ be the Kempf-Ness set as in Theorem \[thm:KN\]. Since $y \in Y$ is $G$-fixed, we have $y\in S$ by the homeomorphism (\[induced\]). Then we have $y \in S \cap U$, and $S \cap U$ is a $K$-invariant open subset in $S$. Therefore we have $S \cap U=\pi_S^{-1}(V)$ for some open subset $V \subset S/K$, where $\pi_S \colon S \to S/K$ is the quotient map. Since the map $\iota$ in (\[induced\]) is a homeomorphism, the subset $\iota(V) \subset Y{/\!\!/}G$ is open. We set a saturated open subset $U' \subset Y$ to be $U'=\pi_Y^{-1}(\iota(V))$ for the quotient map (\[quot:Y\]). Since $\pi_S(y) \in V$, we have $y \in U'$. It is enough to check that $U' \subset U$. By the construction of $U'$, for $x \in U'$ there is $z \in S \cap U$ such that $\pi_Y(x)=\pi_Y(z)$, i.e. the closures of $G \cdot x$ and $G \cdot z$ intersect. Since $G \cdot z$ is closed, we have $z \in \overline{G \cdot x}$. As $z \in U$ and $U$ is open, there is $g \in G$ such that $g \cdot x \in U$. Since $U$ is $G$-invariant, we have $x \in U$, hence the lemma is proved. Analytic Hilbert quotients -------------------------- Later we will take GIT-type quotients for non-algebraic complex analytic spaces. Here we recall the basic notions for such quotients. The following definition appears in [@MR1631577; @MR3394374] for reduced complex analytic spaces. \[def:Hquot\] Let $G$ be a reductive algebraic group acting on a complex analytic space $Z$. Then a complex analytic space $Z {/\!\!/}G$ together with a morphism $$\begin{aligned} \label{AHilb} \pi_Z \colon Z \to Z {/\!\!/}G\end{aligned}$$ is called an *analytic Hilbert quotient* if the following conditions hold: 1. $\pi_Z$ is a locally Stein map, i.e. 
there is an open cover $Z{/\!\!/}G=\cup_{\lambda} {\mathcal{U}}_{\lambda}$ by Stein open subsets ${\mathcal{U}}_{\lambda}$ such that $\pi_Z^{-1}({\mathcal{U}}_{\lambda})$ is Stein. 2. We have $(\pi_{Z\ast}{\mathcal{O}}_Z)^G={\mathcal{O}}_{Z{/\!\!/}G}$. An analytic Hilbert quotient is known to exist when $Z$ is a reduced Stein space, and it is unique up to isomorphism [@MR1103041]. In [@MR1631577; @MR3394374], analytic Hilbert quotients are discussed under the assumption that $Z$ is reduced. It seems that such quotients for non-reduced analytic spaces are not available in the literature. We do not develop a general theory of such quotients for non-reduced analytic spaces, but we show the existence of such quotients in some special cases discussed below, together with their universality. We show the following lemma on the existence of analytic Hilbert quotients, which may be well-known; we include it here as we cannot find a reference. \[lem:aquot\] Let $Y$ be an affine algebraic $\mathbb{C}$-scheme with $G$-action. Then for the affine GIT quotient $\pi_Y \colon Y \to Y {/\!\!/}G$, its analytification $$\begin{aligned} \pi_Y^{an} \colon Y^{an} \to (Y{/\!\!/}G)^{an}\end{aligned}$$ is an analytic Hilbert quotient. The condition (1) in Definition \[def:Hquot\] is obvious as $Y^{an}$ and $(Y{/\!\!/}G)^{an}$ are Stein, so we only prove (2). First suppose that $Y=\mathbb{C}^n$ and the $G$-action on it is linear. In this case, the condition (2) in Definition \[def:Hquot\] is proved in [@MR0423398]. In general, there is a $G$-invariant closed embedding $Y \subset \mathbb{C}^n$ where $G$ acts on $\mathbb{C}^n$ linearly, and we have the commutative diagram $$\begin{aligned} \label{dia:Y} \xymatrix{ Y \ar@<-0.3ex>@{^{(}->}[r] \ar[d]_{\pi_{Y}} & \mathbb{C}^n \ar[d]^{\pi_{\mathbb{C}^n}} \\ Y {/\!\!/}G \ar@<-0.3ex>@{^{(}->}[r] & \mathbb{C}^n {/\!\!/}G. }\end{aligned}$$ Here since $G$ is reductive, the functor $(-)^G$ sending a $G$-representation to its $G$-invariant part is exact. 
So the natural map $\Gamma({\mathcal{O}}_{\mathbb{C}^n})^G \to \Gamma({\mathcal{O}}_Y)^G$ is surjective, so the bottom arrow of (\[dia:Y\]) is a closed embedding. By taking the analytification of (\[dia:Y\]), we obtain the commutative diagram of analytic sheaves on $(\mathbb{C}^n {/\!\!/}G)^{an}$ $$\begin{aligned} \label{dia:sheaf} \xymatrix{ {\mathcal{O}}_{(\mathbb{C}^n {/\!\!/}G)^{an}} \ar[r]^(.4){\cong} \ar[d] & (\pi^{an}_{\mathbb{C}^n \ast}{\mathcal{O}}_{(\mathbb{C}^n)^{an}})^G \ar[d] \\ {\mathcal{O}}_{(Y{/\!\!/}G)^{an}} \ar[r] & (\pi^{an}_{Y \ast}{\mathcal{O}}_{Y^{an}})^G. }\end{aligned}$$ Since $\pi^{an}_{\mathbb{C}^n}$ is locally Stein, and the functor $(-)^G$ is exact, the vertical arrows of (\[dia:sheaf\]) are surjections. Therefore the bottom arrow of (\[dia:sheaf\]) is surjective. Also as ${\mathcal{O}}_{Y{/\!\!/}G}=(\pi_{Y\ast}{\mathcal{O}}_Y)^{G}$ for Zariski sheaves, we have an injection ${\mathcal{O}}_{Y{/\!\!/}G} \hookrightarrow \pi_{Y\ast}{\mathcal{O}}_Y$, which is also injective after taking completions at each closed point of ${\mathcal{O}}_{Y{/\!\!/}G}$. Hence the bottom arrow of (\[dia:sheaf\]) is also injective, so it is an isomorphism, i.e. $\pi_Y^{an}$ satisfies the condition (2) in Definition \[def:Hquot\]. By Lemma \[lem:aquot\], for an analytic open subset $U \subset Y{/\!\!/}G$ the map $$\begin{aligned} \label{piY:U} \pi_Y \colon \pi_Y^{-1}(U) \to U\end{aligned}$$ is an analytic Hilbert quotient of $\pi_Y^{-1}(U)$. We also have the following lemma: \[lem:Zquot\] Let $Z \subset \pi_Y^{-1}(U)$ be a $G$-invariant closed analytic subspace. Then there is a closed analytic subspace $Z {/\!\!/}G \hookrightarrow U$ and an analytic Hilbert quotient $\pi_Z \colon Z \to Z {/\!\!/}G$. Since (\[piY:U\]) is an analytic Hilbert quotient and the functor $(-)^G$ is exact, we have the surjection $$\begin{aligned} {\mathcal{O}}_U=(\pi_{Y\ast}{\mathcal{O}}_{\pi_Y^{-1}(U)})^G \twoheadrightarrow (\pi_{Y\ast}{\mathcal{O}}_Z)^G. 
\end{aligned}$$ Therefore by setting $Z{/\!\!/}G$ to be the complex analytic subspace of $U$ defined by the ideal of the above kernel, we obtain the analytic Hilbert quotient $\pi_Z=\pi_Y|_{Z} \colon Z \to Z {/\!\!/}G$. By gluing the above construction, we have the following lemma: \[lem:Zquot2\] Let $Y$ be an algebraic $\mathbb{C}$-scheme with $G$-action and $\pi_Y \colon Y \to Y'$ a $G$-invariant morphism of algebraic $\mathbb{C}$-schemes where $G$ acts on $Y'$ trivially. Suppose that $Y'=\cup_{i \in I}V_i'$ is an affine open cover such that $V_i=\pi_Y^{-1}(V_i')$ is affine and $\pi|_{V_i} \colon V_i \to V_i'$ is isomorphic to $V_i \to V_i {/\!\!/}G$. Then for an analytic open subset $U \subset Y'$ and a $G$-invariant closed analytic subspace $Z \subset \pi_Y^{-1}(U)$, the analytic Hilbert quotient $Z{/\!\!/}G$ exists as a closed analytic subspace of $U$. Let $U_i=U \cap V_i'$ and $Z_i=Z \cap V_i$. By applying Lemma \[lem:Zquot\] for $Z_i \subset \pi_Y^{-1}(U_i) \subset V_i$, we obtain the analytic Hilbert quotient $Z_i {/\!\!/}G \subset U_i$. By the construction, they glue to give the desired analytic Hilbert quotient $Z{/\!\!/}G \subset U$. \[rmk:Zquot2\] The situation of Lemma \[lem:Zquot2\] happens for a GIT quotient of the semistable locus with respect to a $G$-linearization on a quasi-projective scheme. We next discuss the universality of analytic Hilbert quotients: \[univ\] An analytic Hilbert quotient (\[AHilb\]) satisfies the universality if for any $G$-invariant analytic map $h \colon Z \to Z'$ to a complex analytic space $Z'$, there is a unique factorization $$\begin{aligned} \label{univ:quot} h \colon Z \stackrel{\pi_Z}{\to} Z {/\!\!/}G \to Z'.\end{aligned}$$ The above universality is proved in [@MR1103041 Corollary 4] when $Z$ is a reduced Stein space and $Z'=\mathbb{C}^n$. Below we show the universality for the analytic Hilbert quotients given in Lemma \[lem:Zquot2\]. 
We prepare the following lemma: \[lem:univ:prepare\] Let $\pi_Z \colon Z \to Z{/\!\!/}G$ be the analytic Hilbert quotient given in Lemma \[lem:Zquot2\]. Then for any family of $G$-invariant closed (not necessarily analytic) subsets $\{W_{\lambda}\}_{\lambda \in \Lambda}$ in $Z$, the image $\pi_Z(W_{\lambda})$ is closed in $Z{/\!\!/}G$ and we have the identity $$\begin{aligned} \label{image:id} \pi_Z \left(\bigcap_{\lambda \in \Lambda} W_{\lambda} \right)= \bigcap_{\lambda \in \Lambda}\pi_Z(W_{\lambda}).\end{aligned}$$ The question is local on $Z{/\!\!/}G$, so we may assume that $Y$ is affine and $Y'=Y{/\!\!/}G$. Since $Z, Z{/\!\!/}G$ are closed in $\pi_Y^{-1}(U)$, $U$, we may also assume that $Z=\pi_Y^{-1}(U)$, $Z{/\!\!/}G=U$. Let $S \subset Y$ be a Kempf-Ness set as in Theorem \[thm:KN\]. Then for $S'{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\pi_Y^{-1}(U) \cap S$, we have the homeomorphism $S'/K \stackrel{\cong}{\to}U$. Therefore for $W_{\lambda}' {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}S' \cap W_{\lambda}$, we have $\pi_Z(W_{\lambda}')=\pi_Z(W_{\lambda})$. Since each $W_{\lambda}'$ is a $K$-invariant closed subset of $S'$, its image $\pi_{Z}(W_{\lambda}')$ is a closed subset of $U$ and the identity (\[image:id\]) holds. The desired universality is proved in the following lemma: \[lem:universal\] The analytic Hilbert quotient $\pi_Z \colon Z \to Z{/\!\!/}G$ in Lemma \[lem:Zquot2\] satisfies the universality in Definition \[univ\]. Let $h \colon Z \to Z'$ be a $G$-invariant analytic map to a complex analytic space $Z'$. We take an open cover $Z'=\cup_{\lambda \in \Lambda}{\mathcal{U}}_{\lambda}'$ such that ${\mathcal{U}}_{\lambda}'$ is a closed analytic subspace of an open subset in $\mathbb{C}^n$. Let $W_{\lambda}' {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}Z' \setminus {\mathcal{U}}_{\lambda}'$ and $W_{\lambda} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}h^{-1}(W_{\lambda}')$. Then each $W_{\lambda}$ is a $G$-invariant closed subset of $Z$. 
By Lemma \[lem:univ:prepare\], the image $\pi_Z(W_{\lambda}) \subset Z {/\!\!/}G$ is closed and $$\begin{aligned} \bigcap_{\lambda \in \Lambda}\pi_Z(W_{\lambda}) =\pi_Z \left(\bigcap_{\lambda \in \Lambda} W_{\lambda} \right)= \pi_Z \circ h^{-1} \left(\bigcap_{\lambda \in \Lambda} W_{\lambda}'\right) =\emptyset. \end{aligned}$$ Here the last identity follows because $\{{\mathcal{U}}_{\lambda}'\}_{\lambda \in \Lambda}$ is an open cover of $Z'$. It follows that, by setting ${\mathcal{U}}_{\lambda}{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}(Z{/\!\!/}G) \setminus \pi_Z(W_{\lambda})$, we have an open cover $Z{/\!\!/}G =\cup_{\lambda \in \Lambda} {\mathcal{U}}_{\lambda}$ and the diagram $$\begin{aligned} \xymatrix{ \pi_Z^{-1}({\mathcal{U}}_{\lambda}) \ar@<-0.3ex>@{^{(}->}[r] \ar[d]_-{\pi_Z} & h^{-1}({\mathcal{U}}_{\lambda}') \ar[d]_-{h} & \\ {\mathcal{U}}_{\lambda} \ar@{.>}[r] & {\mathcal{U}}_{\lambda}' \ar@<-0.3ex>@{^{(}->}[r] & \mathbb{C}^n. }\end{aligned}$$ Here the top horizontal arrow is an open immersion, and the right horizontal arrow is a locally closed embedding. By the property (2) in Definition \[def:Hquot\], there is a unique analytic map ${\mathcal{U}}_{\lambda} \to {\mathcal{U}}_{\lambda}'$ which makes the above diagram commute. By the uniqueness, they glue to give the desired factorization (\[univ:quot\]). Moduli spaces of representations of quivers with convergent relations --------------------------------------------------------------------- We return to the situation of Section \[subsec:conv\]. 
A convergent relation $I$ of a quiver $Q$ is a finite collection of elements $$\begin{aligned} I=(f_1, \ldots, f_l), \ f_i \in \mathbb{C}\{Q\}.\end{aligned}$$ Using the lemmas in the previous subsection, we have the following: \[lem:Qconv\] Given a convergent relation $I=(f_1, \ldots, f_l)$ of a quiver $Q$ and its dimension vector $\vec{m}$, there is an analytic open neighborhood of $0$ $$\begin{aligned} 0 \in V \subset M_{Q}(\vec{m})\end{aligned}$$ such that each ${\mathop{\rm Hom}\nolimits}(V_a, V_b)$-valued formal function $f_i(a, b, \vec{m})$ defined by (\[Vab\]) for $f=f_i$ absolutely converges on $\pi_Q^{-1}(V)$. Here $\pi_Q$ is the quotient map $$\begin{aligned} \pi_Q \colon \mathrm{Rep}_Q(\vec{m}) \to M_{Q}(\vec{m}). \end{aligned}$$ Let ${\mathcal{U}}$ be an open neighborhood of $0 \in \mathrm{Rep}_Q(\vec{m})$ as in (\[open:V\]), where each $f_{i}(a, b, \vec{m})$ absolutely converges on ${\mathcal{U}}$. Since for $g=(g_i)_{i\in V(Q)} \in G$ and $u=(u_e)_{e \in E(Q)}$ we have $$\begin{aligned} f_i(a, b, \vec{m})(g \cdot u)=g_b^{-1} \circ f_i(a, b, \vec{m})(u) \circ g_a\end{aligned}$$ the ${\mathop{\rm Hom}\nolimits}(V_a, V_b)$-valued function $f_i(a, b, \vec{m})$ absolutely converges on $G \cdot {\mathcal{U}}$. By Lemma \[lem:saturated2\], there is a saturated open subset $0 \in {\mathcal{V}}\subset G \cdot {\mathcal{U}}$. Then by Lemma \[lem:saturated\], ${\mathcal{V}}=\pi_Q^{-1}(V)$ for an open subset $0 \in V \subset M_Q(\vec{m})$. For a quiver $Q$ with a convergent relation $I=(f_1, \ldots, f_l)$, let $\vec{m}$ be its dimension vector and take an open subset $V \subset M_Q(\vec{m})$ as in Lemma \[lem:Qconv\]. 
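As a sanity check of Lemma \[lem:Qconv\], the one-loop quiver already exhibits the convergence mechanism. The following is a standard example, added here for illustration only; it is not part of the cited results.

```latex
\begin{example}
Let $Q$ have a single vertex and a single loop $x$. A convergent
relation is then a one-variable convergent power series
$f = \sum_{n \ge 1} a_n x^n$ with $\lvert a_n \rvert < C^n$ for some
$C > 0$. For a dimension vector $m$ we have
$\mathrm{Rep}_Q(m) = \mathfrak{gl}_m$ with $G = \mathrm{GL}_m$ acting by
conjugation, and
\[
  f(u) = \sum_{n \ge 1} a_n u^n, \qquad u \in \mathfrak{gl}_m,
\]
converges absolutely on the operator-norm ball
$\mathcal{U} = \{ \lVert u \rVert < 1/C \}$. As in the proof above, the
equivariance $f(g u g^{-1}) = g \, f(u) \, g^{-1}$ extends the
convergence to the saturation $G \cdot \mathcal{U}$, and hence to
$\pi_Q^{-1}(V)$ for a suitable open subset $0 \in V \subset M_Q(m)$.
\end{example}
```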
By Lemma \[lem:Qconv\], we have the $G$-invariant closed analytic subspace of $\pi_{Q}^{-1}(V)$ $$\begin{aligned} \label{Rep:V} \mathrm{Rep}_{(Q, I)}(\vec{m})|_{V} \subset \pi_Q^{-1}(V)\end{aligned}$$ whose structure sheaf is given by $$\begin{aligned} {\mathcal{O}}_{\mathrm{Rep}_{(Q, I)}(\vec{m})|_{V}} ={\mathcal{O}}_{\pi_Q^{-1}(V)}/(f_i(a, b, \vec{m})_{jk}, a, b \in V(Q)).\end{aligned}$$ Here $f_i(a, b, \vec{m})_{jk}$ is the matrix component of the analytic map $$\begin{aligned} f_i(a, b, \vec{m}) \colon \pi_Q^{-1}(V) \to {\mathop{\rm Hom}\nolimits}(V_a, V_b).\end{aligned}$$ By taking the quotient by $G$, we have the following definition: \[defi:cmoduli\] Let $Q$ be a quiver with a convergent relation $I$, and $\vec{m}$ its dimension vector. Then for a sufficiently small analytic open neighborhood $0 \in V \subset M_Q(\vec{m})$, we define the complex analytic stack ${\mathcal{M}}_{(Q, I)}(\vec{m})|_{V}$ and complex analytic space $M_{(Q, I)}(\vec{m})|_{V}$ by $$\begin{aligned} {\mathcal{M}}_{(Q, I)}(\vec{m})|_{V} &{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}[\mathrm{Rep}_{(Q, I)}(\vec{m})|_{V}/G], \\ M_{(Q, I)}(\vec{m})|_{V} &{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathrm{Rep}_{(Q, I)}(\vec{m})|_{V} {/\!\!/}G. \end{aligned}$$ Here $\mathrm{Rep}_{(Q, I)}(\vec{m})|_{V} {/\!\!/}G$ is the analytic Hilbert quotient of $\mathrm{Rep}_{(Q, I)}(\vec{m})|_{V}$, given in Lemma \[lem:Zquot\]. Convergent super-potential {#subsec:potential} -------------------------- For a quiver $Q$, its convergent super-potential is defined as follows. \[def:conv:pot\] A convergent super-potential of a quiver $Q$ is an element $$\begin{aligned} W \in \mathbb{C}\{ Q \}/[\mathbb{C}\{ Q \}, \mathbb{C}\{ Q \}]. 
\end{aligned}$$ A convergent super-potential $W$ of $Q$ is represented by a formal sum $$\begin{aligned} \notag W=\sum_{n\ge 1} \sum_{\begin{subarray}{c} \{1, \ldots, n+1\} \stackrel{\psi}{\to} V(Q), \\ \psi(n+1)=\psi(1) \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} a_{\psi, e_{\bullet}} \cdot e_1 e_2\ldots e_{n}\end{aligned}$$ with $\lvert a_{\psi, e_{\bullet}} \rvert <C^n$ for a constant $C>0$. For $i, j \in V(Q)$, let $\mathbf{E}_{i, j}$ be the $\mathbb{C}$-vector space spanned by $E_{i, j}$. We set $$\begin{aligned} \label{e:dual} E_{i, j}^{\vee} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{ e^{\vee} : e \in E_{i, j}\} \subset \mathbf{E}_{i, j}^{\vee}. \end{aligned}$$ Here for $e \in E_{i, j}$, the element $e^{\vee} \in \mathbf{E}_{i, j}^{\vee}$ is defined by the condition $e^{\vee}(e)=1$ and $e^{\vee}(e')=0 $ for any $e\neq e' \in E_{i, j}$, i.e. $E_{i, j}^{\vee}$ is the dual basis of $E_{i, j}$. For a map $\psi \colon \{1, \ldots, n+1\} \to V(Q)$ with $\psi(1)=\psi(n+1)$ and elements $e_i \in E_{\psi(i), \psi(i+1)}$, $e\in E(Q)$, we set $$\begin{aligned} \partial_{e^{\vee}}(e_1 \ldots e_n) =\sum_{a=1}^{n} e^{\vee}(e_a) e_{a+1} \ldots e_n e_1 \ldots e_{a-1}. \end{aligned}$$ Here $e^{\vee}(e_a)=0$ if $(s(e_a), t(e_a)) \neq (s(e), t(e))$. The above partial derivative extends to a linear map $$\begin{aligned} \partial_{e^{\vee}} \colon \mathbb{C}\{ Q \}/[\mathbb{C}\{ Q \}, \mathbb{C}\{ Q \}] \to \mathbb{C}\{ Q \}. \end{aligned}$$ For a convergent super-potential $W$, the set of elements in $\mathbb{C}\{Q\}$ $$\begin{aligned} \partial W {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{ \partial_{e^{\vee}}W : e \in E(Q) \}\end{aligned}$$ is a convergent relation of $Q$. 
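To make the cyclic derivative concrete, here is the classical commutator super-potential, worked out with the formula above; this is a standard example, added for illustration.

```latex
\begin{example}
Let $Q$ have one vertex and three loops $x, y, z$, and take the
super-potential $W = xyz - xzy$. Applying
$\partial_{e^{\vee}}(e_1 \ldots e_n)
 = \sum_a e^{\vee}(e_a)\, e_{a+1} \ldots e_n e_1 \ldots e_{a-1}$
term by term gives
\[
  \partial_{x^{\vee}} W = yz - zy, \qquad
  \partial_{y^{\vee}} W = zx - xz, \qquad
  \partial_{z^{\vee}} W = xy - yx.
\]
Thus $\partial W$ consists of the three commutators, and
representations of $(Q, \partial W)$ are triples of pairwise commuting
endomorphisms, i.e. modules over $\mathbb{C}[x, y, z]$. Similarly, for
a one-variable convergent super-potential $W = \sum_n a_n x^n$ on the
one-loop quiver one gets
$\partial_{x^{\vee}} W = \sum_n n \, a_n x^{n-1}$.
\end{example}
```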
For a dimension vector $\vec{m}$ of $Q$, let ${\mathop{\rm tr}\nolimits}W$ be the formal function of $u=(u_e)_{e\in E(Q)} \in \mathrm{Rep}_Q(\vec{m})$ defined by $$\begin{aligned} {\mathop{\rm tr}\nolimits}W(u) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\sum_{n\ge 1} \sum_{\begin{subarray}{c} \{1, \ldots, n+1\} \stackrel{\psi}{\to} V(Q), \\ \psi(n+1)=\psi(1) \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} a_{\psi, e_{\bullet}} \cdot {\mathop{\rm tr}\nolimits}(u_{e_n} \circ u_{e_{n-1}} \circ \cdots \circ u_{e_1}).\end{aligned}$$ The above formal function on $\mathrm{Rep}_Q(\vec{m})$ is $G$-invariant. By the argument of Lemma \[lem:Qconv\], there is an analytic open neighborhood $0 \in V \subset M_Q(\vec{m})$ such that the formal function ${\mathop{\rm tr}\nolimits}W$ absolutely converges on $\pi_Q^{-1}(V)$ to give a $G$-invariant holomorphic function $$\begin{aligned} {\mathop{\rm tr}\nolimits}W \colon \pi_Q^{-1}(V) \to \mathbb{C}. \end{aligned}$$ Then for the relation $I=\partial W$, it is easy to see (and well-known when $W$ is a usual super-potential of $Q$) that the analytic subspace (\[Rep:V\]) equals the critical locus of ${\mathop{\rm tr}\nolimits}W$ in $\pi_Q^{-1}(V)$: $$\begin{aligned} \mathrm{Rep}_{(Q, \partial W)}(\vec{m})|_{V}=\{ d({\mathop{\rm tr}\nolimits}W)=0\}. \end{aligned}$$ In particular, we have $$\begin{aligned} {\mathcal{M}}_{(Q, \partial W)}(\vec{m})|_{V}=\left[\{ d({\mathop{\rm tr}\nolimits}W)=0\}/G \right]. \end{aligned}$$ Moduli stacks of semistable sheaves {#sec:moduli} =================================== In this section, we recall some basic notions and facts on moduli spaces of semistable sheaves, whose details are available in [@MR1450870]. Then we state the precise form of Theorem \[intro:thm1\] in Theorem \[thm:precise\]. In what follows, we always assume that the varieties or schemes are defined over $\mathbb{C}$. 
Gieseker semistable sheaves --------------------------- Let $$\begin{aligned} (X, {\mathcal{O}}_X(1))\end{aligned}$$ be a polarized smooth projective variety with $\omega=c_1({\mathcal{O}}_X(1))$. For a coherent sheaf $E$ on $X$, its *Hilbert polynomial* is defined by $$\begin{aligned} \chi(E \otimes {\mathcal{O}}_X(m))=a_d m^d+a_{d-1}m^{d-1}+\cdots\end{aligned}$$ where $d=\dim {\mathop{\rm Supp}\nolimits}(E)$ and $a_d$ is a positive rational number. The *reduced Hilbert polynomial* is defined by $$\begin{aligned} \overline{\chi}(E, m) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\frac{\chi(E \otimes {\mathcal{O}}_X(m))}{a_d} \in \mathbb{Q}[m]. \end{aligned}$$ For polynomials $p_i(m) \in \mathbb{Q}[m]$ with $i=1, 2$, we write $p_1(m) \succ p_2(m)$ if $\deg p_1<\deg p_2$, or $\deg p_1=\deg p_2$ and $p_1(m) >p_2(m)$ for $m\gg 0$. Then $(\mathbb{Q}[m], \succ)$ is an ordered set. By definition, a coherent sheaf $E$ on $X$ is said to be *$\omega$-Gieseker (semi)stable* if for any non-zero subsheaf $E' \subsetneq E$, we have the inequality $$\begin{aligned} \overline{\chi}(E', m) \prec (\preceq) \overline{\chi}(E, m).\end{aligned}$$ Any $\omega$-Gieseker semistable sheaf $E$ on $X$ has a filtration (called a *Jordan-Hölder (JH) filtration*) $$\begin{aligned} 0=F_0 \subset F_1 \subset F_2 \subset \cdots \subset F_k=E\end{aligned}$$ such that each $F_i/F_{i-1}$ is $\omega$-Gieseker stable with reduced Hilbert polynomial equal to $\overline{\chi}(E, m)$. The JH filtration is not necessarily unique, but its subquotient $$\begin{aligned} {\mathop{\rm gr}\nolimits}(E) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\bigoplus_{i=1}^k F_i/F_{i-1}\end{aligned}$$ is uniquely determined up to isomorphism. Two $\omega$-Gieseker semistable sheaves $E, E'$ on $X$ are called *S-equivalent* if ${\mathop{\rm gr}\nolimits}(E)$ and ${\mathop{\rm gr}\nolimits}(E')$ are isomorphic. 
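To fix the conventions, the following standard computation on $\mathbb{P}^2$ (added for illustration) spells out the reduced Hilbert polynomial and the ordering $\prec$.

```latex
\begin{example}
Let $X = \mathbb{P}^2$ with ${\mathcal{O}}_X(1)$ the hyperplane bundle.
For $E = {\mathcal{O}}_X$ we have
\[
  \chi({\mathcal{O}}_X(m)) = \frac{(m+1)(m+2)}{2}
  = \frac{1}{2} m^2 + \frac{3}{2} m + 1,
\]
so $d = 2$, $a_d = \frac{1}{2}$, and
$\overline{\chi}({\mathcal{O}}_X, m) = m^2 + 3m + 2$. For the ideal
sheaf $I_p \subset {\mathcal{O}}_X$ of a point $p$, the sequence
$0 \to I_p \to {\mathcal{O}}_X \to {\mathcal{O}}_p \to 0$ gives
$\chi(I_p(m)) = \chi({\mathcal{O}}_X(m)) - 1$, hence
$\overline{\chi}(I_p, m) = m^2 + 3m$. The two polynomials have the same
degree and $m^2 + 3m < m^2 + 3m + 2$ for $m \gg 0$, so
$\overline{\chi}(I_p, m) \prec \overline{\chi}({\mathcal{O}}_X, m)$,
consistent with ${\mathcal{O}}_X$ being $\omega$-Gieseker stable.
\end{example}
```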
Moduli spaces of semistable sheaves {#subsec:moduli} ----------------------------------- Let ${\mathcal{M}}$ be the 2-functor $$\begin{aligned} \label{stack:M} {\mathcal{M}}\colon {\mathcal{S}}ch/\mathbb{C} \to {\mathcal{G}}roupoid\end{aligned}$$ which sends a $\mathbb{C}$-scheme $S$ to the groupoid of $S$-flat coherent sheaves on $X \times S$. The stack ${\mathcal{M}}$ is an algebraic stack locally of finite type over $\mathbb{C}$. Let $\Gamma$ be the image of the Chern character map $$\begin{aligned} \Gamma {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}{\mathop{\rm Im}\nolimits}({\mathop{\rm ch}\nolimits}\colon K(X) \to H^{\ast}(X, \mathbb{Q})). \end{aligned}$$ For each $v \in \Gamma$, we have an open substack of finite type $$\begin{aligned} {\mathcal{M}}_{\omega}(v) \subset {\mathcal{M}}\end{aligned}$$ consisting of flat families of $\omega$-Gieseker semistable sheaves with Chern character $v$. The stack ${\mathcal{M}}_{\omega}(v)$ is constructed as a global quotient stack of a quasi-projective scheme. For $[E] \in {\mathcal{M}}_{\omega}(v)$, we take $m\gg 0$ and a vector space $\mathbf{V}$ satisfying $$\begin{aligned} \dim \mathbf{V} =\chi(E(m))=\dim H^0(E(m)). \end{aligned}$$ The above condition depends only on $v$, and is independent of $E$ for $m \gg 0$. Let $\mathrm{Quot}(\mathbf{V}, v)$ be the Grothendieck Quot scheme parameterizing quotients $$\begin{aligned} \label{quot:s} s \colon \mathbf{V} \otimes {\mathcal{O}}_X(-m) \twoheadrightarrow E\end{aligned}$$ in ${\mathop{\rm Coh}\nolimits}(X)$ with ${\mathop{\rm ch}\nolimits}(E)=v$. Then there is an open subscheme $$\begin{aligned} \mathrm{Quot}^{\circ}(\mathbf{V}, v) \subset \mathrm{Quot}(\mathbf{V}, v)\end{aligned}$$ parameterizing quotients (\[quot:s\]) such that $E$ is $\omega$-Gieseker semistable and the induced linear map $\mathbf{V} \to H^0(E(m))$ is an isomorphism. 
The algebraic group $\mathrm{GL}(\mathbf{V})$ acts on $\mathrm{Quot}^{\circ}(\mathbf{V}, v)$ by $$\begin{aligned} g \cdot (\mathbf{V} \otimes {\mathcal{O}}_X(-m) \stackrel{s}{\twoheadrightarrow} E) =(\mathbf{V} \otimes {\mathcal{O}}_X(-m) \stackrel{s \circ g}{\twoheadrightarrow} E)\end{aligned}$$ and the stack ${\mathcal{M}}_{\omega}(v)$ is described as $$\begin{aligned} {\mathcal{M}}_{\omega}(v)=[\mathrm{Quot}^{\circ}(\mathbf{V}, v)/{\mathop{\rm GL}\nolimits}(\mathbf{V})]. \end{aligned}$$ The above construction is compatible with the Geometric Invariant Theory (GIT). If we take the closure of $\mathrm{Quot}^{\circ}(\mathbf{V}, v)$, $$\begin{aligned} \overline{\mathrm{Quot}}^{\circ}(\mathbf{V}, v) \subset \mathrm{Quot}(\mathbf{V}, v)\end{aligned}$$ then there is a ${\mathop{\rm GL}\nolimits}(\mathbf{V})$-linearized polarization on $\overline{\mathrm{Quot}}^{\circ}(\mathbf{V}, v)$ such that its open locus $\mathrm{Quot}^{\circ}(\mathbf{V}, v)$ is the GIT semistable locus with respect to the above ${\mathop{\rm GL}\nolimits}(\mathbf{V})$-linearized polarization. In particular, we have the good quotient morphism (which is in particular a good moduli space in the sense of [@MR3237451]) $$\begin{aligned} p_M \colon {\mathcal{M}}_{\omega}(v) \to M_{\omega}(v) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathrm{Quot}^{\circ}(\mathbf{V}, v){/\!\!/}{\mathop{\rm GL}\nolimits}(\mathbf{V}).\end{aligned}$$ Namely, there is a ${\mathop{\rm GL}\nolimits}(\mathbf{V})$-invariant affine open cover $$\begin{aligned} \mathrm{Quot}^{\circ}(\mathbf{V}, v)=\bigcup_i U_i, \ U_i={\mathop{\rm Spec}\nolimits}R_i\end{aligned}$$ such that $M_{\omega}(v)$ has the following affine open cover $$\begin{aligned} M_{\omega}(v)=\bigcup_i U_i {/\!\!/}{\mathop{\rm GL}\nolimits}(\mathbf{V}), \ U_i {/\!\!/}{\mathop{\rm GL}\nolimits}(\mathbf{V})={\mathop{\rm Spec}\nolimits}R_i^{{\mathop{\rm GL}\nolimits}(\mathbf{V})}. 
\end{aligned}$$ By the GIT construction of $M_{\omega}(v)$, two points $x_1, x_2 \in \mathrm{Quot}^{\circ}(\mathbf{V}, v)$ are mapped to the same point by $p_M$ if and only if their orbit closures intersect, i.e. $$\begin{aligned} \overline{{\mathop{\rm GL}\nolimits}(\mathbf{V}) \cdot x_1} \cap \overline{{\mathop{\rm GL}\nolimits}(\mathbf{V}) \cdot x_2} \neq \emptyset. \end{aligned}$$ It is also known that, if $x_i$ corresponds to an $\omega$-Gieseker semistable sheaf $E_i$, the above condition is equivalent to $E_1$ and $E_2$ being $S$-equivalent. In fact, the projective scheme $M_{\omega}(v)$ is the coarse moduli space of $S$-equivalence classes of $\omega$-Gieseker semistable sheaves with Chern character $v$. So every point $p \in M_{\omega}(v)$ is represented by a direct sum of $\omega$-Gieseker stable sheaves $E$ (called a *polystable sheaf*), written as $$\begin{aligned} \label{polystable} E=\bigoplus_{i=1}^k V_i \otimes E_i. \end{aligned}$$ Here each $V_i$ is a finite dimensional vector space, and $E_i$ is an $\omega$-Gieseker stable sheaf with $\overline{\chi}(E_i, m)=\overline{\chi}(E, m)$ for all $i$. Ext-quiver {#subsec:Extquiver} ---------- Suppose that $E \in {\mathop{\rm Coh}\nolimits}(X)$ is of the form (\[polystable\]). Then the collection of the sheaves $(E_1, \ldots, E_k)$ forms a simple collection, defined below: A collection of coherent sheaves $(E_1, \ldots, E_k)$ is called a *simple collection* if ${\mathop{\rm Hom}\nolimits}(E_i, E_j)=\mathbb{C} \cdot \delta_{ij}$. Let $E_{\bullet}=(E_1, \ldots, E_k)$ be a simple collection of coherent sheaves on $X$. For each $1\le i, j \le k$, we fix a finite subset $$\begin{aligned} E_{i, j} \subset {\mathop{\rm Ext}\nolimits}^1(E_i, E_j)^{\vee}\end{aligned}$$ giving a basis of ${\mathop{\rm Ext}\nolimits}^1(E_i, E_j)^{\vee}$. We define the quiver $Q_{E_{\bullet}}$ as follows. 
The sets of vertices and edges are given by $$\begin{aligned} V(Q_{E_{\bullet}})=\{1, 2, \ldots, k\}, \ E(Q_{E_{\bullet}})=\coprod_{1\le i, j \le k} E_{i,j}. \end{aligned}$$ The maps $s, t \colon E(Q_{E_{\bullet}}) \to V(Q_{E_{\bullet}})$ are given by $$\begin{aligned} s|_{E_{i, j}}=i, \ t|_{E_{i, j}}=j. \end{aligned}$$ The resulting quiver $Q_{E_{\bullet}}$ is called the *Ext-quiver* of $E_{\bullet}$. We can now state the precise form of Theorem \[intro:thm1\]: \[thm:precise\] Let $X$ be a smooth projective variety, and let ${\mathcal{M}}_{\omega}(v)$ be the moduli stack of $\omega$-Gieseker semistable sheaves on $X$ with Chern character $v$. We have the natural morphism to its coarse moduli space $$\begin{aligned} p_{M} \colon {\mathcal{M}}_{\omega}(v) \to M_{\omega}(v). \end{aligned}$$ A point $p \in M_{\omega}(v)$ is represented by a sheaf $E$ of the form $$\begin{aligned} E=\bigoplus_{i=1}^k V_i \otimes E_i\end{aligned}$$ where $E_{\bullet}=(E_1, \ldots, E_k)$ is a simple collection. Let $Q_{E_{\bullet}}$ be the corresponding Ext-quiver and $\vec{m}$ its dimension vector given by $\vec{m}=(m_1, \ldots, m_k)$, where $m_i=\dim V_i$. Then there is a convergent relation $I_{E_{\bullet}}$ of $Q_{E_{\bullet}}$, analytic open neighborhoods $p \in U \subset M_{\omega}(v)$, $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$ and a commutative diagram of isomorphisms $$\begin{aligned} \label{dia:comiso} \xymatrix{ {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar[r]^-{\cong} \ar[d]_-{p_Q} & p_M^{-1}(U) \ar[d]^-{p_M} \\ M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar[r]^-{\cong} & U. }\end{aligned}$$ Here the bottom arrow sends $0$ to $p$. The proof of Theorem \[thm:precise\] will be completed in Proposition \[prop:complete\] below. Deformations of coherent sheaves {#sec:deform} ================================ In this section, we describe deformation theory of coherent sheaves via dg-algebras and their minimal $A_{\infty}$-models. 
The arguments are already known for vector bundles [@MR1950958; @JuTu], and we apply similar arguments to resolutions of coherent sheaves by vector bundles. The above description will give a local atlas of the moduli stack ${\mathcal{M}}$ in Subsection \[subsec:moduli\] via finite dimensional $A_{\infty}$-algebras. More precisely, for a given coherent sheaf $E$ on a smooth projective variety $X$, we compare the following three descriptions of the deformation space of $E$: 1. An open neighborhood of the point $[E] \in {\mathcal{M}}$ in the algebraic stack ${\mathcal{M}}$ given in Subsection \[subsec:moduli\]. 2. The Maurer-Cartan locus associated with the infinite dimensional dg-algebra ${\mathop{{\mathbf{R}}\mathrm{Hom}}\nolimits}(E, E)$. 3. The Maurer-Cartan locus associated with the finite dimensional minimal $A_{\infty}$-algebra ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$. We will compare the above descriptions by first constructing the map $(3) \Rightarrow (2)$ in Lemma \[prop:restrict\]. Then we will construct a map $(2) \Rightarrow (1)$, and composing the two gives the desired atlas $(3) \Rightarrow (1)$ in Proposition \[prop:rest2\]. Deformations of vector bundles {#subsec:dg} ------------------------------ We recall some basic facts on the deformation theory of vector bundles via gauge theory, and fix some notation (see [@MR1950958] for details). For a holomorphic vector bundle ${\mathcal{E}}\to X$ on a smooth projective variety $X$, we denote by ${\mathcal{A}}^{p, q}({\mathcal{E}})$ the sheaf of ${\mathcal{E}}$-valued $(p, q)$-forms on $X$, and set $$\begin{aligned} A^{p, q}({\mathcal{E}}) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\Gamma(X, {\mathcal{A}}^{p, q}({\mathcal{E}})). \end{aligned}$$ The holomorphic structure on ${\mathcal{E}}$ is given by the Dolbeault connection $$\begin{aligned} \overline{\partial}_{{\mathcal{E}}} \colon {\mathcal{A}}^{0, 0}({\mathcal{E}}) \to {\mathcal{A}}^{0, 1}({\mathcal{E}}). 
\end{aligned}$$ The Dolbeault connection extends to the Dolbeault complex $$\begin{aligned} 0 \to {\mathcal{A}}^{0, 0}({\mathcal{E}}) \to {\mathcal{A}}^{0, 1}({\mathcal{E}}) \to \cdots \to {\mathcal{A}}^{0, i}({\mathcal{E}}) \to {\mathcal{A}}^{0, i+1}({\mathcal{E}}) \to \cdots\end{aligned}$$ giving a resolution of ${\mathcal{E}}$. The complex ${\mathcal{A}}^{0, \ast}({\mathcal{E}})$ is an elliptic complex (see [@MR0515872 Chapter IV, Section 5]), whose global sections compute $H^{\ast}(X, {\mathcal{E}})$, i.e. $$\begin{aligned} H^{k}(X, {\mathcal{E}})={\mathcal{H}}^{k}(A^{0, \ast}({\mathcal{E}})). \end{aligned}$$ Any other holomorphic structure on ${\mathcal{E}}$ is given by a Dolbeault connection of the form $$\begin{aligned} \overline{\partial}_{{\mathcal{E}}}+A \colon {\mathcal{A}}^{0, 0}({\mathcal{E}}) \to {\mathcal{A}}^{0, 1}({\mathcal{E}})\end{aligned}$$ for some $A \in A^{0, 1}({\mathcal{E}}nd({\mathcal{E}}))$. Conversely, given $A \in A^{0, 1}({\mathcal{E}}nd({\mathcal{E}}))$, the connection $\overline{\partial}_{{\mathcal{E}}}+A$ gives a holomorphic structure on ${\mathcal{E}}$ if and only if its square is zero, i.e. $$\begin{aligned} \mathrm{ad}(\overline{\partial}_{{\mathcal{E}}})(A)+A \circ A=0. \end{aligned}$$ The above equation is the Maurer-Cartan (MC) equation of the dg-algebra $$\begin{aligned} \label{g:vect} \mathfrak{g}_{{\mathcal{E}}}^{\ast} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}A^{0, \ast}({\mathcal{E}}nd({\mathcal{E}})). \end{aligned}$$ The quotient of the solution space of the MC equation of $\mathfrak{g}_{{\mathcal{E}}}^{\ast}$ by the gauge group of ${\mathcal{C}}^{\infty}$-automorphisms of ${\mathcal{E}}$ describes the deformation space of ${\mathcal{E}}$ as a holomorphic vector bundle. Deformations of complexes {#subsec:complex} ------------------------- We have a similar deformation theory for complexes of vector bundles. 
Let $$\begin{aligned} \label{seq:E0} {\mathcal{E}}^{\bullet}=(\cdots \to 0 \to {\mathcal{E}}^i \stackrel{d^i}{\to} {\mathcal{E}}^{i+1} \to \cdots \to {\mathcal{E}}^j \to 0 \to \cdots)\end{aligned}$$ be a bounded complex of holomorphic vector bundles on $X$. By taking the Dolbeault complex ${\mathcal{A}}^{0, \ast}({\mathcal{E}}^i)$ for each ${\mathcal{E}}^i$, we obtain the double complex ${\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})$. Let $\mathrm{Tot}(-)$ denote the total complex of a double complex. We set $$\begin{aligned} \label{A:tot} A^{0, \ast}({\mathcal{E}}^{\bullet}) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathrm{Tot}(\Gamma(X, {\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}))). \end{aligned}$$ Similarly to the vector bundle case, the complex $\mathrm{Tot}({\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}))$ is elliptic, and its global sections compute the hypercohomology of ${\mathcal{E}}^{\bullet}$ $$\begin{aligned} \label{compute:hyper} {\mathcal{H}}^k({\mathbf{R}}\Gamma(X, {\mathcal{E}}^{\bullet}))={\mathcal{H}}^k(A^{0, \ast}({\mathcal{E}}^{\bullet})). \end{aligned}$$ Applying the construction (\[A:tot\]) to the inner ${\mathcal{H}}om$ complex ${\mathcal{H}}om^{\ast}({\mathcal{E}}^{\bullet}, {\mathcal{E}}^{\bullet})$, we obtain the complex $$\begin{aligned} \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}A^{0, \ast}({\mathcal{H}}om^{\ast}({\mathcal{E}}^{\bullet}, {\mathcal{E}}^{\bullet})). \end{aligned}$$ Its degree $k$ part is given by $$\begin{aligned} \label{g:degk} \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^k=\bigoplus_{p+q=k} \prod_{i} A^{0, q}({\mathcal{H}}om({\mathcal{E}}^i, {\mathcal{E}}^{i+p}))\end{aligned}$$ and the differential $d_{\mathfrak{g}}$ is induced by the Dolbeault connections $\overline{\partial}_{{\mathcal{E}}^i}$ on each ${\mathcal{E}}^i$ together with the differentials $d^{\ast}$ in (\[seq:E0\]). 
Also the composition $$\begin{aligned} A^{0, q}({\mathcal{H}}om({\mathcal{E}}^i, {\mathcal{E}}^{i+p})) \times A^{0, q'}&({\mathcal{H}}om({\mathcal{E}}^{i+p}, {\mathcal{E}}^{i+p+p'})) \\ &\to A^{0, q+q'}({\mathcal{H}}om({\mathcal{E}}^{i}, {\mathcal{E}}^{i+p+p'}))\end{aligned}$$ defines the product structure $\cdot$ on $\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast}$. Then it is straightforward to check that the data $$\begin{aligned} \label{g:E} (\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast}, d_{\mathfrak{g}}, \cdot)\end{aligned}$$ is a dg-algebra. Let $\mathfrak{mc}$ be the map defined by $$\begin{aligned} \mathfrak{mc} \colon \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^1 \to \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^2, \ \alpha \mapsto d_{\mathfrak{g}}(\alpha)+ \alpha \cdot \alpha.\end{aligned}$$ Its zero set $$\begin{aligned} \label{sol:MCeq} \mathrm{MC}(\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast})=\{ \alpha \in \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^1 : \mathfrak{mc}(\alpha)=0\} \end{aligned}$$ is the solution set of the Maurer-Cartan equation of the dg-algebra $\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast}$. Note that an element $\alpha \in \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^1$ satisfies the MC equation if and only if $$\begin{aligned} (d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha)^2=0\end{aligned}$$ on ${\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})$. In this case, the data $$\begin{aligned} \label{deform:E} ({\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}), d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha)\end{aligned}$$ determines a dg-${\mathcal{A}}^{0, \ast}({\mathcal{O}}_X)$-module. Then (\[deform:E\]) is a bounded complex of ${\mathcal{O}}_X$-modules whose cohomologies are coherent (see [@MR2648899 Lemma 4.1.5]), giving a deformation of the complex (\[seq:E0\]) in the derived category. 
More explicitly, by (\[g:degk\]) an element $\alpha \in \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^1$ consists of data $$\begin{aligned} \label{write:alpha} \alpha=(\alpha_0^i, \alpha_1^i, \alpha_2^i, \ldots), \ \alpha_{j}^i \in A^{0, j}({\mathcal{H}}om({\mathcal{E}}^i, {\mathcal{E}}^{i-j+1}))\end{aligned}$$ Suppose that the above $\alpha$ satisfies the MC equation $\mathfrak{mc}(\alpha)=0$. Then the diagram $$\begin{aligned} \xymatrix{ \cdots \ar[r] & {\mathcal{A}}^{0, 0}({\mathcal{E}}^{i-1}) \ar[r]^{} \ar[d] & {\mathcal{A}}^{0, 0}({\mathcal{E}}^i) \ar[r]^{d^i+\alpha_0^i} \ar[d]_{\overline{\partial}_{{\mathcal{E}}^{i}}+\alpha_1^i} & {\mathcal{A}}^{0, 0}({\mathcal{E}}^{i+1}) \ar[r]\ar[d] & \cdots \\ \cdots \ar[r] & {\mathcal{A}}^{0, 1}({\mathcal{E}}^{i-1}) \ar[r]\ar[d] & {\mathcal{A}}^{0, 1}({\mathcal{E}}^i) \ar[r]\ar[d]_{\overline{\partial}_{{\mathcal{E}}^{i}}+\alpha_1^i} & {\mathcal{A}}^{0, 1}({\mathcal{E}}^{i+1}) \ar[r]\ar[d] & \cdots \\ \cdots \ar[r] & {\mathcal{A}}^{0, 2}({\mathcal{E}}^{i-1})\ar[r] & {\mathcal{A}}^{0, 2}({\mathcal{E}}^i) \ar[r] & {\mathcal{A}}^{0, 2}({\mathcal{E}}^{i+1}) \ar[r] & \cdots }\end{aligned}$$ satisfies the following: it is a complex in the horizontal direction, each square is commutative, and the compositions of vertical arrows are homotopic to zero with homotopy given by $\alpha_2^i$. In particular if $\alpha_j^i=0$ for $j\ge 2$, then the above diagram extends to a double complex. In this case $$\begin{aligned} {\mathcal{E}}^i_{\alpha}=({\mathcal{A}}^{0, 0}({\mathcal{E}}^i), \overline{\partial}_{{\mathcal{E}}^{i}}+\alpha_1^i)\end{aligned}$$ is a holomorphic structure on ${\mathcal{E}}^i$. 
By setting $$\begin{aligned} d_{\alpha}^i=d^i+\alpha_0^i \colon {\mathcal{A}}^{0, 0}({\mathcal{E}}^i) \to {\mathcal{A}}^{0, 0}({\mathcal{E}}^{i+1})\end{aligned}$$ we have the bounded complex of holomorphic vector bundles on $X$ $$\begin{aligned} \label{E:alpha} \cdots \to 0 \to {\mathcal{E}}^{-n}_{\alpha} \stackrel{d^{-n}_{\alpha}}{\to} \cdots \to {\mathcal{E}}^{-1}_{\alpha} \stackrel{d^{-1}_{\alpha}}{\to} {\mathcal{E}}^0_{\alpha} \to 0 \to \cdots\end{aligned}$$ giving a deformation of ${\mathcal{E}}^{\bullet}$ as complexes. Conversely, a deformation of ${\mathcal{E}}^{\bullet}$ as a complex gives rise to a solution of the MC equation of the form $\alpha=(\alpha_0^i, \alpha_1^i, 0, \ldots)$. For $\alpha, \alpha' \in \mathrm{MC}(\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast})$, $\alpha$ and $\alpha'$ are called *gauge equivalent* if there exists $$\begin{aligned} \gamma=\{(\gamma_0^i, \gamma_1^i, \gamma_2^i, \ldots)\}_{i} \in \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^0, \ \gamma_j^i \in A^{0, j}({\mathcal{H}}om({\mathcal{E}}^i, {\mathcal{E}}^{i-j}))\end{aligned}$$ where $\gamma_0^i$ gives an isomorphism ${\mathcal{E}}^i \stackrel{\cong}{\to} {\mathcal{E}}^i$ of ${\mathcal{C}}^{\infty}$-vector bundles, such that we have $$\begin{aligned} \label{isom:gauge} \gamma \circ (d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha) \circ \gamma^{-1}= d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha'.\end{aligned}$$ In this case, we have the isomorphism of dg-${\mathcal{A}}^{0, \ast}({\mathcal{O}}_X)$-modules $$\begin{aligned} \gamma \colon ({\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}), d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha) \stackrel{\cong}{\to} ({\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}), d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha')\end{aligned}$$ giving isomorphic deformations of (\[seq:E0\]) in the derived category. 
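As a consistency check of the gauge action (\[isom:gauge\]), one can compute its infinitesimal form. The following is a sketch over the dual numbers, assuming that $d_{\mathfrak{g}}$ acts on a degree-zero element $g$ as the commutator $d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})} \circ g - g \circ d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}$:

```latex
% Take \gamma = \mathrm{id} + \epsilon g with g \in \mathfrak{g}_{{\mathcal{E}}^{\bullet}}^0
% and \epsilon^2 = 0, so that \gamma^{-1} = \mathrm{id} - \epsilon g.
% Abbreviating d = d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}, relation (isom:gauge) gives
\gamma \circ (d + \alpha) \circ \gamma^{-1}
  = d + \alpha + \epsilon \bigl( g \circ (d + \alpha) - (d + \alpha) \circ g \bigr)
  = d + \alpha - \epsilon \bigl( d_{\mathfrak{g}}(g) + \alpha \cdot g - g \cdot \alpha \bigr),
% hence, to first order in \epsilon,
\alpha' = \alpha - \epsilon \bigl( d_{\mathfrak{g}}(g) + \alpha \cdot g - g \cdot \alpha \bigr).
```

In particular, at $\alpha = 0$ the gauge orbits are tangent to the image of $d_{\mathfrak{g}}$, so first-order deformations modulo gauge are classified by ${\mathcal{H}}^1(\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast})$, as expected.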
Suppose that the complex (\[seq:E0\]) is quasi-isomorphic to a coherent sheaf $E$. Let $\mathrm{Def}_E$ be the deformation functor $$\begin{aligned} \mathrm{Def}_E \colon {\mathcal{A}}rt \to {\mathcal{S}}et\end{aligned}$$ sending a finite dimensional commutative local $\mathbb{C}$-algebra $(R, \mathbf{m})$ to the set of isomorphism classes of $R$-flat deformations of $E$ to $X \times {\mathop{\rm Spec}\nolimits}R$. Then it is shown in [@DDE Section 8] that we have the functorial isomorphism $$\begin{aligned} \mathrm{MC}(\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast} \otimes \mathbf{m}) /(\mbox{gauge equivalence}) \stackrel{\cong}{\to} \mathrm{Def}_E(R) \end{aligned}$$ by sending a solution of the MC equation to the cohomology of the corresponding deformation (\[deform:E\]). Resolutions of coherent sheaves ------------------------------- For a smooth projective variety $X$, we consider the deformation theory of a sheaf $$\begin{aligned} E \in {\mathop{\rm Coh}\nolimits}(X)\end{aligned}$$ in terms of dg-algebras. As we recalled in Section \[subsec:dg\], when $E$ is a vector bundle its deformation theory is described in terms of the dg-algebra (\[g:vect\]). In general, we take a resolution of $E$ by vector bundles and consider the associated dg-algebra (\[g:E\]). We first fix a resolution of $E$ by vector bundles in the following way. Let ${\mathcal{O}}_X(1)$ be an ample line bundle on $X$. Then for $m_0 \gg 0$ we have the surjection $$\begin{aligned} H^0(E(m_0)) \otimes {\mathcal{O}}_{X}(-m_0) \twoheadrightarrow E. \end{aligned}$$ Applying this construction to the kernel of the above morphism and repeating, we obtain the resolution of $E$ of the form $$\begin{aligned} \cdots \to W^i \otimes {\mathcal{O}}_X(-m_i) \stackrel{d^i}{\to} &W^{i+1} \otimes {\mathcal{O}}_X(-m_{i+1}) \to \cdots \\ &\cdots \to W^0 \otimes {\mathcal{O}}_X(-m_0) \to E \to 0\end{aligned}$$ for finite dimensional vector spaces $W^i$. 
Since $X$ is smooth, the kernel of $d^i$ for $i=-N$ with $N\gg 0$ is a vector bundle on $X$. Therefore we obtain the bounded resolution of $E$ $$\begin{aligned} \label{seq:E} 0 \to {\mathcal{E}}^{-N} \stackrel{d^{-N}}{\to} \cdots \to {\mathcal{E}}^{-1} \stackrel{d^{-1}}{\to} {\mathcal{E}}^0 \to E \to 0\end{aligned}$$ where ${\mathcal{E}}^{-N}={\mathop{\rm Ker}\nolimits}(d^{-N})$ and ${\mathcal{E}}^i=W^i \otimes {\mathcal{O}}_X(-m_i)$ for $-N<i\le 0$. By replacing $m_i$ and $N$ if necessary, the above construction can be extended to a local universal family of deformations of $E$. Let ${\mathcal{M}}$ be the stack (\[stack:M\]), and take its local atlas $$\begin{aligned} \label{atlas} (A, p) \to ({\mathcal{M}}, [E])\end{aligned}$$ at $[E] \in {\mathcal{M}}$, such that $A$ is a finite type affine scheme and a point $p \in A$ is sent to $[E]$. Let $$\begin{aligned} E_{A} \in {\mathop{\rm Coh}\nolimits}(X \times A)\end{aligned}$$ be the universal family. Let ${\mathcal{O}}_{X \times A}(1)$ be the pull-back of ${\mathcal{O}}_X(1)$ to $X \times A$. For $m_0 \gg 0$, the ${\mathcal{O}}_A$-module $H^0(E_A(m_0))$ is locally free of finite rank and we have the surjection $$\begin{aligned} H^0(E_A(m_0)) \otimes_{{\mathcal{O}}_A}{\mathcal{O}}_{X \times A}(-m_0) \twoheadrightarrow E_A. \end{aligned}$$ Arguing as above, we obtain the resolution of $E_A$ of the form $$\begin{aligned} \cdots \to {\mathcal{W}}^i \otimes_{{\mathcal{O}}_A} {\mathcal{O}}_{X \times A}(-m_i) \to &{\mathcal{W}}^{i+1} \otimes_{{\mathcal{O}}_A} {\mathcal{O}}_{X\times A}(-m_{i+1}) \to \cdots \\ &\cdots \to {\mathcal{W}}^0 \otimes_{{\mathcal{O}}_A} {\mathcal{O}}_{X\times A}(-m_0) \to E_A \to 0\end{aligned}$$ for locally free ${\mathcal{O}}_A$-modules ${\mathcal{W}}^i$ of finite rank. 
By taking the kernel at $i=-N$ for $N\gg 0$, we obtain the resolution of $E_A$ $$\begin{aligned} \label{seq:Eu} 0 \to {\mathcal{E}}^{-N}_{A} \to \cdots \to {\mathcal{E}}_A^{-1} \to {\mathcal{E}}_A^0 \to E_{A} \to 0.\end{aligned}$$ For $N\gg 0$, each ${\mathcal{E}}^i_A$ is a vector bundle on $X \times A$, since $E_A$ is an $A$-flat perfect object. By restricting it to $X \times \{p\}$, we obtain the resolution (\[seq:E\]). Minimal $A_{\infty}$-algebras {#subsec:minimal} ----------------------------- For a coherent sheaf $E$ on $X$, we fix a resolution ${\mathcal{E}}^{\bullet}$ as in (\[seq:E\]) and consider the dg-algebra (\[g:E\]) $$\begin{aligned} \label{minimal:g} \mathfrak{g}_E^{\ast} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathfrak{g}_{{\mathcal{E}}^{\bullet}}^{\ast}. \end{aligned}$$ When $E$ is a vector bundle, we just take the dg-algebra (\[g:vect\]) in the argument below. By (\[compute:hyper\]) we have $$\begin{aligned} {\mathop{\rm Ext}\nolimits}^k(E, E) ={\mathcal{H}}^k(\mathfrak{g}_E^{\ast}). \end{aligned}$$ By the homological transfer theorem, there exists a minimal $A_{\infty}$-algebra structure $\{m_n\}_{n\ge 2}$ on ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$, and a quasi-isomorphism $$\begin{aligned} \label{quasi:I} I \colon ({\mathop{\rm Ext}\nolimits}^{\ast}(E, E), \{m_n\}_{n\ge 2}) \to (\mathfrak{g}_E^{\ast}, d_{\mathfrak{g}}, \cdot)\end{aligned}$$ as $A_{\infty}$-algebras. Here the $A_{\infty}$-structure on ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$ consists of linear maps $$\begin{aligned} \label{def:mn} m_n \colon {\mathop{\rm Ext}\nolimits}^{\ast}(E, E)^{\otimes n} \to {\mathop{\rm Ext}\nolimits}^{\ast+2-n}(E, E), \ n\ge 2\end{aligned}$$ and the quasi-isomorphism (\[quasi:I\]) is a collection of linear maps $$\begin{aligned} I_n \colon {\mathop{\rm Ext}\nolimits}^{\ast}(E, E)^{\otimes n} \to \mathfrak{g}_E^{\ast+1-n}.\end{aligned}$$ Both $m_n$ and $I_n$ satisfy the $A_{\infty}$-constraints. 
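For orientation, in the minimal case ($m_1 = 0$) the lowest $A_{\infty}$-constraints take the following form; the signs in the second identity depend on conventions and are left as $\pm$. In particular $m_2$ is associative and coincides with the Yoneda product on ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$:

```latex
% n = 3: associativity of m_2 (the m_1-terms vanish since m_1 = 0)
m_2(m_2(x_1, x_2), x_3) = m_2(x_1, m_2(x_2, x_3)),
% n = 4: the first higher constraint relating m_2 and m_3
m_2(m_3(x_1, x_2, x_3), x_4) \pm m_2(x_1, m_3(x_2, x_3, x_4))
  = \pm m_3(m_2(x_1, x_2), x_3, x_4)
    \pm m_3(x_1, m_2(x_2, x_3), x_4)
    \pm m_3(x_1, x_2, m_2(x_3, x_4)).
```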
The maps $m_n$ and $I_n$ are explicitly described in terms of Kontsevich-Soibelman’s tree formula [@MR1882331] given as follows. Let us choose and fix a Kähler metric on $X$ and Hermitian metrics on the vector bundles ${\mathcal{E}}^i$. A standard argument in Hodge theory for elliptic complexes (for example, see [@MR0515872]) yields a linear embedding $$\begin{aligned} i \colon {\mathop{\rm Ext}\nolimits}^{\ast}(E, E) \hookrightarrow \mathfrak{g}_E^{\ast}\end{aligned}$$ which identifies ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$ with ${\mathop{\rm Ker}\nolimits}(\Delta)$, where $\Delta$ is the Laplacian operator $$\begin{aligned} \Delta=d_{\mathfrak{g}} d_{\mathfrak{g}}^{\ast} +d_{\mathfrak{g}}^{\ast} d_{\mathfrak{g}} \colon \mathfrak{g}_{E}^{\ast} \to \mathfrak{g}_E^{\ast}.\end{aligned}$$ Here $d_{\mathfrak{g}}^{\ast}$ is the adjoint map of $d_{\mathfrak{g}}$ with respect to the above chosen Kähler metric on $X$ and Hermitian metrics on ${\mathcal{E}}^i$. Moreover we have linear operators $$\begin{aligned} \label{hodge} p \colon \mathfrak{g}_E^{\ast} \twoheadrightarrow {\mathop{\rm Ext}\nolimits}^{\ast}(E, E), \ h \colon \mathfrak{g}_E^{\ast} \to \mathfrak{g}_E^{\ast-1}\end{aligned}$$ satisfying the following relations $$\begin{aligned} \label{relation} p\circ i={\textrm{id}}, \ i \circ p={\textrm{id}}+ d_{\mathfrak{g}} \circ h + h\circ d_{\mathfrak{g}}. \end{aligned}$$ The homotopy operator $h$ is given by $$\begin{aligned} \label{h:homotopy} h=-d_{\mathfrak{g}}^{\ast} \circ G\end{aligned}$$ where $G$ is the Green’s operator, which is an operator of order $-2$ (see [@MR0515872 Chapter IV]), hence $h$ is of order $-1$. The $A_{\infty}$-product (\[def:mn\]) is described by Kontsevich-Soibelman’s tree formula as $$\begin{aligned} \label{KS:m} m_n=\sum_{T \in {\mathcal{O}}(n)} \pm m_{n, T}\end{aligned}$$ where ${\mathcal{O}}(n)$ is the set of isomorphism classes of binary rooted trees with $n$ leaves. 
Here $m_{n, T}$ is the operation given by the composition associated to $T$, by putting $i$ on leaves, the product map $\cdot$ of $\mathfrak{g}_E^{\ast}$ on internal vertices, the homotopy $h$ on internal edges, and the projection $p$ on the root of $T$. For example, $m_3$ is given by $$\begin{aligned} m_3(x_1, x_2, x_3)= \pm p(h(i(x_1) \cdot i(x_2)) \cdot i(x_3)) \pm p(i(x_1) \cdot h(i(x_2) \cdot i(x_3))). \end{aligned}$$ The operation $I_n$ is similarly given by $$\begin{aligned} \label{KS:I} I_n=\sum_{T \in {\mathcal{O}}(n)} \pm I_{n, T}\end{aligned}$$ where $I_{n, T}$ is defined by replacing $p$ by $h$ in the construction of $m_{n, T}$. For example, $I_3$ is given by $$\begin{aligned} I_3(x_1, x_2, x_3)= \pm h(h(i(x_1) \cdot i(x_2)) \cdot i(x_3)) \pm h(i(x_1) \cdot h(i(x_2) \cdot i(x_3))). \end{aligned}$$ By [@JuTu Appendix A], there exists another $A_{\infty}$-homomorphism $$\begin{aligned} \label{P:inverse} P \colon (\mathfrak{g}_E^{\ast}, d_{\mathfrak{g}}, \cdot) \to ({\mathop{\rm Ext}\nolimits}^{\ast}(E, E), \{m_n\}_{n\ge 2})\end{aligned}$$ which is a homotopy inverse of $I$, i.e. $$\begin{aligned} P \circ I={\textrm{id}}, \quad I \circ P \stackrel{\rm{homotopic}}{\sim}{\textrm{id}}.\end{aligned}$$ Here two $A_{\infty}$-morphisms $f_1, f_2 \colon A_1 \to A_2$ between $A_{\infty}$-algebras $A_1$, $A_2$ are called *homotopic* if there is an $A_{\infty}$-homomorphism $$\begin{aligned} H \colon A_1 \to A_2 \otimes \Omega_{[0, 1]}^{\ast}\end{aligned}$$ such that $H(0)=f_1$ and $H(1)=f_2$, where $\Omega_{[0, 1]}^{\ast}$ is the dg-algebra of ${\mathcal{C}}^{\infty}$-differential forms on the interval $[0, 1]$. The $A_{\infty}$-homomorphism $P$ consists of linear maps $$\begin{aligned} P_n \colon (\mathfrak{g}_E^{\ast})^{\otimes n} \to {\mathop{\rm Ext}\nolimits}^{\ast+1-n}(E, E)\end{aligned}$$ which are also described in terms of a tree formula, which we omit (see [@JuTu Appendix A] for details). 
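Since all of $m_n$, $I_n$, $P_n$ are sums over the set ${\mathcal{O}}(n)$ of binary rooted trees with $n$ leaves, it is worth recording its size. The following check (Python, purely illustrative) verifies that $\sharp {\mathcal{O}}(n)$ equals the Catalan number $(2n-2)!/((n-1)!\,n!)$ and is bounded by $4^{n-1}$, the estimate used in Lemma \[lem:mbound\] below.

```python
from math import comb

def tree_count(n, _memo={1: 1}):
    """Number of planar binary rooted trees with n leaves.

    A tree with n >= 2 leaves is a root whose two subtrees
    split the leaves as i and n - i, for 1 <= i <= n - 1.
    """
    if n not in _memo:
        _memo[n] = sum(tree_count(i) * tree_count(n - i) for i in range(1, n))
    return _memo[n]

for n in range(2, 15):
    catalan = comb(2 * n - 2, n - 1) // n  # (2n-2)! / ((n-1)! n!)
    assert tree_count(n) == catalan        # direct count matches the formula
    assert catalan < 4 ** (n - 1)          # the bound used for convergence
```

For instance, ${\mathcal{O}}(3)$ consists of the two trees appearing in the two terms of $m_3$ above, and $\sharp {\mathcal{O}}(4) = 5$.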
Later we will use some boundedness properties of the linear maps $m_n$, $I_n$ and $P_n$. Let us take an even number $l \gg 0$, e.g. $l>2\dim X$, and consider the Sobolev $(l, 2)$-norm $\lVert - \rVert_l$ on $\mathfrak{g}_E^{\ast}$. It also induces a norm $\lVert - \rVert_{l}$ on ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$ by the embedding $i$ in (\[hodge\]). We denote by $$\begin{aligned} \mathfrak{g}_E^{\ast} \subset \widehat{\mathfrak{g}}_{E, l}^{\ast}\end{aligned}$$ the completion of $\mathfrak{g}_E^{\ast}$ with respect to the Sobolev norm $\lVert - \rVert_{l}$. \[lem:mbound\] There is a constant $C>0$ independent of $n$ such that $$\begin{aligned} \lVert m_n \rVert_l<C^n, \ \lVert I_n \rVert_l<C^n, \ \lVert P_n \rVert_l<C^n. \end{aligned}$$ Here $\lVert - \rVert_l$ for linear maps denotes the operator norm with respect to the norm $\lVert - \rVert_l$ on $\mathfrak{g}_E^{\ast}$ or ${\mathop{\rm Ext}\nolimits}^{\ast}(E, E)$. When $E$ is a vector bundle, the lemma is proved in [@MR1950958 Proposition 2.3.2] and [@JuTu Lemma A.1.1, Lemma A.1.2, Lemma A.1.5]. The key ingredient of the proof is that the maps $m_n$, $I_n$, $P_n$ are constructed as in (\[KS:m\]) using rooted trees, the number of which is bounded as $$\begin{aligned} \sharp {\mathcal{O}}(n)=\frac{(2n-2)!}{(n-1)! n!} <4^{n-1},\end{aligned}$$ and the fact that the homotopy operator $h$ and the product map on $\mathfrak{g}_E^{\ast}$ extend to bounded operators $$\begin{aligned} \widehat{\mathfrak{g}}_{E, l}^{\ast} \stackrel{h}{\to} \widehat{\mathfrak{g}}_{E, l}^{\ast}, \quad \widehat{\mathfrak{g}}_{E, l}^{\ast} \times \widehat{\mathfrak{g}}_{E, l}^{\ast} \stackrel{\cdot}{\to} \widehat{\mathfrak{g}}_{E, l}^{\ast}. 
\end{aligned}$$ When $E$ is a coherent sheaf which is not necessarily a vector bundle, the above properties still hold for the complex (\[seq:E\]) without any modification: the boundedness of $h$ is a general fact for elliptic complexes (see [@MR0515872 Theorem 4.12]), as it is an operator of order $-1$ given by (\[h:homotopy\]), and that of the product $\cdot$ follows from our choice of $l\gg 0$ and a standard result of Sobolev spaces (for example see [@Willie Theorem 25]). Therefore the same argument as in the vector bundle case proves the lemma. Deformations by $A_{\infty}$-algebras ------------------------------------- For $x \in {\mathop{\rm Ext}\nolimits}^1(E, E)$, we consider the infinite series $$\begin{aligned} \label{kappa} \kappa(x){\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\sum_{n\ge 2}m_n(x, \ldots, x) \end{aligned}$$ where each term $m_n(x, \ldots, x)$ is an element of ${\mathop{\rm Ext}\nolimits}^2(E, E)$. By Lemma \[lem:mbound\], there is an analytic open neighborhood $$\begin{aligned} \label{open:U} 0 \in {\mathcal{U}}\subset {\mathop{\rm Ext}\nolimits}^1(E, E)\end{aligned}$$ such that the series (\[kappa\]) absolutely converges on ${\mathcal{U}}$ to give a complex analytic morphism $$\begin{aligned} \label{kappa:U} \kappa \colon {\mathcal{U}}\to {\mathop{\rm Ext}\nolimits}^2(E, E). \end{aligned}$$ The equation $\kappa(x)=0$ is the Maurer-Cartan equation for the $A_{\infty}$-algebra (\[def:mn\]). We set $T$ to be $$\begin{aligned} \label{T:uU} T {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\kappa^{-1}(0) \subset {\mathcal{U}}\end{aligned}$$ i.e. $T$ is the closed complex analytic subspace defined by the ideal generated by the components of the map (\[kappa:U\]). 
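The role of the bound in Lemma \[lem:mbound\] can be made explicit here. The following estimate, a sketch of the standard argument, exhibits a concrete ball on which the series (\[kappa\]) converges absolutely:

```latex
% For x \in Ext^1(E, E) with C \lVert x \rVert_l < 1, Lemma (lem:mbound) gives
\lVert m_n(x, \ldots, x) \rVert_l
  \le \lVert m_n \rVert_l \, \lVert x \rVert_l^{n}
  <   (C \lVert x \rVert_l)^{n},
% so the series defining \kappa(x) is dominated by a geometric series:
\Bigl\lVert \sum_{n \ge 2} m_n(x, \ldots, x) \Bigr\rVert_l
  \le \sum_{n \ge 2} (C \lVert x \rVert_l)^{n}
  =   \frac{(C \lVert x \rVert_l)^{2}}{1 - C \lVert x \rVert_l}.
```

Thus any open subset ${\mathcal{U}}$ contained in the ball $\lVert x \rVert_l < 1/C$ works in (\[open:U\]); the same estimate, with the sum starting at $n = 1$, applies to the series $\sum_{n \ge 1} I_n(x, \ldots, x)$ considered below.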
On the other hand, for $x \in {\mathop{\rm Ext}\nolimits}^1(E, E)$ we also consider the infinite series $$\begin{aligned} \label{series:I} I_{\ast}(x) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\sum_{n\ge 1}I_n(x, \ldots, x) \end{aligned}$$ where each term $I_n(x, \ldots, x)$ is an element of $\mathfrak{g}_E^1$. By Lemma \[lem:mbound\], for a sufficiently small open subset (\[open:U\]) the series (\[series:I\]) absolutely converges on ${\mathcal{U}}$ to give a morphism of Banach analytic spaces $$\begin{aligned} \label{I:analytic} I_{\ast} \colon {\mathcal{U}}\to \widehat{\mathfrak{g}}_{E, l}^1. \end{aligned}$$ \[prop:restrict\] The morphism (\[I:analytic\]) restricts to the morphism of Banach analytic spaces $$\begin{aligned} \label{restrict:Banach} I_{\ast} \colon T\to \mathrm{MC}(\mathfrak{g}_E^{\ast}). \end{aligned}$$ Here $\mathrm{MC}(\mathfrak{g}_E^{\ast})$ is the set of solutions (\[sol:MCeq\]) of the Maurer-Cartan equation of the dg-algebra $\mathfrak{g}_E^{\ast}$. The result is proved in [@JuTu Section 2.2, Lemma A.1.3] when $E$ is a vector bundle, and the same argument applies to the complex (\[seq:E\]). Since $I_{\ast}$ is an $A_{\infty}$-homomorphism, it preserves the MC locus, so it sends $T$ to $\mathrm{MC}(\widehat{\mathfrak{g}}_{E, l}^{\ast})$. For $x \in T$, the smoothness of $I_{\ast}(x)$ follows from the argument of [@JuTu Lemma A.1.3], by replacing $\overline{\partial}$ in *loc.cit.* by the differential $d_{\mathfrak{g}}$ of $\widehat{\mathfrak{g}}_{E, l}^{\ast}$. Therefore we obtain the morphism (\[restrict:Banach\]). Let ${\mathcal{M}}$ be the moduli stack of coherent sheaves on $X$, which we regard as a complex analytic stack. The above lemma implies the following proposition: \[prop:rest2\] By shrinking ${\mathcal{U}}$ if necessary, the morphism (\[I:analytic\]) induces the morphism of complex analytic stacks $$\begin{aligned} \label{mor:stack} I_{\ast} \colon T \to {\mathcal{M}}. 
\end{aligned}$$ The map in Lemma \[prop:restrict\] corresponds to the element $$\begin{aligned} \alpha \in \mathfrak{g}_E^{1} \otimes \Gamma({\mathcal{O}}_T)\end{aligned}$$ satisfying the MC equation of the dg-algebra $\mathfrak{g}_E^{\ast} \otimes \Gamma({\mathcal{O}}_T)$. Then we obtain the dg-${\mathcal{A}}^{0, \ast}({\mathcal{O}}_X) \otimes {\mathcal{O}}_T$-module $$\begin{aligned} \label{A:coh} ({\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}) \otimes {\mathcal{O}}_T, d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet}) \otimes {\mathcal{O}}_T}+\alpha). \end{aligned}$$ Here ${\mathcal{E}}^{\bullet}$ is the complex (\[seq:E\]). The dg-module (\[A:coh\]) is a bounded complex of ${\mathcal{O}}_{X \times T}$-modules. We can show that each cohomology of (\[A:coh\]) is a coherent ${\mathcal{O}}_{X \times T}$-module as in [@MR2648899 Lemma 4.1.5], which essentially follows the argument in [@MR1079726 p51-52]. Indeed for each $t \in T$ and $x \in X$, by the proof of [@MR2648899 Lemma 4.1.5] there is an open neighborhood $x \in U$ such that there is a degree zero ${\mathcal{C}}^{\infty}$-isomorphism $$\begin{aligned} \label{phi_t} \phi_t \colon {\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})|_{U} \stackrel{\cong}{\to} {\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})|_{U}\end{aligned}$$ satisfying that $$\begin{aligned} \notag \phi_t^{-1} \circ (d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\alpha_t) \circ \phi_t =d_{{\mathcal{A}}^{0, \ast}({\mathcal{E}}^{\bullet})}+\beta_t.\end{aligned}$$ Here in the notation (\[write:alpha\]), $\beta_t$ is of the form $$\begin{aligned} \beta_t=((\beta_0^{i})_t, 0, 0, \ldots), \ (\beta_0^i)_t \in {\mathop{\rm Hom}\nolimits}({\mathcal{E}}^i|_{U}, {\mathcal{E}}^{i+1}|_{U}).\end{aligned}$$ This implies that the dg-module (\[A:coh\]) restricted to $U \times \{t\}$ is gauge equivalent to a complex which is quasi-isomorphic to a bounded complex of holomorphic vector bundles on $U$. 
The isomorphism (\[phi\_t\]) can be found by solving a certain differential equation, as in [@MR1079726 p51-52]. As remarked in [@MR1079726 p52], the solution $\phi_t$ is analytic in $t \in T$ as $\alpha_t$ is. Therefore by shrinking $U$, $T$ if necessary we see that (\[A:coh\]) restricted to $U \times T$ is gauge equivalent to a complex which is quasi-isomorphic to a bounded complex of analytic vector bundles on $U \times T$. In particular, each cohomology of (\[A:coh\]) is coherent. Therefore (\[A:coh\]) determines an object $$\begin{aligned} {\mathcal{E}}_T^{\bullet} \in D^b_{{\mathop{\rm Coh}\nolimits}(X \times T)}({\mathop{\rm Mod}\nolimits}{\mathcal{O}}_{X \times T}).\end{aligned}$$ We show that by shrinking ${\mathcal{U}}$ if necessary, the object ${\mathcal{E}}_T^{\bullet}$ is quasi-isomorphic to a $T$-flat sheaf $$\begin{aligned} E_T {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}{\mathcal{H}}^0({\mathcal{E}}_T^{\bullet}) \in {\mathop{\rm Coh}\nolimits}(X \times T). \end{aligned}$$ By the construction of ${\mathcal{E}}_T^{\bullet}$, at $t=0$ we have ${\mathcal{E}}_T^{\bullet} {\stackrel{\textbf{L}}{\otimes}}_{{\mathcal{O}}_T} {\mathcal{O}}_{\{0\}} \cong E$. We have the spectral sequence $$\begin{aligned} E_2^{p, q}= {\mathcal{T}}or_{-p}^{{\mathcal{O}}_{X \times T}}({\mathcal{H}}^q({\mathcal{E}}_T^{\bullet}), {\mathcal{O}}_{X \times \{0\}}) \Rightarrow {\mathcal{H}}^{p+q}(E).\end{aligned}$$ Let $q_0$ be the maximal $q \in \mathbb{Z}$ such that ${\mathcal{H}}^q({\mathcal{E}}_T^{\bullet}) \neq 0$. If $q_0>0$, then by the above spectral sequence we have ${\mathcal{H}}^{q_0}({\mathcal{E}}_T^{\bullet})|_{t=0}=0$. Therefore by shrinking ${\mathcal{U}}$, we have $q_0 \le 0$, and as $E\neq 0$ it follows that $q_0=0$ by the above spectral sequence. Moreover we have $E_2^{-1, 0}=0$, which implies that $E_T$ is flat at $t=0$, hence $E_2^{p, 0}=0$ for any $p<0$. 
Then by the above spectral sequence again, we have $E_2^{0, -1}=0$, hence we may assume ${\mathcal{H}}^{-1}({\mathcal{E}}_{T}^{\bullet})=0$. Inductively, by shrinking ${\mathcal{U}}$ we see that ${\mathcal{H}}^q({\mathcal{E}}_T^{\bullet})=0$ for any $q<0$. Therefore the above claim holds. By the universal property of ${\mathcal{M}}$, the sheaf $E_T$ defines the morphism (\[mor:stack\]). \[prop:rest3\] The morphism of complex analytic stacks $I_{\ast} \colon T \to {\mathcal{M}}$ in (\[mor:stack\]) is smooth of relative dimension $\dim {\mathop{\rm Aut}\nolimits}(E)$. We first show that $I_{\ast} \colon T \to {\mathcal{M}}$ is smooth. Let $(S, s)$ be a complex analytic space and $(S, s) \to ({\mathcal{M}}, [E])$ a morphism of complex analytic stacks which sends $s$ to $[E]$. It is enough to show that, after replacing $S$ by an open neighborhood of $s \in S$ if necessary, we have the factorization $$\begin{aligned} \label{fact:S} (S, s) \to (T, 0) \stackrel{I_{\ast}}{\to} ({\mathcal{M}}, [E]).\end{aligned}$$ By shrinking $S$ if necessary, we may assume that $S \to {\mathcal{M}}$ factors through $$\begin{aligned} (S, s) \stackrel{f_1}{\to} (A, p) \to ({\mathcal{M}}, [E])\end{aligned}$$ where the right morphism is the local atlas in (\[atlas\]). Let ${\mathcal{E}}_A^{\bullet}$ be the complex on $X \times A$ constructed in (\[seq:Eu\]). By pulling ${\mathcal{E}}_A^{\bullet}$ back along $f_1$, we obtain the complex $$\begin{aligned} {\mathcal{E}}_S^{\bullet}=f^{\ast}_1{\mathcal{E}}_A^{\bullet}. \end{aligned}$$ Then as described in Section \[subsec:dg\], the complex structures of each term of ${\mathcal{E}}_S^{\bullet}$ and their differentials give rise to a solution of the MC equation of the dg-algebra $\mathfrak{g}_E^{\ast} \otimes {\mathcal{O}}_S(S)$. Thus we obtain a map of Banach analytic spaces $$\begin{aligned} f_2 \colon (S, s) \to (\mathrm{MC}(\mathfrak{g}_E^{\ast}), 0). 
\end{aligned}$$ We are left to prove the existence of a morphism $f_3 \colon (S, s) \to (T, 0)$ such that the composition $$\begin{aligned} (S, s) \stackrel{f_3}{\to} (T, 0) \stackrel{I_{\ast}}{\to} (\mathrm{MC}(\mathfrak{g}_E^{\ast}), 0)\end{aligned}$$ agrees with $f_2$ up to gauge equivalence. The existence of such an $f_3$ is proved in [@JuTu Theorem 2.2.2] when $E$ is a vector bundle, and the same argument applies to the complex of vector bundles (\[seq:E\]). Below we give an outline of the proof. For $y \in \mathfrak{g}_E^{1}$, consider the series $$\begin{aligned} P_{\ast}(y) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\sum_{n\ge 1}P_n(y, \ldots, y)\end{aligned}$$ where $P$ is the homotopy inverse of $I$ in (\[P:inverse\]). By Lemma \[lem:mbound\], there is an open neighborhood $0 \in {\mathcal{U}}' \subset \mathfrak{g}_E^1$ in the $\lVert - \rVert_{l}$-norm such that $P_{\ast}$ gives the analytic map $$\begin{aligned} P_{\ast} \colon {\mathcal{U}}' \to {\mathop{\rm Ext}\nolimits}^1(E, E). \end{aligned}$$ Since $P$ is an $A_{\infty}$-homomorphism, after shrinking ${\mathcal{U}}'$ if necessary the above map induces the morphism of Banach analytic spaces $$\begin{aligned} P_{\ast} \colon \mathrm{MC}(\mathfrak{g}_E^{\ast}) \cap {\mathcal{U}}' \to T. \end{aligned}$$ Therefore by shrinking $S$ if necessary so that $f_2(S) \subset {\mathcal{U}}'$, we have the analytic map $$\begin{aligned} f_3= P_{\ast} \circ f_2 \colon (S, s) \to (T, 0). \end{aligned}$$ It remains to show that the two maps $$\begin{aligned} I_{\ast} \circ f_3=I_{\ast} \circ P_{\ast} \circ f_2, \ f_2 \colon (S, s) \to (\mathrm{MC}(\mathfrak{g}_E^{\ast}), 0)\end{aligned}$$ are gauge equivalent. Since $P$ is a homotopy inverse of $I$, there is an $A_{\infty}$-homomorphism $$\begin{aligned} H \colon \mathfrak{g}_E^{\ast} \to \mathfrak{g}_E^{\ast} \otimes \Omega_{[0, 1]}^{\ast}\end{aligned}$$ such that $H(0)={\textrm{id}}$ and $H(1)=I \circ P$. 
Then $H$ also satisfies the boundedness property as in Lemma \[lem:mbound\] (see [@JuTu Corollary A.2.7]), so that after shrinking ${\mathcal{U}}'$ if necessary the $A_{\infty}$-homomorphism $H$ induces the analytic map $$\begin{aligned} H_{\ast} \colon \mathrm{MC}(\mathfrak{g}_E^{\ast}) \cap {\mathcal{U}}' \to \mathrm{MC}(\mathfrak{g}_E^{\ast} \otimes \Omega_{[0, 1]}^{\ast}). \end{aligned}$$ Then the analytic map $$\begin{aligned} H_{\ast} \circ f_2 \colon S \to \mathrm{MC}(\mathfrak{g}_E^{\ast} \otimes \Omega_{[0, 1]}^{\ast})\end{aligned}$$ satisfies $$\begin{aligned} H_{\ast} \circ f_2(0)=f_2, \quad H_{\ast} \circ f_2(1)= I_{\ast} \circ P_{\ast} \circ f_2.\end{aligned}$$ This implies that $f_2$ and $I_{\ast} \circ P_{\ast} \circ f_2$ are gauge equivalent in the sense of [@MR1950958 Definition 2.2.2]. As proved in [@MR1950958 Lemma 2.2.2], this notion of gauge equivalence coincides with the gauge equivalence in (\[isom:gauge\]). Therefore the smoothness of $I_{\ast}$ follows. Finally, the relative dimension of $I_{\ast} \colon T \to {\mathcal{M}}$ is $\dim {\mathop{\rm Aut}\nolimits}(E)$ since the dimension of the tangent space of $T$ at $0$ is $\dim {\mathop{\rm Ext}\nolimits}^1(E, E)$, and that of ${\mathcal{M}}$ at $[E]$ is $\dim {\mathop{\rm Ext}\nolimits}^1(E, E)-\dim {\mathop{\rm Aut}\nolimits}(E)$. Local descriptions of moduli stacks of semistable sheaves {#sec:thm} ========================================================= In this section, we use the results in the previous sections to prove Theorem \[thm:precise\]. By applying the arguments to the CY 3-fold case, we also obtain Corollary \[cor:CY3\]. Convergent relation of the Ext-quiver ------------------------------------- For a smooth projective variety $X$, let $$\begin{aligned} E_{\bullet}=(E_1, \ldots, E_k)\end{aligned}$$ be a simple collection of coherent sheaves on $X$, and $Q_{E_{\bullet}}$ the associated Ext-quiver (see Subsection \[subsec:Extquiver\]). 
Here we construct a convergent relation of $Q_{E_{\bullet}}$ from the minimal $A_{\infty}$-structure on the derived category of coherent sheaves on $X$. Let us consider the sheaf on $X$ of the form $$\begin{aligned} \label{object:E} E=\bigoplus_{i=1}^k V_i \otimes E_i\end{aligned}$$ for vector spaces $V_i$, and set $m_i=\dim V_i$. Note that we have the decomposition $$\begin{aligned} \label{Ext:decom0} {\mathop{\rm Ext}\nolimits}^{\ast}(E, E) &=\bigoplus_{1\le a, b\le k} {\mathop{\rm Hom}\nolimits}(V_a, V_b) \otimes {\mathop{\rm Ext}\nolimits}^{\ast}(E_a, E_b).\end{aligned}$$ Let us take a resolution ${\mathcal{E}}^{\bullet} \to E$ as in (\[seq:E\]). From its construction, it naturally decomposes into the direct sum of resolutions of $E_i$. Namely, let $$\begin{aligned} \notag 0 \to {\mathcal{E}}^{-N}_i \stackrel{d^{-N}_i}{\to} \cdots \to {\mathcal{E}}^{-1}_i \stackrel{d^{-1}_i}{\to} {\mathcal{E}}^0_i \to E_i \to 0\end{aligned}$$ be the resolution (\[seq:E\]) applied for $E_i$. By taking $N\gg0$, we may assume that $N$ is independent of $i$. Then the complex ${\mathcal{E}}^{\bullet}$ in (\[seq:E\]) is $$\begin{aligned} {\mathcal{E}}^{\bullet}=\bigoplus_{i=1}^k V_i \otimes {\mathcal{E}}_i^{\bullet}. \end{aligned}$$ Therefore we have the decompositions $$\begin{aligned} \label{Ext:decom} \mathfrak{g}_E^{\ast} &=\bigoplus_{1\le a, b \le k} {\mathop{\rm Hom}\nolimits}(V_a, V_b) \otimes A^{0, \ast} ({\mathcal{H}}om^{\ast}({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet})). \end{aligned}$$ Here $\mathfrak{g}_E^{\ast}$ is the dg-algebra (\[minimal:g\]), defined via the above complex ${\mathcal{E}}^{\bullet}$. The decomposition of $\mathfrak{g}_E^{\ast}$ is compatible with the Laplacian operator $\Delta$. 
Indeed each complex ${\mathcal{A}}^{0, \ast}({\mathcal{H}}om^{\ast} ({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet}))$ is elliptic and hence we have linear operators $$\begin{aligned} &i_{a, b} \colon {\mathop{\rm Ext}\nolimits}^{\ast}(E_a, E_b) \hookrightarrow A^{0, \ast}({\mathcal{H}}om^{\ast}({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet})) \\ &p_{a, b} \colon A^{0, \ast}({\mathcal{H}}om^{\ast}({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet})) \twoheadrightarrow {\mathop{\rm Ext}\nolimits}^{\ast}(E_a, E_b) \\ & h_{a, b} \colon A^{0, \ast}({\mathcal{H}}om^{\ast}({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet})) \to A^{0, \ast-1}({\mathcal{H}}om^{\ast}({\mathcal{E}}_a^{\bullet}, {\mathcal{E}}_b^{\bullet}))\end{aligned}$$ satisfying the same relations as (\[relation\]) and $$\begin{aligned} \label{u:sum} \star=\bigoplus_{1\le a, b\le k} {\textrm{id}}_{{\mathop{\rm Hom}\nolimits}(V_a, V_b)} \otimes \star_{a, b}\end{aligned}$$ where $\star$ is either $i$ or $p$ or $h$ given in Subsection \[subsec:minimal\]. Let $\overline{E}$ be the coherent sheaf on $X$ defined by $$\begin{aligned} \label{E:bar} \overline{E} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\bigoplus_{i=1}^k E_i\end{aligned}$$ and consider the $A_{\infty}$-product $$\begin{aligned} \label{mn:Eb} m_n \colon {\mathop{\rm Ext}\nolimits}^1(\overline{E}, \overline{E})^{\otimes n} \to {\mathop{\rm Ext}\nolimits}^2(\overline{E}, \overline{E}). 
\end{aligned}$$ By the relation (\[u:sum\]) and the explicit formula (\[def:mn\]) of the $A_{\infty}$-product, the map (\[mn:Eb\]) only consists of the direct sum factors of the form $$\begin{aligned} \notag m_n \colon {\mathop{\rm Ext}\nolimits}^1(E_{\psi(1)}, E_{\psi(2)}) \otimes &{\mathop{\rm Ext}\nolimits}^1(E_{\psi(2)}, E_{\psi(3)}) \otimes \cdots \\ \cdots \otimes &{\mathop{\rm Ext}\nolimits}^1(E_{\psi(n)}, E_{\psi(n+1)}) \label{factor} \to {\mathop{\rm Ext}\nolimits}^2(E_{\psi(1)}, E_{\psi(n+1)})\end{aligned}$$ for maps $\psi \colon \{1, \ldots, n+1\} \to \{1, \ldots, k\}$, which give a minimal $A_{\infty}$-category structure on the dg-category generated by $(E_1, \ldots, E_k)$. By taking the dual and the products of (\[factor\]) for all $n\ge 2$, we obtain the linear map $$\begin{aligned} \mathbf{m}^{\vee} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\prod_{n\ge 2} m_n^{\vee} \colon {\mathop{\rm Ext}\nolimits}^2(\overline{E}, \overline{E})^{\vee} \to \prod_{n\ge 2} \bigoplus_{\begin{subarray}{c} \{1, \ldots, n+1 \} \\ \stackrel{\psi}{\to} \{1, \ldots, k\} \end{subarray}} &{\mathop{\rm Ext}\nolimits}^1(E_{\psi(1)}, E_{\psi(2)})^{\vee} \otimes \cdots \\ &\cdots \otimes {\mathop{\rm Ext}\nolimits}^1(E_{\psi(n)}, E_{\psi(n+1)})^{\vee}. \end{aligned}$$ Note that an element of the RHS is an element of $\mathbb{C}{[\![}Q_{E_{\bullet}} {]\!]}$ by (\[f:element\]). Let $\{\mathbf{o}_1, \ldots, \mathbf{o}_l\}$ be a basis of ${\mathop{\rm Ext}\nolimits}^2(\overline{E}, \overline{E})^{\vee}$ and set $$\begin{aligned} f_i=\mathbf{m}^{\vee}(\mathbf{o}_i) \in \mathbb{C}{[\![}Q_{E_{\bullet}} {]\!]}. \end{aligned}$$ Then by Lemma \[lem:mbound\], we have $f_i \in \mathbb{C}\{Q_{E_{\bullet}}\}$. We obtain the convergent relation of $Q_{E_{\bullet}}$ $$\begin{aligned} \label{relation:I} I_{E_{\bullet}} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}(f_1, \ldots, f_l). 
\end{aligned}$$ Deformations of direct sums of simple collections ------------------------------------------------- We consider the deformations of sheaves of the form (\[object:E\]). By the decomposition (\[Ext:decom0\]), the space ${\mathop{\rm Ext}\nolimits}^1(E, E)$ is identified with the space of $Q_{E_{\bullet}}$-representations $$\begin{aligned} \label{identify:Ext} {\mathop{\rm Ext}\nolimits}^1(E, E)=\mathrm{Rep}_{Q_{E_{\bullet}}}(\vec{m}). \end{aligned}$$ Here $\vec{m}$ is the dimension vector of $Q_{E_{\bullet}}$ given by $m_i=\dim V_i$. We also have $$\begin{aligned} \label{G:aut} G={\mathop{\rm Aut}\nolimits}(E)=\prod_{i=1}^k {\mathop{\rm GL}\nolimits}(V_i)\end{aligned}$$ and the adjoint action of ${\mathop{\rm Aut}\nolimits}(E)$ on ${\mathop{\rm Ext}\nolimits}^1(E, E)$ coincides with the action (\[G:act\]) under the identification (\[identify:Ext\]). Recall that in (\[kappa:U\]) and (\[I:analytic\]), we constructed analytic maps $$\begin{aligned} \label{construct:k} \kappa \colon {\mathcal{U}}\to {\mathop{\rm Ext}\nolimits}^2(E, E), \ I_{\ast} \colon {\mathcal{U}}\to \widehat{\mathfrak{g}}_{E, l}^{\ast}\end{aligned}$$ for a sufficiently small analytic open subset $0 \in {\mathcal{U}}\subset {\mathop{\rm Ext}\nolimits}^1(E, E)$. Explicitly under the identification (\[identify:Ext\]), for a $Q_{E_{\bullet}}$-representation $$\begin{aligned} u=(u_e)_{e \in E(Q_{E_{\bullet}})} \in {\mathcal{U}}, \ u_e \colon V_{s(e)} \to V_{t(e)},\end{aligned}$$ we have the following identities by the decompositions (\[Ext:decom0\]), (\[Ext:decom\]), (\[u:sum\]). 
$$\begin{aligned} \label{kI:explict} &\kappa(u)=\sum_{\begin{subarray}{c} n\ge 2, \\ \{1, \ldots, n+1\} \stackrel{\psi}{\to} \{1, \ldots, k\} \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} m_n(e_1^{\vee}, \ldots, e_n^{\vee}) \cdot u_{e_n} \circ \cdots \circ u_{e_2} \circ u_{e_1}, \\ &\notag I_{\ast}(u)=\sum_{\begin{subarray}{c} n\ge 2, \\ \{1, \ldots, n+1\} \stackrel{\psi}{\to} \{1, \ldots, k\} \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} I_n(e_1^{\vee}, \ldots, e_n^{\vee}) \cdot u_{e_n} \circ \cdots \circ u_{e_2} \circ u_{e_1}. \end{aligned}$$ Here for $e \in E_{i, j}$, the element $e^{\vee} \in {\mathop{\rm Ext}\nolimits}^1(E_i, E_j)$ is defined as in (\[e:dual\]). \[lem:Extsat\] There is a saturated open subset ${\mathcal{V}}$ in ${\mathop{\rm Ext}\nolimits}^1(E, E)$ w.r.t. the $G$-action on ${\mathop{\rm Ext}\nolimits}^1(E, E)$, satisfying $$\begin{aligned} 0 \in {\mathcal{V}}\subset G \cdot {\mathcal{U}}\subset {\mathop{\rm Ext}\nolimits}^1(E, E)\end{aligned}$$ such that the maps in (\[construct:k\]) induce $G$-equivariant analytic maps $$\begin{aligned} \kappa \colon {\mathcal{V}}\to {\mathop{\rm Ext}\nolimits}^2(E, E), \ I_{\ast} \colon {\mathcal{V}}\to \widehat{\mathfrak{g}}_{E, l}^{\ast}\end{aligned}$$ Here $G$ acts on ${\mathop{\rm Ext}\nolimits}^2(E, E)$ and $\widehat{\mathfrak{g}}_{E, l}^{\ast}$ by adjoint. The formal series $\kappa$ and $I_{\ast}$ in (\[kI:explict\]) are obviously $G$-equivariant. Therefore for a choice of ${\mathcal{U}}$ in (\[kappa:U\]), (\[I:analytic\]), the maps $\kappa$, $I_{\ast}$ can be extended to analytic maps $$\begin{aligned} \kappa \colon G \cdot {\mathcal{U}}\to {\mathop{\rm Ext}\nolimits}^2(E, E), \ I_{\ast} \colon G \cdot {\mathcal{U}}\to \widehat{\mathfrak{g}}_{E, l}^{\ast}.\end{aligned}$$ By Lemma \[lem:saturated2\], there is a saturated analytic open subset ${\mathcal{V}}\subset G \cdot {\mathcal{U}}$ which contains $0 \in {\mathop{\rm Ext}\nolimits}^1(E, E)$, so the lemma follows. 
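As an illustration of the identification (\[identify:Ext\]) and the maps (\[construct:k\]) (an example added for concreteness; it is not used in the argument), take $k=1$ and $E_1={\mathcal{O}}_x$ the structure sheaf of a point $x \in X$ for a smooth projective 3-fold $X$. Then $$\begin{aligned} {\mathop{\rm Ext}\nolimits}^1({\mathcal{O}}_x, {\mathcal{O}}_x) \cong T_x X \cong \mathbb{C}^3, \ {\mathop{\rm Ext}\nolimits}^2({\mathcal{O}}_x, {\mathcal{O}}_x) \cong \wedge^2 T_x X \cong \mathbb{C}^3,\end{aligned}$$ so the Ext-quiver $Q_{E_{\bullet}}$ has one vertex, three loops and three relations. For $E=V_1 \otimes {\mathcal{O}}_x$ with $\dim V_1=m$, the identification (\[identify:Ext\]) reads $$\begin{aligned} {\mathop{\rm Ext}\nolimits}^1(E, E)=\mathrm{Rep}_{Q_{E_{\bullet}}}(m)={\mathop{\rm Hom}\nolimits}(V_1, V_1)^{\oplus 3},\end{aligned}$$ with $G={\mathop{\rm GL}\nolimits}(V_1)$ acting by simultaneous conjugation; a representation $u$ is a triple of $m \times m$ matrices.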
Let ${\mathcal{V}}\subset {\mathop{\rm Ext}\nolimits}^1(E, E)$ be as in Lemma \[lem:Extsat\]. By Lemma \[lem:saturated\], it is written as $$\begin{aligned} {\mathcal{V}}=\pi_{Q_{E_{\bullet}}}^{-1}(V)\end{aligned}$$ for some analytic open subset $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$, where $\pi_{Q_{E_{\bullet}}}$ is the quotient map $$\begin{aligned} \pi_{Q_{E_{\bullet}}} \colon \mathrm{Rep}_{Q_{E_{\bullet}}}(\vec{m}) \to M_{Q_{E_{\bullet}}}(\vec{m}). \end{aligned}$$ Let $R \subset {\mathcal{V}}$ be the closed analytic subspace given by $$\begin{aligned} R {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\kappa^{-1}(0) \subset {\mathcal{V}}\subset {\mathop{\rm Ext}\nolimits}^1(E, E).\end{aligned}$$ By the definition of $I_{E_{\bullet}}$ in (\[relation:I\]), under the identification (\[identify:Ext\]) we have $$\begin{aligned} R=\mathrm{Rep}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}.\end{aligned}$$ Here we have used the notation (\[Rep:V\]) for the RHS. Therefore in the notation of Definition \[defi:cmoduli\], we have $$\begin{aligned} {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}=[R/G]. \end{aligned}$$ \[lem:etale\] By shrinking $V$ if necessary, the map $I_{\ast}$ in Lemma \[lem:Extsat\] induces the smooth morphism of relative dimension zero $$\begin{aligned} \label{induce:MQ} I_{\ast} \colon {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \to {\mathcal{M}}. \end{aligned}$$ Here ${\mathcal{M}}$ is the moduli stack of coherent sheaves on $X$. By Lemma \[prop:restrict\] and Proposition \[prop:rest3\], the map $I_{\ast}$ in Lemma \[lem:Extsat\] gives the analytic maps $$\begin{aligned} I_{\ast} \colon R \cap {\mathcal{U}}\to \mathrm{MC}(\mathfrak{g}_E^{\ast}), \ I_{\ast} \colon R \cap {\mathcal{U}}\to {\mathcal{M}}. 
\end{aligned}$$ Then by the $G$-equivariance of $I_{\ast}$ and the property ${\mathcal{V}}\subset G \cdot {\mathcal{U}}$ in Lemma \[lem:Extsat\], the above maps extend to the $G$-equivariant analytic maps $$\begin{aligned} \label{induce:ana} I_{\ast} \colon R \to \mathrm{MC}(\mathfrak{g}_E^{\ast}), \ I_{\ast} \colon R \to {\mathcal{M}}. \end{aligned}$$ Here the right map is induced by the left map as in the proof of Proposition \[prop:rest3\]. By the $G$-equivariance of $I_{\ast}$, the right map of (\[induce:ana\]) descends to the quotient by $G$ to induce (\[induce:MQ\]), which is of relative dimension zero by Lemma \[lem:Extsat\]. Functoriality of $I_{\ast}$ {#subsec:functI} --------------------------- In this subsection, using the explicit description (\[kI:explict\]) of the map $I_{\ast}$ in Proposition \[prop:complete\], we show that it enjoys a functoriality property. In particular, it implies that $I_{\ast}$ sends subsheaves to subrepresentations of Ext-quivers. This fact will not be used in the rest of this section, but will be needed in the proof of Theorem \[cor:equiv:I\], which in turn is used in Theorem \[thm:onedim\] to compare stability conditions of sheaves and quiver representations. For each $i \in V(Q_{E_{\bullet}})=\{1, 2, \ldots, k\}$, let $V_i, V_i'$ be vector spaces with dimensions $m_i$, $m_i'$, and set $$\begin{aligned} E=\bigoplus_{i=1}^k V_i \otimes E_i, \ E'=\bigoplus_{i=1}^k V_i' \otimes E_i. \end{aligned}$$ Let us take $$\begin{aligned} \label{uu'} u=(u_e)_{e \in E(Q_{E_{\bullet}})}, \ u'=(u_e')_{e \in E(Q_{E_{\bullet}})}\end{aligned}$$ where $u_e, u_e'$ are linear maps $$\begin{aligned} u_e \colon V_{s(e)} \to V_{t(e)}, \ u_e' \colon V_{s(e)}' \to V_{t(e)}',\end{aligned}$$ whose operator norms are sufficiently small so that they give $Q_{E_{\bullet}}$-representations satisfying the relation $I_{E_{\bullet}}$.
Let $\phi_i \colon V_i \to V_i'$ be linear maps for $1\le i\le k$ such that the following diagram commutes for each $e \in E(Q_{E_{\bullet}})$ $$\begin{aligned} \xymatrix{ V_{s(e)} \ar[r]^-{u_e} \ar[d]_-{\phi_{s(e)}} & V_{t(e)} \ar[d]^-{\phi_{t(e)}} \\ V_{s(e)}' \ar[r]_-{u_e'} & V_{t(e)}'. }\end{aligned}$$ Then each term of $$\begin{aligned} \label{Iu:MC} I_{\ast}(u) \in \mathrm{MC}(\mathfrak{g}_{E}^{\ast}), \ I_{\ast}(u') \in \mathrm{MC}(\mathfrak{g}_{E'}^{\ast})\end{aligned}$$ in (\[kI:explict\]) satisfies $$\begin{aligned} &I_n(e_1^{\vee}, \ldots, e_n^{\vee}) \cdot \phi_{t(e_n)} \circ u_{e_n} \circ \cdots \circ u_{e_1} \\ &=I_n(e_1^{\vee}, \ldots, e_n^{\vee}) \cdot u_{e_n}' \circ \cdots \circ u_{e_1}' \circ \phi_{s(e_1)}.\end{aligned}$$ This implies that the map $$\begin{aligned} \bigoplus_{i=1}^k \phi_i \otimes {\textrm{id}}\colon &\left({\mathcal{A}}^{0, \ast}\left(\bigoplus_{i=1}^k V_i \otimes {\mathcal{E}}_i^{\bullet}\right), d_{{\mathcal{A}}^{0, \ast}(\bigoplus_{i=1}^k V_i \otimes {\mathcal{E}}_i^{\bullet})}+I_{\ast}(u) \right) \\ & \to \left({\mathcal{A}}^{0, \ast}\left(\bigoplus_{i=1}^k V_i' \otimes {\mathcal{E}}_i^{\bullet}\right), d_{{\mathcal{A}}^{0, \ast}(\bigoplus_{i=1}^k V_i' \otimes {\mathcal{E}}_i^{\bullet})}+I_{\ast}(u') \right)\end{aligned}$$ is a map of dg-${\mathcal{A}}^{0, \ast}({\mathcal{O}}_X)$-modules. By taking the cohomology of the above map, we obtain the morphism of coherent sheaves $$\begin{aligned} \label{mor:coh} {\mathcal{H}}^0 \left( \bigoplus_{i=1}^k \phi_i \otimes {\textrm{id}}\right) \colon E_{u} \to E_{u'}.\end{aligned}$$ Here $E_{u}$, $E_{u'}$ are the coherent sheaves corresponding to $u$, $u'$ under the map in Proposition \[prop:rest2\] respectively. \[rmk:operator\] In the above argument, we assumed that the operator norms of $u, u'$ are small enough so that $I_{\ast}$ is defined. We can relax this condition in the following cases. First suppose that each $\phi_i$ is injective or surjective.
Then the operator norm of $u$ is bounded by that of $u'$, so if the operator norm of $u'$ is small enough then so is that of $u$, and $I_{\ast}(u)$ is defined. Next, if $u, u'$ correspond to nilpotent $Q_{E_{\bullet}}$-representations, then regardless of the operator norms of $u, u'$ the infinite sums $I_{\ast}(u), I_{\ast}(u')$ in (\[kI:explict\]) reduce to finite sums. So in the above cases, $E_u, E_{u'}$ and the morphism (\[mor:coh\]) are well-defined. Étale slice ----------- Below we use the notation in Subsection \[subsec:moduli\]. Let ${\mathcal{M}}_{\omega}(v)$ be the moduli stack of $\omega$-Gieseker semistable sheaves on $X$ with Chern character $v$, and $M_{\omega}(v)$ its coarse moduli space. Let $E$ be a polystable sheaf of the form (\[polystable\]), and take closed points $$\begin{aligned} p=[E] \in M_{\omega}(v), \ p'=[E] \in {\mathcal{M}}_{\omega}(v).\end{aligned}$$ For $m\gg 0$, let $\mathbf{V}$ be the vector space given by $$\begin{aligned} \mathbf{V}=H^0(E(m))=\bigoplus_{i=1}^k V_i \otimes H^0(E_i(m)). \end{aligned}$$ Let $q \in \mathrm{Quot}^{\circ}(\mathbf{V}, v)$ be a point which is mapped to $p'$ under the quotient morphism $\mathrm{Quot}^{\circ}(\mathbf{V}, v) \to {\mathcal{M}}_{\omega}(v)$. Then we have $$\begin{aligned} {\mathop{\rm Stab}\nolimits}_{{\mathop{\rm GL}\nolimits}(\mathbf{V})}(q)=G \subset {\mathop{\rm GL}\nolimits}(\mathbf{V})\end{aligned}$$ where $G$ is given as in (\[G:aut\]). By Luna’s étale slice theorem [@MR0342523], there is an affine locally closed $G$-invariant subscheme $$\begin{aligned} q \in Z \subset \mathrm{Quot}^{\circ}(\mathbf{V}, v)\end{aligned}$$ such that the natural ${\mathop{\rm GL}\nolimits}(\mathbf{V})$-equivariant morphism $$\begin{aligned} {\mathop{\rm GL}\nolimits}(\mathbf{V}) \times_G Z \to \mathrm{Quot}^{\circ}(\mathbf{V}, v)\end{aligned}$$ is étale.
Moreover by taking the quotients by ${\mathop{\rm GL}\nolimits}(\mathbf{V})$, we obtain the Cartesian diagram $$\begin{aligned} \label{dia:etale} \xymatrix{ [Z/G] \ar[r] \ar[d]_{p_{Z}}\ar@{}[dr]|\square & {\mathcal{M}}_{\omega}(v) \ar[d]^{p_{M}} \\ Z {/\!\!/}G \ar[r] & M_{\omega}(v) }\end{aligned}$$ such that each horizontal arrow is étale. Therefore there is a saturated analytic open subset ${\mathcal{W}}\subset Z$ (w.r.t. the $G$-action on $Z$) which contains $q$ and the Cartesian diagram of complex analytic stacks $$\begin{aligned} \xymatrix{ [{\mathcal{W}}/G] \ar[r] \ar[d]_{p_{{\mathcal{W}}}}\ar@{}[dr]|\square & {\mathcal{M}}_{\omega}(v) \ar[d]^{p_{M}} \\ {\mathcal{W}}{/\!\!/}G \ar[r] & M_{\omega}(v) }\end{aligned}$$ such that each horizontal arrow is an analytic open immersion. On the other hand, let us consider the morphism $I_{\ast}$ in Lemma \[lem:etale\] applied to the above polystable sheaf $p'=[E] \in {\mathcal{M}}_{\omega}(v)$. By the openness of stability, by shrinking ${\mathcal{U}}$ in Lemma \[lem:Extsat\] if necessary, the map $I_{\ast}$ in Lemma \[lem:etale\] factors through the open substack ${\mathcal{M}}_{\omega}(v) \subset {\mathcal{M}}$: $$\begin{aligned} \label{Iast:M} I_{\ast} \colon {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \to {\mathcal{M}}_{\omega}(v). \end{aligned}$$ Now the following proposition completes the proof of Theorem \[thm:precise\]. \[prop:complete\] By shrinking ${\mathcal{V}}$ in Lemma \[lem:Extsat\] and ${\mathcal{W}}$ if necessary (while keeping the condition to be saturated in ${\mathop{\rm Ext}\nolimits}^1(E, E)$, $Z$ respectively) the map (\[Iast:M\]) induces the commutative diagram of isomorphisms $$\begin{aligned} \label{dia:MW} \xymatrix{ [R/G]={\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar[r]_-{\cong}^-{I_{\ast}} \ar[d]_{p_Q} & [{\mathcal{W}}/G] \ar[d]^{p_{{\mathcal{W}}}} \\ R{/\!\!/}G=M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar[r]_-{\cong} & {\mathcal{W}}{/\!\!/}G.
}\end{aligned}$$ The map (\[Iast:M\]) induces the analytic map $R{/\!\!/}G \to M_{\omega}(v)$. So by shrinking $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$ if necessary, we may assume that the above map factors through $R{/\!\!/}G \to {\mathcal{W}}{/\!\!/}G$. Then we have the commutative diagram $$\begin{aligned} \xymatrix{ [R/G] \ar[r]^-{I_{\ast}} \ar[d]_{p_Q} & [{\mathcal{W}}/G] \ar[d]^-{p_{{\mathcal{W}}}} \\ R{/\!\!/}G \ar[r] & {\mathcal{W}}{/\!\!/}G. }\end{aligned}$$ Let $K\subset G$ be a maximal compact subgroup, and take a sufficiently small $K$-invariant analytic open subset $q \in {\mathcal{W}}_1 \subset {\mathcal{W}}$. Then as in the proof of Proposition \[prop:rest3\], the composition $$\begin{aligned} {\mathcal{W}}_1 \to {\mathcal{W}}\to [{\mathcal{W}}/G] \subset {\mathcal{M}}_{\omega}(v)\end{aligned}$$ admits a lift $\phi \colon {\mathcal{W}}_1 \to R$ using the homotopy inverse $P$ of $I$. Moreover the proof in *loc.cit.* immediately implies that $\phi$ can be taken to be $K$-equivariant. (Indeed if the map $f_2$ in *loc.cit.* is $K$-equivariant, then so is $f_3$ as $P_{\ast}$ is $K$-equivariant.) So we have the commutative diagram $$\begin{aligned} \label{2commute2} \xymatrix{ R \ar[d] & {\mathcal{W}}_1 \ar[d] \ar[l]_-{\phi} \\ [R/G] \ar[r]^{I_{\ast}} & [{\mathcal{W}}/G]. }\end{aligned}$$ Note that the bottom arrow is a smooth morphism of relative dimension zero by Lemma \[lem:etale\]. Let $0 \in R_1 \subset R$ be a sufficiently small $K$-invariant analytic open neighborhood. Since both $R_1$ and ${\mathcal{W}}_1$ are bases of versal families of flat deformations of $E$ with tangent space ${\mathop{\rm Ext}\nolimits}^1(E, E)$, and $\phi$ induces an isomorphism on tangent spaces by the diagram (\[2commute2\]), the $K$-equivariant map $\phi$ gives an isomorphism $\phi \colon {\mathcal{W}}_1 \stackrel{\cong}{\to} R_1$ for some suitable choices of ${\mathcal{W}}_1$, $R_1$.
By setting $\psi=\phi^{-1}$, we obtain the commutative diagram $$\begin{aligned} \label{2commute} \xymatrix{ R_1 \ar[r]^-{\psi}_-{\cong} \ar[d] & {\mathcal{W}}_1 \ar[d] \\ [R/G] \ar[r]^{I_{\ast}} & [{\mathcal{W}}/G]. }\end{aligned}$$ By Lemma \[lem:welldef\] below, after shrinking $R_1$ if necessary we can extend the $K$-equivariant isomorphism $\psi \colon R_1 \stackrel{\cong}{\to} {\mathcal{W}}_1$ to a $G$-equivariant isomorphism between $G$-invariant open subsets in $R$ and ${\mathcal{W}}$ $$\begin{aligned} \label{isom:R2} \widetilde{\psi} \colon R_2 {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}G \cdot R_1 \stackrel{\cong}{\to} {\mathcal{W}}_2 {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}G \cdot {\mathcal{W}}_1\end{aligned}$$ by sending $g \cdot x$ to $g \cdot \psi(x)$ for $g \in G$ and $x \in R_1$. Then by Lemma \[lem:Zsat\] below, the isomorphism (\[isom:R2\]) restricts to the isomorphism of saturated open subsets. By taking the quotients of $G$-actions, we obtain the desired isomorphisms (\[dia:MW\]). In the proof of the above proposition, we postponed the following two lemmas: \[lem:welldef\] The map (\[isom:R2\]) is well-defined and an isomorphism. The lemma is essentially proved in the proof of [@JS Theorem 5.5]. In order to show that (\[isom:R2\]) is well-defined, it is enough to show that if $g_1 R_1 \cap g_2 R_1 \neq \emptyset$ for $g_1, g_2 \in G$, then we have the identity $g_1 \psi g_1^{-1}=g_2 \psi g_2^{-1}$ on $g_1 R_1 \cap g_2 R_1$. By applying $g_2^{-1}$, we may assume that $g_2=1$. Let $G' \subset G$ be the open subset given by $$\begin{aligned} G' {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{g\in G : gR_1 \cap R_1 \neq \emptyset\}.\end{aligned}$$ If we define $G''$ to be $$\begin{aligned} G'' {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{g \in G' : g\psi g^{-1}=\psi \mbox{ on } g R_1 \cap R_1\}\end{aligned}$$ then $G''$ is a closed analytic subset of $G'$ which contains $K$. 
Therefore if $(G')^{\circ}$, $(G'')^{\circ}$ are the connected components of $G'$, $G''$ which contain $K$, then we have $(G')^{\circ}=(G'')^{\circ}$. Then we take a sufficiently small $K$-invariant open subset $0 \in R_1' \subset R_1$ satisfying the following: for any $x_1, x_2 \in R_1'$ with $G \cdot x_1=G\cdot x_2$, the connected component of $(G \cdot x_1) \cap R_1$ containing $x_1$ should contain $x_2$. The above choice of $R_1'$ implies that $$\begin{aligned} G'''{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{g \in G : gR_1' \cap R_1' \neq \emptyset\} \subset (G')^{\circ}.\end{aligned}$$ Therefore as $(G')^{\circ}=(G'')^{\circ}$, for $g \in G'''$ we have $g \psi g^{-1}=\psi$ on $gR_1' \cap R_1' \neq \emptyset$. By replacing $R_1$ with $R_1'$, we see that (\[isom:R2\]) is well-defined. Applying the above argument for the inverse of $\psi \colon R_1 \stackrel{\cong}{\to} {\mathcal{W}}_1$, we have the inverse of (\[isom:R2\]), showing that (\[isom:R2\]) is an isomorphism. \[lem:Zsat\] There exist saturated open subsets $\widetilde{{\mathcal{V}}} \subset {\mathop{\rm Ext}\nolimits}^1(E, E)$, $\widetilde{{\mathcal{W}}} \subset Z$ satisfying $0 \in R \cap \widetilde{{\mathcal{V}}} \subset R_2$, $q \in \widetilde{{\mathcal{W}}} \subset {\mathcal{W}}_2$ such that the isomorphism (\[isom:R2\]) restricts to the isomorphism $$\begin{aligned} \widetilde{\psi} \colon R \cap \widetilde{{\mathcal{V}}} \stackrel{\cong}{\to} \widetilde{{\mathcal{W}}}.\end{aligned}$$ Let ${\mathcal{W}}_3 \subset Z$ be a saturated open subset in $Z$ satisfying $q \in {\mathcal{W}}_3 \subset {\mathcal{W}}_2$, which exists by Lemma \[lem:saturated2\], and set $R_3 {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\widetilde{\psi}^{-1}({\mathcal{W}}_3) \subset R_2$. Then $R_3$ is written as $R_3=R \cap {\mathcal{V}}'$ for some $G$-invariant open subset $0 \in {\mathcal{V}}' \subset {\mathcal{V}}$. 
Let ${\mathcal{V}}'' \subset {\mathop{\rm Ext}\nolimits}^1(E, E)$ be a saturated open subset satisfying $0 \in {\mathcal{V}}'' \subset {\mathcal{V}}'$, which again exists by Lemma \[lem:saturated2\], and set $R_4 {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}R \cap {\mathcal{V}}'' \subset R_3$. Let ${\mathcal{W}}_4 {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\widetilde{\psi}(R_4)$. We show that ${\mathcal{W}}_4$ is a saturated open subset in $Z$. Indeed for $x \in {\mathcal{W}}_4$, the orbit closure $\overline{G \cdot x}$ in $Z$ is contained in ${\mathcal{W}}_3$ since ${\mathcal{W}}_3$ is saturated. Take $y \in \overline{G \cdot x}$ and consider $\widetilde{\psi}^{-1}(y) \in R_3$. Then since ${\mathcal{V}}''$ is saturated, we have $\widetilde{\psi}^{-1}(y) \in R_4$, hence $y \in {\mathcal{W}}_4$ as desired. Now ${\mathcal{V}}''$, ${\mathcal{W}}_4$ are saturated in ${\mathop{\rm Ext}\nolimits}^1(E, E)$, $Z$. By setting $\widetilde{{\mathcal{V}}}={\mathcal{V}}''$, $\widetilde{{\mathcal{W}}}={\mathcal{W}}_4$, we obtain the lemma. Calabi-Yau 3-fold case ---------------------- We keep the situation in the previous subsections. Suppose furthermore that $X$ is a smooth projective CY 3-fold, i.e. $$\begin{aligned} \dim X=3, \ {\mathcal{O}}_X(K_X) \cong {\mathcal{O}}_X.\end{aligned}$$ In this case, the $A_{\infty}$-structure (\[mn:Eb\]) is cyclic (see [@MR1876072]), i.e. for a map $$\begin{aligned} \psi \colon \{1, \ldots, n+1\} \to \{1, \ldots, k\}, \ \psi(1)=\psi(n+1)\end{aligned}$$ and elements $$\begin{aligned} a_i \in {\mathop{\rm Ext}\nolimits}^1(E_{\psi(i)}, E_{\psi(i+1)}), \ 1\le i\le n,\end{aligned}$$ we have the relation $$\begin{aligned} \label{cyclic} (m_{n-1}(a_1, \ldots, a_{n-1}), a_{n})=(m_{n-1}(a_2, \ldots, a_{n}), a_1). 
\end{aligned}$$ Here $m_n$ is the $A_{\infty}$-product (\[factor\]), $(-, -)$ is the Serre duality pairing $$\begin{aligned} \label{Serre} (-, -) \colon {\mathop{\rm Ext}\nolimits}^{j}(E_a, E_b) \times {\mathop{\rm Ext}\nolimits}^{3-j}(E_b, E_a) \to {\mathop{\rm Ext}\nolimits}^3(E_a, E_a) \stackrel{\int_X \mathrm{tr}}{\to} \mathbb{C}.\end{aligned}$$ Let $W_{E_{\bullet}} \in \mathbb{C}{[\![}Q_{E_{\bullet}} {]\!]}$ be defined by $$\begin{aligned} \notag W_{E_{\bullet}} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\sum_{n\ge 3} \sum_{\begin{subarray}{c} \{1, \ldots, n+1 \} \stackrel{\psi}{\to} \{1, \ldots, k\} \\ \psi(1)=\psi(n+1) \end{subarray}} \sum_{e_i \in E_{\psi(i), \psi(i+1)}} a_{\psi, e_{\bullet}} \cdot e_1 e_2 \ldots e_n. \end{aligned}$$ Here the coefficient $a_{\psi, e_{\bullet}}$ is given by $$\begin{aligned} \label{af} a_{\psi, e_{\bullet}}=\frac{1}{n}(m_{n-1}(e_1^{\vee}, e_2^{\vee}, \ldots, e_{n-1}^{\vee}), e_n^{\vee}). \end{aligned}$$ Then by Lemma \[lem:mbound\], we have $$\begin{aligned} W_{E_{\bullet}} \in \mathbb{C}\{ Q_{E_{\bullet}} \} \subset \mathbb{C}{[\![}Q_{E_{\bullet}} {]\!]}.\end{aligned}$$ Therefore $W_{E_{\bullet}}$ determines a convergent super-potential of $Q_{E_{\bullet}}$ (see Definition \[def:conv:pot\]). Let $\overline{E}$ be the object given by (\[E:bar\]). By the Serre duality, ${\mathop{\rm Ext}\nolimits}^2(\overline{E}, \overline{E})^{\vee}$ is identified with ${\mathop{\rm Ext}\nolimits}^1(\overline{E}, \overline{E})$. Thus $$\begin{aligned} \label{basis} \{e^{\vee} : e \in E(Q_{E_{\bullet}})\} \subset {\mathop{\rm Ext}\nolimits}^1(\overline{E}, \overline{E})\end{aligned}$$ gives a basis of ${\mathop{\rm Ext}\nolimits}^2(\overline{E}, \overline{E})^{\vee}$. Using this basis, the relation $I_{E_{\bullet}}$ defined in (\[relation:I\]) satisfies $$\begin{aligned} I_{E_{\bullet}}=\{\mathbf{m}^{\vee}(e^{\vee}) : e \in E(Q_{E_{\bullet}})\} =\partial W_{E_{\bullet}}. 
\end{aligned}$$ Here the first identity is due to the definition of $I_{E_{\bullet}}$ via the basis (\[basis\]), and the second identity follows from the construction of $W_{E_{\bullet}}$ and the cyclic condition (\[cyclic\]). As a corollary of Theorem \[thm:precise\], we obtain the following: \[cor:CY3\] In the situation of Theorem \[thm:precise\], suppose furthermore that $X$ is a smooth projective CY 3-fold. Then there is a convergent super-potential $W_{E_{\bullet}}$ of $Q_{E_{\bullet}}$, analytic open neighborhoods $p \in U \subset M_{\omega}(v)$, $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$ and commutative isomorphisms $$\begin{aligned} \label{dia:comiso2} \xymatrix{ p_M^{-1}(U) \ar[d]^{p_M} & \ar[l]_-{\cong}^-{I_{\ast}} {\mathcal{M}}_{(Q_{E_{\bullet}}, \partial W_{E_{\bullet}})}(\vec{m})|_{V} \ar@{=}[r] \ar[d]^{p_Q} & \left[ \{d({\mathop{\rm tr}\nolimits}W_{E_{\bullet}})=0\}/G \right] \ar@<-0.3ex>@{^{(}->}[r] & [\pi_Q^{-1}(V)/G] \ar[d]^-{{\mathop{\rm tr}\nolimits}W_{E_{\bullet}}} \\ U & \ar[l]_-{\cong} M_{(Q_{E_{\bullet}}, \partial W_{E_{\bullet}})}(\vec{m})|_{V} & & \mathbb{C}. }\end{aligned}$$ Here the bottom arrow sends $0$ to $p$, $\pi_Q \colon \mathrm{Rep}_{Q_{E_{\bullet}}}(\vec{m}) \to M_{Q_{E_{\bullet}}}(\vec{m})$ is the quotient morphism, and ${\mathop{\rm tr}\nolimits}W_{E_{\bullet}}$ is the $G$-invariant analytic function on the smooth analytic space $\pi_Q^{-1}(V)$ (see Subsection \[subsec:potential\]). Non-commutative deformation theory {#sec:NC} ================================== Note that the diagram (\[dia:comiso\]) in Theorem \[thm:precise\] in particular implies the isomorphism $$\begin{aligned} \label{I:nil} I_{\ast} \colon p_Q^{-1}(0) \stackrel{\cong}{\to} p_M^{-1}(p). \end{aligned}$$ In this section, we recall the NC deformation theory associated to a simple collection of sheaves, and explain its relationship to the isomorphism (\[I:nil\]).
More precisely, in Theorem \[cor:equiv:I\] we use NC deformation theory to show that the map $I_{\ast}$ gives an equivalence of categories between the category of nilpotent representations of the Ext-quiver and the subcategory of coherent sheaves on $X$ generated by the given simple collection. The result of Theorem \[cor:equiv:I\] immediately implies the isomorphism (\[I:nil\]), thus giving an interpretation of (\[I:nil\]) via NC deformation theory. The result of Theorem \[cor:equiv:I\] will only be used in the proof of Lemma \[lem:pstab\] in the next section, but seems to be an interesting result in its own right, as it gives an intrinsic understanding of the isomorphism (\[I:nil\]). NC deformation functors ----------------------- Let $X$ be a smooth projective variety, and take a simple collection of coherent sheaves on it $$\begin{aligned} \label{simple} E_{\bullet}=(E_1, E_2, \ldots, E_k). \end{aligned}$$ The NC deformation theory associated to such a simple collection is formulated in [@Lau; @Erik; @Kawnc; @BoBo]. The following convention is due to Kawamata [@Kawnc]. By definition, a $k$-*pointed $\mathbb{C}$-algebra* is an associative ring $R$ with $\mathbb{C}$-algebra homomorphisms $$\begin{aligned} \mathbb{C}^k \stackrel{p}{\to} R \stackrel{q}\to \mathbb{C}^k\end{aligned}$$ whose composition is the identity. Then $R$ decomposes as $$\begin{aligned} R=\mathbb{C}^{k} \oplus \mathbf{m}, \ \mathbf{m} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}{\mathop{\rm Ker}\nolimits}q.\end{aligned}$$ For $1\le i \le k$, let $\mathbf{m}_i$ be the kernel of the composition $$\begin{aligned} R \stackrel{q}{\to} \mathbb{C}^k \to \mathbb{C}\end{aligned}$$ where the second map is the $i$-th projection. Note that $\mathbf{m}=\cap_{i=1}^{k} \mathbf{m}_i$. We define ${\mathcal{A}}rt_k$ to be the category of finite dimensional $k$-pointed $\mathbb{C}$-algebras $R=\mathbb{C}^k \oplus \mathbf{m}$ such that $\mathbf{m}$ is nilpotent.
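For example (an illustration we add here; it is not used later), let $Q$ be a finite quiver with vertex set $\{1, \ldots, k\}$ and let $J \subset \mathbb{C}Q$ be the two-sided ideal of the path algebra generated by the arrows. Then for each $N\ge 1$ the truncated path algebra $$\begin{aligned} R=\mathbb{C}Q/J^N=\mathbb{C}^k \oplus \mathbf{m}, \ \mathbf{m}=J/J^N\end{aligned}$$ is an object of ${\mathcal{A}}rt_k$: the map $p$ sends the $i$-th idempotent of $\mathbb{C}^k$ to the trivial path at the vertex $i$, the map $q$ is the projection killing all arrows, and $\mathbf{m}$ satisfies $\mathbf{m}^N=0$. For $k=1$ this includes non-commutative algebras such as $\mathbb{C}\langle x, y \rangle/(x, y)^2$.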
For a simple collection (\[simple\]), we have the NC deformation functor $$\begin{aligned} \label{rDef} \mathrm{Def}_{E_{\bullet}}^{\rm{nc}} \colon {\mathcal{A}}rt_k \to {\mathcal{S}}et. \end{aligned}$$ The above functor is defined by sending $R=\mathbb{C}^k \oplus \mathbf{m}$ to the set of isomorphism classes of pairs $$\begin{aligned} ({\mathcal{E}}, \psi), \ {\mathcal{E}}\in {\mathop{\rm Coh}\nolimits}(R \otimes_{\mathbb{C}}{\mathcal{O}}_X)\end{aligned}$$ where ${\mathcal{E}}$ is a coherent left $R \otimes_{\mathbb{C}}{\mathcal{O}}_X$-module which is flat over $R$, and $\psi$ is an isomorphism $R/\mathbf{m} \otimes_R {\mathcal{E}}\stackrel{\cong}{\to} \oplus_i E_i$ which induces isomorphisms $$\begin{aligned} R/\mathbf{m}_i \otimes_R {\mathcal{E}}\stackrel{\cong}{\to} E_i, \ 1\le i\le k.\end{aligned}$$ Pro-representable hull ---------------------- Let $\widehat{{\mathcal{A}}rt}_k$ be the category whose objects consist of $\mathbb{C}^k$-algebras given by inverse limits of objects in ${\mathcal{A}}rt_k$. An object $A \in \widehat{{\mathcal{A}}rt}_k$ is called a *pro-representable hull* of the functor $\mathrm{Def}_{E_{\bullet}}^{\rm{nc}}$ if there is a formally smooth morphism $$\begin{aligned} {\mathop{\rm Hom}\nolimits}_{\widehat{{\mathcal{A}}rt}_k}(A, -) \to \mathrm{Def}_{E_{\bullet}}^{\rm{nc}}(-)\end{aligned}$$ which induces an isomorphism at the first order. A pro-representable hull is, if it exists, unique up to non-canonical isomorphisms (see [@Schle]). A pro-representable hull of the functor $\mathrm{Def}_{E_{\bullet}}^{\rm{nc}}$ is known to exist by [@Lau; @Erik]. By [@Kawnc], it is explicitly constructed by taking iterated universal extensions of the sheaves $E_i$, which we review here. We first set $E_i^{(0)}=E_i$ for $1\le i\le k$. Suppose that $E^{(n)}_i$ is constructed for some $n\ge 0$ and all $1\le i\le k$.
Then $E^{(n+1)}_i$ is constructed as the universal extension $$\begin{aligned} \label{univ:ext} 0 \to \bigoplus_{j=1}^k {\mathop{\rm Ext}\nolimits}^1(E_i^{(n)}, E_j)^{\vee} \otimes E_j \to E_i^{(n+1)} \to E_i^{(n)} \to 0. \end{aligned}$$ Let us set $$\begin{aligned} E^{(n)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\bigoplus_{i=1}^k E_i^{(n)}, \ R^{(n)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}{\mathop{\rm Hom}\nolimits}(E^{(n)}, E^{(n)}). \end{aligned}$$ Then $R^{(n)}$ is an object of ${\mathcal{A}}rt_k$, and $E^{(n)}$ is an element of $\mathrm{Def}_{E_{\bullet}}^{\rm{nc}}(R^{(n)})$ by [@Kawnc Theorem 4.8]. Moreover by [@Kawnc Lemma 4.3, Corollary 4.6, Theorem 4.8], there exist natural surjections $R^{(n+1)} \twoheadrightarrow R^{(n)}$ such that the inverse limit $$\begin{aligned} R_{E_{\bullet}}^{\rm{nc}}=\lim_{\longleftarrow} R^{(n)} \in \widehat{{\mathcal{A}}rt}_k\end{aligned}$$ is a pro-representable hull of (\[rDef\]). Moreover the surjection $E^{(n+1)} \twoheadrightarrow E^{(n)}$ induces the isomorphism $$\begin{aligned} \label{RnE} R^{(n)} \otimes_{R^{(n+1)}} E^{(n+1)} \stackrel{\cong}{\to} E^{(n)}.\end{aligned}$$ By the surjection $R^{(n+1)} \twoheadrightarrow R^{(n)}$, we have the fully-faithful embedding $$\begin{aligned} \label{R:emb} {\mathop{\rm mod}\nolimits}R^{(n)} \hookrightarrow {\mathop{\rm mod}\nolimits}R^{(n+1)}.\end{aligned}$$ Then the category ${\mathop{\rm mod}\nolimits}_{\rm{nil}} R_{E_{\bullet}}^{\rm{nc}}$ is defined by $$\begin{aligned} \label{mod:nil} {\mathop{\rm mod}\nolimits}_{\rm{nil}} R_{E_{\bullet}}^{\rm{nc}} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\lim_{\longrightarrow} \left({\mathop{\rm mod}\nolimits}R^{(n)} \right). \end{aligned}$$ The above category is identified with the abelian category of nilpotent finite dimensional right $R_{E_{\bullet}}^{\rm{nc}}$-modules.
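We illustrate the above construction in the simplest case (an example added for concreteness; it is not used later). Let $X$ be a smooth projective curve, $k=1$ and $E_1={\mathcal{O}}_x$ for a point $x \in X$, so that ${\mathop{\rm Ext}\nolimits}^1({\mathcal{O}}_x, {\mathcal{O}}_x)\cong \mathbb{C}$ and ${\mathop{\rm Ext}\nolimits}^2({\mathcal{O}}_x, {\mathcal{O}}_x)=0$. One checks that the universal extensions (\[univ:ext\]) give $E^{(n)} \cong {\mathcal{O}}_{(n+1)x}$, hence $$\begin{aligned} R^{(n)}={\mathop{\rm Hom}\nolimits}({\mathcal{O}}_{(n+1)x}, {\mathcal{O}}_{(n+1)x})=\mathbb{C}[t]/t^{n+1}, \ R_{E_{\bullet}}^{\rm{nc}}=\lim_{\longleftarrow} \mathbb{C}[t]/t^{n+1}=\mathbb{C}{[\![}t{]\!]},\end{aligned}$$ and ${\mathop{\rm mod}\nolimits}_{\rm{nil}} R_{E_{\bullet}}^{\rm{nc}}$ is the category of finite dimensional $\mathbb{C}{[\![}t{]\!]}$-modules on which $t$ acts nilpotently, matching the zero-dimensional sheaves supported at $x$.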
Equivalence of categories via NC deformations --------------------------------------------- In what follows, we show that the category (\[mod:nil\]) is equivalent to the subcategory of ${\mathop{\rm Coh}\nolimits}(X)$ $$\begin{aligned} \langle E_1, E_2, \ldots, E_k \rangle \subset {\mathop{\rm Coh}\nolimits}(X)\end{aligned}$$ given by the extension closure of $E_1, \ldots, E_k$, i.e. the smallest extension closed subcategory of ${\mathop{\rm Coh}\nolimits}(X)$ which contains $E_1, \ldots, E_k$. \[lem:Phi0\] For $T \in {\mathop{\rm mod}\nolimits}R^{(n)}$, we have $$\begin{aligned} \label{Phi0} \Phi(T) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}T \otimes_{R^{(n)}} E^{(n)} \in \langle E_1, \ldots, E_k \rangle. \end{aligned}$$ Since $R^{(n)} \in {\mathcal{A}}rt_k$, it decomposes as $R^{(n)}=\mathbb{C}^k \oplus \mathbf{m}^{(n)}$. We take the following filtration in ${\mathop{\rm mod}\nolimits}R^{(n)}$ $$\begin{aligned} \cdots \subset T(\mathbf{m}^{(n)})^j \subset T(\mathbf{m}^{(n)})^{j-1} \subset \cdots \subset T\mathbf{m}^{(n)} \subset T. \end{aligned}$$ Then the subquotient $$\begin{aligned} T^{(j)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}T(\mathbf{m}^{(n)})^j/T(\mathbf{m}^{(n)})^{j+1}\end{aligned}$$ is a $\mathbb{C}^k$-module, which is zero for $j\gg 0$. Since $E^{(n)}$ is an NC deformation of $E_{\bullet}$ to $R^{(n)}$, it follows that $T^{(j)} \otimes_{R^{(n)}}E^{(n)}$ is a direct sum of objects in $(E_1, \ldots, E_k)$. Since $T$ is given by iterated extensions of $T^{(j)}$, the lemma follows. The functor $$\begin{aligned} \Phi \colon {\mathop{\rm mod}\nolimits}R^{(n)} \to \langle E_1, \ldots, E_k \rangle\end{aligned}$$ given by Lemma \[lem:Phi0\] commutes with the embedding (\[R:emb\]) by the isomorphism (\[RnE\]). Hence we obtain the functor $$\begin{aligned} \label{Phi} \Phi \colon {\mathop{\rm mod}\nolimits}_{\rm{nil}}R_{E_{\bullet}}^{\rm{nc}} \to \langle E_1, \ldots, E_k \rangle. 
\end{aligned}$$ Below we show that the functor (\[Phi\]) is an equivalence of categories. We prepare some lemmas. \[lem:Ei\] We have ${\mathop{\rm Hom}\nolimits}(E_i^{(n)}, E_j)=\mathbb{C}^{\delta_{ij}}$ and the natural map $$\begin{aligned} {\mathop{\rm Ext}\nolimits}^1(E_i^{(n)}, E_j) \to {\mathop{\rm Ext}\nolimits}^1(E_i^{(n+1)}, E_j)\end{aligned}$$ is a zero map. The lemma follows from the exact sequence $$\begin{aligned} 0 \to {\mathop{\rm Hom}\nolimits}(E_i^{(n)}, E_j) &\to {\mathop{\rm Hom}\nolimits}(E_i^{(n+1)}, E_j) \to {\mathop{\rm Ext}\nolimits}^1(E_i^{(n)}, E_j) \\ &\stackrel{{\textrm{id}}}{\to} {\mathop{\rm Ext}\nolimits}^1(E_i^{(n)}, E_j) \to {\mathop{\rm Ext}\nolimits}^1(E_i^{(n+1)}, E_j)\end{aligned}$$ obtained by applying ${\mathop{\rm Hom}\nolimits}(-, E_j)$ to the exact sequence (\[univ:ext\]). \[lem:Ext0\] For any $U \in \langle E_1, \ldots, E_k \rangle$ and $n\ge 0$, the natural map $$\begin{aligned} \label{nat:EU} {\mathop{\rm Ext}\nolimits}^1(E_i^{(n)}, U) \to {\mathop{\rm Ext}\nolimits}^1(E_i^{(n+l)}, U)\end{aligned}$$ is a zero map for $l\gg 0$. If $U=E_j$ for some $1\le j\le k$, the lemma follows from Lemma \[lem:Ei\]. Otherwise there is an exact sequence $$\begin{aligned} 0 \to U' \to U \to U'' \to 0, \ U', U'' \in \langle E_1, \ldots, E_k \rangle \setminus \{0\}.\end{aligned}$$ Suppose that the lemma holds for $U'$ and $U''$. For $l'\gg 0$ and $l'' \gg 0$, we have the commutative diagram $$\begin{aligned} \xymatrix{ {\mathop{\rm Ext}\nolimits}^1(E^{(n)}, U') \ar[r] \ar[d] & {\mathop{\rm Ext}\nolimits}^1(E^{(n)}, U) \ar[r] \ar[d] &{\mathop{\rm Ext}\nolimits}^1(E^{(n)}, U'') \ar[d]^{0} \\ {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'')}, U') \ar[r] \ar[d]^{0} & {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'')}, U) \ar[r] \ar[d] & {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'')}, U'') \ar[d] \\ {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'+l'')}, U') \ar[r] & {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'+l'')}, U) \ar[r] & {\mathop{\rm Ext}\nolimits}^1(E^{(n+l'+l'')}, U'').
}\end{aligned}$$ Here the horizontal arrows are exact sequences. The map (\[nat:EU\]) for $l=l'+l''$ is the composition of the middle vertical arrows, which is zero by a diagram chase. Therefore the lemma follows by induction on the number of iterated extensions of $U$ by $E_1, \ldots, E_k$. \[lem:terminate\] For any $U \in \langle E_1, \ldots, E_k \rangle$, the sequence $$\begin{aligned} \label{terminate} {\mathop{\rm Hom}\nolimits}(E^{(0)}, U) \subset {\mathop{\rm Hom}\nolimits}(E^{(1)}, U) \subset \cdots \subset {\mathop{\rm Hom}\nolimits}(E^{(n)}, U) \subset \cdots\end{aligned}$$ terminates for $n\gg 0$. The lemma can be proved by induction on the number of iterated extensions of $U$ by $E_1, \ldots, E_k$. If $U=E_i$ for some $i$, then the sequence (\[terminate\]) terminates by Lemma \[lem:Ei\]. Otherwise there is an exact sequence $$\begin{aligned} 0 \to E_i \to U \to U' \to 0\end{aligned}$$ for some $1\le i\le k$ and $U' \in \langle E_1, \ldots, E_k \rangle$. By applying ${\mathop{\rm Hom}\nolimits}(E^{(n)}, -)$, we obtain the exact sequence $$\begin{aligned} 0 \to {\mathop{\rm Hom}\nolimits}(E^{(n)}, E_i) \to {\mathop{\rm Hom}\nolimits}(E^{(n)}, U) \to {\mathop{\rm Hom}\nolimits}(E^{(n)}, U'). \end{aligned}$$ By Lemma \[lem:Ei\], it follows that $$\begin{aligned} \hom(E^{(n)}, U) \le \hom(E^{(n)}, U')+1.\end{aligned}$$ By the induction hypothesis, $\hom(E^{(n)}, U')$ is bounded above by a number which is independent of $n$. Therefore $\hom(E^{(n)}, U)$ is also bounded above. By Lemma \[lem:terminate\], we have the functor $$\begin{aligned} \label{Psi} \Psi \colon \langle E_1, \ldots, E_k \rangle \to {\mathop{\rm mod}\nolimits}_{\rm{nil}} R_{E_{\bullet}}^{\rm{nc}}\end{aligned}$$ sending $U$ to ${\mathop{\rm Hom}\nolimits}(E^{(n)}, U)$ for $n\gg 0$. \[lem:Psiexact\] The functor (\[Psi\]) is exact. It is enough to show that (\[Psi\]) is right exact. Let $0 \to U' \to U \to U'' \to 0$ be an exact sequence in $\langle E_1, \ldots, E_k \rangle$.
For $n\gg 0$ and $l\gg 0$, we have the commutative diagram $$\begin{aligned} \xymatrix{ {\mathop{\rm Hom}\nolimits}(E^{(n)}, U) \ar[r] \ar[d]^{\cong} & {\mathop{\rm Hom}\nolimits}(E^{(n)}, U'') \ar[r] \ar[d]^{\cong} & {\mathop{\rm Ext}\nolimits}^1(E^{(n)}, U') \ar[d]^{0} \\ {\mathop{\rm Hom}\nolimits}(E^{(n+l)}, U) \ar[r] & {\mathop{\rm Hom}\nolimits}(E^{(n+l)}, U'') \ar[r] & {\mathop{\rm Ext}\nolimits}^1(E^{(n+l)}, U'). }\end{aligned}$$ Here the isomorphisms of the left and middle vertical arrows follow from Lemma \[lem:terminate\], and the right vertical arrow is a zero map by Lemma \[lem:Ext0\]. Therefore the right bottom horizontal arrow is a zero map, which shows that ${\mathop{\rm Hom}\nolimits}(E^{(n)}, U) \to {\mathop{\rm Hom}\nolimits}(E^{(n)}, U'')$ is surjective for $n\gg 0$. Therefore the functor (\[Psi\]) is exact. We then show the following proposition: \[prop:equiv\] The functor (\[Phi\]) is an equivalence of categories. The functor (\[Psi\]) is a right adjoint of $\Phi$, so there exist canonical natural transformations $$\begin{aligned} {\textrm{id}}\to \Psi \circ \Phi(-), \ \Phi \circ \Psi(-) \to {\textrm{id}}. \end{aligned}$$ It is enough to show that both of them are isomorphisms of functors. As $E^{(n)}$ is flat over $R^{(n)}$, the functor $\Phi$ is exact. The functor $\Psi$ is also exact by Lemma \[lem:Psiexact\], so the compositions $\Psi \circ \Phi$, $\Phi \circ \Psi$ are also exact. Therefore by induction on the number of iterated extensions by simple objects and the five lemma, it is enough to check the isomorphisms $$\begin{aligned} S_i \stackrel{\cong}{\to} \Psi \circ \Phi(S_i), \ \Phi \circ \Psi(E_i) \stackrel{\cong}{\to}E_i. \end{aligned}$$ Here $S_1, \ldots, S_k$ are the simple $R^{(0)}=\mathbb{C}^k$-modules. Since $\Phi(S_i)=E_i$ and $\Psi(E_i)=S_i$, the above isomorphisms are obvious.
Maurer-Cartan formalism of NC deformations ----------------------------------------- We can interpret the NC deformation functor (\[rDef\]) in terms of the Maurer-Cartan formalism. The argument below is also available in [@ESe]. For $R \in {\mathcal{A}}rt_k$ with the decomposition $R=\mathbb{C}^k \oplus \mathbf{m}$, an argument similar to Subsection \[subsec:complex\] shows that $$\begin{aligned} \notag \mathrm{Def}_{E_{\bullet}}^{\rm{nc}}(R) &\cong \mathrm{MC}\left(A^{0, \ast}\left( {\mathcal{H}}om^{\ast} \left(\bigoplus_{i=1}^k {\mathcal{E}}_{i}^{\bullet}, \bigoplus_{i=1}^k {\mathcal{E}}_{i}^{\bullet} \right) \underline{\otimes} \mathbf{m} \right) \right)/\sim \\ \label{MC:NC}&=\mathrm{MC}\left( \bigoplus_{i, j}A^{0, \ast} ({\mathcal{H}}om^{\ast}({\mathcal{E}}_{i}^{\bullet}, {\mathcal{E}}_j^{\bullet})) \otimes_{\mathbb{C}} \mathbf{m}_{ij} \right)/\sim.\end{aligned}$$ Here $\sim$ means gauge equivalence, $\underline{\otimes}$ is the tensor product of $k$-pointed $\mathbb{C}$-algebras (see [@ESe Section 1.3]), and $\mathbf{m}_{ij}= \mathbf{e}_i \cdot \mathbf{m}\cdot \mathbf{e}_j$ for the idempotents $\{\mathbf{e}_1, \ldots, \mathbf{e}_k\}$ of $R$. Then using the $A_{\infty}$-operation $\{I_n\}_{n\ge 1}$ in Subsection \[subsec:minimal\], we have the map $$\begin{aligned} \label{I:MC:map} I_{\ast} \colon & \mathrm{MC}\left( \bigoplus_{i, j}{\mathop{\rm Ext}\nolimits}^{\ast}(E_i, E_j) \otimes_{\mathbb{C}} \mathbf{m}_{ij} \right) \\ \notag &\to \mathrm{MC}\left( \bigoplus_{i, j}A^{0, \ast} ({\mathcal{H}}om({\mathcal{E}}_{i}^{\bullet}, {\mathcal{E}}_j^{\bullet})) \otimes_{\mathbb{C}} \mathbf{m}_{ij} \right)\end{aligned}$$ which is an isomorphism after taking the quotients by gauge equivalence.
Here the LHS is the solution set of the MC equation of the $A_{\infty}$-algebra $$\begin{aligned} \bigoplus_{i, j}{\mathop{\rm Ext}\nolimits}^{\ast}(E_i, E_j) \otimes_{\mathbb{C}} \mathbf{m}_{ij}\end{aligned}$$ whose $A_{\infty}$-product is given by (\[factor\]), and the map $I_{\ast}$ is constructed as in (\[series:I\]). Let $A$ be the $\mathbb{C}^k$-algebra defined by $$\begin{aligned} \label{alg:A} A {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathbb{C}{[\![}Q_{E_{\bullet}} {]\!]}/(f_1, \ldots, f_l)\end{aligned}$$ where $(f_1, \ldots, f_l)$ is the convergent relation of $Q_{E_{\bullet}}$ given in (\[relation:I\]). We have the tautological identification $$\begin{aligned} \label{tautological} \mathrm{MC}\left( \bigoplus_{i, j}{\mathop{\rm Ext}\nolimits}^{\ast}(E_i, E_j) \otimes_{\mathbb{C}} \mathbf{m}_{ij} \right)={\mathop{\rm Hom}\nolimits}_{\widehat{{\mathcal{A}}rt}_k}\left(A, R \right). \end{aligned}$$ Here $(e_{i, j} \otimes r_{i, j})$ in the LHS corresponds to $A \to R$ given by $$\begin{aligned} {\mathop{\rm Ext}\nolimits}^1(E_i, E_j)^{\vee} \supset E_{i, j} \ni z \mapsto e_{i, j}(z) \cdot r_{i, j}.\end{aligned}$$ As proved in [@ESe Proposition 2.13], under the above identification the gauge equivalence in the LHS corresponds to the conjugation by an element in $1+\oplus_{i} \mathbf{m}_{ii}$ in the RHS. Thus we see that $A$ is a pro-representable hull of $\mathrm{Def}_{E_{\bullet}}^{\rm{nc}}$. By the uniqueness of the pro-representable hull, we have an isomorphism $$\begin{aligned} R_{E_{\bullet}}^{\rm{nc}} \cong A\end{aligned}$$ which commutes with the maps to $\mathrm{Def}_{E_{\bullet}}^{\rm{nc}}$. Combined with Proposition \[prop:equiv\], we have the following corollary: \[cor:equiv\] We have an equivalence of categories $$\begin{aligned} \label{Phi:eq} \Phi \colon {\mathop{\rm mod}\nolimits}_{\rm{nil}}A \stackrel{\sim}{\to} \langle E_1, E_2, \ldots, E_k \rangle. \end{aligned}$$ Here $A$ is the $\mathbb{C}^k$-algebra (\[alg:A\]).
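For orientation, here is a degenerate sanity check, not taken from the text but following directly from the definitions above: if ${\mathop{\rm Ext}\nolimits}^1(E_i, E_j)=0$ for all $i, j$, then the Ext-quiver $Q_{E_{\bullet}}$ has no arrows, so $A \cong \mathbb{C}^k$, a nilpotent $A$-module is just a tuple of finite dimensional vector spaces $(V_1, \ldots, V_k)$, and every extension in $\langle E_1, \ldots, E_k \rangle$ splits. In this case the equivalence (\[Phi:eq\]) reduces to $$\begin{aligned} (V_1, \ldots, V_k) \mapsto \bigoplus_{i=1}^k V_i \otimes E_i.\end{aligned}$$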
Equivalence of categories via $I_{\ast}$ ---------------------------------------- Let us take a nilpotent $Q_{E_{\bullet}}$-representation $$\begin{aligned} \label{nilp:u} u=(u_e)_{e \in E(Q_{E_{\bullet}})}, \ u_e \colon V_{s(e)} \to V_{t(e)}.\end{aligned}$$ By the argument in Subsection \[subsec:functI\] and Remark \[rmk:operator\], the correspondence $u \mapsto I_{\ast}(u)$ forms a functor $$\begin{aligned} \label{funct:I} I_{\ast} \colon {\mathop{\rm mod}\nolimits}_{\rm{nil}}(A) \to {\mathop{\rm Coh}\nolimits}(X). \end{aligned}$$ We compare the above functor with the equivalence (\[Phi:eq\]) in the following proposition: \[cor:equiv:I\] The functor (\[funct:I\]) is isomorphic to the functor $\Phi$ in (\[Phi\]). In particular, the functor $I_{\ast}$ in (\[funct:I\]) is an equivalence of categories $$\begin{aligned} I_{\ast} \colon {\mathop{\rm mod}\nolimits}_{\rm{nil}}(A) \stackrel{\sim}{\to} \langle E_1, E_2, \ldots, E_k \rangle \subset {\mathop{\rm Coh}\nolimits}(X). \end{aligned}$$ Let $A=\mathbb{C}^k \oplus \mathbf{m}$ be the decomposition, $\{\mathbf{e}_1, \ldots, \mathbf{e}_k\}$ the idempotents of $A$, and set $A^{(n)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}A/\mathbf{m}^{n+1}$, $\mathbf{m}^{(n)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathbf{m}/\mathbf{m}^{n+1}$. 
Then for an element $u$ as in (\[nilp:u\]), the compositions of the maps $u_e$ for $e \in E(Q_{E_{\bullet}})$ along paths in $Q_{E_{\bullet}}$ define the linear map $$\begin{aligned} \mathbf{u} \colon \mathbf{m}_{ij}^{(n)} \to {\mathop{\rm Hom}\nolimits}(V_i, V_j), \ \mathbf{m}_{ij}^{(n)} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathbf{e}_i \cdot \mathbf{m}^{(n)} \cdot \mathbf{e}_j.\end{aligned}$$ On the other hand, let $$\begin{aligned} c^{(n)} \in \mathrm{MC}\left( \bigoplus_{i, j}{\mathop{\rm Ext}\nolimits}^{\ast}(E_i, E_j) \otimes_{\mathbb{C}} \mathbf{m}_{ij}^{(n)} \right)\end{aligned}$$ be the canonical element corresponding to the surjection $A \twoheadrightarrow A^{(n)}$ under the tautological identification (\[tautological\]). Applying the map (\[I:MC:map\]), we obtain $$\begin{aligned} \label{I:kappan} I_{\ast} (c^{(n)}) \in \mathrm{MC}\left( \bigoplus_{i, j}A^{0, \ast} ({\mathcal{H}}om^{\ast}({\mathcal{E}}_{i}^{\bullet}, {\mathcal{E}}_j^{\bullet})) \otimes_{\mathbb{C}} \mathbf{m}_{ij}^{(n)} \right). \end{aligned}$$ Then for $n\gg 0$, we have the identity $$\begin{aligned} \label{Iu:kappa} I_{\ast}(u)=\mathbf{u} \circ I_{\ast}(c^{(n)}) \in \mathrm{MC}(\mathfrak{g}_E^{\ast}).\end{aligned}$$ Let ${\mathcal{F}}^{(n)} \in \mathrm{Def}_{E_{\bullet}}^{\rm{nc}}(A^{(n)})$ be the NC deformation of $E_{\bullet}$ over $A^{(n)}$ corresponding to (\[I:kappan\]) under the isomorphism (\[MC:NC\]). Note that ${\mathcal{F}}^{(n)}$ is the universal NC deformation over $A$ pulled back by the surjection $A\twoheadrightarrow A^{(n)}$. Let $T \in {\mathop{\rm mod}\nolimits}_{\rm{nil}}(A)$ be the object given by the $Q_{E_{\bullet}}$-representation $u$.
Then the identity (\[Iu:kappa\]) implies that $$\begin{aligned} I_{\ast}(T) \cong T \otimes_{A^{(n)}} {\mathcal{F}}^{(n)}.\end{aligned}$$ By the construction of $\Phi$ in (\[Phi:eq\]), which goes back to the construction in Lemma \[lem:Phi0\], and the universality of ${\mathcal{F}}^{(n)}$, we have $\Phi(T)=T \otimes_{A^{(n)}} {\mathcal{F}}^{(n)}$. Therefore the proposition holds. In the diagram (\[dia:comiso\]), note that $p_Q^{-1}(0)$ consists of nilpotent $A$-modules and $p_M^{-1}(p)$ consists of objects in the extension closure $\langle E_1, \ldots, E_k \rangle$. The above proposition implies that the isomorphism (\[I:nil\]) is induced by the universal family over NC deformations. Moduli spaces of one dimensional semistable sheaves {#sec:one} =================================================== In this section, we focus on the case of moduli spaces of one dimensional semistable sheaves, and prove Theorem \[intro:thm:onedim\]. Twisted semistable sheaves -------------------------- Let $X$ be a smooth projective variety, and $A(X)_{\mathbb{C}}$ its complexified ample cone $$\begin{aligned} A(X)_{\mathbb{C}} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{B+i\omega \in \mathrm{NS}(X)_{\mathbb{C}} : \omega \mbox{ is ample }\}. \end{aligned}$$ Let $$\begin{aligned} {\mathop{\rm Coh}\nolimits}_{\le 1}(X) \subset {\mathop{\rm Coh}\nolimits}(X)\end{aligned}$$ be the abelian subcategory of coherent sheaves whose supports have dimensions less than or equal to one. For an object $E \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ and $B+i\omega \in A(X)_{\mathbb{C}}$, the *$B$-twisted $\omega$-slope* $\mu_{B, \omega}(E)$ is defined by $$\begin{aligned} \mu_{B, \omega}(E) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\frac{\chi(E)-B \cdot {\mathop{\rm ch}\nolimits}_{d-1}(E)}{\omega \cdot {\mathop{\rm ch}\nolimits}_{d-1}(E)} \in \mathbb{R} \cup \{\infty\}. 
\end{aligned}$$ Here $d=\dim X$, and we set $\mu_{B, \omega}(E)=\infty$ if $\omega \cdot {\mathop{\rm ch}\nolimits}_{d-1}(E)=0$, i.e. if $E$ is a zero dimensional sheaf. \[def:Bw\] An object $E \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ is $(B, \omega)$-(semi)stable if for any non-zero subsheaf $F \subsetneq E$, we have the inequality $$\begin{aligned} \mu_{B, \omega}(F)<(\le) \mu_{B, \omega}(E). \end{aligned}$$ \[rmk:B=0\] If $B=0$, then $E \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ is $(0, \omega)$-(semi)stable iff it is an $\omega$-Gieseker (semi)stable sheaf. \[rmk:tensor\] For any integer $k\ge 1$ and a line bundle ${\mathcal{L}}$ on $X$, we have $$\begin{aligned} \mu_{B, \omega}(E)=\mu_{kB, k\omega}(E)= \mu_{kB+c_1({\mathcal{L}}), k\omega}(E \otimes {\mathcal{L}}). \end{aligned}$$ In particular if $B, \omega$ are elements of $\mathrm{NS}(X)_{\mathbb{Q}}$ such that $kB, k\omega$ are integral, then for a line bundle ${\mathcal{L}}$ with $c_1({\mathcal{L}})=-kB$, a sheaf $E \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ is $(B, \omega)$-semistable iff $E \otimes {\mathcal{L}}$ is an $\omega$-Gieseker semistable sheaf. The $(B, \omega)$-stability condition is interpreted in terms of Bridgeland stability conditions [@Brs1] as follows. Let $N_1(X) \subset H_2(X, \mathbb{Z})$ be the group of numerical classes of algebraic one cycles on $X$ and set $$\begin{aligned} \Gamma_X {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}N_1(X) \oplus \mathbb{Z}.\end{aligned}$$ Let ${\mathop{\rm cl}\nolimits}$ be the group homomorphism defined by $$\begin{aligned} \label{def:cl} {\mathop{\rm cl}\nolimits}\colon K({\mathop{\rm Coh}\nolimits}_{\le 1}(X)) \to \Gamma_X, \ E \mapsto ([E], \chi(E))\end{aligned}$$ where $[E]$ is the fundamental one cycle associated to $E$. By definition, a *Bridgeland stability condition* on $D^b({\mathop{\rm Coh}\nolimits}_{\le 1}(X))$ w.r.t.
the group homomorphism (\[def:cl\]) consists of the data $$\begin{aligned} \label{def:stab} \sigma=(Z, {\mathcal{A}}), \ Z \colon \Gamma_X \to \mathbb{C}, \ {\mathcal{A}}\subset D^b({\mathop{\rm Coh}\nolimits}_{\le 1}(X))\end{aligned}$$ where $Z$ is a group homomorphism and ${\mathcal{A}}$ is the heart of a bounded t-structure, satisfying some axioms (see [@Brs1; @K-S] for details). It determines the set of *$\sigma$-(semi)stable objects*: $E \in D^b({\mathop{\rm Coh}\nolimits}_{\le 1}(X))$ is $\sigma$-(semi)stable if $E[k] \in {\mathcal{A}}$ for some $k\in \mathbb{Z}$, and for any non-zero subobject $0\neq F \subsetneq E[k]$ in ${\mathcal{A}}$, we have the inequality in $(0, \pi]$: $$\begin{aligned} \arg Z({\mathop{\rm cl}\nolimits}(F))<(\le) \arg Z({\mathop{\rm cl}\nolimits}(E[k])).\end{aligned}$$ The set of Bridgeland stability conditions (\[def:stab\]) forms a complex manifold, which we denote by ${\mathop{\rm Stab}\nolimits}_{\le 1}(X)$. The forgetful map $(Z, {\mathcal{A}}) \mapsto Z$ gives a local homeomorphism $$\begin{aligned} {\mathop{\rm Stab}\nolimits}_{\le 1}(X) \to (\Gamma_X)_{\mathbb{C}}^{\vee}. \end{aligned}$$ For a given element $B+i\omega \in A(X)_{\mathbb{C}}$, let $Z_{B, \omega}$ be the group homomorphism $\Gamma_X \to \mathbb{C}$ defined by $$\begin{aligned} \label{ZBw} Z_{B, \omega}(\beta, m) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}-m+(B+i\omega)\beta. \end{aligned}$$ Then the pair $$\begin{aligned} \label{sigma:Bw} \sigma_{B, \omega} {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}(Z_{B, \omega}, {\mathop{\rm Coh}\nolimits}_{\le 1}(X))\end{aligned}$$ determines a point in ${\mathop{\rm Stab}\nolimits}_{\le 1}(X)$. It is obvious that an object in ${\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ is $(B, \omega)$-(semi)stable iff it is Bridgeland $\sigma_{B, \omega}$-(semi)stable. We also refer to $(B, \omega)$-(semi)stable sheaves as *$\sigma_{B, \omega}$-(semi)stable objects*.
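As an aside, the compatibility between the twisted slope and the phase of the central charge can be checked numerically: since $Z_{B, \omega}(\beta, m)=(B\beta-m)+i\,\omega\beta$ with $\omega\beta>0$, we have $\mu_{B, \omega}=-{\mathop{\rm Re}\nolimits}Z/{\mathop{\rm Im}\nolimits}Z$, so comparing slopes is the same as comparing arguments of $Z$ in $(0, \pi]$. The following Python sketch, with invented intersection numbers $B\beta$, $\omega\beta$ and Euler characteristics (it is not part of the text), illustrates this.

```python
import cmath

def central_charge(B_beta, w_beta, m):
    # Z_{B,w}(beta, m) = -m + (B + i*w).beta, encoded through the two
    # intersection numbers B.beta and w.beta (toy values chosen below).
    return -m + complex(B_beta, w_beta)

def twisted_slope(B_beta, w_beta, m):
    # mu_{B,w} = (m - B.beta)/(w.beta), which equals -Re Z / Im Z.
    z = central_charge(B_beta, w_beta, m)
    return -z.real / z.imag

# Two toy classes (beta, m): invented intersection numbers, m = chi.
v1 = (0.5, 2.0, 3.0)
v2 = (1.0, 4.0, 5.0)

mu1, mu2 = twisted_slope(*v1), twisted_slope(*v2)
arg1, arg2 = cmath.phase(central_charge(*v1)), cmath.phase(central_charge(*v2))

# Larger twisted slope corresponds to larger phase of Z in (0, pi].
assert mu1 > mu2 and arg1 > arg2
print(mu1, mu2)  # 1.25 1.0
```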
Moreover the map $$\begin{aligned} A(X)_{\mathbb{C}} \to {\mathop{\rm Stab}\nolimits}_{\le 1}(X), \ (B, \omega) \mapsto \sigma_{B, \omega}\end{aligned}$$ is a continuous injective map, whose image is denoted by $$\begin{aligned} U(X) \subset {\mathop{\rm Stab}\nolimits}_{\le 1}(X).\end{aligned}$$ Moduli stacks of twisted semistable sheaves ------------------------------------------- For $\sigma=\sigma_{B, \omega} \in U(X)$ and $v \in \Gamma_X$, let $$\begin{aligned} {\mathcal{M}}_{\sigma}(v) \subset {\mathcal{M}}\end{aligned}$$ be the moduli stack of $\sigma$-semistable $E \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ with ${\mathop{\rm cl}\nolimits}(E)=v$. As in the case of Gieseker stability, we have the following: \[stack:twist\] The stack ${\mathcal{M}}_{\sigma}(v)$ is an algebraic stack of finite type with a projective coarse moduli space $M_{\sigma}(v)$. So we have the natural morphism $$\begin{aligned} p_M \colon {\mathcal{M}}_{\sigma}(v) \to M_{\sigma}(v). \end{aligned}$$ Moreover for each closed point $p \in M_{\sigma}(v)$, the same conclusion of Theorem \[thm:precise\] holds. If $B$ and $\omega$ are rational, then we can reduce the lemma to the case where $B=0$ and $\omega$ is integral by Remark \[rmk:tensor\]. In that case, the lemma follows from Theorem \[thm:precise\]. In general, by the wall-chamber structure on the space of Bridgeland stability conditions, there is a collection of real codimension one submanifolds $\{{\mathcal{W}}_{j}\}_{j \in J}$ in $A(X)_{\mathbb{C}}$ called *walls* such that ${\mathcal{M}}_{\sigma}(v)$ is constant if $\sigma$ is contained in a stratum $$\begin{aligned} \label{strata} \cap_{j \in J'} {\mathcal{W}}_{j} \setminus \cup_{j\notin J' }{\mathcal{W}}_j\end{aligned}$$ for some subset $J' \subset J$. Each wall is given by $\mu_{B, \omega}(\beta, n)=\mu_{B, \omega}(\beta', n')$ for some other $(\beta', n') \in \Gamma_X$ which is not proportional to $(\beta, n)$, i.e.
$$\begin{aligned} (n'\beta-n\beta')\omega = B\beta' \cdot \omega \beta-B\beta \cdot \omega \beta'. \end{aligned}$$ The above equation determines a hypersurface in $A(X)_{\mathbb{C}}$ in which rational points are dense. Therefore if $(B, \omega)$ is not rational, then we can perturb it within the stratum (\[strata\]) and assume that $(B, \omega)$ is rational. Moduli stacks of semistable Ext-quiver representations ------------------------------------------------------ For $v \in \Gamma_X$ and $\sigma=\sigma_{B, \omega} \in U(X)$, take a point $p\in M_{\sigma}(v)$. Suppose that $p$ is represented by a $(B, \omega)$-polystable sheaf $E$ of the form $$\begin{aligned} \label{onedim:poly} E=\bigoplus_{i=1}^k V_i \otimes E_i\end{aligned}$$ where $E_i \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ is $(B, \omega)$-stable with $\mu_{B, \omega}(E_i)=\mu_{B, \omega}(E)$. Then we have the Ext-quiver $Q_{E_{\bullet}}$ associated to the simple collection $$\begin{aligned} E_{\bullet}=(E_1, \ldots, E_k),\end{aligned}$$ together with a convergent relation $I_{E_{\bullet}}$ as in (\[relation:I\]). For $i \in V(Q_{E_{\bullet}})=\{1, 2, \ldots, k\}$, let $S_i$ be the one dimensional $Q_{E_{\bullet}}$-representation corresponding to the vertex $i$. We denote by $K(Q_{E_{\bullet}})$ the Grothendieck group of finite dimensional $Q_{E_{\bullet}}$-representations, and take the group homomorphism $$\begin{aligned} \mathbf{dim} \colon K(Q_{E_{\bullet}}) \to \Gamma_Q {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\bigoplus_{i=1}^k \mathbb{Z} \cdot \mathbf{dim} (S_i)\end{aligned}$$ by taking the dimension vectors.
Let us take another stability condition $$\begin{aligned} \label{sigma+} \sigma^{+}=\sigma_{B^{+}, \omega^{+}}= (Z_{B^{+}, \omega^{+}}, {\mathop{\rm Coh}\nolimits}_{\le 1}(X)) \in U(X).\end{aligned}$$ Then we have the group homomorphism $$\begin{aligned} Z_{Q}^{+} \colon K(Q_{E_{\bullet}}) \stackrel{\mathbf{dim}}{\to} \Gamma_Q \to \mathbb{C}, \ [S_i] \mapsto Z_{B^{+}, \omega^{+}}(E_i).\end{aligned}$$ The above group homomorphism determines a Bridgeland stability condition on the category of $Q_{E_{\bullet}}$-representations, and the associated (semi)stable representations. They can be described in terms of a slope stability condition, as in Definition \[def:Bw\]. Let $\mu_Q^{+}$ be the slope function on the category of $Q_{E_{\bullet}}$-representations defined by $$\begin{aligned} \mu_Q^{+}(-) {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}-\frac{{\mathop{\rm Re}\nolimits}Z_{Q}^+(-)}{{\mathop{\rm Im}\nolimits}Z_Q^+(-)}. \end{aligned}$$ Note that if $\mathbb{V}$ is a $Q_{E_{\bullet}}$-representation with dimension vector $$\begin{aligned} \label{vec:onedim:m} \vec{m}=(m_i)_{1\le i\le k}, \ m_i=\dim V_i\end{aligned}$$ then we have the identity $$\begin{aligned} \label{id:slopes} \mu_Q^{+}(\mathbb{V})=\mu_{B^{+}, \omega^{+}}(E)\end{aligned}$$ where $E$ is given by (\[onedim:poly\]). We have the following definition: A $Q_{E_{\bullet}}$-representation $\mathbb{V}$ is $\mu_{Q}^{+}$-(semi)stable if for any sub $Q_{E_{\bullet}}$-representation $0\neq \mathbb{V}' \subsetneq \mathbb{V}$, we have the inequality $$\begin{aligned} \mu_Q^{+}(\mathbb{V}') <(\le) \mu_Q^+(\mathbb{V}). \end{aligned}$$ For the dimension vector (\[vec:onedim:m\]), let $$\begin{aligned} \mathrm{Rep}_{Q_{E_{\bullet}}}^{+}(\vec{m}) \subset \mathrm{Rep}_{Q_{E_{\bullet}}}(\vec{m})\end{aligned}$$ be the (Zariski) open subset consisting of $\mu_Q^{+}$-semistable $Q_{E_{\bullet}}$-representations. The above open subset is a GIT semistable locus with respect to a certain character of $G$ (see [@MR1315461 Section 3]).
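The identity (\[id:slopes\]) is simply linearity: $Z_Q^{+}$ sends $[\mathbb{V}]$ to $\sum_i m_i Z_{B^{+}, \omega^{+}}(E_i)$, while ${\mathop{\rm cl}\nolimits}(E)=\sum_i m_i {\mathop{\rm cl}\nolimits}(E_i)$ for $E$ as in (\[onedim:poly\]), so both slopes are computed from the same complex number. A minimal numerical sketch, in Python with invented central charge values for the stable factors $E_i$ (again not part of the text), makes this explicit.

```python
# Invented values Z_{B+,w+}(E_i) for k = 3 stable factors; each has
# positive imaginary part since w+ is ample and each E_i is one dimensional.
Z_FACTORS = [complex(-1.0, 2.0), complex(0.5, 1.0), complex(-2.0, 3.0)]

def slope(z):
    # The slope attached to a central charge value: mu = -Re Z / Im Z.
    return -z.real / z.imag

def mu_Q(dim_vector):
    # Z_Q^+ is linear in the dimension vector, sending [S_i] to Z_{B+,w+}(E_i).
    return slope(sum(m * z for m, z in zip(dim_vector, Z_FACTORS)))

# For E = (+)_i V_i (x) E_i with dim V_i = m_i, cl(E) = sum_i m_i cl(E_i),
# so the sheaf slope mu_{B+,w+}(E) comes from the same linear combination.
m_vec = (2, 1, 3)
z_E = sum(m * z for m, z in zip(m_vec, Z_FACTORS))
assert mu_Q(m_vec) == slope(z_E)
print(round(mu_Q(m_vec), 4))  # 0.5357
```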
The quotients by $G$ $$\begin{aligned} {\mathcal{M}}_{Q_{E_{\bullet}}}^{+}(\vec{m})= [\mathrm{Rep}_{Q_{E_{\bullet}}}^{+}(\vec{m})/G], \ M_{Q_{E_{\bullet}}}^{+}(\vec{m})= \mathrm{Rep}_{Q_{E_{\bullet}}}^{+}(\vec{m}) {/\!\!/}G\end{aligned}$$ are the moduli stack of $\mu_{Q}^{+}$-semistable $Q_{E_{\bullet}}$-representations with dimension vector $\vec{m}$, and its coarse moduli space, respectively. We have the commutative diagram $$\begin{aligned} \xymatrix{ {\mathcal{M}}_{Q_{E_{\bullet}}}^{+}(\vec{m}) \ar@<-0.3ex>@{^{(}->}[r] \ar[d]_{p_Q^{+}} & {\mathcal{M}}_{Q_{E_{\bullet}}}(\vec{m}) \ar[d]^{p_Q} \\ M_{Q_{E_{\bullet}}}^{+}(\vec{m}) \ar[r]_{q_Q} & M_{Q_{E_{\bullet}}}(\vec{m}). }\end{aligned}$$ Here the vertical arrows are natural morphisms to the coarse moduli spaces, the top horizontal arrow is an open immersion and the bottom horizontal arrow $q_Q$ is induced by the universality of the GIT quotients. Note that $q_Q$ is projective due to a general argument of affine GIT quotients (see [@MR2004218 Section 6]). Let $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$ be a sufficiently small analytic open subset as in Definition \[defi:cmoduli\]. Let $$\begin{aligned} \mathrm{Rep}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} \subset \mathrm{Rep}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}\end{aligned}$$ be the open locus consisting of $\mu_Q^{+}$-semistable representations, where the RHS is defined as in (\[Rep:V\]). Then we set $$\begin{aligned} {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} & {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}[\mathrm{Rep}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V}/G], \\ M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} & {\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\mathrm{Rep}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} {/\!\!/}G. 
\end{aligned}$$ Here $M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V}$ is the analytic Hilbert quotient given in Lemma \[lem:Zquot2\], which is a closed analytic subspace of $V^+=q_Q^{-1}(V)$. We have the commutative diagram $$\begin{aligned} \label{dia:quiver} \xymatrix{ {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} \ar@<-0.3ex>@{^{(}->}[r] \ar[d]_{p_{(Q, I)}^{+}} \ar[dr]_-{r_{(Q, I)}}& {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar[d]^{p_{(Q, I)}} \\ M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} \ar[r]_{q_{(Q, I)}} & M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}. }\end{aligned}$$ Here the vertical arrows are natural morphisms to the coarse moduli spaces, the top horizontal arrow is an open immersion and the bottom horizontal arrow $q_{(Q, I)}$ is induced by the universality of analytic Hilbert quotients (see Lemma \[lem:universal\]). \[lem:proj\] The morphism $q_{(Q, I)}$ in the diagram (\[dia:quiver\]) is projective. We have the following commutative diagram $$\begin{aligned} \xymatrix{ M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} \ar[d]_{q_{(Q, I)}} \ar@<-0.3ex>@{^{(}->}[r] & V^{+} \ar[d] \ar@<-0.3ex>@{^{(}->}[r] \ar@{}[dr]|\square & M_{Q_{E_{\bullet}}}^{+}(\vec{m}) \ar[d]^{q_Q} \\ M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \ar@<-0.3ex>@{^{(}->}[r] & V \ar@<-0.3ex>@{^{(}->}[r] & M_{Q_{E_{\bullet}}}(\vec{m}). }\end{aligned}$$ Here the right diagram is a Cartesian square whose horizontal arrows are open immersions, and the horizontal arrows in the left diagram are closed immersions. Since $q_{Q}$ is projective, the morphism $q_{(Q, I)}$ is projective by the above diagram. Moduli stacks of semistable sheaves under the change of stability ----------------------------------------------------------------- Let us take $\sigma^{+}$ in (\[sigma+\]) sufficiently close to $\sigma$. 
Then by the wall-chamber structure on $U(X)$, any $\sigma^{+}$-semistable object $E$ with ${\mathop{\rm cl}\nolimits}(E)=v$ is $\sigma$-semistable. Then we have the commutative diagram $$\begin{aligned} \label{dia:M+} \xymatrix{ {\mathcal{M}}_{\sigma^{+}}(v) \ar@<-0.3ex>@{^{(}->}[r] \ar[dr]_{r_M} \ar[d]_{p_M^{+}} & {\mathcal{M}}_{\sigma}(v) \ar[d]^{p_M} \\ M_{\sigma^{+}}(v) \ar[r]_{q_M} & M_{\sigma}(v). }\end{aligned}$$ Here the vertical arrows are natural morphisms to the coarse moduli spaces, the top arrow is an open immersion and the bottom arrow is induced by the universality of coarse moduli spaces. The following is the main result in this section. \[thm:onedim\] For a closed point $p\in M_{\sigma}(v)$ represented by a polystable sheaf (\[onedim:poly\]), there are analytic open neighborhoods $p \in U \subset M_{\sigma}(v)$ and $0 \in V \subset M_{Q_{E_{\bullet}}}(\vec{m})$, where $Q_{E_{\bullet}}$ is the Ext-quiver associated to $p$ with convergent relation $I_{E_{\bullet}}$, and the dimension vector $\vec{m}$ is given by (\[vec:onedim:m\]), such that the diagram (\[dia:M+\]) pulled back to $U$ $$\begin{aligned} \notag \xymatrix{ r_M^{-1}(U) \ar@<-0.3ex>@{^{(}->}[r] \ar[d]_{p_M^{+}} & p_M^{-1}(U) \ar[d]^{p_M} \\ q_M^{-1}(U) \ar[r]_{q_M} & U }\end{aligned}$$ is isomorphic to the diagram (\[dia:quiver\]). We take $U={\mathcal{W}}{/\!\!/}G$, $V \subset M_{Q_{E_{\bullet}}}(\vec{m})$ and the isomorphism $$\begin{aligned} \label{isom:Iast} I_{\ast} \colon {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V} \stackrel{\cong}{\to} p_M^{-1}(U)\end{aligned}$$ as in Proposition \[prop:complete\]. It is enough to show that the isomorphism (\[isom:Iast\]) restricts to the isomorphism $$\begin{aligned} \label{desire} I_{\ast} \colon {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} \stackrel{\cong}{\to} r_M^{-1}(U).
\end{aligned}$$ For a $\mathbb{C}$-valued point $x \in {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}$, let $\mathbb{V}_x$ be the corresponding $Q_{E_{\bullet}}$-representation, and $E_x \in {\mathop{\rm Coh}\nolimits}_{\le 1}(X)$ the $(B, \omega)$-semistable sheaf corresponding to $I_{\ast}(x) \in p_M^{-1}(U)$. Let ${\mathcal{Z}}\subset {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V}$ be the closed substack given by $$\begin{aligned} {\mathcal{Z}}{\mathrel{\raise.095ex\hbox{:}\mkern-4.2mu=}}\{ x\in {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V} : I_{\ast}(x) \notin r_M^{-1}(U)\}.\end{aligned}$$ Namely $x \in {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}$ is a $\mathbb{C}$-valued point of ${\mathcal{Z}}$ iff $\mathbb{V}_x$ is $\mu_Q^{+}$-semistable but $E_x$ is not $(B^{+}, \omega^{+})$-semistable. Below we use the notation in the diagram (\[dia:quiver\]). By Lemma \[lem:pstab\] below, we have $$\begin{aligned} \label{h:emptyset} {\mathcal{Z}}\cap (r_{(Q, I)})^{-1}(0)=\emptyset. \end{aligned}$$ On the other hand, by Lemma \[lem:univ:prepare\] the subset $$\begin{aligned} p_{(Q, I)}^{+}({\mathcal{Z}}) \subset M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}^{+}(\vec{m})|_{V}\end{aligned}$$ is closed. Together with Lemma \[lem:proj\], we see that $$\begin{aligned} r_{(Q, I)}({\mathcal{Z}})=q_{(Q, I)} \circ p_{(Q, I)}^{+}({\mathcal{Z}}) \subset M_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}\end{aligned}$$ is a closed subset. By (\[h:emptyset\]), the above closed subset does not contain $0$. Therefore by shrinking $V$ if necessary, we may assume that ${\mathcal{Z}}=\emptyset$, i.e. (\[isom:Iast\]) takes ${\mathcal{M}}^{+}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}$ to $r_M^{-1}(U)$. Next for $x \in {\mathcal{M}}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}$, suppose that $E_x$ is $(B^{+}, \omega^{+})$-semistable, i.e. $I_{\ast}(x) \in r_M^{-1}(U)$. 
Note that by (\[id:slopes\]), we have $$\begin{aligned} \notag \mu_{Q}^{+}(\mathbb{V}_x)=\mu_{B^{+}, \omega^{+}}(E_x).\end{aligned}$$ By the functoriality of $I_{\ast}$ in Subsection \[subsec:functI\] and the above equality, if a sub $Q_{E_{\bullet}}$-representation $\mathbb{V}' \subset \mathbb{V}_x$ destabilizes $\mathbb{V}_x$ in $\mu_Q^{+}$-stability, then by applying $I_{\ast}$ and noting Remark \[rmk:operator\] we obtain the subsheaf $E' \subset E_x$ which destabilizes $E_x$ in $(B^{+}, \omega^{+})$-stability. This is a contradiction, so $\mathbb{V}_x$ is $\mu_Q^+$-semistable, i.e. $x \in {\mathcal{M}}^{+}_{(Q_{E_{\bullet}}, I_{E_{\bullet}})}(\vec{m})|_{V}$. Therefore we obtain the desired isomorphism (\[desire\]). We have used the following lemma: \[lem:pstab\] Under the equivalence $I_{\ast}$ in Theorem \[cor:equiv:I\], an object $\mathbb{V} \in {\mathop{\rm mod}\nolimits}_{\rm{nil}}(A)$ with $\dim \mathbb{V}=\vec{m}$ is $\mu_{Q}^+$-semistable iff $F=I_{\ast}(\mathbb{V})$ is $(B^{+}, \omega^{+})$-semistable in ${\mathop{\rm Coh}\nolimits}_{\le 1}(X)$. The if direction is proved in the first part of the proof of Theorem \[thm:onedim\], so we only prove the only if direction. Suppose by contradiction that $\mathbb{V}$ is $\mu_Q^+$-semistable but $F$ is not $(B^{+}, \omega^{+})$-semistable. Then there is a non-zero subsheaf $F' \subsetneq F$ such that $\mu_{B^{+}, \omega^+}(F')>\mu_{B^{+}, \omega^+}(F)$. On the other hand, as $\sigma^+$ is sufficiently close to $\sigma$ we may assume that there is no wall between $\sigma$ and $\sigma^+$ w.r.t. the numerical class ${\mathop{\rm cl}\nolimits}(F)$. So we have $\mu_{B, \omega}(F')\ge \mu_{B, \omega}(F)$. Since $F \in \langle E_1, \ldots, E_k \rangle$ and each $E_i$ is $(B, \omega)$-stable with the same slope, the sheaf $F$ is $(B, \omega)$-semistable. Therefore we have $\mu_{B, \omega}(F')\le \mu_{B, \omega}(F)$, thus $\mu_{B, \omega}(F')=\mu_{B, \omega}(F)$ and $F'$ is also $(B, \omega)$-semistable. 
By the uniqueness of JH factors of $(B, \omega)$-semistable sheaves, we have $F' \in \langle E_1, \ldots, E_k \rangle$. Then by the equivalence $I_{\ast}$ in Theorem \[cor:equiv:I\], we find a subobject $\mathbb{V}' \subset \mathbb{V}$ in ${\mathop{\rm mod}\nolimits}_{\rm{nil}}(A)$ with $I_{\ast}(\mathbb{V}') \cong F'$. By the identity (\[id:slopes\]), the subobject $\mathbb{V}'$ destabilizes $\mathbb{V}$, hence a contradiction. Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan. *E-mail address*: yukinobu.toda@ipmu.jp
--- abstract: | In this paper, we are interested in conical structures of manifolds with respect to the Ricci flow and, in particular, we study them from the point of view of Perelman’s functionals. In a first part, we study Perelman’s $\lambda$ and $\nu$ functionals of cones and characterize their finiteness in terms of the $\lambda$-functional of the link. As an application, we characterize manifolds with conical singularities on which a $\lambda$-functional can be defined and get upper bounds on the $\nu$-functional of asymptotically conical manifolds. We then present an adaptation of the proof of Perelman’s pseudolocality theorem and prove that cones over some perturbations of the unit sphere can be smoothed out by type III immortal solutions of the Ricci flow. author: - OZUCH Tristan title: | Perelman’s functionals on cones\ Construction of type III Ricci flows coming out of cones --- Introduction ============ Just like in the study of many geometric equations, a crucial aspect of the study of Ricci flows is the analysis of singularities, and in particular the possibility to continue the flow through some of them. One type of singularity that is currently attracting much interest is conical singularities, where the local geometry near some points of the manifold is modeled on that of a Riemannian cone. There has been a lot of work on desingularizing such manifolds, notably via expanding solitons coming out of cones, see [@gs]. A good picture of a Ricci flow continued through a conical singularity that has formed is given in [@FIK], where a gradient shrinking soliton shrinks into a cone (the singularity) that is then smoothed out by a gradient expanding soliton. As a consequence, the search for expanding solitons asymptotic to cones is currently a particularly topical subject (see [@SS; @DMod; @Dth]).
Cone manifolds are therefore a subject of interest for the study of Ricci flows.\ In this paper, we first study cone manifolds from the point of view of Perelman’s $\lambda$ and $\nu$ functionals: we characterize their finiteness and give several lower bounds for cones over some perturbations of the sphere or its quotients. We use this to characterize manifolds with conical singularities with finite $\lambda$-functional and give some upper bounds on the $\nu$-functional of asymptotically conical manifolds and manifolds with conical singularities. We then adapt the proof of Perelman’s pseudolocality theorem to prove that cones over some perturbations of the unit sphere can be smoothed out by an immortal type III solution of the Ricci flow coming out of the cone that is asymptotic to the cone at all times. In some cases, these manifolds are gradient expanding solitons. Main definitions ---------------- Let us start by giving some definitions of the mathematical objects we will be considering. The definitions will be minimal; for more developed explanations, see [@TheRF] for detailed notes on the Ricci flow or [@KleinLott] for notes focused on Perelman’s papers.\ A *Ricci flow* on a differentiable manifold $N$ on an interval $I$ is a family of Riemannian metrics $(g_t)_{t \in I}$ on $N$ ($t$ will be referred to as the *time*) satisfying the following evolution equation: $$\partial_tg_t = -2\operatorname{\textup{Ric}}(g_t),$$ where $\operatorname{\textup{Ric}}(g_t)$ is the Ricci curvature associated to the metric $g_t$.\ A *Ricci soliton* is a particular Ricci flow that is a fixed point of the Ricci flow up to pull-back by diffeomorphisms and scaling of the metric (they are self-similar solutions of the Ricci flow). In other words, if $(g_t)_t$ is a Ricci soliton, there exists a one-parameter family of diffeomorphisms $\zeta_t$ and a scaling factor $\gamma$ such that $g_t = (1+\gamma t)\zeta_t^*g_0$.
The factor is affine because the Ricci tensor is scale invariant while the metric is not. Note that if $\zeta_t$ is generated by a vector field $-V$, being a Ricci soliton is equivalent to satisfying what is called the *Ricci soliton equation*: $$\begin{aligned} \label{ricci soliton equation} \operatorname{\textup{Ric}}+ \mathcal{L}_V g_0 -\frac{\gamma}{2} g_0 = 0.\end{aligned}$$ An *expanding* soliton satisfies $\gamma>0$, a *steady* soliton corresponds to $\gamma = 0$ and a *shrinking* soliton corresponds to $\gamma<0$.\ A *cone* over a *link* $(N,g^N)$, denoted *C(N)*, is defined as: $$(C(N),g)=\left(\mathbb{R}^+\times N, dr^2+r^2g^N\right).$$ Whenever we work on a cone, we will denote by $r$ the coordinate on the $\mathbb{R}^+$ factor and by $dr^2$ the associated metric. Throughout this paper, we will consider $N$ of dimension **n**, that is, $C(N)$ of dimension **(n+1)**, and $M$ a manifold of dimension **(n+1)**. We will call a Ricci flow *nonsingular* if it is defined on an interval of the form $[t_0,+\infty)$ (for example, expanding and steady solitons are immortal).\ We will say that a Ricci flow on $M$ is of *type III* if it is defined for times $t\in[0,+\infty)$ and has controlled curvature in the following way: there exists $C>0$ such that for all $t$: $$|\operatorname{\textup{Rm}}|(.,t)\leqslant \frac{C}{t}.$$ This is the condition needed to take limits of blowdowns of Ricci flows (a process described in the appendix), the usual process used to construct expanding solitons. We will say that a Ricci flow $(g_t)_{t\in (0,+\infty)}$ on $M = \mathbb{R}^+\times N$ is *coming out of the cone $C(N)$* if the two following properties hold: $\;\;$ 1)$\;\;$The metric space $(\mathbb{R}^+\times N\backslash (0,.), g_t)$ converges to the metric space $(C(N),d^{C(N)})$ in the Gromov-Hausdorff sense as $t\to 0$. $\;\;$ 2)$\;\;$On $M\backslash\left\{(0,.)\right\}$, $g_t$ converges *smoothly* to $dr^2+r^2g^N$ where $r(x) = d^{C(N)}(x,(0,.))$.
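As an elementary illustration of the cone definition (a standard fact, stated here as a sanity check rather than taken from the paper): the cone over the unit round sphere is simply Euclidean space minus the origin, written in polar coordinates.

```latex
% Polar coordinates x = r\theta, with r = |x| > 0 and \theta \in \mathbb{S}^n,
% identify the flat metric with the cone metric over the unit sphere:
\left(\mathbb{R}^{n+1}\setminus\{0\},\, g_{\mathrm{eucl}}\right)
  \;=\; \left(\mathbb{R}^{+}\times\mathbb{S}^{n},\; dr^{2}+r^{2}\,g^{\mathbb{S}^{n}}\right)
  \;=\; C(\mathbb{S}^{n}).
% For a rescaled link \beta^{2} g^{\mathbb{S}^{n}} with \beta \neq 1, the same
% coordinates give a genuine conical singularity at r = 0.
```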
Presentation of the main results -------------------------------- ### Characterization of cones with finite $\mu$ and $\nu$-functionals We define a notion of Perelman’s $\mu$ and $\nu$-functionals on Riemannian cones and characterize their finiteness. In many cases, the $\mu$-functional of a cone is equal to $-\infty$, and so is the $\lambda$-functional. A characterization of cones with infinite $\mu$ or $\lambda$-functional is given in terms of the $\lambda$-functional of the link, namely: Given $N$ a compact $n$-dimensional ($n \geqslant 2$) Riemannian manifold, denoting: $$\left\{ \begin{array}{ll} \mu^{C(N)}(\tau):= \mu\left(\left(\mathbb{R}^+\times N,dr^2+r^2g^N\right),\tau\right),\\ \nu^{C(N)}:= \nu\left(\mathbb{R}^+\times N,dr^2+r^2g^N\right),\\ \lambda^{C(N)}:= \lambda\left(\mathbb{R}^+\times N,dr^2+r^2g^N\right),\\ \lambda^{N}:= \lambda\left(N,g^N\right), \end{array} \right.$$ we have the following information on Perelman’s functionals of the cone in terms of its link: - For the $\mu$ and $\nu$-functionals, we have, for all $\tau>0$: $$\begin{aligned} &\mu^{C(N)}(\tau) = \nu^{C(N)} = -\infty,\;\text{ if and only if: }\;\lambda^N\leqslant (n-1).\end{aligned}$$ - For the $\lambda$-functional, we have: $$\lambda^{C(N)}= -\infty,\;\text{ if and only if: }\;\lambda^N< (n-1),$$ and: $$\lambda^{C(N)} = 0 ,\;\text{ if and only if: }\; \lambda^N\geqslant (n-1).$$ This makes clear that it is possible to have $\lambda$ bounded while the minimum of the scalar curvature is arbitrarily negative (here we can have $\operatorname{\textup{R}}_{min}=-\infty$ and $\lambda = 0$). The condition $\lambda^N>(n-1)$ is a rather strong positivity condition on the curvature that limits the possible topologies and sizes of the link (recall that $\operatorname{\textup{R}}_{min}\leqslant \lambda\leqslant \operatorname{\textup{R}}_{av}$ on compact manifolds).
The result is actually a log-Sobolev inequality on cones, and the condition $\lambda^N>(n-1)$ can also be seen as a condition implying a dimension-free log-Sobolev inequality on the link: for example, $\operatorname{\textup{Ric}}^N> \frac{n-1}{n}g^N$ implies $\lambda^N>(n-1)$, and it also implies a log-Sobolev inequality with constant $1$ on $N$, see [@ca]. ### Perelman’s functionals on manifolds with conical singularities and asymptotically conical manifolds As a consequence of the previous results on cones, and as a justification of the usefulness of our definitions, we can characterize the compact manifolds with conical singularities (see definition \[manifold with conical singularities\]) on which it is possible to define a $\lambda$-functional. Let $(M^n,g)$ be a compact manifold with conical singularities (see definition \[manifold with conical singularities\]) such that one singularity is modeled on a cone $C(N)$ over a link $N$ such that $\lambda^N < (n-2)$. Then $$\lambda^M = -\infty.$$ Conversely, if each singularity at a point $x_i$ is modeled on a cone $C(N_i)$ with a link $N_i$ such that $\lambda^{N_i} > (n-2)$, then $$\lambda^M > -\infty.$$ It was proven by Wang and Dai, in Wang’s PhD thesis, that the $\lambda$-functional is finite when $R^N>(n-2)$ for the link of each conical singularity. Since $\lambda^N\geqslant\min(R^N)$, we recover their result and obtain a precise threshold. Note that with the definition (\[manifold with conical singularities\]) of a manifold with conical singularities we chose, we cannot decide the case $\lambda^N = (n-2)$. If there is a fast enough convergence to the conical model at each singularity, then $\lambda^N = (n-2)$ implies $\lambda^M > -\infty$.
For $(M^n,g)$, a manifold smoothly asymptotic to the cone $C(N)$ at infinity, if $\lambda^N\leqslant (n-2)$, then $$\nu^M(g) = - \infty.$$ If $\lambda^N > (n-2)$, then $$\nu^M \leqslant \nu^{C(N)}.$$ ### A global pseudolocality theorem We give a simple condition implying that a manifold will generate a nonsingular type III Ricci flow, by just checking its $\nu$-functional. The proof is an adaptation of the proof of Perelman’s pseudolocality theorem: For all $n\geqslant 3$, there exists $\eta_n>0$ (supposed optimal for the following property) such that, for any $n$-dimensional Riemannian manifold $(M,g)$ such that the Ricci flow starting at it exists for a short time: if the manifold satisfies $$\nu(g)> -\eta_n,$$ then the Ricci flow starting at $M$ exists and is nonsingular for all time $t>0$, and we have the following estimate: there exists $\alpha(\nu(g))$ such that, for all $t>0$: $$\begin{aligned} |\operatorname{\textup{Rm}}(g_t)|\leqslant \frac{\alpha}{t}.\end{aligned}$$ In other words, the Ricci flow is nonsingular and of type III. By Perelman’s description of the high-curvature regions in dimension 3, we can get an explicit constant $\eta_3 = \inf_{\alpha>0}(\eta(\alpha,3)) = 1-\log 2$. ### Construction of type III solutions and expanding solitons coming out of some cones The global pseudolocality theorem giving a way to ensure that a Ricci flow is of type III, we construct type III solutions of the Ricci flow asymptotic to some cones. Realizing that some cones have a high $\nu$-functional, we prove that it is sometimes possible to smooth them out by manifolds (asymptotic to the initial cone) while keeping a large $\nu$-functional, so that we can apply the pseudolocality result of the previous part.\ This smoothing process relies on the study of the renormalizations of the Ricci flow (applied to the link only) adapted to manifolds shrinking in finite time to a sphere.
It also relies on the study of the $\mathcal{W}$-functional of perturbations of the unit sphere.\ We get in particular the following result for some cones over perturbations of the unit sphere. For all $n\geqslant 2$, there exist $\beta_1(n)<1<\beta_2(n)$ such that, for every metric $g^N$ on $\mathbb{S}^n$ satisfying the property denoted $P(\beta_1,\beta_2)$: - *Positivity of the curvature*: the isotropic curvature when crossed with $\mathbb{R}^2$ is positive. - *$C^0$-closeness to the sphere*: $$\beta_1^2g^{\mathbb{S}^n}\leqslant g^N \leqslant \beta_2^2g^{\mathbb{S}^n}.$$ - *Lower bound on the scalar curvature*: $$\operatorname{\textup{R}}^N\geqslant \frac{n(n-1)}{\beta_2^2}.$$ Then, there exists an immortal type III solution of the Ricci flow $(\mathbb{R}^{n+1},g_t)$ coming out of the cone $C(N) := (\mathbb{R}^+\times \mathbb{S}^n, dr^2 +r^2g^N)$ that stays asymptotic to it at all times. Having a positive curvature operator actually implies the first condition. Introduction to Perelman’s functionals -------------------------------------- Let us define Perelman’s $\mathcal{F}$, $\lambda$, $\mathcal{W}$, $\mu$ and $\nu$ functionals and recall their principal features that will be used throughout the paper. There is a lot of literature about these quantities (see in particular [@TheRF; @KleinLott] and, for the first introduction by Perelman, [@Perentformula]). ### The $\mathcal{F}$ and $\lambda$ functionals Let us start by defining the $\mathcal{F}$-functional, which is the basic functional from which the other ones are defined.
We define the *$\mathcal{F}$-functional*: $$\begin{aligned} \mathcal{F}(\phi,g):=\int_N\left(|\nabla \phi|^2+\operatorname{\textup{R}}\right)e^{-\phi}dv,\end{aligned}$$ where $\operatorname{\textup{R}}$ is the *scalar curvature* of $g$.\ From this, we can define another important quantity: we define the *$\lambda$-functional* as: $$\begin{aligned} \lambda(g) := \inf_{\phi}\mathcal{F}(\phi,g),\end{aligned}$$ where the infimum is taken among smooth $\phi$ such that $$\int_M e^{-\phi}dv = 1.$$ Note that it is also the first eigenvalue of the operator $$-4\Delta +\operatorname{\textup{R}},$$ and is always between the least value of the scalar curvature $\operatorname{\textup{R}}_{min}$ and its average $\operatorname{\textup{R}}_{av}$. The main feature of this functional is that the Ricci flow can be seen as a gradient flow *up to diffeomorphisms* for it: if $g$ and $\phi$ are evolving according to: $$\left\{ \begin{array}{ll} \partial_t g &= -2\operatorname{\textup{Ric}},\\ \partial_t \phi &= -\Delta \phi+|\nabla \phi|^2-\operatorname{\textup{R}}, \end{array} \right.\label{evolution equation F}$$ then: $$\begin{aligned} \partial_t \left(\mathcal{F}(g_t,\phi_t)\right) = 2\int_M|\operatorname{\textup{Ric}}+Hess \phi|^2 e^{-\phi}dv\geqslant 0,\end{aligned}$$ and the monotonicity is strict unless $(g,\phi)$ is a *gradient steady soliton*, which is a steady soliton for which the vector field $V$ is $\nabla \phi$. These functionals are invariant under the action of diffeomorphisms.
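To make the eigenvalue characterization above concrete (a standard computation, not taken from the paper, but consistent with the discussion of cones over spheres later on): on a round sphere the scalar curvature is constant, so the constant function is the first eigenfunction of $-4\Delta+\operatorname{\textup{R}}$.

```latex
% On the round sphere of radius \beta, (\mathbb{S}^n, \beta^2 g^{\mathbb{S}^n}),
% the scalar curvature is constant, R \equiv n(n-1)/\beta^2, hence
\lambda^{\beta\mathbb{S}^{n}}
  \;=\; \inf\operatorname{spec}\!\left(-4\Delta+\operatorname{\textup{R}}\right)
  \;=\; \frac{n(n-1)}{\beta^{2}},
% attained by the constant function. In particular
\lambda^{\beta\mathbb{S}^{n}} \leqslant (n-1)
  \;\Longleftrightarrow\;
  \beta \geqslant \sqrt{n}.
```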
### The $\mathcal{W}$, $\mathcal{N}$, $\mu$ and $\nu$ functionals Let us now define the $\mathcal{W}$, $\mu$ and $\nu$ functionals, which are the most useful functionals to study finite-time singularities of the Ricci flow. Let us give the formulas right away: - The *$\mathcal{W}$-functional* is defined as: $$\begin{aligned} \mathcal{W}(f,g,\tau) :&= \int_M\left[\tau\left(|\nabla f|^2+\operatorname{\textup{R}}\right)+f-n\right]\left(\frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}dv\right) \\ &= \tau\mathcal{F}(\phi,g)+\mathcal{N}(\phi,g)-\frac{n}{2}\log(4\pi\tau)-n,\end{aligned}$$ where $\mathcal{N}(\phi,g) := \int_M \phi e^{-\phi}dv$ is the *Nash entropy* functional,\ and where $\phi := f+\frac{n}{2}\log(4\pi\tau)$ satisfies $\int e^{-\phi} = 1$. - The *$\mu$-functional* (often referred to as the *entropy*) is defined by: $$\begin{aligned} \mu(g,\tau) := \inf_f \mathcal{W}(f,g,\tau),\end{aligned}$$ where the infimum is taken among the smooth $f$ such that $$\int\frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}dv =1.$$ - The *$\nu$-functional* is defined by: $$\begin{aligned} \nu(g) := \inf_{\tau>0} \mu(g,\tau).\end{aligned}$$ These quantities can again be used to see the Ricci flow as a gradient flow *up to diffeomorphisms and change of scale*: if $g$ and $\phi$ are evolving according to: $$\left\{ \begin{array}{ll} \partial_t g &= -2\operatorname{\textup{Ric}},\\ \partial_t f &= -\Delta f+|\nabla f|^2-\operatorname{\textup{R}}+\frac{n}{2\tau},\\ \partial_t \tau &= -1, \end{array} \right.\label{evolution equation W}$$ then: $$\begin{aligned} \partial_t \left(\mathcal{W}(f_t,g_t,\tau(t))\right) = 2\tau\int_M\left|\operatorname{\textup{Ric}}+Hess f-\frac{1}{2\tau}g\right|^2 \frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}dv\geqslant 0,\end{aligned}$$ and the monotonicity is strict unless $(g,\phi,\tau)$ is a *gradient shrinking soliton*, which is a shrinking soliton for which the vector field $V$ is $\nabla f$ and $\gamma = -\frac{1}{2\tau}.$ These functionals are invariant by diffeomorphism
action and also by *parabolic scaling*: $(g,\tau)\mapsto(\alpha g, \alpha \tau)$ for $\alpha>0$. We will sometimes use the following change of function: $$u^2 = \frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}.$$ For notational purposes, we also record the $\mathcal{W}$-functional expressed in terms of $u$; for an $n$-dimensional manifold $M$, the expression is the following: $$\begin{aligned} \mathcal{W}^M(f,g,\tau) =& \int_M [\tau(|\nabla f|^2 + \operatorname{\textup{R}})+f-n]\left(\frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}dv\right) \nonumber\\ =&\int_M [\tau(4|\nabla u|^2+\operatorname{\textup{R}}u^2)-u^2\log(u^2)]dv-\frac{n}{2}\log(4\pi\tau)-n.\label{corresp f u}\end{aligned}$$ $\mathcal{W}$ and $\mu$-functionals on cones -------------------------------------------- We will define the functionals by the usual expression on the smooth part of the cone, that is, the cone without its tip. This definition is justified by the fact that, given a manifold with conical singularities or asymptotic to some cone, the values of the functionals on the cone give information on their values on the manifold. All along the paper, we will be interested in cones from the point of view of Ricci flows, and we will in particular look at them through Perelman’s functionals, which take a particular form. A few basic facts to keep in mind about these functionals: - For a cone, for all $\tau>0$, $\mu(g,\tau) = \nu(g)$, because the cone is scale invariant and the $\mu$-functional is invariant under parabolic scaling. - The $\mu$-functional of the Euclidean cone $\mathbb{R}^{n+1} = C(\mathbb{S}^n)$ vanishes, and this is the only cone with this property (in the other cases, $\nu^{C(N)}<0$).\ The minimizers at scale $\tau$ are the Gaussians corresponding to the potential $\frac{|.-x_0|^2}{4\tau}$ for any point $x_0$ on the manifold.
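As a consistency check of the last fact (a classical Gaussian computation, reproduced here under the assumption $x_0=0$): on flat $\mathbb{R}^n$ with $f=\frac{|x|^2}{4\tau}$, the weight $\frac{e^{-f}}{(4\pi\tau)^{n/2}}dx$ is a probability measure with second moment $\int |x|^2\,\frac{e^{-f}}{(4\pi\tau)^{n/2}}dx = 2n\tau$, so each term of $\mathcal{W}$ can be evaluated directly.

```latex
% Flat metric, R = 0, f = |x|^2/(4\tau), |\nabla f|^2 = |x|^2/(4\tau^2):
\mathcal{W}\!\left(f, g_{\mathrm{eucl}}, \tau\right)
  = \int_{\mathbb{R}^{n}}
      \Big[\underbrace{\tau\,\tfrac{|x|^{2}}{4\tau^{2}}}_{\text{mean } n/2}
        +\underbrace{\tfrac{|x|^{2}}{4\tau}}_{\text{mean } n/2}-n\Big]
      \frac{e^{-|x|^{2}/(4\tau)}}{(4\pi\tau)^{n/2}}\,dx
  = \frac{n}{2}+\frac{n}{2}-n = 0 .
```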
Even though it would be possible to make every computation at $\tau = 1$ thanks to the first point, we will make every computation at a given $\tau > 0$ to emphasize the independence in $\tau$, and because the computations will also be used for warped products later in this paper. ### The basic formula On a cone, the formula for the entropy has a particular expression, which is given by the next lemma. We will refer to it as “basic”.\ The definition we will take is the usual formula for the cone with the tip excluded, which is an incomplete manifold. This definition is motivated by the applications to manifolds with conical singularities and asymptotically conical manifolds that we present. $$\begin{aligned} \mathcal{W}^{C(N)}(f,g^{C(N)},\tau) = \int_0^\infty\int_N&\left[\tau\left((\partial_r f)^2+\frac{|\nabla^N f|^2+(\operatorname{\textup{R}}^N-n(n-1))}{r^2}\right)\right.\nonumber\\ &\left. +f-(n+1)\right]\left(\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}r^ndv dr \right). \label{basic}\end{aligned}$$ On a cone over a manifold $N$: $(C(N),g^{C(N)}) = (\mathbb{R}^+\times N,dr^2+r^2g^N)$, with the coordinates $(r,x)\in \mathbb{R}^+\times N$ such that $d^{C(N)}\left((0,x),(r,x)\right)=r$, we have the following formulas for any function $f$ on the cone: - $\operatorname{\textup{Ric}}= \left[ \begin{array}{cc} 0 & 0 \\ 0 & \operatorname{\textup{Ric}}^N-(n-1)g^N \end{array} \right]$ (it vanishes except in the directions of the link), - $\operatorname{\textup{R}}= \frac{\operatorname{\textup{R}}^N-n(n-1)}{r^2}$, - $|\nabla f|^2 = (\partial_r f)^2+\frac{|\nabla^N f|^2}{r^2}$, - $dv^{C(N)} = r^ndv^Ndr$ (we will denote by $dv$ the volume form on $N$ in the following).
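The curvature formulas above can be verified symbolically in a low-dimensional case. The following sketch (our own illustration, assuming `sympy` is available; the `christoffel` and `ricci` helpers are ours, not part of the paper) computes the scalar curvature of the $3$-dimensional cone $dr^2+r^2\beta^2 g^{\mathbb{S}^2}$ directly from the metric and compares it with the formula $\operatorname{\textup{R}} = (\operatorname{\textup{R}}^N-n(n-1))/r^2$, here with $n=2$ and $\operatorname{\textup{R}}^N = 2/\beta^2$.

```python
import sympy as sp

# Coordinates (r, theta, phi) on the cone over the round 2-sphere of
# radius beta: g = dr^2 + r^2 beta^2 (dtheta^2 + sin^2(theta) dphi^2).
r, th, ph, b = sp.symbols('r theta phi beta', positive=True)
x = [r, th, ph]
g = sp.diag(1, b**2 * r**2, b**2 * r**2 * sp.sin(th)**2)
ginv = g.inv()
m = len(x)

def christoffel(g, ginv, x):
    """Christoffel symbols Gam[k][i][j] of the Levi-Civita connection."""
    m = len(x)
    return [[[sp.simplify(
        sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                          - sp.diff(g[i, j], x[l])) for l in range(m)) / 2)
        for j in range(m)] for i in range(m)] for k in range(m)]

def ricci(Gam, x):
    """Ricci tensor R_ij = d_k Gam^k_ij - d_j Gam^k_ik + Gam*Gam terms."""
    m = len(x)
    Ric = sp.zeros(m, m)
    for i in range(m):
        for j in range(m):
            expr = 0
            for k in range(m):
                expr += sp.diff(Gam[k][i][j], x[k]) - sp.diff(Gam[k][i][k], x[j])
                for l in range(m):
                    expr += Gam[k][k][l] * Gam[l][i][j] - Gam[k][j][l] * Gam[l][i][k]
            Ric[i, j] = sp.simplify(expr)
    return Ric

Gam = christoffel(g, ginv, x)
Ric = ricci(Gam, x)
R = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(m) for j in range(m)))

# Lemma's formula with n = 2 and R^N = 2/beta^2 (link = sphere of radius beta):
expected = (2 / b**2 - 2) / r**2
assert sp.simplify(R - expected) == 0
```

Setting $\beta=1$ recovers the flat metric on $\mathbb{R}^3\setminus\{0\}$, for which the scalar curvature vanishes, consistent with the Euclidean cone discussed above.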
Now, if we plug this into the expression of the entropy, we get: $$\begin{aligned} \mathcal{W}^{C(N)}&(f,g,\tau) := \int_{C(N)} \left[\tau(|\nabla f|^2+\operatorname{\textup{R}})+f-(n+1)\right](4\pi\tau)^{-\frac{n+1}{2}}e^{-f}dv\\ &= \int_0^\infty\int_N \left[\tau\left((\partial_r f)^2+\frac{|\nabla^N f|^2+(\operatorname{\textup{R}}^N-n(n-1))}{r^2}\right)+f-(n+1)\right](4\pi\tau)^{-\frac{n+1}{2}}e^{-f}r^ndv dr.\end{aligned}$$ Lower bounds on the $\lambda$ and $\nu$-functionals of cones\ Characterization of $N$ such that $\nu^{C(N)} > -\infty$ and $\lambda^{C(N)} > -\infty$ ======================================================================================= Let us look more carefully at the $\mu$-functional of cones through a separation of variables formula and characterize, thanks to it, the closed manifolds $N$ for which $\nu^{C(N)} = -\infty$. A separation of variables formula --------------------------------- Let us introduce a separation of variables formula that is natural if one wants to emphasize the fact that the $\mathcal{W}$-functional of a cone is closely related to the behavior of the $\mathcal{W}$-functional on its link.\ This separation of variables will be particularly useful to get lower bounds on the $\mu$-functional of the cone. On a cone $(C(N),g) = (\mathbb{R}^+\times N, dr^2+r^2g^N)$ with the coordinates $(r,x)\in \mathbb{R}^+\times N$ such that for all $x\in N$, $d^{C(N)}\left((0,x),(r,x)\right)=r$, we can define the following separation of variables:\ \ $\forall f : \mathbb{R}^+\times N \to \mathbb{R}$, $\exists !$ $(\tilde{f}, a)$, $\tilde{f}:C(N)\to \mathbb{R}$ and $a:\mathbb{R}^+\to \mathbb{R}$, such that for all $r>0$, - $f(r,.) = \tilde{f}(r,.) + a_r$, which gives $e^{-f} = e^{-a_r}e^{-\tilde{f}}$, - $\int_N(4\pi\tau \textbf{r}^{-2})^{-\frac{n}{2}} e^{-\tilde{f}}dv = 1$ (**notice the $r^{-2}$**), - $\int_0^\infty (4\pi\tau)^{-\frac{1}{2}}e^{-a_r}dr = 1$.
Thanks to these, we can rewrite the expression as: $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau) &= \int_0^\infty \left[\mathcal{W}^{N}\left(\tilde{f},g^N,\frac{\tau}{r^2}\right)-n(n-1)\frac{\tau}{r^2}\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \nonumber\\ &+\int_N \left(\int_0^\infty \left[\tau(\partial_r (\tilde{f}(r,.) + a_r))^2+a_r-1\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right). \label{separation}\end{aligned}$$ In the Euclidean cone, for a Gaussian centered at the origin (which is a minimizer of $\mathcal{W}^{\mathbb{R}^{n+1}}$), it gives: $$\tilde{f} = \log\left(\frac{r^nvol(\mathbb{S}^n)}{(4\pi\tau)^{\frac{n}{2}}}\right),$$ and $$a_r = \frac{r^2}{4\tau} - \log\left(\frac{r^nvol(\mathbb{S}^n)}{(4\pi\tau)^{\frac{n}{2}}}\right).$$ By the basic formula (\[basic\]), we have: $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau) = \int_0^\infty\int_N &\left[\tau\left((\partial_r f)^2+\frac{|\nabla^N f|^2+(\operatorname{\textup{R}}^N-n(n-1))}{r^2}\right)\right.\\ &\left.+f-(n+1)\right]\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}r^ndv dr .\end{aligned}$$ Now, define $a_r$ by $e^{-a_r} = \int_N \frac{r^ne^{-f}}{(4\pi\tau)^{\frac{n}{2}}}dv$, and then set $\tilde{f} = f-a_r$.
This gives the existence of such a separation of variables; the uniqueness can also be checked.\ Noting that:\ $$r^n(4\pi\tau)^{-\frac{n+1}{2}} = (4\pi\tau r^{-2})^{-\frac{n}{2}}(4\pi\tau)^{-\frac{1}{2}},$$ and separating the $\tilde{f}$ terms and the $a$ terms, we get: $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau) =& \int_0^\infty \left(\int_N\left[\frac{\tau}{r^2}\left(|\nabla^N \tilde{f}|^2+(\operatorname{\textup{R}}^N-n(n-1))\right)+\tilde{f}-n\right]\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}}dv\right)\left(e^{-a_r}(4\pi\tau)^{-\frac{1}{2}} dr\right) \\ &+\int_N \int_0^\infty [\tau(\partial_r f)^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right),\end{aligned}$$ which is the desired result. Lower bound on the cone $\mu$-functional ---------------------------------------- We are interested in getting a lower bound for a cone’s $\mu$-functional. It becomes obvious from the separation of variables formula that the $\mu$-functional is deeply linked to the behavior of the $\mathcal{W}$-functional on the link, and this *at all scales* (because $\frac{\tau}{r^2}$ takes every single value in $(0,+\infty)$). Looking at this quantity at all scales is uncommon, as it is usually considered at very particular scales, such as the time remaining before a possible singularity time.\ It is known that this functional vanishes as $\tau \to 0$ for smooth manifolds. As $\mathcal{W}(\cdot,\cdot,\tau) = \tau \mathcal{F}+\mathcal{N}-\frac{n}{2}\log \tau +C(n)$, it is natural to think that for large $\tau$, a minimizer will get closer and closer to a minimizer of $\mathcal{F}$. We will give a description of the behavior of this functional and of its minimizers in the next parts.
### Study of $\tau\mapsto \mu(g^N,\tau)$ - Lower bounds at large $\tau$ Here we are going to get sharp lower bounds on the left derivative of $\tau\mapsto\mu^N(g_0,\tau)$ that are only attained in the precise case when a minimizing function of $\mathcal{W}^N(.,g_0,\tau)$ is also a minimizer of $\mathcal{F}^N$. When $\lambda^N>-\infty$, the map $\tau\mapsto\mu(g_0,\tau)$ is upper semicontinuous, and: $$\begin{aligned} \lim_{\tau_2\to\tau_1}\left(\frac{\mu(g_0,\tau_1)-\mu(g_0,\tau_2)}{\tau_1-\tau_2}\right)\geqslant \lambda^N-\frac{n}{2\tau_1}.\end{aligned}$$ This is sharp, as the last inequality gets arbitrarily close to being an equality when $\tau_1\to\infty$, since we have: for any compact manifold $N$, any minimizer $f_k$ of $\mathcal{W}^N(.,g_0,\tau_k)$ tends to a minimizer of $\mathcal{F}^N(.,g_0)$ in $H^1(N,g_0)$ as $\tau_k\to +\infty$. See Appendix A.2. These estimates are sharp and actually attained, for all $\tau$ larger than the extinction time, for positively curved Einstein manifolds (see appendix B.1). As a direct corollary, we get a control on the asymptotic behavior of the entropy as $\tau$ tends to infinity. \[borne mu\] For every closed Riemannian manifold $N$ of dimension $n$, there exists $A\in \mathbb{R}$ such that: $$\begin{aligned} \mu^N(\tau,g^N) \geqslant \tau\lambda^N-A-\frac{n}{2}\log_+(\tau)\label{largetau}\end{aligned}$$ where $\log_+$ is the positive part of the $\log$. Moreover, if there is a $T_N<+\infty$ (in particular if $\lambda^N > 0$) such that $\mu^N(T_N)=\nu^N$, then we have the sharper control: for all $\tau\geqslant T_N$, $$\begin{aligned} \mu^N(\tau,g^N) \geqslant \mu^N(T_N)+(\tau-T_N)\lambda^N-\frac{n}{2}\log\left(\frac{\tau}{T_N}\right).\end{aligned}$$ Here, there is equality if and only if $N$ is a positively curved Einstein manifold.
$\mu^{C(N)} = -\infty$ if and only if $\lambda^N \leqslant (n-1)$ ----------------------------------------------------------------- In this section, still interested in the behavior of the $\mu$-functional of a cone, we observe that in some cases the entropy of a cone is $-\infty$. We are going to give a characterization of the compact $n$-manifolds that generate cones with infinite $\mu$-functional.\ A very negative $\mu$-functional is usually associated with the *collapsedness* of some region of the manifold (because one usually assumes lower bounds on the scalar curvature), the best example probably being the proof of Perelman’s local noncollapsedness theorem along the Ricci flow (see [@KleinLott], section 13).\ But here, if the cone is not flat, the curvature blows up close to the tip, and the condition implying an infinite $\mu$-functional is actually linked to a sufficiently negative curvature on the link, and not to the collapsedness of some region. This is characterized by the value of Perelman’s $\lambda$-functional on the link. Namely, we are going to prove: We have: - For the $\mu$ and $\nu$ functionals: $$\forall \tau>0 \text{, }\;\mu^{C(N)}(\tau)\; = \;\nu^{C(N)}\; = \;-\infty,$$ if and only if: $$\lambda^N \leqslant (n-1).$$ - For the $\lambda$-functional: $$\lambda^{C(N)} = -\infty,$$ if and only if $$\lambda^N<(n-1).$$ Moreover, if $\lambda^N \geqslant (n-1) $, then $\;\lambda^{C(N)} = 0$. Note that the fact that the two possible values for $\lambda^{C(N)}$ are $0$ and $-\infty$ is not surprising, since the cone is scaling invariant while the $\mathcal{F}$-functional is not.\ (Here, when $(n-1)\leqslant \lambda^N < n(n-1)$, we have $\operatorname{\textup{R}}_{min} = -\infty$ on the cone while $\lambda^{C(N)} = 0$.)
This means that the links that have cones of finite $\mu$ correspond to quite positively curved manifolds, as $\lambda^N > (n-1)$ implies in particular that the average scalar curvature is strictly larger than $(n-1)$.\ This is a strong condition that limits the possible topologies of the manifold. In particular, in dimension $2$, it must be diffeomorphic to a sphere, and in dimension $3$, it must be diffeomorphic to $\mathbb{S}^3$ or $\mathbb{S}^2\times \mathbb{S}^1$. At the same time, it is not a very strong condition compared to a lower bound on the scalar curvature, as the theorem in particular implies that there are cones with $\operatorname{\textup{R}}_{min}= -\infty$ and $\lambda = 0$.\ \ This condition is implied by $\operatorname{\textup{Ric}}^N\geqslant \frac{n-1}{n}g^N$, which is the kind of condition that implies dimension-free log-Sobolev inequalities on $n$-dimensional manifolds (if $\operatorname{\textup{Ric}}\geqslant K>0$, then there is a log-Sobolev inequality with constant $\left[K\frac{n}{n-1}\right]$).\ It is not surprising that the condition depends on the $\lambda$-functional of the link.\ Morally, the geometry of the cone far away from the tip becomes mild (locally nearly flat), and this controls the functionals.
So the degenerate behavior should come from the region around the tip of the cone.\ The regions close to the tip (for $r$ small) correspond to large values of $\frac{\tau}{r^2}$ in the $\mathcal{W}^N$ term of the separation of variables expression, which means that we are looking at the link for large $\tau$, and we have just seen that the behavior of the $\mu$-functional at large $\tau$ is ruled by the value of the $\lambda$-functional.\ **For spheres**, the condition is : $$\forall \tau,\mu^{C(\beta\mathbb{S}^n)}(\tau) = \nu^{C(\beta\mathbb{S}^n)} = -\infty,$$ if and only if : $$\beta \geqslant \sqrt{n},$$ and we see that these cones are actually the least collapsed among the cones over spheres, so the quantity that makes the entropy infinite here is essentially the low scalar curvature. ### If $\lambda^N\leqslant (n-1)$, then $\nu^{C(N)} = -\infty$ Let us start the proof with the implication that requires the fewest intermediate results. We have the first implications : - If $\lambda^N \leqslant (n-1)$, then $\nu^{C(N)} = -\infty$. - If $\lambda^N < (n-1)$, then $\lambda^{C(N)}=-\infty$. Let $N$ be an $n$-dimensional closed manifold and $\tau>0$.\ Let us define : $$K :=\lambda^N - n(n-1) =\inf_{\int_N u^2 = 1}\int_N \left[(\operatorname{\textup{R}}^N-n(n-1))u^2+4|\nabla u|^2\right]dv.$$ We want to prove that if $K$ is at most $-(n-1)^2$ (which is equivalent to $\lambda^N\leqslant (n-1)$), then there is a function $v^2 = \frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}$ such that $\mathcal{W}(f,g,\tau) = -\infty$.\ It is enough to find a function in $H^1$ for which the functional is $-\infty$. Let us construct such a function.
##### 1) Let us consider $v = \left(\frac{b}{r^a}\tilde{u}\right) \chi_{[0,2r_0]}$, where $\tilde{u}$ is a minimizer of Perelman’s $\mathcal{F}$ functional, that is : $$\int_N \left[\left(\operatorname{\textup{R}}^N-n(n-1)\right)\tilde{u}^2+4|\nabla \tilde{u}|^2\right]dv = K,$$ and $$\int_N \tilde{u}^2 dv =1,$$ where $\chi_{[0,2r_0]}$ is a cutoff function equal to $1$ in $[0,r_0]$, with support in $[0, 2r_0]$ and with a first derivative of order $\frac{1}{r_0}$, and where $a$, $b>0$ and $r_0>0$ are real numbers such that $\int_{C(N)}v^2 = 1$. The goal is to prove that we can choose these numbers to get $\mathcal{W}(f,g^{C(N)},\tau) = -\infty$.\ Let us first note that the part of the function corresponding to the values in $[r_0,2r_0]$ gives a finite contribution, so it will be enough to look at the functional on $[0,r_0]$. On $[r_0,2r_0]$, the derivative is bounded by $$\left|\partial_r \left(\frac{b}{r^a} \chi_{[0,2r_0]}\right)\right|\leqslant\frac{ab}{r_0^{a+1}} + \frac{b}{r_0^{a+1}}\mathcal{O}(1).$$ The Nash entropy term is also bounded on this domain because the function is bounded. We can therefore restrict our attention to the interval $[0,r_0]$.\ By plugging this expression into the $\mathcal{W}$-functional, we get : $$\mathcal{W}^{C(N)}(f,g,\tau) = \int_0^{r_0} \left[\tau \left(b^2r^{-2a-2+n}(4a^2+K)\right)+\left(2b^2r^{-2a+n}a\log r\right)\right]dr -\frac{(n+1)}{2}\log(4\pi\tau)+(n+1).$$ #### a. Making the first term negative :\ For the first term $\tau \left(b^2r^{-2a-2+n}(4a^2+K)\right)$ (which we want to make much bigger than the others around $0$) to be negative, we need $4a^2+K<0$, that is : $$\begin{aligned} a<\frac{\sqrt{-K}}{2}.\label{cond1}\end{aligned}$$ #### b. Making the remaining terms finite while the first one is infinite :\ Now, to get an arbitrarily negative entropy, we would like to have $r^{n-2a}\log r$ integrable while $r^{n-2a-2}$ is not. Note that this will also imply that our function is indeed in $L^2$.
We want $n-2a>-1$ while $n-2a-2\leqslant -1$.\ This way, the first term produces an infinite negative quantity while the second is finite.\ We want $a$ to satisfy : $$\begin{aligned} \frac{n-1}{2}\leqslant a < \frac{n+1}{2}.\label{cond2}\end{aligned}$$ #### c. Values for which \eqref{cond1} and \eqref{cond2} are consistent :\ These two conditions are consistent for some $a$ ($<\frac{n+1}{2}$) if and only if : $$\frac{n-1}{2}<\frac{\sqrt{-K}}{2},$$ that is : $$K< -(n-1)^2.$$ In conclusion, if $$\lambda^N< n(n-1)-(n-1)^2 =(n-1),$$ then $$\mu^{C(N)}=-\infty.$$ Here we have actually made $\mathcal{F}^{C(N)}$ arbitrarily negative (while keeping $\mathcal{N}^{C(N)}$ bounded), hence the second part of the proposition. #### 2) There remains the equality case $\lambda^N = (n-1)$, that is $K = -(n-1)^2$ :\ In this case, let us choose $a = \frac{\sqrt{-K}}{2} = \frac{n-1}{2}$. The first term vanishes and we are left with : $$\int_0^{r_0} 2b^2r^{-2a+n}a\log r dr - C(n,\tau)= (n-1)b^2\int_0^{r_0} r^{-1}\log r dr - C(n,\tau) = -\infty,$$ because $\frac{\log r}{r}$ is not integrable and negative around zero.\ If $\lambda^N\leqslant (n-1)$, then $\mu^{C(N)} = -\infty$. Here we have actually made $\mathcal{N}^{C(N)}$ arbitrarily negative while the first term vanishes. ### If $\lambda^N>(n-1)$, then $\mu^{C(N)}>-\infty$ Let us now present the other, more challenging implication. We have the following other implication : For the $\nu$-functional : - If $\lambda^N>(n-1)$, then $\nu^{C(N)}>-\infty$. For the $\lambda$-functional : - If $\lambda^N\geqslant (n-1)$, then $\lambda^{C(N)} = 0$. We will focus on the more challenging first statement and point out from which part of the proof it is possible to deduce the second part.
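Before entering the proof, a quick sanity check of the threshold on round spheres (assuming $\beta\mathbb{S}^n$ denotes the round sphere of radius $\beta$, whose scalar curvature is the constant $n(n-1)/\beta^2$; for constant scalar curvature, the constant function minimizes the $\mathcal{F}$-functional, so $\lambda^N$ equals the scalar curvature):

```latex
\lambda^{\beta\mathbb{S}^n}=\frac{n(n-1)}{\beta^2},
\qquad\text{so}\qquad
\lambda^{\beta\mathbb{S}^n}\leqslant (n-1)
\;\Longleftrightarrow\;
\beta\geqslant\sqrt{n},
```

which recovers the criterion stated earlier for cones over spheres.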
Let $N$ be an $n$-dimensional manifold such that $\lambda^N>(n-1)$, and consider $\tau>0$ and $f:C(N)\to \mathbb{R}$ a smooth function such that : $$\int_{C(N)}\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}dv^{C(N)} = 1.$$ We want to bound $\mathcal{W}^{C(N)}(f,g,\tau)$ from below.\ Let us start by explaining why it is enough to bound the part of the functional corresponding to the ball of radius $C\sqrt{\tau}$ around the tip of the cone. The intuition behind this is that at larger distances, the geometry becomes milder and the $\mathcal{W}$-functional cannot take too negative values. \[control a grande distance\] There exists $C >0$, depending on the geometry of $N$, such that : $$\begin{aligned} \int_{C\sqrt{\tau}}^{+\infty}\int_N\left[\tau(|\nabla f|^2+\operatorname{\textup{R}})+f-(n+1)\right]\left(\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}r^ndvdr\right) &\geqslant -1\\ &>-\infty\end{aligned}$$ For any closed manifold $(N,g)$, $ \lim_{\tau\to 0} \mu^N(\tau,g) = 0 $. In particular, there exists $\frac{1}{2n(n-1)}>\tau_0>0$ such that for all $0 \leqslant \tau\leqslant\tau_0$,\ we have $\mu^N(\tau,g)\geqslant -\frac{1}{2}$.
Now, we can use the separation of variables formula for the $\mathcal{W}$-functional on the cone minus the ball of radius $\sqrt{\frac{\tau}{\tau_0}}$ (chosen so that $0<\frac{\tau}{r^2}\leqslant \tau_0$) : $$\begin{aligned} \mathcal{W}^{C(N)}(f) =& \int_{\sqrt{\frac{\tau}{\tau_0}}}^\infty \left[\mathcal{W}^{N}\left(\tilde{f},g,\frac{\tau}{r^2}\right)-n(n-1)\frac{\tau}{r^2}\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &+\int_N \int_{\sqrt{\frac{\tau}{\tau_0}}}^\infty [\tau(\partial_r (a_r+\tilde{f}))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\\ \geqslant & \int_{\sqrt{\frac{\tau}{\tau_0}}}^\infty \left[-\frac{1}{2}-n(n-1)\tau_0\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &+\int_N \int_{\sqrt{\frac{\tau}{\tau_0}}}^\infty [\tau(\partial_r (a_r+\tilde{f}))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\\ \geqslant & -1\\ &+ \int_N \int_{\sqrt{\frac{\tau}{\tau_0}}}^\infty [\tau(\partial_r (a_r+\tilde{f}))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right). \end{aligned}$$ The last term is nonnegative, as we shall see in more detail in the next part of the proof, thanks to Lemma \[lower bound radial term\], which removes the $\tilde{f}$ term, followed by Lemma \[log sobolev radial\], which controls the resulting integrand. Lemma \[log sobolev radial\] can only be used directly with functions such that $$\int\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}} = 1,$$ but since, for all $c\in \mathbb{R}$, $$\mathcal{W}(f+c,g,\tau) = e^{-c}\mathcal{W}(f,g,\tau) + ce^{-c},$$ we can directly relate our result to other values of the integral.
In particular, if $\int\frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}} \leqslant 1$ and $\mathcal{W}(f+c,g,\tau) \geqslant 0$ for a constant $c\leqslant 0$ such that $\int\frac{e^{-(f+c)}}{(4\pi\tau)^{\frac{n+1}{2}}} = 1$, then we also have $\mathcal{W}(f,g,\tau) \geqslant 0$. As a consequence, it is then enough to find a lower bound for the part of the $\mathcal{W}$-functional at scale $\tau$ corresponding to the truncated cone $\left([0,C\sqrt{\tau}], dr^2+r^2g^N\right)$. In this part of the manifold, the important quantity is the $\mathcal{W}$-functional on the link at large scales. That is the reason why the most influential quantity is $\lambda^N$. #### 0) We have the following expression of the entropy :\ By the separation of variables : $$\begin{aligned} \mathcal{W}^{C(N)}(f) =& \int_0^\infty \left[\mathcal{W}^{N}\left(\tilde{f},g,\frac{\tau}{r^2}\right)-n(n-1)\frac{\tau}{r^2}\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &+\int_N \int_0^\infty [\tau(\partial_r (a_r+\tilde{f}))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right).\end{aligned}$$ #### 1) Let us first bound the second term $$\int_N \int_0^\infty [\tau(\partial_r( a_r+\tilde{f}))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)$$ by a nicer term without $\tilde{f}$ in the $\partial_r$ term : \[lower bound radial term\] We have the following lower bound : $$\begin{aligned} \int_N \int_0^\infty [\tau(\partial_r( a_r+\tilde{f}))^2+a_r-1]&\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\\ &\geqslant \mathcal{W}^{\mathbb{R}^+}\left(\left(a_r-\frac{n}{2}\log\frac{\tau}{r^2}\right),dr^2, \tau\right) +\frac{n}{2}\log\frac{\tau}{r^2}.\end{aligned}$$ The computation is the following : $$\begin{aligned} \int_N 
\int_0^\infty(\partial_r (a_r+\tilde{f}))^2&\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right) \\ &= \int_0^\infty\left(\int_N (\partial_r (a_r+\tilde{f}))^2\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\right)\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\\ &\geqslant \int_0^\infty \left(\int_N \partial_r (a_r+\tilde{f})\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\right)^2 \left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \tag*{By Jensen's inequality.} \\ &\geqslant\int_0^\infty \left(\partial_ra_r+\int_N (\partial_r \tilde{f})\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\right)^2 \left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &= \int_0^\infty \left(\partial_r a_r + \int_N (\partial_r \tilde{f})(4\pi\tau r^{-2})^{-\frac{n}{2}} e^{-\tilde{f}}dv\right)^2 \left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right).\end{aligned}$$ Let us see what we can do with the term : $$\int_N (4\pi\tau r^{-2})^{-\frac{n}{2}} e^{-\tilde{f}}(\partial_r \tilde{f})dv.$$ $$\begin{aligned} \int_N (4\pi\tau r^{-2})^{-\frac{n}{2}} e^{-\tilde{f}}(\partial_r \tilde{f})dv &= (4\pi\tau)^{-\frac{n}{2}}\int_N r^n e^{-\tilde{f}}(\partial_r \tilde{f})dv \\ &=-(4\pi\tau)^{-\frac{n}{2}}\int_N r^n \partial_r\left(e^{-\tilde{f}}\right)dv \\ &=-\left[(4\pi\tau)^{-\frac{n}{2}}\int_N \partial_r\left(r^ne^{-\tilde{f}}\right)dv - \frac{n}{r}\right]\tag*{Because $\int_N (4\pi\tau r^{-2})^{-\frac{n}{2}} e^{-\tilde{f}}dv = 1$.}\\ &=-\left[(4\pi\tau)^{-\frac{n}{2}}\partial_r\left(\int_N r^ne^{-\tilde{f}}dv\right) - \frac{n}{r}\right] \\ &= \frac{n}{r} \tag*{Because $\int_N r^ne^{-\tilde{f}}dv$ is constant.}\\ &= \partial_r \left(n\log r\right) = -\partial_r \left(\frac{n}{2}\log\frac{\tau}{r^2}\right).\end{aligned}$$ We finally get : $$\begin{aligned} \int_N \int_0^\infty (\partial_r 
(a_r+\tilde{f}))^2\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right) \geqslant \int_0^\infty\left(\partial_r \left(a_r-\frac{n}{2}\log\frac{\tau}{r^2}\right)\right)^2 \left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right).\end{aligned}$$ Plugging this inequality into the expression of the lemma, we get : $$\begin{aligned} \int_N \int_0^\infty [\tau(\partial_r( a_r+\tilde{f}))^2+a_r-1]&\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right)\\ &\geqslant \int_0^\infty[\tau(\partial_r(a_r+n\log r))^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right).\end{aligned}$$ #### 2) Let us now use the lemma to get a lower bound on $\mathcal{W}^{C(N)}(f,g,\tau)$ expressed as a functional on $\mathbb{R}^+$, namely : \[lowbdmu\] There exists $A>0$ such that : $$\begin{aligned} \mathcal{W}^{N}(\tilde{f},g,\tau r^{-2})-n(n-1)\frac{\tau}{r^2}\geqslant \left( \lambda^N-n(n-1) \right)\frac{\tau}{r^2}-A-\frac{n}{2}\log_+ \left(\frac{C^2\tau}{r^2}\right)\end{aligned}$$ It is a direct consequence of \eqref{largetau}. 
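The deduction is simply \eqref{largetau} applied to the link at scale $\frac{\tau}{r^2}$ (a sketch; the constant $C$ only shifts the constant $A$ by a harmless amount inside the $\log_+$):

```latex
\mathcal{W}^{N}\left(\tilde{f},g,\frac{\tau}{r^2}\right)
 \;\geqslant\; \mu^{N}\left(\frac{\tau}{r^2},g\right)
 \;\geqslant\; \lambda^N\,\frac{\tau}{r^2}-A-\frac{n}{2}\log_+\left(\frac{\tau}{r^2}\right),
```

and subtracting $n(n-1)\frac{\tau}{r^2}$ from both sides yields the statement of the lemma.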
#### We have : $\left( \lambda^N-n(n-1) \right)>-(n-1)^2$ since by assumption, $\lambda^N>(n-1)$\ As a consequence of Lemma \[lowbdmu\], it is enough to bound from below the following simpler quantity : Let us note $K:=\left( \lambda^N-n(n-1)\right)>-(n-1)^2$; with this notation, a direct rephrasing gives : $$\begin{aligned} \mathcal{W}^{C(N)}(f) \geqslant&\int_0^{C\sqrt{\tau}} \left[K\frac{\tau}{r^2} +\tau\left(\partial_r \left(a_r-\frac{n}{2}\log \left(4\pi\frac{\tau}{r^2}\right)\right)\right)^2+ a_r-\frac{n}{2}\log \left(4\pi\frac{\tau}{r^2}\right)\right] \left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\\ &-A-1-\frac{n}{2}\log\left(\frac{C^2}{4\pi}\right).\end{aligned}$$ Noting $w^2 = \frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}} r^n}$, we can rewrite this as : $$\begin{aligned} \mathcal{W}^{C(N)}(f,\tau)\geqslant & \int_0^{C\sqrt{\tau}} \left[\tau\left(4(\partial_r w)^2+\frac{K}{r^2}w^2 \right)-w^2\log(w^2)\right]r^ndr-\frac{n+1}{2}\log(4\pi\tau)\\ &-(A+1)-\frac{n}{2}\log\left(\frac{C^2}{4\pi}\right).\end{aligned}$$ The goal of the next pages is to bound the term : $$\int_0^{C\sqrt{\tau}} \left[\tau\left(4(\partial_r w)^2+\frac{K}{r^2}w^2 \right)-w^2\log(w^2)\right]r^ndr-\frac{n+1}{2}\log\tau,$$ from below for smooth $w$ such that $$\int_0^{C\sqrt{\tau}}w^2 r^ndr \leqslant 1.$$ #### 3) Let us work on $[0,\infty)$ for lighter notation and bound $$\int_0^\infty \left[\tau\left(4(\partial_r w)^2+\frac{K}{r^2}w^2 \right)-w^2\log(w^2)\right]r^ndr-\frac{n+1}{2}\log\tau,$$ for $w$ such that : $$\int_0^{\infty}w^2 r^ndr = 1.$$ This is stronger than what we need to finish the proof as, given a $w$ such that $0<\int_0^{C\sqrt{\tau}}w^2 r^ndr \leqslant 1$, we can consider $\tilde{w} = \frac{w}{\sqrt{\int_0^{C\sqrt{\tau}}w^2 r^ndr}}$ in $B(0,C\sqrt{\tau})$ and $\tilde{w} = 0$ outside. 
The idea of this estimate is to use the $|\partial_r w|^2$ term to first control the $w^2\log w^2$ term by a weighted log-Sobolev inequality, and then use the remaining $|\partial_r w|^2$ term to apply a weighted Hardy inequality. ##### a. The $w^2\log w^2$ term - weighted log-Sobolev inequality : The sharp weighted log-Sobolev inequality on $\mathbb{R}^+$ we use comes directly from the Euclidean inequality applied to radial functions : \[log sobolev radial\] We have the following log-Sobolev inequality, whose sharpness comes from that of the Euclidean case : For all $\tau_0>0$ : $$\begin{aligned} 4\tau_0\int_0^\infty r^n |w'|^2dr \geqslant\int_0^\infty r^n w^2\log(w^2) dr+\frac{n+1}{2}\log(4\pi\tau_0)+(n+1)-\log(vol(\mathbb{S}^n)), \label{logsob}\end{aligned}$$ for $\int_0^\infty r^n w^2 = 1$. The idea of the proof is to apply the classical log-Sobolev inequality on the Euclidean space to radial functions.\ By the classical log-Sobolev inequality, the $\nu$-functional of the Euclidean space $\mathbb{R}^{n+1}$ is 0.\ \ Considering a radial function $v: \mathbb{R}^{n+1}\to \mathbb{R}$, $$v(r,.)=\frac{w(r)}{\sqrt{vol(\mathbb{S}^n)}}$$ such that $\int_{\mathbb{R}^{n+1}} v^2 =1$, that is $\int_{\mathbb{R}^+}r^n w(r)^2 = 1$, and rewriting the fact that : $$\mathcal{W}^{\mathbb{R}^{n+1}}(f,\tau,g_{eucl})\geqslant 0,$$ where $f$ is such that $v^2 = \frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}$,\ we get :\ For all $w: \mathbb{R}^+\to \mathbb{R}^+$ such that $\int_{\mathbb{R}^+} r^nw^2 = 1$, we have : $$\int_{\mathbb{R}^+} r^n [4\tau(\partial_r w)^2-w^2\log w^2]dr-\frac{n+1}{2}\log(4\pi\tau) -(n+1)+\log(vol(\mathbb{S}^n))\geqslant 0.$$ This gives a sharp inequality *for all $\tau_0$*.\ Let us choose one $\tau_0$, to be fixed later, with which we will use this inequality. $\tau_0$ will be chosen as the optimal constant to bound the remaining term thanks to a weighted Hardy inequality. ##### b. 
The $\frac{K}{r^2}w^2$ term - Hardy inequality :\ Now, let us see what freedom we have on $\tau_0$ to apply the weighted Hardy inequality to the remaining term : Let us determine for which values of $\tau\geqslant\tau_0>0$ the functional $$\begin{aligned} \int_{\mathbb{R}^+}r^n\int_N\left(4(\tau-\tau_0)|\partial_r w|^2+\tau\frac{K}{r^2}w^2\right)dvdr \label{postlogsob}\end{aligned}$$ is nonnegative, for all smooth $w$. If $K \geqslant 0$, then it is always the case, and we will take $\tau_0 = \tau$.\ Let us assume that $K<0$. From [@hardy], we have the following weighted Hardy inequality on $\mathbb{R}^+$ : For all $v$ such that $$0<\int_0^{+\infty}r^{n-2}v^2<+\infty,$$ we have : $$\begin{aligned} \int_0^{+\infty}r^{n-2}v^2dr<\frac{4}{(n-1)^2}\int_0^{+\infty}r^n(v')^2dr.\label{hardy}\end{aligned}$$ We have assumed that $\lambda^N>(n-1)$, which implies $K >-(n-1)^2$, so we have the following lemma as a consequence of the weighted Hardy inequality, giving a condition on the pair $(\tau,\tau_0)$ that we will use in the following. If the pair $(\tau,\tau_0)$ satisfies $\frac{4(\tau-\tau_0)}{-\tau K} \geqslant \frac{4}{(n-1)^2}$, that is : $$\begin{aligned} \tau\geqslant\frac{\tau_0}{\left(1-\frac{-K}{(n-1)^2}\right)},\label{condtautau0}\end{aligned}$$ then we have, by \eqref{hardy} : $$\begin{aligned} \int_{\mathbb{R}^+}r^n\int_N\left(4(\tau-\tau_0)|\partial_r w|^2+\tau\frac{K}{r^2}w^2\right)dvdr &> \int_{\mathbb{R}^+}r^{n-2} (\tau K+(\tau - \tau_0)(n-1)^2)w^2dr\\ &\geqslant 0.\end{aligned}$$ ##### c. 
Let us choose such a $(\tau,\tau_0)$ combination for the following :\ Let us rewrite the quantity we want to bound from below, to emphasize how we use each inequality : $$\begin{aligned} \int_0^\infty& r^n\int_N\left[\tau\left(4|\partial_r w|^2+\frac{K}{r^2}w^2\right)-w^2\log w^2\right]dvdr-\frac{n+1}{2}\log(\tau)-(n+1)\nonumber\\ &=\int_0^\infty r^n\int_N\left[(\tau-\tau_0)4|\partial_r w|^2+\tau\frac{K}{r^2}w^2\right]dvdr \label{har term}\\ &+\int_{\mathbb{R}^+} r^n(4\tau_0 |\partial_r w|^2 -w^2\log(w^2))dr-\frac{n+1}{2}\log(\tau_0)-(n+1)+\log(vol(\mathbb{S}^n))\label{logsob term}\\ &-\frac{n+1}{2}\log\left(\frac{\tau}{\tau_0}\right)-\log(vol(\mathbb{S}^n)).\label{remaining}\end{aligned}$$ Here, thanks to \eqref{hardy} (under the condition \eqref{condtautau0}) and \eqref{logsob}, the terms \eqref{har term} and \eqref{logsob term} are nonnegative, and we get : $$\begin{aligned} \mathcal{W}^{C(N)}(u,\tau)&\geqslant -\frac{n+1}{2}\log\left(\frac{\tau}{\tau_0}\right)-\log(vol(\mathbb{S}^n))\\ &\geqslant \frac{n+1}{2}\log\left(1-\frac{-K}{(n-1)^2}\right)-\log(vol(\mathbb{S}^n))>-\infty,\end{aligned}$$ where the last step is done by choosing the smallest $\tau$ possible at a given $\tau_0$ (or the biggest $\tau_0$ at a given $\tau$) satisfying \eqref{condtautau0}.\ Since the rest of the expression of $\mathcal{W}^{C(N)}(f,g,\tau)$ is finite by Lemma \[control a grande distance\], we have the result : There exists $D = 1+A+\frac{n+1}{2}\log\left(1-\frac{-K}{(n-1)^2}\right)-\log(vol(\mathbb{S}^n))$ such that : $$\nu^{C(N)}>-D>-\infty.$$ Looking back at the proof, we can get the following lower bound on the $\nu$-functional of cones over some perturbations of the unit sphere : Let us consider a manifold $N$ such that there exist $\epsilon_1>0$, $\epsilon_2>0$ and $\epsilon_3>0$ small enough such that its $\mathcal{W}$-functional satisfies the lower bound, noted $L(\epsilon_1,\epsilon_2,\epsilon_3)$, defined by : For all $f : N\to \mathbb{R}$ and $\tau > 0$ $$\begin{aligned} \mathcal{W}^{N}(f,\tau)\geqslant 
\tau(1-\epsilon_1)\mathcal{F}^{\mathbb{S}^n}\left(f+\delta,g^{\mathbb{S}^n}\right)+(1+\epsilon_2)\mathcal{N}^{\mathbb{S}^n}\left(f+\delta,g^{\mathbb{S}^n}\right)+\frac{n}{2}\log(4\pi\tau)-n-\epsilon_3,\end{aligned}$$ where $\delta$ is defined to ensure that $\int_{\mathbb{S}^n}\frac{e^{-f-\delta}}{(4\pi\tau)^{\frac{n}{2}}}=1$. Then, $$\nu^{C(N)}\geqslant -\Psi(\epsilon_1,\epsilon_2,\epsilon_3),$$ where $\Psi>0$ and $\Psi(\epsilon_1,\epsilon_2,\epsilon_3)\to 0$ when $(\epsilon_1,\epsilon_2,\epsilon_3)\to (0,0,0)$. The case $\epsilon_1 =\epsilon_2 = \epsilon_3 =0$ corresponds to the $\mathcal{W}$-functional of the sphere. That is why we will consider such manifolds as perturbations of the sphere from the point of view of the $\mathcal{W}$-functional. Let us also note that such a bound implies that $\lambda^N\geqslant (1-\epsilon_1)n(n-1)$, which is larger than $(n-1)$ for small $\epsilon_1$. The proof is basically the same as that of the last proposition, but uses the log-Sobolev inequality a bit differently.\ By the separation of variables formula, the lower bound $L(\epsilon_1,\epsilon_2,\epsilon_3)$ on the link $N$ implies the following lower bound on the cone $C(N)$ : $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau)&\geqslant \int_0^\infty \left[\frac{\tau}{r^2}\left((1-\epsilon_1)\mathcal{F}^{\mathbb{S}^n}-n(n-1)\right)+(1+\epsilon_2)\mathcal{N}^{\mathbb{S}^n}-\frac{n}{2}\log\left(4\pi\frac{\tau}{r^2}\right)-n\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\\ &-\delta-\epsilon_3 \\ &+\int_{\mathbb{S}^n} \left(\int_0^\infty \left[\tau(\partial_r f)^2+a_r-1\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} \right)\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}}\right)drdv,\end{aligned}$$ where the functionals on $\mathbb{S}^n$ are taken at $f+\delta+\frac{n}{2}\log(4\pi\tau)$.\ The next idea is to subtract $$(1+\epsilon_2)\mathcal{W}^{C(\mathbb{S}^n)}(f+\delta,g,\tau_0)\geqslant 0$$ from the previous lower bound.\ After using 
Jensen's inequality to take care of the remaining $-\epsilon_2 a_re^{-a_r}$ term, we are left with : $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau)&\geqslant \int_0^\infty \left[\frac{\tau}{r^2}\left((1-\epsilon_1)\mathcal{F}^{\mathbb{S}^n}-n(n-1)\right)-(1+\epsilon_2)\frac{\tau_0}{r^2}\left(\mathcal{F}^{\mathbb{S}^n}-n(n-1)\right)\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\\ &+\frac{n+1}{2}\log\left(\frac{\tau}{\tau_0^{1+\epsilon_2}}\right)-\epsilon_2n-\delta-\epsilon_3 \\ &+\int_{\mathbb{S}^n} \left(\int_0^\infty \left[(\tau-(1+\epsilon_2)\tau_0)(\partial_r f)^2-\epsilon_2\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} \right)\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}}\right)drdv.\end{aligned}$$ An application of the weighted Hardy inequality, as in the proof of the theorem, implies a lower bound that tends to $0$ as $(\epsilon_1,\epsilon_2,\epsilon_3)$ tends to $(0,0,0)$. Perelman’s functionals on manifolds with conical singularities and asymptotically conical manifolds --------------------------------------------------------------------------------------------------- Let us now give a first application of the previous finiteness results to the study of asymptotically conical manifolds and manifolds with conical singularities, thanks to Perelman’s functionals. 
\[manifold with conical singularities\] We will say that a metric space $(M,g)$ is a *manifold with conical singularities* modeled on the cones $C(N_1),...,C(N_k)$ at the points $x_1,...,x_k$ if $(M\backslash\{x_1,...,x_k\},g)$ is a smooth manifold and if, for each $j \in \{1,...,k\}$, there exist $\epsilon_j>0$ and a diffeomorphism $\phi_j : (0,\epsilon_j)\times N_j\to B(x_j,\epsilon_j)$, such that as $r\to 0$, for all $l\in\mathbb{N}$ : $$r^l|\nabla^l\left(\phi_j^*(g)-\left(dr^2 + r^2g^{N_j}\right)\right)|\to 0.$$ We will say that a complete manifold $(M,g)$ is *smoothly asymptotic to the cone $C(N)$* at infinity if there exist a compact $K\subset M$, a real $R>0$ and a diffeomorphism $$\phi : (R,+\infty)\times N\to M\backslash K$$ such that, as $r\to +\infty$, for $l\in \mathbb{N}$, $$r^l|\nabla^l\left(\phi^*(g)-\left(dr^2 + r^2g^{N}\right)\right)|\to 0.$$ The norms and derivatives are computed with respect to the cone metric. Let $(M^n,g)$ be a compact manifold with conical singularities such that one singularity is modeled on a cone $C(N)$ whose link $N$ satisfies $\lambda^N < (n-2)$. Then $$\lambda^M = -\infty.$$ Conversely, if each singularity is modeled on a cone $C(N_i)$ with a link $N_i$ such that $\lambda^{N_i} > (n-2)$, then, $$\lambda^M > -\infty.$$ By the definition of a conical singularity, in small balls around a conical singularity, there exist coordinates in which the manifold is a warped product of the form of the ones studied in the appendix A.3. In particular, we can control how far the expression of $\mathcal{F}^M$ differs from $\mathcal{F}^N$ in the same coordinates (see appendix A.3.). As a consequence, for each conical singularity, there exists $\epsilon'_i>0$ small enough such that the difference between the expressions is less than $|\lambda^{N_i} - (n-2)|$, so, in the two cases : 1. If $\lambda^{N_i}<(n-2)$ for a conical singularity, then $\lambda^M = -\infty$. 
As in the proof of the cone case, we can consider a function $v = \frac{b}{r^a}1_{[0,r_0]}$ for $r_0<\epsilon'_i$ small enough and estimate just as in 2.3.1. 2. If $\lambda^{N_i} > (n-2)$ for all conical singularities, then $\lambda^M$ is finite. This is a little more complicated to see : On each of the balls $B(x_i,\epsilon_i)$, we can use the weighted Hardy inequality of [@hardy] just as in 2.3.2. and get a lower bound on the $\lambda$-functional on the balls $B(x_i,\epsilon'_i)$. The rest of the manifold being smooth and compact, there exists another lower bound for the $\mathcal{F}$-functional. We cannot decide if $\lambda^M$ is finite or not for a conical singularity modeled on $C(N)$ such that $\lambda^N = (n-2)$. Indeed, considering that the metric becomes conical at a rate $\epsilon(r)$ and using a function $v = \frac{b}{r^{\frac{n-1}{2}}}1_{[0,r_0]}$, we get that $$\lambda^M \geqslant\int_0^{\infty}\frac{\mathcal{O}(\epsilon(r))}{r}dr,$$ and if $\epsilon(r)$ does not converge to zero fast enough and the $\mathcal{O}(\epsilon(r))$ happens to be negative (this is easy to construct as a warped product on $(\mathbb{R}\times \mathbb{S}^{n-1},dr^2 + h(r)^2 g^{\mathbb{S}^{n-1}})$), this may diverge.\ If we assume, for example, convergence at the speed of a positive power of $r$, then $\lambda^N = (n-2)$ also implies that $\lambda^M >-\infty$, by looking more closely at the proof of the *strict* Hardy inequality in [@hardy]. Let $(M^n,g)$ be a manifold smoothly asymptotic to the cone $C(N)$ at infinity. If $\lambda^N\leqslant (n-2)$, then : $$\nu^M(g) = - \infty.$$ If $\lambda^N > (n-2)$, then : $$\nu^M \leqslant \nu^{C(N)}.$$ The result is not a characterization of the finiteness of the functional as it was for manifolds with conical singularities, but the $\nu$-functional of the cone at infinity gives an upper bound for the entropy of the whole manifold. A similar upper bound for manifolds with conical singularities is true for the $\mu$-functional. 
Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank Richard Bamler for inviting me to UC Berkeley and supervising this work. A global Pseudolocality result ============================== Perelman’s Pseudolocality theorem states that if a region $\Omega$ is “close enough” to the Euclidean space, then there is a smaller region inside which has bounded curvature for a small time. In particular, this region will be nonsingular for small times.\ Looking at the proof of this theorem, we realize that the “close enough to the Euclidean space” hypothesis is only used to ensure that for small $\tau$, the $\mu(g,\tau)$-functional of $\Omega$ is larger than $-\eta_n$ for some $\eta_n>0$.\ Our modification states that there exists $\eta_n>0$ such that if for all $\tau \leqslant T_0$, $\mu(g,\tau)\geqslant-\eta_n$, then the flow is nonsingular on $[0,T_0]$ and has a bound on its curvature tensor on this interval, depending only on the values of $\mu(g,\tau)$ for $\tau\in [0,T_0]$. In dimension 3, from the description of the singularities given by Perelman, we have an explicit value : $$\eta_3 = 1-\log 2 = \nu(\mathbb{S}^2).$$ The classical theorem --------------------- Let us state the theorem; the proof can be found in the Kleiner-Lott notes [@KleinLott]. For every $\alpha > 0$ there exist $\delta$, $\epsilon > 0$ with the following property :\ Suppose that we have a smooth pointed Ricci flow solution $(M,(x_0, 0), g(.))$ defined for $t \in [0,(\epsilon r_0)^2]$, such that each time slice is complete. 
Suppose that for any $x \in B_0(x_0, r_0)$ and $\Omega \subset B_0(x_0, r_0)$, we have\ $$\operatorname{\textup{R}}(x,0) \geqslant -r_0^{-2}$$ and $$vol(\partial\Omega)^n \geqslant (1 - \delta) c_n vol(\Omega)^{n-1},$$ where $c_n$ is the Euclidean isoperimetric constant.\ Then, $|\operatorname{\textup{Rm}}|(x,t) < \alpha t^{-1} + (\epsilon r_0)^{-2}$ whenever $0 < t \leqslant (\epsilon r_0)^2$ and $d(x, t) = dist_t(x, x_0) \leqslant \epsilon r_0$. The proof is by contradiction : one considers a sequence of pointed manifolds that are counterexamples to the property, $(M_k, x_{0,k},g_k(.))$ on an interval $[0,\tau_k]$. The idea is to first find an upper bound on the $\mu$-functional of these manifolds for $k$ large enough, and then contradict this bound thanks to bounds on the isoperimetric constant and the scalar curvature, which imply a lower bound on $\mu(\tau)$ for small $\tau$.\ The crucial element to note is that the lower bounds on the isoperimetric constant and the scalar curvature are only used at the very last step of the proof, to contradict an upper bound for $\min_{0\leqslant\tau\leqslant\tau_k}\mu^{M_k}(g_k,\tau)$ given by Lemma 33.4 (thanks to the conjugate heat kernel centered at a well-chosen high-curvature space-time point).\ In particular, any condition that contradicts this upper bound on $\min_{\tau\leqslant\tau_k}\mu(g_k,\tau)$ would imply the result. A global version ---------------- The only step where the isoperimetric constant of our domain is used to obtain a bound on the curvature is the very last step. 
In particular, manifolds whose $\nu$-functional (the minimum of the $\mu(\tau)$) is close enough to $0$ contradict Lemma 33.4, and we can rephrase the result as : \[global pseudolocality\] For all $\alpha>0$, there exists a constant $\eta_n(\alpha)>0$ with the following property : If $(M,g)$ is a complete Riemannian manifold with bounded curvature for which there exists $T > 0$ such that, for all $0 < \tau \leqslant T$, $$\mu(g,\tau) > - \eta_n(\alpha),$$ then the flow exists for all $0<t<T$ and : $$|Rm(g_t)|\leqslant \frac{\alpha}{t}.$$ The proof of the statement is an adaptation of the classical pseudolocality result; let us give a description of it : 1. The first step is to choose a good point at which we will be able to blow up at the scale of the curvature : For all $A>0$, there exists $\epsilon>\frac{1}{100 n A}$ such that the solution to the Ricci flow exists on $[0,\epsilon^2]$ and there exists a point $(\bar{x},\bar{t})$ such that : For all $(x,t)\in B_{\bar{t}}\left(\bar{x}, \frac{A}{10} |Rm(\bar{x},\bar{t})|^{-\frac{1}{2}}\right)\times \left[\bar{t}- \frac{1}{2|Rm(\bar{x},\bar{t})|},\bar{t}\right]$, $$|Rm(x,t)|\leqslant 4|Rm(\bar{x},\bar{t})|.$$ 2. By parabolically scaling the previous Ricci flow by its curvature at $(\bar{x},\bar{t})$, we are in the case of a manifold satisfying : $$\sup_M|Rm_T| = |Rm_T(\bar{x})| = 1 > \frac{\alpha}{T}$$ for some $T > \alpha > 0$, and such that for all $(x,t)\in B_T\left(\bar{x},\frac{2}{100n}\right) \times [0,T]$, $$|Rm_t(x)|\leqslant 4.$$ 3. Let us prove that there is a strictly negative upper bound on the $\mu$-functional. This bound will be the $-\eta$ we are looking for, and adding the assumption that $\mu(g,\tau) > -\eta$ will lead to a contradiction.
For any Ricci flow $(M^n,g(t))_{0\leqslant t\leqslant T}$ such that $$\sup_M|Rm_T| = |Rm_T(\bar{x})| = 1 > \frac{\alpha}{T}$$ for some constants $T > \alpha > 0$, and such that for all $(x,t)\in B_T\left(\bar{x},\frac{2}{100n}\right) \times [0,T]$, $$|Rm_t(x)|\leqslant 4,$$ there exist $\tilde{t}\in [T-\frac{\alpha}{2},T]$ and $\eta(n,\alpha) > 0$ such that $\mu(g(\tilde{t}),T-\tilde{t}) \leqslant -\eta(n,\alpha)$. Let us prove this result by contradiction. Let us fix $\alpha>0$ and consider a sequence of counterexamples, that is, a sequence of solutions to the Ricci flow $(M_k,g_k(t))_{0\leqslant t\leqslant T_k}$ pointed at $(\bar{x}_k,T_k)$, with $T_k > \alpha$, such that : $$\liminf_{k\to\infty}\;\inf_{T_k-\frac{\alpha}{2}\leqslant t_k\leqslant T_k}\mu(g_k(t_k),T_k-t_k) \geqslant 0.$$ Moreover, let us consider the fundamental solutions $u_k$ to the backward heat equation starting at a Dirac mass at $\bar{x}_k$ at the time $T_k$, that is : $$\begin{aligned} \partial_t u_k = - \Delta u_k + R_ku_k \end{aligned}$$ with limit at $T_k$ the Dirac mass at $\bar{x}_k$.\ There are two cases : 1. There is a lower bound on the injectivity radius, implying that it is possible to take a sublimit of the sequence by Hamilton’s compactness theorem. Let us denote it $(M_\infty,g_\infty(.))$; it is defined on a time interval $[T_\infty-\frac{\alpha}{2},T_\infty]$.\ It satisfies $|Rm_\infty|\leqslant 4$ in $B(\bar{x}_\infty, \frac{2}{100n})\times [0,T_\infty]$ and $|Rm_\infty(\bar{x}_\infty,T_\infty)| = 1$. Thanks to Lemma 33.1 of [@KleinLott], the fundamental solution $u_\infty$ to the backwards heat equation starting at $(\bar{x}_\infty,T_\infty)$ is the smooth limit of the $u_k$ (on compact subsets, and in particular in $B(\bar{x}_\infty,\frac{1}{100n})$).
Now, given the smooth convergence, we also have the convergence of $v_k(t) := \left[\left(T_k-t\right)(2\Delta_k f_k-|\nabla_k f_k|^2+\operatorname{\textup{R}}_k)+f_k-n\right]u_k$ associated to $u_k$ to $v_\infty$ associated to $u_\infty$ (we used the usual notation $u = \frac{1}{\left(4\pi(T-t)\right)^{\frac{n}{2}}}e^{-f}$ to define $f$). By Perelman’s Harnack inequality, $$v_\infty\leqslant 0.$$ Let $\tilde{t}_\infty\in (T_\infty-\frac{\alpha}{2},T_\infty)$, and let us consider $h$ a solution to the forward heat equation starting at a nonnegative function supported exactly in $B_\infty :=B_{\tilde{t}_\infty}(\bar{x}_\infty,\sqrt{T_\infty-\tilde{t}_\infty})$ at $\tilde{t}_\infty$. By Perelman’s Harnack inequality : $$\frac{d}{dt}\int_{B_\infty}hv_\infty dv_\infty \geqslant 0.$$ Moreover, this integral tends to $0$ as $t$ tends to $T_\infty$. This implies that for all $t\in (\tilde{t}_\infty,T_\infty)$, $\int_{B_\infty}hv_\infty dv_\infty = 0$ because $v_\infty\leqslant 0$. Now, since $h$ is strictly positive for $t>\tilde{t}_\infty$, for all $t\in (\tilde{t}_\infty,T_\infty)$, $$v_\infty = 0.$$ This implies that the $\mathcal{W}$-functional at $f_\infty$ (which is the integral of $v_\infty$) is constant when $(g_\infty,f_\infty)$ satisfy the coupled evolution equations. By Perelman’s monotonicity formula, $$\frac{d}{dt}\mathcal{W}(f_\infty,g_\infty,T_\infty-t) = \int_{M_\infty} 2(T_\infty-t)\left|\operatorname{\textup{Ric}}_\infty+\nabla^2_\infty f_\infty-\frac{g_\infty}{2(T_\infty-t)}\right|^2u_\infty dv_\infty,$$ so constancy forces the integrand to vanish. This means that they satisfy the following shrinking soliton equation : $$\operatorname{\textup{Ric}}_\infty + \nabla^2_\infty f_\infty = \frac{g_\infty}{2(T_\infty-t)}.$$ But as we have seen, this means that, up to diffeomorphism, the Ricci flow at $(M_\infty,g_\infty)$ just acts by scaling by $T_\infty-t$. In particular, the (non vanishing) curvature blows up at rate $\frac{1}{T_\infty-t}$ when $t\to T_\infty$, which contradicts our curvature bounds. 2. In the second case, it is possible to blow up at the scale of the collapsing injectivity radii to ensure that the injectivity radius equals $1$, ending up with a smooth limiting Ricci flow.
Because the injectivity radius was collapsing at the scale of the curvature, we end up with a flat Ricci flow which is not the Euclidean space, because its injectivity radius is $1$. Since we have blown up even more than the scale of the curvature, the range of $\tau$ to consider for the $\mathcal{W}$-functional is arbitrarily large and in particular, as in the proof of Lemma 33.4 in [@KleinLott], we can see that the $\mathcal{W}$-functional associated to a good cut-off $f_\infty$ of the backwards heat kernel satisfies : $$\lim_{\tau\to\infty}\mathcal{W}(f_\infty,g_\infty,\tau) = -\infty,$$ and this implies that there exist times $\tilde{t}_k$, depending on the scale of the injectivity radius for each $k$, such that : $$\lim_{k \to\infty}\mathcal{W}(f_k,g_k(\tilde{t}_k),T_k-\tilde{t}_k) = -\infty,$$ which is also a contradiction. Another way to see it is to realize that there is collapsing, and that collapsing together with a lower bound on the scalar curvature gives an arbitrarily negative $\nu$-functional (see the non-collapsing theorems in [@KleinLott]). 4. This implies the statement of the proposition. Indeed, $$t\mapsto \mu\left(g_k(t),\tau-t\right)$$ is increasing, and in particular, if for all $0<\tau<T$, $$\mu\left(g_k(0),\tau\right)>-\eta_n(\alpha),$$ then by monotonicity, we have for all $\tau \geqslant t\geqslant 0$ : $$\mu\left(g_k(t),\tau-t\right)>-\eta_n(\alpha).$$ In particular, by the previous lemmas, any time $t_k$ at which the control of the curvature fails has to be larger than $T$, which is the statement of the lemma. With the same assumptions, if we assume that $\nu(g) > -\eta_n(\alpha)$ for some $\alpha > 0$, then the Ricci flow starting at $(M,g)$ is a type III solution existing for all times. This result, using Perelman’s functionals instead of bounds on the scalar curvature and isoperimetric constants, gives control for potentially large times and ball radii.
It is also compatible with our study of Perelman’s functionals on cones, as it allows us to take advantage of the scale invariance of cones to construct type III solutions of the Ricci flow coming out of some cones in the next section. In dimension $3$, thanks to the description of the possible singularities of the Ricci flow given by Perelman’s canonical neighborhood theorem, we can find the constant $\eta_3$ explicitly : $\eta_3 = \nu^{\mathbb{R}\times \mathbb{S}^2} = 1-\log 2$ (with equality in the case of the formation of necks). Constructing nonsingular type III flows coming out of some cones ================================================================ In this section, we will present an application of the global pseudolocality result from the previous section to the construction of nonsingular Ricci flows coming out of some cones. The “some cones” will correspond to cones over particular perturbations of the unit sphere.\ The construction is done in a few steps : - Find a condition on the link $N$ under which it is possible to smooth out the cone $C(N)$ into a manifold $M$ that has a high enough $\nu$-functional to apply the global pseudolocality. - Thanks to the global pseudolocality result, there exists a Ricci flow starting at $(\mathbb{R}^{n+1},g^M)$ that is an immortal type III solution of the Ricci flow; we will denote it $(\mathbb{R}^{n+1},g^{M(t)})$. This flow is asymptotic to the cone $C(N)$ at each time. - From this Ricci flow, it is possible to get a sublimit of a blowdown sequence. This will give the wanted Ricci flow coming out of the cone $C(N)$. We expect the resulting Ricci flow to be an expanding soliton (this is true in some cases). We will focus on a family of manifolds whose Perelman’s $\mathcal{W}$-functional is very close to the $\mathcal{W}$-functional of the sphere (which is the crucial point of our proof).
There exist $\beta_1(n)$ and $\beta_2(n)$ ($0<\beta_1<1<\beta_2$) such that :\ If $(\mathbb{S}^n,g^N)$ is an $n$-dimensional Riemannian manifold satisfying the following set of properties $P(\beta_1,\beta_2)$ : - $(N,g^N)\times \mathbb{R}^2$ has positive isotropic curvature (implied by the positivity of the curvature tensor, for example). We know by the sphere theorem of Brendle and Schoen that this implies that $N$ is diffeomorphic to $\mathbb{S}^n$, so the initial supposition is redundant. - $C^0$-closeness : $$\begin{aligned} \beta_1^2g^{\mathbb{S}^n}\leqslant g^N \leqslant \beta_2^2g^{\mathbb{S}^n}.\end{aligned}$$ - Lower bound on the scalar curvature : $$\begin{aligned} \operatorname{\textup{R}}^N\geqslant \frac{n(n-1)}{\beta_2^2}, \end{aligned}$$ Then, there exists an immortal type III solution of the Ricci flow coming out of the cone $C(N)$. These are quite artificial conditions, chosen just so that the cone can be smoothed out by a manifold of high $\nu$-functional, thanks to an adaptation of the proof of the lower bound for the $\mathcal{W}$-functional for perturbations of the sphere.\ The proof of this theorem is the goal of the section. Smoothing out cones while controlling the $\nu$-functional ---------------------------------------------------------- Here we present the first step of the proof : smoothing out the cone by a smooth manifold of high $\nu$-functional. This is the section in which we use the quite artificial assumptions on the link.\ Let us start the proof by proving that the condition we asked for is preserved along a renormalization of the Ricci flow, with uniformly worse constants.
We will call $(\tilde{g}_t)_t$ a *renormalization of the Ricci flow* on $N$ if there exists a time-dependent constant $\alpha(t)$ such that : $$\partial_t\tilde{g} = -2\operatorname{\textup{Ric}}(\tilde{g}) + \alpha \tilde{g}.$$ \[Prop preservée\] For all $N$ satisfying $P(\beta_1,\beta_2)$, there exists a renormalization of the Ricci flow $(\tilde{g}(t))_{t\in[0,+\infty)}$ on $\mathbb{S}^n$ starting at $g^N$ that exists for all times and such that, as the time tends to $\infty$, $$\tilde{g_t}\to g^{\mathbb{S}^n}.$$ Moreover, for all $t>0$, $\tilde{g}_t$ satisfies $P(\beta_1',\beta_2')$ for some other $\beta'_1$ and $\beta'_2$ depending only on the initial $\beta_1$ and $\beta_2$ and the dimension.\ Moreover, as $\beta_1$ and $\beta_2$ tend to $1$, $\beta'_1$ and $\beta'_2$ also tend to $1$. This result relies completely on the proof of the article [@BM] by Bamler and Maximo and on the $C^0$ perturbation of Ricci flows presented in [@Sim] : Let us first note that, by [@bs], positive isotropic curvature when crossed with $\mathbb{R}^2$ is preserved by the Ricci flow, and that the convergence towards $\left(\mathbb{S}^n,2(n-1)T_N\, g^{\mathbb{S}^n}\right)$ under the flow defined in the next lemma is exponentially fast in every derivative. #### 1) A lower bound on the scalar curvature There exist $\beta_1'(\beta_1,\beta_2)$ and $\beta_2'(\beta_1,\beta_2)$, tending to $1$ as $(\beta_1,\beta_2)\to (1,1)$, such that if $(\mathbb{S}^n,g^N)$ satisfies $P(\beta_1,\beta_2)$, then, defining $\bar{g}(t)$ by : $$\left\{ \begin{array}{ll} \bar{g}(0)=g^N,\\ \partial_t\bar{g} = -2\operatorname{\textup{Ric}}(\bar{g})+\frac{1}{2T_N}\bar{g}, \end{array} \right.$$ where $T_N$ is the existence time of the flow, the manifold $ \bar{N}(t) = (\mathbb{S}^n,\bar{g}(t))$ satisfies $$R^{\bar{N}(t)}\geqslant \frac{n(n-1)}{{\beta_2'}^2}$$ along the flow.
Let us start by proving that the scalar curvature does not become too negative along the actual Ricci flow.\ Because of the $C^0$-closeness, the extinction time is close to that of the unit sphere by the work of [@Sim], so we can use the results in [@BM], which deal with manifolds with positive isotropic curvature when crossed with $\mathbb{R}^2$ that have a lower bound on the scalar curvature as well as an existence time close to the maximum possible given this lower bound. To be more precise, we will need to look at the proof given in [@BM], in the section “End of Proof of Theorem 1.1.” : after an arbitrarily short time noted $t_2$ (depending only on the lower bound on the scalar curvature and on the existence time), the sectional curvatures are very pinched around a lower bound for the scalar curvature (with a factor $\frac{1}{n(n-1)}$). Thanks to [@hui], this property is preserved along the flow.\ This implies that $\frac{\operatorname{\textup{R}}^2}{n}\leqslant|\operatorname{\textup{Ric}}|^2\leqslant\frac{(1+\epsilon)\operatorname{\textup{R}}^2}{n}$ and, since the equation satisfied by the scalar curvature along a Ricci flow is $$\partial_t R = \Delta R + 2 |Ric|^2,$$ we can deduce that the product $\operatorname{\textup{R}}^N_{min}\times (T_N-t)$ (where $T_N$ is the existence time of the flow) satisfies along a Ricci flow : $$\frac{n}{2}-\mathcal{O}(\epsilon)\leqslant \operatorname{\textup{R}}^N_{min}\times (T_N-t)\leqslant \frac{n}{2}$$ (the lower bound comes from a lower bound on the existence time thanks to the maximum of the scalar curvature). This bound implies that along the renormalized flow presented in the theorem, we have $\operatorname{\textup{R}}^N_{min}\geqslant n(n-1)-\mathcal{O}(\epsilon)$ with $\epsilon$ a function of $(\beta_1,\beta_2)$ which tends to $0$ as $(\beta_1,\beta_2)$ tends to $(1,1)$.
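For completeness, the upper bound $\operatorname{\textup{R}}^N_{min}\times (T_N-t)\leqslant\frac{n}{2}$ is the standard maximum principle computation, which only uses the Cauchy–Schwarz bound $|\operatorname{\textup{Ric}}|^2\geqslant\frac{\operatorname{\textup{R}}^2}{n}$ : $$\partial_t \operatorname{\textup{R}} = \Delta \operatorname{\textup{R}} + 2|\operatorname{\textup{Ric}}|^2\geqslant \Delta \operatorname{\textup{R}} + \frac{2}{n}\operatorname{\textup{R}}^2 \quad\Rightarrow\quad \frac{d}{dt}\operatorname{\textup{R}}_{min}\geqslant\frac{2}{n}\operatorname{\textup{R}}_{min}^2.$$ If $\operatorname{\textup{R}}_{min}(t)>0$, integrating this differential inequality gives $$\operatorname{\textup{R}}_{min}(s)\geqslant \frac{1}{\operatorname{\textup{R}}_{min}(t)^{-1}-\frac{2}{n}(s-t)},$$ which blows up no later than $s = t+\frac{n}{2}\operatorname{\textup{R}}_{min}(t)^{-1}$. Hence $T_N\leqslant t+\frac{n}{2}\operatorname{\textup{R}}_{min}(t)^{-1}$, that is, $\operatorname{\textup{R}}_{min}(t)\times(T_N-t)\leqslant\frac{n}{2}$.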
#### 2) A bound on the $C^0$-closeness There exist $\beta_1'(\beta_1,\beta_2)$ and $\beta_2'(\beta_1,\beta_2)$, tending to $1$ as $(\beta_1,\beta_2)\to (1,1)$, such that if $(\mathbb{S}^n,g^N)$ satisfies $P(\beta_1,\beta_2)$, then, defining $\bar{g}(t)$ by : $$\left\{ \begin{array}{ll} \bar{g}(0)=g^N,\\ \partial_t\bar{g} = -2\operatorname{\textup{Ric}}(\bar{g})+\frac{1}{2T_N}\bar{g}, \end{array} \right.$$ where $T_N$ is the existence time of the flow, the manifold $ \bar{N}(t) = (\mathbb{S}^n,\bar{g}(t))$ satisfies $${\beta_1'}^2 g^{\mathbb{S}^n} \leqslant \bar{g}(t)\leqslant {\beta_2'}^2 g^{\mathbb{S}^n}$$ along the flow. This is actually a direct application of theorem 1.1 of [@BM] : since we have a lower bound on $\operatorname{\textup{R}}_{min}(t)\times (T_N-t)$, the $C^0$-closeness is preserved by scaling (with a potentially worse constant). #### 3) Converging to the unit sphere Now, in the statement of the proposition, we asked for the flow to converge to the unit sphere and not to one of radius $\sqrt{2(n-1)T_N}$. This can be done by simply scaling $g^N$ at the end of the flow, where it is $C^3$-close to a sphere, as presented in appendix B.2. It is possible to do this without changing the constants $\beta_1'$ and $\beta_2'$, as $\bar{g}$ is $C^{2}$-close to $2(n-1)T_N\, g^{\mathbb{S}^n}$. We will name $\tilde{g}_t$ the resulting flow.\ Let us now use proposition \[Prop preservée\] to smooth out the cone over $(N,g^N)=(\mathbb{S}^n,g^N)$ thanks to a manifold of the form : $$(\mathbb{R}^+\times \mathbb{S}^n,dr^2+r^2\hat{g}(r)),$$ and prove that for a well chosen $\hat{g}(r)$, the $\nu$-functional is large. We will actually choose $$\hat{g}(r) = \tilde{g}(\phi(r))$$ for a $\phi:\bar{\mathbb{R}}^+\to\bar{\mathbb{R}}^+$. We want the manifold to be smooth, that is, to have $\hat{g}(r)$ asymptotic to the unit sphere for $r$ close to $0$, so $\phi(0) = +\infty$; and we also want the manifold to be asymptotic to $C(N)$, that is, $\phi(+\infty) = 0$.
To have good estimates coming from the work we have already done on cones, we will consider such a function with slow enough variations. For all $\delta$, the manifold $(M,g^\delta):=\left(\mathbb{R}^+\times \mathbb{S}^n,dr^2 + r^2 \tilde{g}(\frac{\delta}{r^2})\right)$ smoothes out the cone $C(N)$, and for $\delta_0(N)>0$ small enough, we have $\nu^M(g^{\delta_0})>-\eta_n$. In other words, it is possible to smooth out the cone by a manifold that satisfies the assumption of the pseudolocality result of the previous section. A small $\delta$ corresponds to small variations of the link, making it look locally constant and thus making $M$ look locally like a cone from the point of view of the $\mathcal{W}$-functional (the link is constant if and only if $(M,g^{\delta})$ is a cone). Let us consider a manifold $(M,g^\delta)=(\mathbb{R}^+\times \mathbb{S}^n,dr^2+r^2\tilde{g}(\frac{\delta}{r^2}))$. For all $t$, $\tilde{g}_t$ satisfies a lower bound $L(\epsilon_1,\epsilon_2,\epsilon_3)$ for some $\epsilon_i$ tending to zero as the $\beta_i$ tend to $1$. Manifolds satisfying $P(\beta_1',\beta_2')$ for $(\beta_1',\beta_2')$ close enough to $(1,1)$ also satisfy $L(\epsilon_1,\epsilon_2,\epsilon_3)$ for small $\epsilon_i$ thanks to the appendix A.4. Now, thanks to the appendix A.3, such a manifold $(M,g^\delta)$ can be treated as a cone over a manifold satisfying a lower bound $L(\epsilon_1+\mathcal{O}(\delta),\epsilon_2,\epsilon_3)$ (in the sense that its $\nu$-functional is bounded from below by the same $\Psi(\epsilon_1+\mathcal{O}(\delta),\epsilon_2,\epsilon_3)$).
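To make the interpolation explicit : since $\tilde{g}(t)\to g^{\mathbb{S}^n}$ as $t\to\infty$ and $\tilde{g}(0) = g^N$, the link metric $\tilde{g}(\frac{\delta}{r^2})$ tends to $g^{\mathbb{S}^n}$ as $r\to 0$ and to $g^N$ as $r\to\infty$. Near the origin, $g^\delta$ therefore approaches $$dr^2+r^2g^{\mathbb{S}^n},$$ the Euclidean metric written in polar coordinates (so the tip of the cone is smoothed out), while at infinity it approaches $$dr^2+r^2g^N,$$ the metric of the cone $C(N)$.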
A type III immortal solution of the Ricci flow asymptotic to the cone --------------------------------------------------------------------- Thanks to the last proposition, for all $N$ satisfying the assumptions of the theorem, it is possible to construct a manifold asymptotic to the cone $C(N)$.\ For $N$ satisfying the set of properties $P(\beta_1,\beta_2)$, the Ricci flow starting at $(M,g^{\delta_0})$ is an immortal type III solution of the Ricci flow. It is moreover asymptotic to the cone $C(N)$ at all times. The existence of the type III solution starting at $(M,g)$ is a direct consequence of the pseudolocality result presented in the previous section, because for $(\beta_1,\beta_2)$ close enough to $(1,1)$, we have : $$\nu^M(g)>-\eta_{n+1}.$$ The fact that the flow stays asymptotic to the cone can be found in [@lz]. Construction of a type III immortal solution coming out of the cone by a blow down process ------------------------------------------------------------------------------------------ From the previous results, we have a type III solution of the Ricci flow. A standard process to get an idea of the long time behavior of such a Ricci flow is to “blow down” the flow by parabolic scaling.\ Thanks to the results in appendix D, we can take a sublimit of the blow downs of this Ricci flow to obtain an immortal type III solution of the Ricci flow coming out of a cone, which is the statement of the theorem. Note that if we had a limit instead of a sublimit, we could argue that the limiting flow is an expanding soliton. We could expect such a Ricci flow to be a soliton. For example, if we can ensure the positivity of the Ricci curvature along the renormalized flow, then a direct application of theorem 1 of [@Ma] ensures that we have a gradient expanding soliton.
It could be possible to use the same techniques in our case; this would require controls on the heat kernels of manifolds satisfying a log-Sobolev inequality (and a lower bound on the scalar curvature). In particular, assuming that the link $N$ also has a curvature tensor larger than that of the unit sphere, we obtain an expanding soliton. Proof of some lemmas ==================== Let us give the proof of some technical and not particularly enlightening lemmas needed for some proofs in the text. Upper semicontinuity of $\tau\mapsto\mu(g,\tau)$ ------------------------------------------------ When $\lambda^N>-\infty$, the function $\tau\mapsto\mu(g_0,\tau)$ is upper semicontinuous.\ More precisely, $$\begin{aligned} \lim_{\tau_2\to\tau_1}\left(\frac{\mu(g_0,\tau_1)-\mu(g_0,\tau_2)}{\tau_1-\tau_2}\right)&\geqslant \mathcal{F}(u_1)-\frac{n}{2\tau_1}\geqslant \lambda^N-\frac{n}{2\tau_1}.\end{aligned}$$ Let us consider $\tau_1>\tau_2>0$.\ \ Let $\phi : N\to \mathbb{R}$ be such that $\int_N e^{-\phi}dv = 1$.\ For any such function $\phi$ on the manifold, we have : $$\begin{aligned} \mathcal{W}\left(\phi-\frac{n}{2}\log(4\pi\tau_1),g_0,\tau_1\right)-\mathcal{W}\left(\phi-\frac{n}{2}\log(4\pi\tau_2),g_0,\tau_2\right)&=(\tau_1-\tau_2)\mathcal{F}(\phi,g_0)-\frac{n}{2}\log\frac{\tau_1}{\tau_2}\\ &\geqslant (\tau_1-\tau_2)\lambda^N-\frac{n}{2}\log\frac{\tau_1}{\tau_2}.\end{aligned}$$ Now, let us consider $\phi_1$ for which $\mathcal{W}(\phi_1-\frac{n}{2}\log(4\pi\tau_1), g_0, \tau_1)=\mu(g_0,\tau_1)$ (or approximating it up to a $o(\tau_1-\tau_2)$ in the case where such a minimizer doesn’t exist).
$$\begin{aligned} \mu(g_0,\tau_1)-\mathcal{W}\left(\phi_1-\frac{n}{2}\log(4\pi\tau_2),g_0,\tau_2\right)&\geqslant (\tau_1-\tau_2)\lambda^N-\frac{n}{2}\log\frac{\tau_1}{\tau_2}\\ \mu(g_0,\tau_1)&\geqslant \mathcal{W}\left(\phi_1-\frac{n}{2}\log(4\pi\tau_2),g_0,\tau_2\right)+ (\tau_1-\tau_2)\lambda^N-\frac{n}{2}\log\frac{\tau_1}{\tau_2}\\ \mu(g_0,\tau_1)&\geqslant \mu(g_0,\tau_2)+ (\tau_1-\tau_2)\lambda^N-\frac{n}{2}\log\frac{\tau_1}{\tau_2}.\end{aligned}$$ Now, since $(\tau_1-\tau_2)\lambda^N-\frac{n}{2}\log\frac{\tau_1}{\tau_2}$ tends to $0$ when $\tau_2$ tends to $\tau_1$, we have the wanted upper semicontinuity. A sequence of minimizers of $\mathcal{W}(.,g,\tau)$ tends to a minimizer of $\mathcal{F}$ as $\tau\to\infty$ ------------------------------------------------------------------------------------------------------------ For any compact manifold $N$, as $\tau_k$ tends to infinity, a sequence $u_k^2=\frac{e^{-f_k}}{(4\pi\tau_k)^{\frac{n}{2}}}$, where $f_k$ is a minimizer of $\mathcal{W}^N(.,g,\tau_k)$, tends in $H^1$ to $u_\mathcal{F}$, the minimizer of $\mathcal{F}^N(.,g)$. Let $\tau>0$, $f: N \to \mathbb{R}$.\ Noting $u^2 = e^{-\phi}$, where $\phi = f+\frac{n}{2}\log(4\pi\tau)$.\ The expression of the entropy on $N$ is : $$\begin{aligned} \mathcal{W}^N\left(f,g,\tau\right) = \tau\mathcal{F}^N(\phi,g)-\int_Nu^2\log(u^2)dv-\frac{n}{2}\log{4\pi\tau}-n.\label{entrexp}\end{aligned}$$ It is expected that a minimizer of $\mathcal{W}^N(.,g,\tau)$ becomes close to a minimizer of $\mathcal{F}^N(.,g)$ as $\tau$ tends to $\infty$, since the compensating term $-\int_Nu^2\log(u^2)dv$ becomes negligible compared to $\tau\mathcal{F}^N(\phi,g)$.
#### 1) Let us bound the $u^2\log u^2$ term :\ There exists a $\tau_0$ such that, for all $u$ such that $\int_N u^2 dv = 1$ (such a $\tau_0$ exists because the manifold is compact), $$\begin{aligned} -\int_N u^2\log (u^2)dv &\geqslant -\tau_0\int_N4|\nabla u|^2dv.\end{aligned}$$ And, by the Jensen inequality : $$\begin{aligned} \log\left(vol(N)\right)\geqslant-\int_N u^2\log (u^2)dv. \label{up bound log}\end{aligned}$$\ **Let us consider a sequence $(u_k)_k$ of minimizers for $\tau_k\to+\infty$**\ From these two inequalities, we can bound the $u^2\log (u^2)$ term and we get : $$\int_N \left(4|\nabla u|^2+\operatorname{\textup{R}}u^2\right)dv+\frac{\log\left(vol(N)\right)}{\tau_k}\geqslant\frac{\mathcal{W}^N(u,\tau_k)+\frac{n}{2}\log(4\pi\tau_k)+n}{\tau_k} \geqslant \int_N \left(\frac{(\tau_k-\tau_0)}{\tau_k}4|\nabla u|^2+\operatorname{\textup{R}}u^2\right)dv,$$ and a minimizer of the middle term is also a minimizer of $\mathcal{W}$. #### 2) Let us prove that $u_k$ tends to a minimizer of $\mathcal{F}$ by comparing its $\mathcal{W}$-functional to that of $u_\mathcal{F}$, a minimizer of the $\mathcal{F}$ functional : $$\mathcal{F}(u_\mathcal{F}) = \lambda^N.$$\ Since $u_k$ is a minimizer of $\mathcal{W}^N(.,\tau_k)$, we have : $$\begin{aligned} 0 &\leqslant \mathcal{W}^N(u_\mathcal{F},\tau_k)-\mathcal{W}^N(u_k,\tau_k) \nonumber\\ &\leqslant \tau_k (\lambda^N-\mathcal{F}(u_k)) -\frac{\tau_0}{\tau_k}\int_N |\nabla u_k|^2dv. \label{W F}\end{aligned}$$ Now, if $u_k$ doesn’t approach a minimizing function for $\mathcal{F}^N$, then there exists $\epsilon>0$ such that, for all $k$ : $$\lambda^N-\mathcal{F}(u_k)<-\epsilon,$$ and, as $\tau_k\to\infty$, the right-hand side of tends to $-\infty$ (the other terms are negative or tend to $0$ as $k\to\infty$), which contradicts the inequality.\ Hence $u_k$ tends to a minimizer of $\mathcal{F}$ in $H^1$.
Making the manifold look locally conical ---------------------------------------- Here we want to prove that, given a flow on a manifold diffeomorphic to a sphere that ends at the unit sphere, it is possible to construct a manifold asymptotic to the cone over the initial manifold for which, at each point, the quantities involved in the entropy are arbitrarily close to those of a cone.\ This makes the computation of the entropy much easier, in particular when proving, in the last section of the text, that the $\nu$-functional of the manifold smoothing out the cone is high enough.\ We will consider a manifold : $$(M,g) = (\mathbb{R}^+\times N,dr^2+r^2g(r)).$$ In the case when $g(r)$ is constant, we have a cone, and the expression from which we have been able to control the $\nu$-functional after a separation of variables is : $$\begin{aligned} \mathcal{W}^{C(N)}(f,g,\tau) &= \int_0^\infty \left[\mathcal{W}^{N}\left(\tilde{f},g^N,\frac{\tau}{r^2}\right)-n(n-1)\frac{\tau}{r^2}\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &+\int_N \left(\int_0^\infty \left[\tau(\partial_r f)^2+a_r-1\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right).\end{aligned}$$ The goal is to have a similar expression (up to a controlled error term) for manifolds of the form $(M,g) = (\mathbb{R}^+\times N,dr^2+r^2g(r))$ for an adapted choice of $g(r)$, that is : $$\begin{aligned} \mathcal{W}^{M}(f,g,\tau) &= \int_0^\infty \left[\mathcal{W}^{N}\left(\tilde{f},g(r),\frac{\tau}{r^2}\right)-n(n-1)\frac{\tau}{r^2}\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right) \\ &+\int_N \left(\int_0^\infty \left[\tau(\partial_r f)^2+a_r-1\right]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right) \\ &+ \text{Error term}.\end{aligned}$$\ We will choose $g(r) = g^{\hat{N}(\theta(r))}$ for a $\theta$ with
slow enough variations to make $\operatorname{\textup{R}}^M$ arbitrarily close to $\operatorname{\textup{R}}^{C(\hat{N}(\theta(r)))}$ at a point at distance $r$. The idea is to consider the variations of $g(r)$ small enough to treat it as locally conical in the expression of the $\mathcal{W}$-functional, by taking advantage of the exponentially fast convergence of the metric along a renormalized flow. #### Some formulas for Riemannian foliations by hypersurfaces :\ Now, we have the following formulas for the curvature of $M =(\mathbb{R}\times N, dt^2 + g_t)$ with $$\partial_t g_t = 2K(t) :$$ For the Ricci curvature : - $\operatorname{\textup{Ric}}_{00} = -\partial_t K^i_i - K^{ij}K_{ij}$, - $\operatorname{\textup{Ric}}_{0i} = -D_i K^j_j + D_jK^j_i$, - $\operatorname{\textup{Ric}}_{ij} = \operatorname{\textup{Ric}}^{N(t)}_{ij}-\partial_t K_{ij}+2 K_{il}K^l_j-K^l_lK_{ij}$. This gives that the scalar curvature is : $$\begin{aligned} \operatorname{\textup{R}}^M = \operatorname{\textup{R}}^{N(t)}-\partial_t K^i_i - K^{ij}K_{ij} + \sum_{i,j}g^{ij}(-\partial_t K_{ij}+2 K_{il}K^l_j-K^l_lK_{ij}). \label{exp scal}\end{aligned}$$\ #### Estimating the difference with the expression of $\mathcal{W}$ for a cone :\ Let us define : $$\operatorname{\textup{R}}_{rest} := \operatorname{\textup{R}}^M(r)-\operatorname{\textup{R}}^{C(\hat{N}(\theta(r)))}(r).$$ If the convergence of the family of metrics considered is exponentially fast in the $C^2$-sense, then, choosing $\theta(r) = \frac{\delta}{r^2}$, we have the following estimate for small $\delta$ : $$\begin{aligned} \operatorname{\textup{R}}_{rest}(r)=\mathcal{O}(\delta)\end{aligned}$$ uniformly on the manifold. The convergence will always be exponentially fast for the constructions by renormalized Ricci flow we will consider.
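As a quick consistency check of these formulas, consider the exact cone $dt^2+t^2g^N$, where $g_t = t^2g^N$ is a pure scaling, so that $K_{ij} = \frac{1}{t}g_{ij}$, $K^i_i = \frac{n}{t}$ and $K^{ij}K_{ij} = \frac{n}{t^2}$. Tracing the formula for $\operatorname{\textup{Ric}}_{ij}$ with $g^{ij}$ (and using $g^{ij}\operatorname{\textup{Ric}}^{N(t)}_{ij} = \frac{\operatorname{\textup{R}}^N}{t^2}$ and $g^{ij}\partial_t K_{ij} = \frac{n}{t^2}$) gives $$g^{ij}\operatorname{\textup{Ric}}_{ij} = \frac{\operatorname{\textup{R}}^N}{t^2}+\frac{-n+2n-n^2}{t^2} = \frac{\operatorname{\textup{R}}^N-n(n-1)}{t^2},$$ while the radial direction of a cone is Ricci-flat, so $$\operatorname{\textup{R}}^{C(N)} = \frac{\operatorname{\textup{R}}^N-n(n-1)}{t^2},$$ which is exactly the conical expression appearing in the decomposition of the $\mathcal{W}$-functional below.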
Thanks to the expression of the scalar curvature above, we have an explicit expression in our case, depending on the first and second derivatives of $g(r)$.\ Let us just note that this quantity is a $O\left(|\partial^2_{r^2}g(r)|+\frac{1}{r}|\partial_r g(r)|+\frac{1}{r^2}|\partial_r g(r)|^2\right)$.\ Now, we know that the convergence of $\hat{g}$ is exponentially fast in the $C^2$-sense : $$|\partial^k_{t^k}\hat{g}|\leqslant C_ke^{-c_kt},$$ so, choosing $\theta(r)=\frac{\delta}{r^2}$ ($g(r) = \hat{g}(\frac{\delta}{r^2})$, for some small $\delta$), by the $O\left(|\partial^2_{r^2}g(r)|+\frac{1}{r}|\partial_r g(r)|+\frac{1}{r^2}|\partial_r g(r)|^2\right)$ estimate, we have : $$\begin{aligned} \operatorname{\textup{R}}_{rest} &= O\left(|\partial^2_{r^2}g(r)|+\frac{1}{r}|\partial_r g(r)|+\frac{1}{r^2}|\partial_r g(r)|^2\right)\\ &=O\left(\left((\theta')^2|\partial^2_{r^2}\hat{g}|+\theta''|\partial_r\hat{g}|\right)+\left(\frac{1}{r}\theta'|\partial_r \hat{g}|\right)+\left(\frac{1}{r^2}(\theta')^2|\partial_r \hat{g}|^2\right)\right)\\ &=O\left(\left(\frac{\delta^2}{r^6}e^{-\frac{\delta c_2}{r^2}}+\frac{\delta}{r^4}e^{-\frac{\delta c_1}{r^2}}\right)+\left(\frac{\delta}{r^4}e^{-\frac{\delta c_1}{r^2}}\right)+\left(\frac{\delta^2}{r^8}e^{-\frac{2\delta c_1}{r^2}}\right)\right)\\ &=\mathcal{O}(\delta+\delta^2)\end{aligned}$$ uniformly in $r$. So for $\delta$ small enough, this term becomes small.
Let us see how we can take care of it : $$\begin{aligned} \mathcal{W}^{M}(f,g,\tau) &= \int_0^\infty [\tilde{\mathcal{W}}^{N(r)}(\tilde{f},g(r),\tau r^{-2})]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\\ &+\int_N \int_0^\infty [\tau(\partial_r f)^2+a_r-1]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right) \\ &+ \mathcal{O}(\delta)\int_0^{\infty}\frac{\tau}{r^2}\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right).\\\end{aligned}$$ Since on $M$ we have the following formulas coming from the foliation formulas : - $\operatorname{\textup{R}}^M = \frac{\operatorname{\textup{R}}_{N(r)}-n(n-1)}{r^2}+\operatorname{\textup{R}}_{rest}$ by definition of $\operatorname{\textup{R}}_{rest}$, - $|\nabla^M f|^2 = (\partial_r f)^2+\frac{|\nabla^{N(r)} f|^2}{r^2}$, - $dv^M = r^ndv^{N(r)}dr$. This implies that $\mathcal{W}^M$ has the following expression : $$\begin{aligned} \mathcal{W}^M(f,g,\tau) =\int_0^\infty\int_{N(r)} &\left[\tau\left((\partial_r f)^2+\frac{|\nabla^{N(r)} f|^2+(\operatorname{\textup{R}}^{N(r)}-n(n-1)+\operatorname{\textup{R}}_{rest})}{r^2}\right)+f-(n+1)\right]\\ &\times\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}} dr\right)\left(\frac{e^{-\tilde{f}}}{\left(4\pi\tau r^{-2}\right)^{\frac{n}{2}}} dv\right).\end{aligned}$$ Now, defining the same separation of variables as in the cone case and using the last lemma stating that $\operatorname{\textup{R}}_{rest} = \mathcal{O}(\delta)$, we get : $$\begin{aligned} \mathcal{W}^{M}(f,g,\tau) &= \int_0^\infty [\tilde{\mathcal{W}}^{N(r)}(\tilde{f},g(r),\tau r^{-2})]\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}}dr\right) \\ &+\int_N \int_0^\infty [\tau(\partial_r f)^2+a_r-1]\left( \frac{e^{-\tilde{f}}}{(4\pi\tau r^{-2})^{\frac{n}{2}}}dv\right)\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}}dr\right) \\ &+
\mathcal{O}(\delta)\int_0^{\infty}\frac{\tau}{r^2}\left(\frac{e^{-a_r}}{(4\pi\tau)^{\frac{1}{2}}}dr\right).\\\end{aligned}$$ Lower bounds on the $\mathcal{W}$-functional of perturbations of the sphere -------------------------------------------------------------------------- In this section, we see under which conditions we can control the $\mathcal{W}$-functional of perturbations of the sphere.\ We will obtain a precise enough control for our purpose by assuming a lower bound on the scalar curvature and the $C^0$-closeness. Recall that the $\mathcal{W}$-functional has three components : $$\mathcal{W}(f,g^N,\tau) = \tau \mathcal{F}\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g\right)+\mathcal{N}\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g\right) +C(n,\tau).$$ We will see how we can control each of these. If $(\mathbb{S}^n, g^N)$ satisfies the two following properties : - $C^0$-closeness : $$\beta_1^2 g^{\mathbb{S}^n}\leqslant g^N \leqslant \beta_2^2g^{\mathbb{S}^n}.$$ - Lower bound on the scalar curvature : $$\operatorname{\textup{R}}^N\geqslant \frac{n(n-1)}{\beta_2^2}.$$ Then, we have the following lower bounds on the different components of the $\mathcal{W}$-functional compared to those of $g^{\mathbb{S}^n}$ : - For $\mathcal{F}$ : $$\mathcal{F}\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right)\geqslant \frac{\beta_1^{n}}{\beta_2^{n+4}}\mathcal{F}\left(f +\delta+\frac{n}{2}\log(4\pi\tau)\;,\;g^{\mathbb{S}^n}\right).$$ - For $\mathcal{N}$ : $$\mathcal{N}\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right)\geqslant \frac{\beta_2^{n}}{\beta_1^n}\mathcal{N}\left(f+\delta+\frac{n}{2}\log(4\pi\tau)\;,\;g^{\mathbb{S}^n}\right)+\left(\frac{\beta_1^{n}}{\beta_2^n}-\frac{\beta_2^{n}}{\beta_1^n}\right)\frac{vol(\mathbb{S}^n)}{e}+n\log\beta_1.$$ Here $\delta$ is defined to ensure that $\int_{\mathbb{S}^n}\frac{e^{-f-\delta}}{(4\pi\tau)^{\frac{n}{2}}}dv^{\mathbb{S}^n} = 1$. Note that it depends on $f$, but that there are bounds on it that only depend on the $C^0$-closeness. 
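Before the proof, the two elementary ingredients used below can be checked numerically (a sketch with assumed toy values $n=3$, $\beta_1 = 0.9$, $\beta_2 = 1.2$, and a crude discretization standing in for the sphere) : the normalizing constant $\delta$ satisfies $-n\log\beta_2\leqslant\delta\leqslant-n\log\beta_1$, and $x\log x\geqslant -\frac{1}{e}$ for $x>0$ :

```python
import math, random

random.seed(0)
n, b1, b2 = 3, 0.9, 1.2   # assumed toy dimension and C^0-closeness constants
N = 10000                  # crude discretization standing in for the sphere

# theta = dv^N / dv^{S^n} lies in [b1^n, b2^n]; f is an arbitrary function.
theta = [random.uniform(b1**n, b2**n) for _ in range(N)]
f = [random.uniform(-1.0, 1.0) for _ in range(N)]

# delta is defined by  int e^{-f-delta} dv^{S^n} = int e^{-f} theta dv^{S^n},
# i.e.  e^delta = (int e^{-f}) / (int e^{-f} theta).
num = sum(math.exp(-fi) for fi in f)
den = sum(math.exp(-fi) * ti for fi, ti in zip(f, theta))
delta = math.log(num / den)
assert -n * math.log(b2) <= delta <= -n * math.log(b1)

# The elementary bound x log x >= -1/e used for the entropy term:
xs = [k / 1000 for k in range(1, 5000)]
assert min(x * math.log(x) for x in xs) >= -1 / math.e - 1e-12
```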
Throughout the proof, we will denote $\theta:=\frac{dv^N}{dv^{\mathbb{S}^n}}$. By the $C^{0}$-closeness, we have $\beta_1^n\leqslant\theta\leqslant\beta_2^n$. Let us also note that the $\delta$ defined in the statement of the proposition satisfies $$-n\log\beta_2\leqslant \delta\leqslant -n\log\beta_1.$$ Let us start by proving the estimate for $\mathcal{F}$ :\ We have a lower bound on the scalar curvature and on the volume form, so we only have to control $g^N\left(\nabla^N f,\nabla^N f\right)$ in terms of $g^{\mathbb{S}^n}\left(\nabla^{\mathbb{S}^n} f,\nabla^{\mathbb{S}^n} f\right)$ :\ By definition, we have, for any $v$ in $T_x\mathbb{S}^n$ for some $x\in\mathbb{S}^n$ : $$\begin{aligned} df(v) &= g^N\left(\nabla^N f,v\right)\\ &= g^{\mathbb{S}^n}\left(\nabla^{\mathbb{S}^n} f,v\right).\end{aligned}$$ We can decompose $\nabla^N f =: \alpha \nabla^{\mathbb{S}^n} f +p^N(\nabla^{N}f)$, where $p^N$ is the projection on the orthogonal complement of $\nabla^{\mathbb{S}^n} f$ for $g^N$ (note that $\alpha$ depends on the point at which we look at the tangent space and on the direction of $\nabla^{\mathbb{S}^n} f$).\ Let us find bounds on $\alpha$, which will give us a lower bound on $g^N\left(\nabla^N f,\nabla^N f\right)$ : By definition, we have : $$\begin{aligned} df(\nabla^{\mathbb{S}^n} f) &= g^N\left(\nabla^N f,\nabla^{\mathbb{S}^n} f\right)\\ &= \alpha g^N\left(\nabla^{\mathbb{S}^n} f,\nabla^{\mathbb{S}^n} f\right)\\ &=g^{\mathbb{S}^n}\left(\nabla^{\mathbb{S}^n} f,\nabla^{\mathbb{S}^n} f\right)>0.\end{aligned}$$ Now, since $\beta_1^2g^{\mathbb{S}^n}\leqslant g^N\leqslant \beta_2^2g^{\mathbb{S}^n}$, we have : $$\frac{1}{\beta_2^2}\leqslant\alpha\leqslant \frac{1}{\beta_1^2}.$$ We are now ready to bound the $\mathcal{F}$ functional : $$\begin{aligned} \mathcal{F}&\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right) = \int_{\mathbb{S}^n} \left(g^N\left(\nabla^N f,\nabla^N f\right)+\operatorname{\textup{R}}^N\right)\frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}\;\theta dv^{\mathbb{S}^n} \\ &\geqslant 
\int_{\mathbb{S}^n} \left(\alpha^2g^N\left(\nabla^{\mathbb{S}^n} f,\nabla^{\mathbb{S}^n} f\right)+g^N(p^N(\nabla^N f)\;,\;p^N(\nabla^N f))+\frac{n(n-1)}{\beta_2^2}\right)\frac{e^{-f-\delta}}{(4\pi\tau)^{\frac{n}{2}}}\; e^{\delta}\beta_1^n dv^{\mathbb{S}^n},\end{aligned}$$ by the decomposition of $\nabla^N f$ and the $C^0$-closeness.\ We can now use our bounds on $\alpha$ and $\delta$ and conclude that : $$\begin{aligned} \mathcal{F}&\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right)\geqslant \int_{\mathbb{S}^n} \left(\frac{1}{\beta_2^4}g^{\mathbb{S}^n}\left(\nabla^{\mathbb{S}^n} (f+\delta),\nabla^{\mathbb{S}^n} (f+\delta)\right)+\frac{n(n-1)}{\beta_2^2}\right)\frac{e^{-f-\delta}}{(4\pi\tau)^{\frac{n}{2}}}\;\frac{\beta_1^{n}}{\beta_2^n}dv^{\mathbb{S}^n}\\ &\geqslant \frac{\beta_1^{n}}{\beta_2^{n+4}}\mathcal{F}\left(f+\delta+\frac{n}{2}\log(4\pi\tau)\;,\;g^{\mathbb{S}^n}\right),\end{aligned}$$ which is the stated inequality.\ \ Let us now take care of the $\mathcal{N}$ functional.\ Let us note $u = \frac{e^{-f}}{(4\pi\tau)^{\frac{n}{2}}}$ and $w = \frac{e^{-f-\delta}}{(4\pi\tau)^{\frac{n}{2}}}$, which satisfy : $$\int_{\mathbb{S}^n}u\; \theta dv^{\mathbb{S}^n} =1,$$ and, $$\int_{\mathbb{S}^n}w \;dv^{\mathbb{S}^n} =1.$$ The expressions to compare are : $$\mathcal{N}\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right) = -\int_{\mathbb{S}^n}u\log u\; \theta dv^{\mathbb{S}^n},$$ and, $$\mathcal{N}\left(f+\delta+\frac{n}{2}\log(4\pi\tau)\;,\;g^{\mathbb{S}^n}\right) = -\int_{\mathbb{S}^n}w\log w \;dv^{\mathbb{S}^n}.$$ Note that $u= e^\delta w$, so that $\log u = \log w + \delta$ and $e^\delta\int_{\mathbb{S}^n}w\,\theta dv^{\mathbb{S}^n}=1$.\ Let us start the comparison of both expressions by separating the positive and negative parts of the integrand : $$\begin{aligned} \mathcal{N}&\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right) = -\int_{\mathbb{S}^n}u\log u\; \theta dv^{\mathbb{S}^n}\\ &=-e^\delta\int_{\mathbb{S}^n}w\log w\; \theta dv^{\mathbb{S}^n} - \delta\\ &=-e^\delta\int_{\{w\geqslant 1\}}w\log w\; \theta dv^{\mathbb{S}^n}-e^\delta\int_{\{w < 1\}}w\log w \; \theta dv^{\mathbb{S}^n} - \delta.\\\end{aligned}$$ Now, we have bounds on $\delta$ and $\theta$ coming from the $C^0$-closeness to the unit sphere, namely : $$-n\log \beta_2\leqslant\delta\leqslant -n\log \beta_1,$$ and, $$\beta_1^n\leqslant\theta\leqslant\beta_2^n.$$ Now that we have separated the positive and negative parts, we can use our upper or lower bounds on each term (using $e^\delta\theta\leqslant\frac{\beta_2^n}{\beta_1^n}$ on $\{w\geqslant 1\}$, $e^\delta\theta\geqslant\frac{\beta_1^n}{\beta_2^n}$ on $\{w<1\}$, and $-\delta\geqslant n\log\beta_1$) : $$\begin{aligned} \mathcal{N}&\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right)= -e^\delta\int_{\{w\geqslant 1\}}w\log w\; \theta dv^{\mathbb{S}^n}-e^\delta\int_{\{w < 1\}}w\log w\; \theta dv^{\mathbb{S}^n} - \delta\\ &\geqslant -\frac{\beta_2^{n}}{\beta_1^n}\int_{\{w\geqslant 1\}}w\log w \; dv^{\mathbb{S}^n}-\frac{\beta_1^{n}}{\beta_2^n}\int_{\{w < 1\}}w\log w\; dv^{\mathbb{S}^n}+n\log\beta_1.\end{aligned}$$ We can now use the fact that for any positive real number $x$, $x\log x\geqslant -\frac{1}{e}$ : $$\begin{aligned} \mathcal{N}&\left(f+\frac{n}{2}\log(4\pi\tau)\;,\;g^N\right)\geqslant -\frac{\beta_2^{n}}{\beta_1^n}\int_{\{w\geqslant 1\}}w\log w \; dv^{\mathbb{S}^n}-\frac{\beta_1^{n}}{\beta_2^n}\int_{\{w < 1\}}w\log w \; dv^{\mathbb{S}^n}+n\log\beta_1\\ &= -\frac{\beta_2^{n}}{\beta_1^n}\int_{\mathbb{S}^n}w\log w \;dv^{\mathbb{S}^n}+\left(\frac{\beta_2^{n}}{\beta_1^n}-\frac{\beta_1^{n}}{\beta_2^n}\right)\int_{\{w < 1\}}w\log w\; dv^{\mathbb{S}^n}+n\log\beta_1 \\ &\geqslant -\frac{\beta_2^{n}}{\beta_1^n}\int_{\mathbb{S}^n}w\log w\; dv^{\mathbb{S}^n}+\left(\frac{\beta_1^{n}}{\beta_2^n}-\frac{\beta_2^{n}}{\beta_1^n}\right)\frac{vol(\mathbb{S}^n)}{e}+n\log\beta_1,\end{aligned}$$ which is exactly what we stated. In particular, such manifolds satisfy the lower bound $L(\epsilon_1,\; \epsilon_2,\;\epsilon_3)$ defined at the end of section 3 with : $$\left\{ \begin{array}{lll} \epsilon_1 = 1-\frac{\beta_1^{n}}{\beta_2^{n+4}},\\ \epsilon_2 = \frac{\beta_2^{n}}{\beta_1^n}-1,\\ \epsilon_3 = \left(\frac{\beta_1^{n}}{\beta_2^n}-\frac{\beta_2^{n}}{\beta_1^n}\right)\frac{vol(\mathbb{S}^n)}{e}+n\log\beta_1. 
\end{array} \right.$$ In the text, to preserve these estimates along a renormalized Ricci flow, we ask for some more stability of the flow and make use of the Li-Yau-Hamilton inequality generalized by Brendle ; we therefore ask for the positivity of the isotropic curvature when crossed with $\mathbb{R}^2$, a condition which is preserved by the Ricci flow and implied by a positive curvature tensor.\ It is likely that a lower bound on $\lambda^N$ or $\nu^N$ rather than on the scalar curvature is enough to get similar estimates. For a sphere of radius $\beta$, we obtain the following : - If $\beta\geqslant 1$, then : $$\mathcal{W}^{\beta\mathbb{S}^n}(f,\beta^2g^{\mathbb{S}^n},\tau)\geqslant \frac{\tau}{\beta^2}\mathcal{F}^{\mathbb{S}^n}\left(f+C,g^{\mathbb{S}^n}\right)+\mathcal{N}^{\mathbb{S}^n}\left(f+C,g^{\mathbb{S}^n}\right)+\frac{n}{2}\log{4\pi\tau}-n,$$ that is : $1-\epsilon_1 = \frac{1}{\beta^2}$, $\epsilon_2 = 0$ and $\epsilon_3 = 0$. - If $\beta\leqslant 1$, then : $$\mathcal{W}^{\beta\mathbb{S}^n}\left(f,\beta^2g^{\mathbb{S}^n},\tau\right)\geqslant \tau\mathcal{F}^{\mathbb{S}^n}(f+C,g^{\mathbb{S}^n})+\mathcal{N}^{\mathbb{S}^n}(f+C,g^{\mathbb{S}^n})+\frac{n}{2}\log{4\pi\tau}-n+n\log\beta,$$ that is $\epsilon_1 = 0$, $\epsilon_2 = 0$ and $\epsilon_3 = -n\log\beta$. And in dimension $3$, thanks to the explicit $\eta_3$ given in the last section, we can smooth out the cone over a sphere of radius $\beta$ if $\beta\in[0.77,1.05]$. Particular case of positively curved Einstein links =================================================== Here we give the proofs of the remarks made along the paper about positively curved Einstein manifolds, with a particular interest in the sphere. 
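The window $[0.77,1.05]$ quoted above sits inside the interval $\left(\frac{2}{e},\sqrt{\frac{2e}{e+2}}\right)\approx(0.74,1.07)$ computed in the appendix below ; a quick numerical check :

```python
import math

# Endpoints computed in the appendix for dimension 3.
beta1 = 2 / math.e                            # approximately 0.74
beta2 = math.sqrt(2 * math.e / (math.e + 2))  # approximately 1.07
assert abs(beta1 - 0.74) < 0.005
assert abs(beta2 - 1.07) < 0.005
# The smoothing window quoted in the text is strictly inside (beta1, beta2).
assert beta1 < 0.77 and 1.05 < beta2
```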
The behavior of the $\mu$-functional of positively curved Einstein manifolds ---------------------------------------------------------------------------- #### The minimizing function of $\mathcal{W}^N(.,g,\tau)$ is constant if $\tau\geqslant T_N$ Let us consider an Einstein manifold $(N,g)$ such that $$\operatorname{\textup{Ric}}= \frac{1}{2T_N}g$$ (*of shrinking time $T_N$*), and $\phi_N$ its constant potential function such that : $$e^{-\phi_N} vol(N) = 1,$$\ and let us denote by $\phi^\tau$ a function such that $\phi^\tau -\frac{n}{2}\log(4\pi\tau)$ is a minimizer for $\mathcal{W}^N$ at $\tau$ : $$\begin{aligned} \mathcal{W}^N\left(\phi^\tau -\frac{n}{2}\log(4\pi\tau),g^N,\tau\right) = \mu^N(g^N,\tau).\end{aligned}$$\ Then, for all $\tau \geqslant T_N$, $$\begin{aligned} \phi^\tau = \phi^{T_N}.\end{aligned}$$ In particular, it is also a minimizer of the $\mathcal{F}^N$-functional (which is consistent with the fact that when $\tau\to\infty$, $\phi^\tau$ gets closer and closer to a minimizer of the $\mathcal{F}$ functional). Note also that this is not true for $\tau<T_N$. For example, $\phi^{T_{\mathbb{S}^n}}$ is a constant function, but for small $\tau$, $\phi^{\tau}$ looks like a Gaussian on the sphere (see [@CHI]). Also note that the value of $\mathcal{W}^{\mathbb{S}^n}$ with a constant function tends to $+\infty$ when $\tau\to 0$.\ It is also false for shrinking solitons. 
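Two elementary consistency checks for the Einstein case (a sketch with assumed toy values $n=3$, $T_N=\frac{1}{2}$, $vol(N)=10$) : the constant potential is $\phi_N=\log vol(N)$, and, since $\lambda^N = \operatorname{\textup{R}} = \frac{n}{2T_N}$ here, the two expressions for $\mu^N$ at $\tau = T_N$ appearing at the end of this subsection agree :

```python
import math

# Assumed toy values: dimension n, shrinking time T, volume vol.
n, T, vol = 3, 0.5, 10.0
lam = n / (2 * T)          # lambda = R = n/(2T) for Ric = g/(2T)
phi = math.log(vol)        # constant potential: e^{-phi} vol = 1
assert abs(math.exp(-phi) * vol - 1) < 1e-12

# Consistency of the two expressions for mu at tau = T:
#   mu(T) = T*lambda + log(vol) - (n/2) log(4 pi T) - n
#         = n/2     + log(vol) - (n/2) log(4 pi T) - n
mu1 = T * lam + math.log(vol) - (n / 2) * math.log(4 * math.pi * T) - n
mu2 = n / 2 + math.log(vol) - (n / 2) * math.log(4 * math.pi * T) - n
assert abs(mu1 - mu2) < 1e-12
```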
Let us choose $\tau_0 \geqslant T_N$.\ \ There exists $0\leqslant t_0<T_N$ such that $$\tau_0 = \frac{T_N}{1-\frac{t_0}{T_N}}.$$\ Let us also consider $N$ a positively curved Einstein manifold of shrinking time $T_N$, that is : $$\begin{aligned} g^N(t) = \left(1-\frac{t}{T_N}\right)g^N(0).\end{aligned}$$ #### Let us compute $\mathcal{W}^N\left(\left[\phi^{T_N}+\frac{n}{2}\log\frac{\tau_0}{T_N}\right],g^N(0),\tau_0\right)$ :\ The Einstein manifold has a shrinking soliton structure ($\operatorname{\textup{Ric}}+ \mathcal{L}_{\nabla \phi^{T_N}}g = \frac{1}{2T_N}g$), with $\phi^{T_N} = \phi_N$.\ Thus, $\phi^{T_N}$ is also a minimizer of $\mathcal{F}^N$, so we have : $$\begin{aligned} \mu^N(g^N,\tau_0) \leqslant \mathcal{W}^N\left(\left[\phi^{T_N}-\frac{n}{2}\log(4\pi \tau_0)\right],g^N,\tau_0\right) =& \mathcal{W}^N\left(\left[\phi^{T_N}-\frac{n}{2}\log(4\pi T_N)\right],g^N,T_N\right)\\ &+ \lambda^N(\tau_0-T_N)-\frac{n}{2}\log\frac{\tau_0}{T_N}\\ =&\mu(g^N,T_N)+ \lambda^N(\tau_0-T_N)-\frac{n}{2}\log\frac{\tau_0}{T_N}.\end{aligned}$$ So there is equality in the inequality $$\begin{aligned} \mu^N(\tau) \geqslant \mu^N(T_N)+ \lambda^N(\tau-T_N)-\frac{n}{2}\log\frac{\tau}{T_N},\end{aligned}$$ so for all $T_N<\tau<\tau_0$, $\phi^\tau$ minimizes $\mathcal{W}^N(.,g^N,\tau)$,\ that is : $$\phi^{\tau_0} = \phi^{T_N}.$$ As a consequence : if $N$ is a positively curved Einstein manifold such that $$\operatorname{\textup{Ric}}= \frac{1}{2T_N}g$$ (*of shrinking time $T_N$*),\ then, for all $\tau\geqslant T_N$, the minimizing function for $\mathcal{W}^N(.,g^N,\tau)$ is constant, and : $$\begin{aligned} \mu^N(\tau) = \mu^N(T_N)+ \lambda^N(\tau-T_N)-\frac{n}{2}\log\frac{\tau}{T_N}, \end{aligned}$$ that is : If $\tau\geqslant T_N$, then : $$\begin{aligned} \mu^N(\tau,g^N) = \tau\lambda^N+\log(vol(N))-\frac{n}{2}\log(4\pi\tau)-n. 
\end{aligned}$$ If $\tau\leqslant T_N$, then : $$\begin{aligned} 0\geqslant\mu^N(\tau,g^N) \geqslant \mu^N(T_N,g^N) = \frac{n}{2}+\log(vol(N))-\frac{n}{2}\log(4\pi T_N)-n. \end{aligned}$$ In particular, the $\mu$-functional of a manifold with $\operatorname{\textup{Ric}}= g$ for large $\tau$ only depends on the volume of the manifold. Smoothing rotationally symmetric cones by manifolds of high entropy ------------------------------------------------------------------- In this appendix, we provide an explicit computation of the smoothing process of the last section of the paper applied to spheres. This will illustrate the last step of the construction of a renormalization of the Ricci flow preserving a $P(\beta_1',\beta_2')$ property, where a ball centered at the “tip” of the manifold is arbitrarily close to that of a cone over a sphere. Since these are Einstein manifolds, there is no need to use a renormalized flow (it would be constant), but only the last step of scaling the link. The result is : For all $n$, there exist $\beta_1(n)$ and $\beta_2(n)$ such that :\ For all $\beta\in(\beta_1,\beta_2)$, the cone $C(\beta\mathbb{S}^n)$ satisfies $$\nu^{C(\beta\mathbb{S}^n)}> \eta_n,$$ where $\eta_n$ is the number defined in the global pseudolocality lemma. Moreover, there exists $M_\beta$ a manifold smoothing it out such that : $$\nu^{M_\beta}>\eta_n.$$ In dimension $3$, given the value of $\eta_3$, it is possible to choose $\beta_1(3) = \frac{2}{e} \approx 0.74$ and $\beta_2(3) = \sqrt{\frac{2e}{e+2}} \approx 1.07$. The first statement is a direct consequence of the lower bounds found on the $\nu$-functional of cones thanks to some closeness to the unit sphere. So we will now focus on the construction of the manifold $M_\beta$.\ \ We will smooth the cone with Euclidean space around its tip via a warped product : $(M,g) = (\mathbb{R}^+\times \mathbb{S}^n,dr^2+h^2g^{\mathbb{S}^n})$.\ Let us define $h : [0,\infty) \to \mathbb{R}^+$ by : 1. 
$h(r) = \frac{r}{\beta}$ for $0\leqslant r\leqslant 1$, 2. $h(r) = \frac{r}{\beta}+ \frac{(r-1)^2}{2\beta A}$ for $1\leqslant r\leqslant b = 1+A(\beta-1)$, 3. $h(r) = r-b+h(b)$ for $r\geqslant b$. This gives : 1. $h'(r) = \frac{1}{\beta}$ for $0\leqslant r\leqslant 1$, 2. $h'(r) = \frac{1}{\beta}+ \frac{r-1}{\beta A}$ for $1\leqslant r\leqslant b = 1+A(\beta-1)$, 3. $h'(r) = 1$ for $r\geqslant b$, and 1. $h''(r) = 0$ for $0\leqslant r\leqslant 1$, 2. $h''(r) = \frac{1}{\beta A}$ for $1\leqslant r\leqslant b = 1+A(\beta-1)$, 3. $h''(r) = 0$ for $r\geqslant b$. Let us now consider $(M,g) = (\mathbb{R}^+\times \beta \mathbb{S}^n,dr^2+h^2g_{\beta\mathbb{S}^n})$ and $f: M\to \mathbb{R}$ such that $\int_M \frac{e^{-f}}{(4\pi\tau)^{\frac{n+1}{2}}}dv = 1$, and write $N = \beta \mathbb{S}^n$.\ Here, we define $w$ by $w^2 = e^{-f}$ ; it satisfies $\int_M w^2dv = (4\pi\tau)^{\frac{n+1}{2}}$. Using the following general formula for the entropy of a warped product : $$\begin{aligned} \mathcal{W}^M(f,dr^2+h^2g_N,\tau) =& \int_0^{+\infty}h^n\int_N\left[\tau\left(4(\partial_r w)^2 +\frac{4|\nabla^Nw|^2+(\operatorname{\textup{R}}^N-n(n-1)(h')^2-2n h h'')w^2}{h^2}\right)\right. \\ &\left.- w^2 \log(w^2)-(n+1) w^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv dr,\end{aligned}$$ we get : $$\begin{aligned} \mathcal{W}^M(f,dr^2+h^2g_N,\tau) =& \int_0^{1}\left(\frac{r}{\beta}\right)^n\int_N\left[\tau\left(4(\partial_r w)^2 +\frac{4|\nabla^Nw|^2}{(\frac{r}{\beta})^2}\right)\right. \\ &\left.- w^2 \log(w^2)-(n+1) w^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv dr\\ &+\int_1^{b}h^n\int_N\left[\tau\left(4(\partial_r w)^2 +\frac{4|\nabla^Nw|^2+(n(n-1)(\beta^{-2}-(h')^2)-2n h h'')w^2}{h^2}\right)\right. \\ &\left.- w^2 \log(w^2)-(n+1) w^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv dr\\ &+\int_{h(b)}^{+\infty}r^n\int_N\left[\tau\left(4(\partial_r w)^2 +\frac{4|\nabla^Nw|^2+n(n-1)(\beta^{-2}-1)w^2}{r^2}\right)\right. 
\\ &\left.- w^2 \log(w^2)-(n+1) w^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv dr,\\ &=(1)+(2)+(3)\end{aligned}$$ where we have written out the last term to show that it corresponds to what we would get with $C(N)$. In the second term, for $A$ big enough, the term involving $h''$ is like $A^{-1}$ times the term in $h'$. We will assume that $A$ is big enough to neglect $h''$ (we only have strict inequalities in the end).\ Let us consider $v$ a function on $C(N)$ such that : $$\int_{C(N)}v^2(4\pi\tau)^{-\frac{n+1}{2}}dv = 1.$$\ We are going to define $w=\phi(v)$ a function on $M$ such that : $$\int_M w^2(4\pi\tau)^{-\frac{n+1}{2}}dv = 1,$$ where $\phi$ is a natural one to one correspondence. We will then prove that the entropy of $w$ on $M$ is not much smaller than the entropy of $v$ on $C(N)$. Define : $$w(r,.) = \sqrt{h'(r)}v(h(r),.),$$ so that $$\int_M w^2(4\pi\tau)^{-\frac{n+1}{2}}dv = 1.$$ And by the change of variable $\rho = h(r)$ (using $w^2 = h'\, v^2\circ h$, so that $\log w^2 = \log v^2 + \log h'$), we get : $$\begin{aligned} \mathcal{W}^M&(f,dr^2+h^2g_N,\tau) = \int_0^{\frac{1}{\beta}}(\rho)^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2}{\rho^2}\right)\right. \\ &\left.- v^2 \left(\log(v^2)+n\log\beta\right)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &+\int_{\frac{1}{\beta}}^{h(b)}\rho^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2+n(n-1)\left[\beta^{-2}-\left(h'\circ h^{-1}(\rho)\right)^2\right]\left(1+\mathcal{O}(\frac{1}{A})\right)v^2}{\rho^2}\right)\right. \\ &\left.- v^2 \left(\log(v^2)+\log\left(h'\circ h^{-1}(\rho)\right)\right)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &+\int_{h(b)}^{+\infty}\rho^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2+n(n-1)(\beta^{-2}-1)v^2}{\rho^2}\right)\right. 
\\ &\left.- v^2 \log(v^2)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &=(1)+(2)+(3).\end{aligned}$$ Now, what we would have got with the cone is : $$\begin{aligned} \mathcal{W}^{C(N)}(v,dr^2+r^2g_N,\tau) =& \int_0^{\frac{1}{\beta}}(\rho)^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2+n(n-1)(\beta^{-2}-1)v^2}{\rho^2}\right)\right. \\ &\left.- v^2 \log(v^2)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &+\int_{\frac{1}{\beta}}^{h(b)}\rho^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2+n(n-1)(\beta^{-2}-1)v^2}{\rho^2}\right)\right. \\ &\left.- v^2 \log(v^2)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &+\int_{h(b)}^{+\infty}\rho^n\int_N\left[\tau\left(4(\partial_\rho v)^2 +\frac{4|\nabla^Nv|^2+n(n-1)(\beta^{-2}-1)v^2}{\rho^2}\right)\right. \\ &\left.- v^2 \log(v^2)-(n+1) v^2\right](4\pi\tau)^{-\frac{n+1}{2}}dv d\rho\\ &=(1')+(2')+(3').\end{aligned}$$ If we assume $\beta>1$, by taking the difference of these two expressions, we get : $$\mathcal{W}^{M}(w,dr^2 + h^2g_N,\tau)-\mathcal{W}^{C(\beta\mathbb{S}^n)}(v,dr^2 + r^2g_N,\tau) >-\left(n\log \beta+\mathcal{O}\left(\frac{1}{A}\right)\right).$$ In the case of $\beta<1$, it is more convenient to compare to the Euclidean space. The computation is exactly the same, and we get : $$\mathcal{W}^{M}(w,dr^2 + h^2g_N,\tau)-\mathcal{W}^{\mathbb{R}^{n+1}}(v,dr^2 + r^2g_{\mathbb{S}^n},\tau) >n\log \beta+\mathcal{O}\left(\frac{1}{A}\right).$$ It is crucial that the right-hand side does not depend on $\tau$. In both cases, if $\beta$ is close enough to $1$ and $A$ large enough in our parametrization, we have $\nu^M>\eta_n$. Renormalizations of the Ricci flow ================================== In this section we first introduce a few renormalizations of the Ricci flow and discuss their different properties. We will then use them, on the one hand, to smooth out some cones by manifolds with higher entropy and, on the other hand, to define a flow on the link alone that increases the entropy of a cone. 
Let us define a few renormalizations of the Ricci flow. Some of them are motivated by keeping a quantity constant, and others by the fact that some of Perelman’s quantities increase at fixed $\tau$. All of the flows introduced will act both on $g$ and on a potential $f$. We will consider renormalizations of the following form : $$\left\{ \begin{array}{ll} \partial_t g &= -2\left( \operatorname{\textup{Ric}}+Hess f-\frac{\alpha}{n}g \right)\\ \partial_t f &= -\Delta f-\operatorname{\textup{R}}+\alpha \end{array} \right.$$ We will be choosing some quite natural values for $\alpha$ (many other values of $\alpha$ also give the flow interesting properties). Evolution of some geometric quantities along the flows ------------------------------------------------------ Let us compute the evolution of Perelman’s quantities and other geometric quantities along such a flow. #### **The scalar curvature** We have the following expression : $$\begin{aligned} \partial_t \operatorname{\textup{R}}&= \Delta \operatorname{\textup{R}}+ 2\left<\operatorname{\textup{Ric}},\operatorname{\textup{Ric}}-\frac{\alpha}{n}g\right> + \mathcal{L}_{\nabla f}\operatorname{\textup{R}}\nonumber\\ &=\Delta \operatorname{\textup{R}}+ 2|\operatorname{\textup{Ric}}|^2-2\frac{\alpha}{n}\operatorname{\textup{R}}+ \mathcal{L}_{\nabla f}\operatorname{\textup{R}}\nonumber\\ &\geqslant \Delta \operatorname{\textup{R}}+ 2\frac{\operatorname{\textup{R}}-\alpha}{n}\operatorname{\textup{R}}+ \mathcal{L}_{\nabla f}\operatorname{\textup{R}}, \label{varR}\end{aligned}$$ where the last step uses $|\operatorname{\textup{Ric}}|^2\geqslant \frac{\operatorname{\textup{R}}^2}{n}$. #### **The volume** We have the following variation of the volume : $$\begin{aligned} \partial_t\left(vol(N)\right) &= \int_N\left(\alpha-\operatorname{\textup{R}}-\Delta f\right)dv \nonumber\\ &= \left(\alpha-\operatorname{\textup{R}}_{av}\right)vol(N), \label{varvol}\end{aligned}$$ where $\operatorname{\textup{R}}_{av}$ is the average value of the scalar curvature. 
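The pointwise inequality $|\operatorname{\textup{Ric}}|^2\geqslant \frac{\operatorname{\textup{R}}^2}{n}$ used for the scalar curvature is just the Cauchy-Schwarz inequality for the trace ; a quick numerical check on random symmetric matrices standing in for the Ricci tensor :

```python
import random

random.seed(1)
n = 3

# |Ric|^2 >= R^2 / n  (Cauchy-Schwarz: <S, Id>^2 <= |S|^2 |Id|^2 = n |S|^2),
# checked on random symmetric n x n matrices standing in for Ric.
for _ in range(1000):
    A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
    norm2 = sum(S[i][j] ** 2 for i in range(n) for j in range(n))  # |Ric|^2
    trace = sum(S[i][i] for i in range(n))                         # R
    assert norm2 >= trace ** 2 / n - 1e-12
```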
#### **Perelman’s $\mathcal{F}$-functional** The derivative for $\mathcal{F}(f)$ is : $$\begin{aligned} \partial_t\left(\mathcal{F}(f(t),g(t))\right) &= \int_N\left<\operatorname{\textup{Ric}}+Hess f-\frac{\alpha}{n}g,\operatorname{\textup{Ric}}+Hess f\right>e^{-f}dv \nonumber\\ &=\int_N\left|\operatorname{\textup{Ric}}+Hess f\right|^2e^{-f}dv-\frac{\alpha}{n}\mathcal{F}(f). \label{varF}\end{aligned}$$ #### **Perelman’s $\mathcal{W}$-functional at $\tau$ fixed** $$\begin{aligned} \partial_t\left(\mathcal{W}(f(t),g(t),\tau)\right) &= \int_N\left<\operatorname{\textup{Ric}}+Hess f-\frac{\alpha}{n}g,\operatorname{\textup{Ric}}+Hess f-\frac{1}{2\tau}g\right>e^{-f}dv\nonumber \\ &=\int_N\left|\operatorname{\textup{Ric}}+Hess f\right|^2e^{-f}dv-\left(\frac{\alpha}{n}+\frac{1}{2\tau}\right)\mathcal{F}(f)+\frac{\alpha}{2\tau}. \label{varW}\end{aligned}$$ #### **Inequalities among some natural quantities** :\ We have the following inequalities : $$\begin{aligned} \operatorname{\textup{R}}_{min}\leqslant \lambda^N\leqslant \operatorname{\textup{R}}_{av}\leqslant \operatorname{\textup{R}}_{max}\leqslant \frac{n}{2T_N},\end{aligned}$$ and note that equalities hold if the manifold is a positively curved Einstein manifold. Volume preserving - $\alpha = \operatorname{\textup{R}}_{av}$ ------------------------------------------------------------- Here we recall the standard renormalization introduced by Hamilton.\ It is a flow that preserves the volume of the whole manifold. It is useful to keep track of the limiting object, thanks to its volume and some symmetries for example.\ Along this flow, $$\left\{ \begin{array}{lllll} \partial_t \operatorname{\textup{R}}_{av} &\geqslant 0,\\ \partial_t vol &= 0,\\ \partial_t \operatorname{\textup{R}}_{min}&\geqslant 2\frac{\operatorname{\textup{R}}_{min}}{n}\left(\operatorname{\textup{R}}_{min}-\operatorname{\textup{R}}_{av}\right),\\ \partial_t \lambda &?\\ \partial_t \mu &?\\ \partial_t T_N&\geqslant 0. 
\end{array} \right.$$ Shrinking time preserving - $\alpha = \frac{n}{2T_N}$ ----------------------------------------------------- We will first consider Ricci flows that go extinct in finite time while getting close to a sphere, to reduce our problem to the case of cones over spheres that we have already taken care of. For that, we will define $T_N$ (where we have abusively noted $N = (N,g)$) such that : $$\begin{aligned} \left(\frac{1}{2(n-1)(T_N-t)}g_t\right) \xrightarrow[t\to T_N]{} g^{\mathbb{S}^n},\label{cvtoS}\end{aligned}$$ where $t\mapsto g_t$ evolves according to the Ricci flow equation and starts at $g_0 = g$. This leads to considering the flow : $$\left\{ \begin{array}{ll} \partial_t \hat{g} &= -2\left( \operatorname{\textup{Ric}}+Hess \hat{f}-\frac{1}{2T_N}\hat{g} \right),\\ \partial_t \hat{f} &= -\Delta \hat{f}-\operatorname{\textup{R}}+\frac{n}{2T_N}. \end{array} \right.$$ The reason is the following : writing (abusively) $\hat{N}(t) = (N,\hat{g}(t))$, along the renormalized flow just introduced the shrinking time is preserved : Along the renormalized flow, for all $t$ : $$T_{\hat{N}(t)} = T_{\hat{N}(0)}.$$ Let us consider $(N,g)$ such that the convergence above is satisfied.\ Let us consider two families of metrics starting at $g_0$ : - $(g_t)_t$ starting at $g_0$ evolving according to the Ricci flow equation. - $(\hat{g}_t)_t$ starting at $g_0$ evolving according to the renormalized Ricci flow equation. By integrating the equation satisfied by $\hat{g}_t$, we have that, with $$\psi(t) = -T_N\log\left(1-\frac{t}{T_N} \right)\in[0,+\infty),$$ $$\begin{aligned} \hat{g}_{\psi(t)} = \frac{1}{\left(1-\frac{t}{T_N}\right)}g_{t}. \label{exphatt}\end{aligned}$$ Choosing a $t_0$ and starting a Ricci flow $t\mapsto \tilde{g}^{t_0}_t$ at $\hat{g}_{t_0}$, by translation by $t_0$ and parabolic scaling by $\left(1-\frac{t_0}{T_N}\right)$ we get : 
$$\begin{aligned} \left(\frac{1}{2(n-1)(T_N-t)}\tilde{g}^{t_0}_t\right) &= \frac{1}{2(n-1)(T_N-t)\left(1-\frac{t_0}{T_N}\right)}g_{t_0+\left(1-\frac{t_0}{T_N}\right)t} \nonumber \\ &=\left(\frac{1}{2(n-1)(T_N-\tilde{t})}g_{\tilde{t}}\right),\label{**}\end{aligned}$$ with $\tilde{t} = t_0+\left(1-\frac{t_0}{T_N}\right)t$.\ Now, by hypothesis : $$\begin{aligned} \left(\frac{1}{2(n-1)(T_N-\tilde{t})}g_{\tilde{t}}\right) \xrightarrow[\tilde{t}\to T_N]{} g^{\mathbb{S}^n},\end{aligned}$$ and by the identity above, $T_{\hat{N}(t)} = T_N$. Along this flow, $$\left\{ \begin{array}{lllll} \partial_t \operatorname{\textup{R}}_{av} & ?\\ \partial_t vol &\geqslant 0\\ \partial_t \operatorname{\textup{R}}_{min}&\geqslant 2\frac{\operatorname{\textup{R}}_{min}}{n}\left(\operatorname{\textup{R}}_{min}-\left(\frac{n}{2T_N}\right)\right)\\ \partial_t \lambda &?\\ \partial_t \mu(f_t,g_t,\tau) &\geqslant 0 \text{, if $\tau\leqslant\frac{n}{2\mathcal{F}(f_\mathcal{W})}\leqslant T_N$}\\ \partial_t T_N& = 0. \end{array} \right.$$ Blow down of type III immortal solutions with high $\nu$-functional =================================================================== We have been able to construct some nonsingular Ricci flows that have a global curvature decay in $\frac{C}{t}$. We call them *type III* solutions of the Ricci flow. The goal would be to construct Ricci flows smoothing out cones thanks to them. For this purpose, we want to look at what is happening at large times ; this is done by parabolically scaling down our Ricci flow and taking a limit.\ *Scaling down* a Ricci flow $(M,g(t))_t$ corresponds intuitively to sending every positive time to $+\infty$ while keeping the metric from blowing up. Formally, it means looking at the following sequence of Ricci flows (and at its limit when $s\to \infty$, if it exists) : 
$$\begin{aligned} g_s(t) = \frac{1}{s}g(st).\end{aligned}$$ Hamilton’s compactness theorem for Ricci flows ---------------------------------------------- Let us start by presenting Hamilton’s compactness theorem for Ricci flows from [@ham], which will let us take (sub)limits of Ricci flows, and in particular of sequences of blowdowns under some assumptions. Given $r_0 \in (0, \infty]$, let $\{g_i(t)\}_i$ be a sequence of Ricci flow solutions on connected *pointed* manifolds $(M_i,g_i(t), m_i)_t$, defined for $t \in (A, B)$ with $-\infty \leqslant A < 0 < B \leqslant \infty$ (note that it is also possible to take a limit of intervals).\ \ We assume that for all $i$, $M_i$ equals the time-zero ball $B_0(m_i, r_0)$ and for all $r \in (0, r_0)$, $\overline{B_0(m_i, r)}$ is compact. Suppose that the following two conditions are satisfied : - For each $r \in (0, r_0)$ and each compact interval $I \subset (A, B)$, there is an $N_{r,I} < \infty$ so that for all $i$, $$\sup_{B_0(m_i ,r)\times I}| \operatorname{\textup{Rm}}(g_i)| \leqslant N_{r,I}.$$ - The time-$0$ injectivity radii at $m_i$ are bounded from below :\ There exists $\rho>0$ such that : $$inj_{g_i(0)}(m_i)>\rho>0.$$ Then after passing to a subsequence, the solutions converge smoothly to a Ricci flow solution $g_\infty(t)$ on a connected pointed manifold $(M_\infty, m_\infty)$, defined for $t \in (A, B)$, for which :\ $M_\infty = B_0(m_\infty, r_0)$ and $\overline{B_0(m_\infty, r)}$ is compact for all $r \in (0, r_0)$. That is, for any compact interval $I \subset (A, B)$ and any $r < r_0$, there are pointed time-independent diffeomorphisms $\phi_{r,i} : B_0(m_\infty, r) \to B_0(m_i, r)$ so that $(\phi_{r,i} \times \operatorname{Id})^*g_i$ converges smoothly to $g_\infty$ on $B_0(m_\infty, r) \times I$. 
Lower bound on the injectivity radius ------------------------------------- To look at what’s happening at large times, we want to define a sublimit for $$\begin{aligned} (M,g_s(t),p)\end{aligned}$$ for some point $p$ in the manifold.\ Thanks to the bound : $$|\operatorname{\textup{Rm}}_g|(.,t)\leqslant \frac{C}{t},$$ we have $$|\operatorname{\textup{Rm}}_{g_s}|(.,t) = s|\operatorname{\textup{Rm}}_g|(.,st)\leqslant s\frac{C}{st} = \frac{C}{t}.$$ In particular, for positive times, there is no problem to get a uniform bound on $|\operatorname{\textup{Rm}}_{g_s}|$.\ It remains to get a positive lower bound on the injectivity radius. The injectivity radius is not a very convenient quantity to keep track of along a flow, so we are first going to recall a theorem by Cheeger, Gromov and Taylor in [@cgt] : Let $(M, g)$ be a complete Riemannian manifold such that : - There exists a constant $K\geqslant 0$ with $$|\operatorname{\textup{Rm}}| \leqslant K,$$ - There exists a point $p \in M$ and a constant $v_0 > 0$ such that $$Vol_g(B_g(p, r)) \geqslant v_0r^n.$$ Then there exists a positive constant $i_0 = i_0(n,Kr^2,v_0)$ such that $$inj_g(p)>i_0 r>0.$$ Now, in our case, to apply Hamilton’s compactness theorem, 
we would like to have $$\liminf_{t\to\infty} \left(\frac{inj_{g(t)}(p)}{\sqrt{t}}\right) > 0.$$ The particularity of our manifolds is that they have a quite large $\mu$-functional *for all $\tau$*.\ \ So, adapting the proof of the first no local collapsing theorem of Perelman (thanks to the $\mu$-functional), we get : Suppose that : - There exists $A>0$ such that : $$\nu^M(g(0)) > -A.$$ - There exists $C>0$, such that, for all $t>0$, $$|\operatorname{\textup{Rm}}|(.,t)\leqslant \frac{C}{t}.$$ Then, there exists $v_0(A,C,n)>0$ such that :\ For all $t> 0$ : $$Vol_{g(t)}(B_{g(t)}(p, \sqrt{t}))\geqslant v_0 t^{\frac{n}{2}}.$$ From the proof of Theorem 13.3 in the notes of Kleiner and Lott, if $Vol_{g(t)}(B_{g(t)}(p, \sqrt{t}))\to 0$ when $t\to \infty$, then for a well-chosen sequence $f_k$ such that $\int_M\frac{e^{-f_k}}{(4\pi t_k)^{\frac{n}{2}}} = 1$, $$\mathcal{W}(f_k,g(t_k),t_k)\to -\infty.$$\ In particular, there would be $k_0$ such that : $$\begin{aligned} \mu(g(t_{k_0}),t_{k_0})\leqslant\mathcal{W}(f_{k_0},g(t_{k_0}),t_{k_0})\leqslant -A, \label{izi}\end{aligned}$$ but, since $t\mapsto \mu\left(g(t),(2t_{k_0})-t\right)$ is nondecreasing, we get from the inequality above that : $$\begin{aligned} \mu(g(0),2t_{k_0})&\leqslant\mu(g(t_{k_0}),2t_{k_0}-t_{k_0})\\ &=\mu(g(t_{k_0}),t_{k_0})\\ &\leqslant -A,\end{aligned}$$ which contradicts the hypothesis $\nu^M(g(0)) > -A$. 
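The scale invariance used in this subsection (both for the curvature bound and for the volume lower bound) can be checked by a one-line computation (a sketch, with arbitrary values of $C$, $s$, $t$ and a toy dimension $d$) :

```python
# The parabolic rescaling g_s(t) = (1/s) g(st) turns the type III bound
# |Rm_g|(., t) <= C/t into the same bound for g_s:
#   |Rm_{g_s}|(., t) = s |Rm_g|(., st) <= s * C/(s t) = C/t.
C = 5.0
for s in (0.5, 2.0, 100.0):
    for t in (0.1, 1.0, 10.0):
        assert abs(s * (C / (s * t)) - C / t) < 1e-12

# Likewise, in dimension d, distances scale by s^{-1/2} and volumes by
# s^{-d/2}, so a bound Vol(B(p, sqrt(t))) >= v0 t^{d/2} is scale invariant:
#   s^{-d/2} (s t)^{d/2} = t^{d/2}.
d = 3
for s in (0.5, 2.0, 100.0):
    for t in (0.1, 1.0, 10.0):
        assert abs(s ** (-d / 2) * (s * t) ** (d / 2) - t ** (d / 2)) < 1e-9
```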
As a direct application of the Cheeger-Gromov-Taylor theorem, we get: \[curv et inj\] For any Riemannian manifold $(M,g(0))$ such that: - The Ricci flow starting at $(M,g(0))$ is a type III solution:\ $\exists C>0$ such that: $$|\operatorname{\textup{Rm}}(.,t)|\leqslant \frac{C}{t}.$$ - There exists $A>0$ such that: $$\nu(g(0))\geqslant -A.$$ Then, defining $g_s(t) = \frac{1}{s}g(st)$,\ we have for any $t_0>0$: - $|\operatorname{\textup{Rm}}_{g_s(t_0)}| \leqslant \frac{C}{t_0}$ (uniform in $s$), - $inj_{g_s(t_0)}(p)\geqslant i_0\sqrt{t_0}>0$ where $i_0 = i_0(n,C,v_0)$ and $v_0 = v_0(n,C,A)$ is coming from the last lemma (it is again uniform in $s$). As a consequence, for every type III solution with a uniform lower bound on its $\mu$-functional, one can extract a scaling-down sublimit by Hamilton’s compactness theorem. The corollary of the next part states this fact more precisely in our case. Scaling down manifolds of large $\mu$-functional ------------------------------------------------ Let us state a corollary of the global pseudolocality coupled with Hamilton’s compactness theorem: For any Riemannian manifold $(M,g(0))$ such that $$\nu^M(g(0))> -\eta_n,$$ there exists an immortal type III solution of the Ricci flow starting at $(M,g(0))$: $t \mapsto g(t)$.\ Moreover, the scaling-down sequence $$\begin{aligned} (M,g_k(t),p)_k = \left(M,\frac{1}{k}g(kt),p\right)_k\end{aligned}$$ has a sublimit which is an immortal type III solution of the Ricci flow. Furthermore: 1. If $(M,g_0)$ is asymptotic to a cone $C(N)$, then the sublimit of blow-downs is a Ricci flow coming out of the cone $C(N)$ and asymptotic to it at all times, see sections 5.1 and 5.2 in [@lz] (where it is possible to use the global pseudolocality rather than the usual pseudolocality result). 2. If $\operatorname{\textup{Ric}}\geqslant 0$ along the flow, then the blow-down sublimits are gradient expanding solitons by [@Ma].
Thanks to the global pseudolocality, we know that a Ricci flow exists for all positive times and is a type III solution of the Ricci flow. Let us first take a sublimit by Hamilton’s compactness theorem:\ The two hypotheses to check in order to take a sublimit in the sense of Hamilton are satisfied thanks to Corollary \[curv et inj\].\ So we get a sublimit $(M_\infty,g_\infty)$. If the Ricci flow is asymptotic to a cone, then by section 5 of [@lz], the sublimit flow comes out of the cone. Note that if we had an actual limit, we could argue that we have an expanding soliton in the limit. [References]{} Richard Bamler, *Long-time behavior of 3 dimensional Ricci flow – A: Generalizations of Perelman’s long-time estimates*, to appear in Geometry and Topology Richard Bamler ; Davi Maximo, *Almost-rigidity and the extinction time of positively curved Ricci flows*, Math. Annalen, to appear Paul Beesack, *Hardy’s inequality and its extensions*, Pacific Journal of Mathematics Vol. 11, No. 1, November 1961 Simon Brendle ; Richard M. Schoen, *Manifolds with 1/4-pinched Curvature are Space Forms*, Journal of the American Mathematical Society Volume 22, Number 1, January 2009, Pages 287–307 Huai-Dong Cao ; Richard S. Hamilton ; Tom Ilmanen, *Gaussian densities and stability for some Ricci solitons*, Fabio Cavalletti ; Andrea Mondino, *Sharp geometric and functional inequalities in metric measure spaces with lower Ricci curvature bounds*, Geometry and Topology 21 (2017) 603–645 Jeff Cheeger ; Mikhail Gromov ; Michael Taylor, *Finite propagation speed, kernel estimates for functions of the Laplace operator, and the geometry of complete Riemannian manifolds*, J. Differential Geom. Volume 17, Number 1 (1982), 15-53.
Bennett Chow ; Sun-Chin Chu ; David Glickenstein ; Christine Guenther ; James Isenberg ; Tom Ivey ; Dan Knopf ; Peng Lu ; Feng Luo ; Lei Ni, *The Ricci flow: I, II, III and IV*, Mathematical Surveys and Monographs Xianzhe Dai ; Changliang Wang, *Generalization of Perelman’s $\lambda$- and $\nu$-functionals*, unpublished. Alix Deruelle, *Géométrie à l’infini de certaines variétés riemanniennes non compactes* (in French), PhD thesis. Alix Deruelle, *Smoothing out positively curved metric cones by Ricci expanders*, Geometric and Functional Analysis, February 2016, Volume 26, Issue 1, pp 188–249 Michael Feldman ; Tom Ilmanen ; Dan Knopf, *Rotationally symmetric shrinking and expanding gradient Kähler-Ricci solitons*, J. Differential Geom. Volume 65, Number 2 (2003), 169-209. Michael Feldman ; Tom Ilmanen ; Lei Ni, *Entropy and reduced distance for Ricci expanders*, The Journal of Geometric Analysis, March 2005, Volume 15, Issue 1, pp 49–62 Panagiotis Gianniotis ; Felix Schulze, *Ricci flow from spaces with isolated conical singularities*, preprint. Richard Hamilton, *A Compactness Property for Solutions of the Ricci Flow*, American Journal of Mathematics Vol. 117, No. 3 (Jun., 1995), pp. 545-572 Gerhard Huisken, *Ricci deformation of the metric on a Riemannian manifold*, J. Differential Geom. Volume 21, Number 1 (1985), 47-62. Bruce Kleiner ; John Lott, *Notes on Perelman’s papers*, Geometry and Topology 12 (2008) 2587–2858 John Lott ; Zhou Zhang, *Ricci flow on quasiprojective manifolds II*, Duke Math. J. Volume 156, Number 1 (2011), 87-123. Li Ma, *Ricci expanders and type III Ricci flow*, preprint. Lei Ni, *The Entropy Formula for Linear Heat Equation*, The Journal of Geometric Analysis Volume 14, Number 1, 2004 Lei Ni, *Addenda to “The entropy formula for linear heat equation”*, The Journal of Geometric Analysis Volume 14, Number 2, 2004 Gregory Perelman, *The entropy formula for the Ricci flow and its geometric applications*, preprint.
Gregory Perelman, *Ricci flow with surgery on three-manifolds*, preprint. Felix Schulze ; Miles Simon, *Expanding solitons with nonnegative curvature operators coming out of cones*, Mathematische Zeitschrift, October 2013, Volume 275, Issue 1–2, pp 625–639 Miles Simon, *Deformation of $C^0$ Riemannian metrics in the direction of their Ricci curvature*, Communications in Analysis and Geometry Volume 10, Number 5, 1033-1074, 2002 <span style="font-variant:small-caps;">Département de mathématiques et applications, École Normale Supérieure, PSL Research University, 45 rue d’Ulm, Paris, France, 75005.</span> *E-mail address*, `tristan.ozuch-meersseman@ens.fr`
--- abstract: 'Anatase TiO$_2$ (a-TiO$_2$) exhibits a strong X-ray absorption linear dichroism with the X-ray incidence angle in the pre-edge, the XANES and the EXAFS at the titanium K-edge. In the pre-edge region the behaviour of the A1-A3 and B peaks, originating from the 1s-3d transitions, is due to the strong $p$-orbital polarization and the strong $p-d$ orbital mixing. An unambiguous assignment of the pre-edge peak transitions is made in the monoelectronic approximation with the support of *ab initio* finite difference method calculations and spherical tensor analysis, in quantitative agreement with the experiment. Our results suggest that previous studies relying on octahedral crystal field splitting assignments are not accurate, due to the significant *p-d* orbital hybridization induced by the broken inversion symmetry in a-TiO$_2$. It is found that A1 is mostly an on-site 3d-4p hybridized transition, while peaks A3 and B are non-local transitions, with A3 being mostly dipolar and influenced by the 3d-4p intersite hybridization, while B is due to interactions at longer range. Peak A2, which was previously assigned to a transition involving pentacoordinated titanium atoms, is shown for the first time to exhibit a quadrupolar angular evolution with incidence angle, which implies that its origin is primarily related to a transition to bulk energy levels of a-TiO$_2$ and not to defects, in agreement with theoretical predictions (Vorwerk *et al.*, Phys. Rev. B **95**, 155121 (2017)). Finally, *ab initio* calculations show that the occurrence of an enhanced absorption at peak A2 in defect-rich a-TiO$_2$ materials is a coincidence: the chemical shift induced by oxygen vacancies on the quadrupolar transitions in the pre-edge blue-shifts peak A1 into the region of peak A2.
These novel results pave the way to the use of the pre-edge peaks at the Ti K-edge of a-TiO$_2$ to characterize the electronic structure of related materials and in the field of ultrafast X-ray absorption spectroscopy (XAS), where the linear dichroism can be used to compare the photophysics along different axes.' author: - 'T. C. Rossi$^1$, D. Grolimund$^2$, M. Nachtegaal$^3$, O. Cannelli$^{1,4}$, G. F. Mancini$^{1,4}$, C. Bacellar$^{1,4}$, D. Kinschel$^{1,4}$, J. R. Rouxel$^{1,4}$, N. Ohannessian$^6$, D. Pergolesi$^{5,6}$, T. Lippert$^{6,7}$, M. Chergui$^1$' title: 'X-ray Absorption Linear Dichroism at the Ti K-edge of anatase TiO$_2$ single crystal' --- Introduction ============ Titanium dioxide (TiO$_2$) is one of the most studied large-gap semiconductors due to its present and potential applications in photovoltaics [@Freitag:2017di] and photocatalysis [@Nakata:2012hu]. The increasingly strict requirements of modern devices call for sensitive material characterization techniques which can provide local insights at the atomic level [@Suenaga:2000ix; @Sherson:2010hg]. K-edge XAS is an element-specific technique that is used to extract the local geometry around an atom absorbing the X-radiation, as well as information about its electronic structure [@Milne:2014en]. A typical K-edge absorption spectrum usually consists of three parts: (i) the high-energy region above the absorption edge (typically $>\unit{50}{\electronvolt}$), the EXAFS, contains information about bond distances. Modelling of the EXAFS is rather straightforward, as the theory is well established [@Milne:2014en]; (ii) the edge region and slightly above it ($<\unit{50}{\electronvolt}$) represents the XANES, which contains information about bond distances and bond angles around the absorbing atom, as well as about its oxidation state.
In contrast to the EXAFS, XANES features require more complex theoretical developments due to the multiple scattering events and their interplay with bound-bound atomic transitions; (iii) the pre-edge region consists of bound-bound transitions of the absorbing atom. In the case of transition metals, the final states are partially made of $d$-orbitals. Pre-edge transitions thus deliver information about orbital occupancies and about the local geometry, because the dipole-forbidden $s$-$d$ transitions are relaxed by a lowering of the local symmetry. The K-edge absorption spectrum of anatase TiO$_2$ (a-TiO$_2$) exhibits four pre-edge features labelled A1, A2, A3 and B, while rutile only shows three [@Brouder:2010go; @Luca:2009bj]. Their assignment has been at the centre of a long debate, which is still going on, especially in the case of the a-TiO$_2$ polymorph [@Brydson:1999bx; @Uozumi:1992df; @Wu:1997dm]. In this article, we use linear dichroism at the Ti K-edge to assign the pre-edge transitions of a-TiO$_2$, since this technique can provide the orbital content in the final state of the bound transitions with the support of *ab initio* calculations and spherical tensor analysis of the absorption cross-section. Early theoretical developments to explain the origin of pre-edge features in a-TiO$_2$ were based on molecular orbital theory [@Fischer:1972gz; @RuizLopez:1991cy; @Grunes:1983gg], which showed that the first two empty states in a-TiO$_2$ are made of antibonding $t_{2g}$ and $e_g$ orbitals derived from the $3d$ atomic orbitals of Ti. Transitions to these levels have, respectively, been assigned to the A3 and B peaks, while the absorption edge is made of $t_{1u}$ antibonding orbitals derived from $4p$ atomic orbitals. Although molecular orbital theory can predict the energy position of the transitions accurately, it cannot compute the corresponding cross-sections and does not account for the core-hole, to which quadrupolar transitions to $d$-orbitals at the K-edge are extremely sensitive [@Uozumi:1992df].
The corresponding transitions are usually red-shifted by the core-hole and appear as weak peaks on the low-energy side of the pre-edge. In a-TiO$_2$, peak A1 contains a significant quadrupolar component [@Uozumi:1992df], sensitive to the core hole, which explains the inaccuracy of molecular orbital theory in predicting this transition. Multiple scattering theory is a suitable technique to treat large ensembles of atoms and obtain accurate cross-sections [@RuizLopez:1991cy; @Farges:1997kl; @Wu:1997dm; @Brydson:1999bx]. From multiple scattering calculations, a consensus has emerged assigning a partial quadrupolar character to A1, a mixture of dipolar and quadrupolar character with $t_{2g}$ orbitals to A3 and a purely dipolar transition involving $e_g$ orbitals to B [@RuizLopez:1991cy; @Triana:2016fi]. However, as correctly pointed out by Ruiz-Lopez [@RuizLopez:1991cy], this simple picture of octahedral-symmetry energy-split $t_{2g}$ and $e_g$ levels becomes more complicated in a-TiO$_2$ because of the locally distorted octahedral environment (D$_{2d}$ symmetry), which allows local $p-d$ orbital hybridization [@Yamamoto:2008bi]. In that case, the dipolar contribution to the total cross-section becomes dominant for every transition in the pre-edge region [@Cabaret:2010fp]. In addition, the cluster size used for the calculations has a large influence on the A3 and B peak intensities, showing that delocalized final states (off-site transitions) play a key role in the pre-edge absorption region [@RuizLopez:1991cy]. Finally, the local environment around Ti atoms in a-TiO$_2$ is strongly anisotropic and the Ti-O bond distances separate into two groups of apical and equatorial oxygens, which cannot be correctly described with spherical muffin-tin potentials. This limitation is overcome with the development of full potential calculations such as the finite difference method (FDM) [@Joly:1999iq; @Joly:2001fu; @Joly:2009ha].
Empirical approaches have been used by Chen and co-workers [@Chen:1997by] and Luca and co-workers [@Luca:1998dn; @Hanley:2002go; @Luca:2009bj] to establish correlations between the Ti K pre-edge transitions in a-TiO$_2$ and sample morphologies, showing that bond length and static disorder contribute to the change in the pre-edge peak amplitudes [@Chen:1997by] and that the A2 peak is due to pentacoordinated Ti atoms [@Luca:1998dn; @Hanley:2002go; @Luca:2009bj]. Farges and co-workers confirmed this assignment with the support of calculations [@Farges:1997kl]. The recent works by Zhang et al. [@Zhang:2008gt] and Triana et al. [@Triana:2016fi] have shown the strong interplay between the intensity of the pre-edge features and the coordination number and static disorder, in particular in the case of the A2 peak. However, the A2 peak is also present in the spectra of single crystals, which suggests that the underlying transition is intrinsic to defect-free a-TiO$_2$. Clear evidence of the nature of this transition was lacking and is provided in this work. The clear assignment of the pre-edge features of a-TiO$_2$ is important in view of recent steady-state and ultrafast X-ray [@RittmannFrank:2014fu; @Santomauro:2015er; @Obara:2017bq] and optical experiments [@Baldini:2016ua]. In the picosecond experiments on a-TiO$_2$ nanoparticles photoexcited above the band gap, a strong enhancement of the A2 peak was observed, along with a red shift of the edge [@RittmannFrank:2014fu]. This was interpreted as trapping of the electrons transferred to the conduction band at undercoordinated defect centres that are abundant in the shell region of the nanoparticles, turning them from an oxidation state of +4 to +3 [@RittmannFrank:2014fu]. The trapping time was determined by femtosecond XAS to be ca. $\unit{200}{\femto\second}$, i.e. the electron is trapped immediately at or near the unit cell where it was created [@Santomauro:2015er; @Obara:2017bq].
Further to this, the trapping sites were identified as being due to oxygen vacancies (V$_O$) in the first shell of the reduced Ti atom. These V$_O$’s are linked to two Ti atoms in the equatorial plane and one Ti atom in the apical position, to which the biexponential kinetics (hundreds of ps and a few ns) of the Ti K-edge transient was attributed [@Santomauro:2015er; @Budarz:2017iu]. However, this hypothesis awaits further experimental and theoretical confirmation. In this sense, the assignment of peak A2, which provides the most intense transient signal in the pre-edge of a-TiO$_2$, is a prerequisite. In this article, we provide a detailed characterization of the steady-state spectrum by carrying out a linear dichroism study of anatase single crystals at the Ti K-edge, accompanied by detailed theoretical modelling of the spectra. We fully identify the four pre-edge bands (A1-A3 and B) beyond the octahedral crystal field splitting approximation used in previous studies [@Wu:2004bs; @Cabaret:2010fp]. Their dipolar and quadrupolar character is analyzed in detail, as well as their on-site vs inter-site nature. The novelty resides in the quantitative reproduction of the experimental data with the calculations, the observation of the quadrupolar nature of peak A2, in agreement with theoretical predictions [@Vorwerk:2017gs], and the corresponding assignment of peak A2 as originating from a quadrupolar transition in single crystals and from defect states in nanomaterials, which accidentally trigger a blue shift of peak A1 into the region of peak A2. This delivers a high degree of insight into the environment of Ti atoms, which is promising for future ultrafast X-ray studies of the photoinduced structural changes in this material. Experimental setup ================== Linear dichroism {#exp_LD} ---------------- The measurements are performed at the microXAS beamline of the SLS in Villigen, Switzerland, using a double (311) crystal monochromator to optimize the energy resolution.
Energy calibration is performed from the first derivative of the spectrum of a Ti thin foil. We used a moderately focused rectangular-shaped X-ray beam of in horizontal and vertical dimension, respectively. The spectrum is obtained in total fluorescence yield with a Ketek Axas detector system with Vitus H30 SFF and ultra-low capacitance Cube-Asic preamplifier (Ketek GmbH). The sample consists of a (001)-oriented crystalline a-TiO$_2$ thin film of thickness. Sample growth and characterization procedures are reported in §1. Figure 1 shows a schematic of the sample motion required for the experiment. The sample was placed in the center of rotation of a system of stages which allow for both sample in-plane rotation ($\phi$) and orthogonal out-of-plane rotation ($\theta$). By convention, a set of Euler angles $(\theta,\phi,\psi)$ orients the electric field $\hat\epsilon$ and wavevector $\hat k$ with respect to the sample. $\theta$ measures the angle between $\hat \epsilon$ and the $[001]$ crystal direction ($\hat z$ axis of the sample frame) orthogonal to the surface. $\phi$ measures the angle between $\hat\epsilon$ and the sample rotation axis $\hat x$. In principle, a third angle $\psi$ is necessary to fix the position of the wavevector in the plane orthogonal to the electric field, but here $\psi=\unit{0}{\degree}$. The $\theta$ angles reported in the experimental datasets carry a maximum systematic offset of $\pm\unit{0.2}{\degree}$, which comes from the precision of setting up the $\theta=\unit{0}{\degree}$ reference from the sample half-clipping of the X-ray beam at grazing incidence. The precision of the rotation stage of $\pm\unit{0.01}{\degree}$ is negligible with respect to this angular offset. Linear dichroism is usually studied with the sample rotated in the plane orthogonal to the incident X-ray beam ($\phi$-rotation) [@Brouder:tj]. In this work, the novelty comes from the sample rotation around $\hat x$ ($\theta$-rotation), which provides the largest changes in the linear dichroism.
This rotation induces a change of the X-ray footprint onto the sample surface. We clearly show that it does not introduce spectral distortions, because the effective penetration depth of the X-rays through the material (between 97 and across the absorption edge of a-TiO$_2$ for the largest footprint at $\theta=\unit{1}{\degree}$ used here [@HENKE1993181]) is kept constant, as the sample is much thinner than the attenuation length at the Ti K-edge. Instead, the total amount of material probed by the X-rays changes due to the larger X-ray footprint when $\theta$ increases, and a renormalization over the detected number of X-ray fluorescence photons is required. This is done with the support of the calculations, since a few energy points have $\theta$-independent cross-sections, as previously reported for other systems [@George:1989ev; @Loupias:1990de; @Oyanagi:1987ik; @Oyanagi:1989gz; @Pettifer:1990kv; @Stizza:1986ff; @Fretigny:1986ea] (*vide infra*)[^1]. With this renormalization procedure performed at a single energy point, we could obtain a set of experimental points with $\theta$-independent cross-sections at the energies predicted by the theory, confirming the reliability of the method. Hence, crystalline thin films with suitable thicknesses with respect to the X-ray penetration depth offer more possibilities to study linear dichroism effects than single crystals, and prevent the usual self-absorption distortion of bulk materials when using total fluorescence yield detection [@Carboni:2005jf].
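The footprint renormalization can be illustrated with a minimal sketch, assuming the detected fluorescence scales linearly with the illuminated film area (the probed thickness being constant for a thin film); the beam height and angles below are illustrative values, not the experimental ones:

```python
import math

def footprint_length(beam_height_um: float, theta_deg: float) -> float:
    """Length of the X-ray footprint on the film surface (um) for a beam of
    given vertical size hitting the sample at incidence angle theta."""
    return beam_height_um / math.sin(math.radians(theta_deg))

def renormalize(counts: float, theta_deg: float, theta_ref_deg: float) -> float:
    """Rescale fluorescence counts taken at theta so they can be compared with
    a reference geometry: divide out the footprint ratio between the two."""
    ratio = footprint_length(1.0, theta_deg) / footprint_length(1.0, theta_ref_deg)
    return counts / ratio

# The footprint grows as 1/sin(theta): at 1 degree grazing incidence it is
# roughly 57 times longer than at normal incidence.
print(round(footprint_length(100.0, 1.0) / footprint_length(100.0, 90.0), 1))
```

The constant-thickness assumption is what makes a single multiplicative factor per angle sufficient; for a bulk crystal the probed depth would also change with angle and this simple rescaling would not apply.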
| reference | A1 | A2 | A3 | B |
|-----------|----|----|----|---|
| [@Wu:1997dm] | $\mathbf{3d_{x^2-y^2}}(b_1)$, $4p_x,4p_y,3d_{xz},3d_{yz}(e)$ | $\mathbf{4p_z},\mathbf{3d_{xy}}(b_2)$, $4p_x,4p_y,3d_{xz},3d_{yz}(e)$ | $\mathbf{4p_z},\mathbf{3d_{xy}}(b_2)$, $3d_{z^2}(a_1)$ | $\mathbf{4p},\mathbf{4s}$ |
| [@Cabaret:2010fp] | $E1$: $\mathbf{p(t_{2g})}$, $E2$: $3d(t_{2g})$ | | $E1$: $\mathbf{p(e_g)}$, $E2$: $3d(e_g)$ | |
| [@Triana:2016fi] | $E1$: $\mathbf{4p-3d(t_{2g})}$, $E2$: $3d(t_{2g})$ | | $E1$: $\mathbf{4p}$, $\mathbf{d_{xy}}$, $\mathbf{d_{xz}}$, $\mathbf{d_{yz}}(\mathbf{t_{2g}})$ | $E1$: $\mathbf{4p}$, $\mathbf{d_{x^2-y^2}}$, $\mathbf{d_{z^2}}(\mathbf{e_g})$ |
| This work | $E1$: $\mathbf{4p_{x,y}}-\mathbf{3d_{xz}}$, $\mathbf{3d_{yz}}$, $E2$: $d_{xz}$, $d_{yz}$, $d_{x^2-y^2}$ | $E2$: $\mathbf{3d_{xy}}$, $3d_{z^2}$ | | |

\[literature\_assignment\] Theory ====== Recent developments in computational methods -------------------------------------------- Recently, there have been two main developments in the computation of XAS spectra. The first is based on band structure calculations (LDA, LDA+U, ...), which compute potentials self-consistently with and without the core-hole before the calculation of the absorption cross-section with a core-hole in the final state [@Cabaret:2010fp; @Gougoussis:2009kr]. This approach provides excellent accuracy but is limited to the few tens of eV above the absorption edge, due to the computational cost of increasing the basis set to include the EXAFS region.
The second one, the FDM approach implemented by Joly [@Joly:2009ha; @Joly:2001fu], overcomes the limitations of the muffin-tin approximation in order to obtain accurate descriptions of the pre-edge transitions, especially for anisotropic materials. The recent theoretical work by Cabaret and coworkers, based on GGA-PBE self-consistent calculations [@Cabaret:2010fp], concluded that in a-TiO$_2$, peak A1 is due to a mixture of quadrupolar ($t_{2g}$) and dipolar transitions ($p-t_{2g}$), A3 to on-site dipolar ($p-e_g$), off-site dipolar ($p-t_{2g}$) and quadrupolar ($e_g$) transitions, while B is due to an off-site dipolar transition ($p_z-e_g$). These results, together with those of previous works, are summarized in Table \[literature\_assignment\]. However, experimental support for the pre-edge assignments is still lacking, and is provided in this work using linear dichroism at the Ti K-edge of a-TiO$_2$ with the theoretical support of *ab-initio* full potential calculations and spherical harmonics analysis of the cross-section. Finite difference *ab-initio* calculations ------------------------------------------ The *ab-initio* calculations of the cross-section were performed with the full potential finite difference method as implemented in the FDMNES package [@Joly:1999iq; @Joly:2001fu]. A cluster of was used for the calculation, with the fundamental electronic configuration of the oxygen atom and an excited-state configuration for the titanium atom (Ti: \[Ar\]3d$^1$4s$^2$4p$^1$), as performed elsewhere [@Zhang:2008gt]. We checked the convergence of the calculation for increasing cluster sizes and found minor evolution for cluster radii larger than (123 atoms). The Hedin-Lundqvist exchange-correlation potential is used [@Hedin:2001jx]. A minor adjustment of the screening properties of the $3d$ levels is needed to match the energy position of the pre-edge features with the experiment. We found the best agreement for a screening of 0.85 for the $3d$ electrons.
After the convolution of the spectrum with an arctan function with maximum broadening of , a constant Gaussian broadening of is applied to account for the experimental resolution and obtain the closest agreement with the broadening of the pre-edge peaks. Spherical tensor analysis of the dipole and quadrupole cross-sections --------------------------------------------------------------------- Analytical expressions of the dipole and quadrupole cross-sections ($\sigma^D(\hat\epsilon)$ and $\sigma^Q(\hat\epsilon,\hat k)$, respectively) are obtained from their expansion into spherical harmonic components [@Brouder:tj; @Brouder:1990eo]. The expressions of $\sigma^D(\hat\epsilon)$ and $\sigma^Q(\hat\epsilon,\hat k)$ depend on the crystal point group, which is D$_{4h}$ ($4/mmm$) for a-TiO$_2$. The dipole cross-section is given by: $$\sigma^D(\hat\epsilon)=\sigma^D(0,0)-\frac{1}{\sqrt{2}}(3\cos^2\theta-1)\sigma^D(2,0) \label{dipolar_dichroic}$$ and the quadrupole cross-section by: $$\begin{split} \sigma^Q(\hat\epsilon,\hat k)=\sigma^Q(0,0)\\ +\sqrt{\frac{5}{14}}(3\sin^2\theta\sin^2\psi-1)\sigma^Q(2,0) \\ +\frac{1}{\sqrt{14}}[35\sin^2\theta\cos^2\theta\cos^2\psi\\ +5\sin^2\theta\sin^2\psi-4]\sigma^Q(4,0) \\ +\sqrt{5}\sin^2\theta[(\cos^2\theta\cos^2\psi\\ -\sin^2\psi)\cos4\phi-2\cos\theta\sin\psi\cos\psi\sin4\phi]\sigma^{Qr}(4,4) \end{split} \label{quadrupolar_dichroic}$$ with $\theta$, $\phi$ and $\psi$ defined in the site point group (D$_{2d}$). $\sigma^X(l,m)$ with $X=D,Q$ is the spherical tensor of rank $l$ and projection $m$. $\sigma^{Xr}$ refers to the real part of the cross-section. The Euler angles $(\theta,\phi,\psi)$ in the experiment are referenced to the crystal frame, which is rotated in the $(O,\hat x,\hat y)$ plane with respect to the Euler angles in the site frame. Consequently, the angles in equations \[dipolar\_dichroic\] and \[quadrupolar\_dichroic\] differ from the angles defined in Figure \[experimental\_design\] by a rotation of $\phi$.
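The structure of the dipole formula above can be illustrated numerically; a minimal sketch with illustrative (not fitted) tensor values:

```python
import math

def sigma_dipole(theta_deg: float, s00: float, s20: float) -> float:
    """Dipole cross-section of equation [dipolar_dichroic]:
    sigma^D = sigma(0,0) - (1/sqrt(2)) * (3*cos(theta)^2 - 1) * sigma(2,0)."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return s00 - (3.0 * c2 - 1.0) / math.sqrt(2.0) * s20

# Illustrative tensor values, chosen only to show the angular behaviour:
s00, s20 = 1.0, 0.2

# At the magic angle 3*cos(theta)^2 = 1, i.e. theta ~ 54.74 deg, the dichroic
# term vanishes and sigma^D reduces to its isotropic part sigma(0,0),
# whatever the value of sigma(2,0).
magic = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
print(round(magic, 2), round(sigma_dipole(magic, s00, s20), 6))
```

This is why, for a positive $\sigma^D(2,0)$, the dipole cross-section grows monotonically from $\theta=\unit{0}{\degree}$ to $\theta=\unit{90}{\degree}$, with no extra periodicity in $\theta$.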
In the site frame, the $\hat x$ and $\hat y$ axes are bisectors of the Ti-O bonds, while the crystal-frame axes are along the Ti-O bonds. The matrix $R$ mapping the site frame to the crystal frame is, $$R=\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$ In the following, the polarizations of $\hat\epsilon$ and $\hat k$ are given in the crystal frame. Consequently, the corresponding polarizations in the site frame are given by $\hat\epsilon_S=R^{-1}(\hat \epsilon)$ and $\hat k_S=R^{-1}(\hat k)$. Although some terms of $\sigma^D(\hat\epsilon)$ and $\sigma^Q(\hat\epsilon,\hat k)$ may be negative, the total dipolar and quadrupolar cross-sections must be positive, which puts constraints on the values of $\sigma^D(l,m)$ and $\sigma^Q(l,m)$. The electric field $\hat\epsilon$ and wavevector $\hat k$ coordinates in the $(\hat x,\hat y, \hat z)$ basis of Figure \[experimental\_design\] are given by: $$\hat\epsilon=\begin{pmatrix} \sin\theta\cos\phi \\ \sin\theta\sin\phi \\ \cos\theta \end{pmatrix},\text{ } \hat k=\begin{pmatrix} \cos\theta\cos\phi \\ \cos\theta\sin\phi \\ -\sin\theta \end{pmatrix}.$$ Hence, the detailed angular dependence of the cross-sections in equations \[dipolar\_dichroic\] and \[quadrupolar\_dichroic\] requires the estimate of the spherical tensors $\sigma^D(l,m)$ and $\sigma^Q(l,m)$, as performed elsewhere [@Brouder:2008jc]. The cross-section measured experimentally is an average over equivalent atoms under the symmetry operations of the crystal space group. The analytical formula representing this averaged cross-section requires the site symmetrization and crystal symmetrization of the spherical tensors, which is provided in §7 and §8. From this analysis, we obtain nearly equal (up to a sign difference) crystal-symmetrized ($\braket{\sigma(l,m)}_X$), site-symmetrized ($\braket{\sigma(l,m)}$) and standard ($\sigma(l,m)$) spherical tensors.
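The geometry just defined can be checked with a minimal numerical sketch of $\hat\epsilon$, $\hat k$ and the frame rotation $R$ (a sketch of the conventions above, not part of the data analysis):

```python
import math

def polarization_vectors(theta_deg: float, phi_deg: float):
    """Unit vectors of the electric field and wavevector in the crystal frame
    for the psi = 0 convention used in the experiment."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    eps = (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))
    k = (math.cos(t) * math.cos(p), math.cos(t) * math.sin(p), -math.sin(t))
    return eps, k

def to_site_frame(v):
    """Apply R^{-1} (a +45 deg rotation about z) to express a crystal-frame
    vector in the D2d site frame whose x and y axes bisect the Ti-O bonds."""
    s = 1.0 / math.sqrt(2.0)
    x, y, z = v
    return (s * x - s * y, s * x + s * y, z)  # R^{-1} = R^T for the rotation R

eps, k = polarization_vectors(45.0, 30.0)
# eps and k remain orthonormal for any (theta, phi):
print(abs(sum(a * b for a, b in zip(eps, k))) < 1e-12)
```

The orthogonality of $\hat\epsilon$ and $\hat k$ holds identically in $(\theta,\phi)$, as required for a transverse electromagnetic wave.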
Assuming pure $3d$ and $4p$ final states in the one-electron approximation, analytical expressions are provided for $\sigma^D(\hat\epsilon)$ and $\sigma^Q(\hat\epsilon,\hat k)$, whose angular dependences with $\theta$ and $\phi$ are given in Table \[dipole\_table\]. The full expressions of the cross-sections are provided in §8. In this paper, we analyze the angular dependence of the pre-edge peak intensities with $\theta$ and $\phi$ and assign them to specific final states corresponding to Ti $3d$ and/or $4p$ orbitals with the support of both the calculations and the spherical tensor analysis.

| **final state** | **$\sigma^D(\hat\epsilon)$ or $\sigma^Q(\hat\epsilon,\hat k)$ $\theta$-dependence** | **$\sigma^D(\hat\epsilon)$ or $\sigma^Q(\hat\epsilon,\hat k)$ $\phi$-dependence** |
|-----------------|-------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
| $p_x,p_y$ | $-\cos^2\theta$ | no dependence |
| $p_z$ | $\cos^2\theta$ | no dependence |
| $d_{z^2}$ | $\sin^2\theta\cdot\cos^2\theta$ | no dependence |
| $d_{xy}$ | $\sin^2\theta\cdot\cos^2\theta$ | $\cos(4\phi)$ |
| $d_{x^2-y^2}$ | $\sin^2\theta\cdot\cos^2\theta$ | $-\cos(4\phi)$ |
| $d_{xz}$,$d_{yz}$ | $-\sin^2\theta\cdot\cos^2\theta$ | no dependence |

\[dipole\_table\] Results ======= The experimental evolution of the Ti K-edge spectra with $\theta$ is depicted in Figure \[experiment\_and\_theory\]a. The spectra are normalized at an energy where the cross-section is expected to be $\theta$-independent according to the calculations (shown by the leftmost black arrow in Figure \[experiment\_and\_theory\]b). From this normalization procedure, a series of energy points with cross-sections independent of the $\theta$ angle appears in the experimental dataset, as predicted by the theory (black arrows in Figures \[experiment\_and\_theory\]a and \[experiment\_and\_theory\]b), showing the reliability of the normalization procedure.
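The angular factors of Table \[dipole\_table\] can be evaluated numerically; a minimal sketch in the one-electron picture, with the state-dependent positive prefactors omitted so that only the $(\theta,\phi)$ dependence is shown:

```python
import math

def angular_factor(state: str, theta_deg: float, phi_deg: float) -> float:
    """Angular variation of the dichroic part of the cross-section for pure
    final states, as listed in Table [dipole_table] (prefactors omitted)."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    st2ct2 = math.sin(t) ** 2 * math.cos(t) ** 2
    table = {
        "p_x,p_y": -math.cos(t) ** 2,       # dipolar, no phi dependence
        "p_z": math.cos(t) ** 2,            # dipolar, no phi dependence
        "d_z2": st2ct2,                     # quadrupolar
        "d_xy": st2ct2 * math.cos(4 * p),   # quadrupolar, 90-deg period in phi
        "d_x2-y2": -st2ct2 * math.cos(4 * p),
        "d_xz,d_yz": -st2ct2,
    }
    return table[state]

# The quadrupolar sin^2(theta)*cos^2(theta) factor vanishes at normal and
# grazing incidence and peaks at theta = 45 deg; d_xy and d_x2-y2 are exactly
# out of phase in phi.
print(angular_factor("d_z2", 45.0, 0.0))
```

In particular, the opposite signs of the $\cos(4\phi)$ terms are what allows $\phi$-scans to discriminate $d_{xy}$ from $d_{x^2-y^2}$ contributions.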
In the pre-edge, the amplitude of peak A1 is dramatically affected by the sample orientation. In the post-edge region, significant changes are observed as well. *Ab-initio* calculations of the total cross-section (including dipolar and quadrupolar terms) are presented in Figure \[experiment\_and\_theory\]b for the same angles of incidence $\theta$ as in the experiment. In the pre-edge region, the trends for peaks A1 and A3 are nicely reproduced. The absence of peak A2, which partially originates from defects [@Luca:2009bj; @Chen:1997by; @Luca:1998dn; @Hanley:2002go], is due to the perfect crystal used in our modelling. In the post-edge region, a good agreement is found, especially for the isosbestic points. This shows that a strong linear dichroism remains well above the edge in this material. The evolution of the spectra is also shown for a fixed incidence angle $\theta=\unit{45}{\degree}$ while the sample is rotated around $\phi$ (Figure \[experiment\_and\_theory\_mu\]a)[^2]. The changes in amplitude are significantly smaller than under $\theta$-rotation. We observe a minimal evolution of the amplitudes of peak B and of the rising edge, while a larger effect is distinguished in the spectral region of peaks A1, A2 and A3. *Ab-initio* calculations with the same $\hat \epsilon$ and $\hat k$ orientations as in the experiment are depicted in Figure \[experiment\_and\_theory\_mu\]b. Only a weak evolution of the amplitude of the pre-edge features is expected, essentially located in the region of peaks A2 and A3. The amplitude should reach its maximum for $\phi=\unit{180}{\degree}[\unit{90}{\degree}]$, which is inconsistent with the experiment.
Instead, the fitted evolution of the pre-edge peak amplitudes shows that A2 undergoes a 30% peak-amplitude change whose angular variation is compatible with a quadrupolar transition (Figure 6a), while A1, A3 and B have a maximum amplitude evolution of 10% (within the fitting confidence interval) with no specific periodicity (Figure 7). The strong variation in the A2 peak amplitude can be observed through the appearance of a pronounced shoulder for $\phi=\unit{150}{\degree}$ which becomes smoother for $\phi=\unit{180}{\degree}$. Consequently, the main evolution is due to peak A2, which explains the disagreement with the perfect-crystal calculations. It also shows the essentially dipolar content of peaks A1, A3 and B, which produce circles in polar plots along $\phi$ (Figure 7), in agreement with the results obtained from $\theta$-scans (*vide infra*). A fit of the A2 peak with a $\unit{90}{\degree}$-periodic function shows that it may be assigned to the contribution of $d_{x^2-y^2}$ orbitals from the angular evolution expected by spherical harmonic analysis (Figure 6a,b). However, the $d_{x^2-y^2}$ DOS in the region of peak A2 is negligible with respect to that of $d_{xy}$ and $d_{z^2}$ (*vide supra*), hence we rely on the more pronounced angular evolution with $\theta$ in the following to show the involvement of $d_{xy}$ orbitals in the formation of peak A2. In order to describe the origin of the spectral evolution with $\theta$ and assign the pre-edge resonances, the projected density of states (DOS) of the final states for the pre-edge and post-edge region is depicted in Figure \[DOS\_anatase\] (we drop the term “projected” in the following for simplicity). Due to the large differences between the DOS of $s$, $p$ and $d$ states, a logarithmic scale is used vertically, normalized to the orbital having the largest DOS contributing to the final state among $s$, $p$ and $d$ orbitals. For peaks A1, A3 and B, most of the DOS comes from $d$ orbitals, while the $s$- and $p$-DOS are comparable.
However, due to the angular momentum selection rule, the spectrum resembles the $p$-DOS, as witnessed by the similarity between the integrated $p$-DOS and the calculated spectrum (black line in Figure \[DOS\_anatase\]e). Importantly, peak A1 has only ($p_x,p_y$) contributions, meaning that this transition is expected to have a much weaker intensity when the electric field gets parallel to the $\hat z$ axis, in agreement with the $\theta$-dependence of its amplitude (Figure \[experiment\_and\_theory\]). The $d$-DOS at peak A1 involves $d_{xz}$, $d_{yz}$ and $d_{x^2-y^2}$ orbitals, among which the first two can hybridize with the ($p_x$,$p_y$) orbitals and relax the dipole selection rules. The dipolar nature of A1 is also seen from the monotonic increase of its amplitude from $\theta=\unit{0}{\degree}$ to $\theta=\unit{90}{\degree}$, inconsistent with a quadrupole-allowed transition with $\unit{90}{\degree}$ periodicity. Following the same analysis, peaks A3 and B do not undergo a strong change in amplitude under $\theta$-rotation because $(p_x,p_y)$ and $p_z$ contribute similarly to the DOS for these transitions, although calculations show that A3 should evolve in intensity with $\theta$ due to a $\sim$20% larger DOS for $p_z$ than for $p_x,p_y$, as experimentally observed. From the integrated $d$-DOS along $(x,y)$ and $z$ (Figure 9a), we notice the inconsistency between the peak amplitudes in the theory and the experiment, which shows that they are essentially determined by the $p$-DOS (Figure 9b). For a more quantitative description of the dipolar and quadrupolar components in the pre-edge, we extracted the quadrupolar cross-section from calculations. It is depicted as thin lines in the inset of Figure \[experiment\_and\_theory\]b. The quadrupolar contributions are limited to peaks A1 and A3, with an additional contribution in the spectral region of peak A2.
At peak A1, the quadrupolar amplitude is maximum for $\theta=\unit{0}{\degree}$ and $\theta=\unit{90}{\degree}$, and the total cross-section becomes mainly quadrupolar for $\theta=\unit{0}{\degree}$, while the quadrupolar component contributes $\sim$15% of the peak amplitude for $\theta=\unit{90}{\degree}$. From the development of the cross-section into spherical harmonics (Table \[dipole\_table\]), the dipolar transitions to $p_{x,y}$ final states are expected to vary as $-\cos^2\theta$, while transitions to $p_z$ vary as $\cos^2\theta$ plus a constant (see Figure 12a). The fitted evolution of the dipolar cross-section of peak A1 in the experiment and in the calculations is compatible with a transition to $p_{x,y}$ (green line in Figure \[fitting\_pre\_edge\]a, fitting details in §2). The quadrupolar component (red line in Figure \[fitting\_pre\_edge\]a) is compatible with a transition to $d_{xz},d_{yz}$ due to its predicted $-\sin^2\theta\cos^2\theta$ evolution, in agreement with the $d$-DOS at peak A1 (Table \[dipole\_table\] and Figure 12b). The comparison between the experimental and theoretical amplitudes of peak A1 (Figure \[fitting\_pre\_edge\]a) gives an excellent agreement, further confirming that the A1 transition is mostly dipolar to $p_{x,y}$ final states. Following the same analysis, it is more difficult to determine the dominant $p$-DOS contributing to the transitions at peaks A3 and B due to the weak evolution of their amplitude with $\theta$. As pointed out earlier, the quadrupolar cross-section has a doublet structure in the region of peaks A2 and A3 (inset of Figure \[experiment\_and\_theory\]b). The most intense of the two peaks at $\theta=\unit{45}{\degree}$ is in the spectral region of peak A2, where the transition involving defects is expected in a-TiO$_2$ nanoparticles, for instance.
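Fits of the kind described above can be sketched with a least-squares fit against the angular forms of Table \[dipole\_table\]. The synthetic amplitudes and parameter values below are illustrative assumptions, not the experimental data:

```python
import numpy as np
from scipy.optimize import curve_fit

def dipole_pz(theta, a, b):
    # dipolar channel to p_z final states: cos^2(theta) plus a constant
    return a * np.cos(theta) ** 2 + b

def quad_dxz(theta, c, b):
    # quadrupolar d_xz/d_yz channel: -sin^2(theta)*cos^2(theta) evolution
    return -c * np.sin(theta) ** 2 * np.cos(theta) ** 2 + b

theta = np.deg2rad(np.arange(0, 91, 5))
rng = np.random.default_rng(0)
# synthetic "measured" amplitudes with small noise (illustration only)
data = dipole_pz(theta, 0.6, 0.2) + rng.normal(0.0, 0.005, theta.size)

popt, pcov = curve_fit(dipole_pz, theta, data, p0=(1.0, 0.0))
# popt should recover roughly (0.6, 0.2); sqrt(diag(pcov)) gives the
# 1-sigma parameter uncertainties used to build confidence intervals
```

Fitting the same amplitudes with the competing quadrupolar form and comparing residuals is one way to decide which channel dominates a given peak.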
A closer look at the fitted evolution of the A2 amplitude with $\theta$ shows a quadrupolar evolution with maximum value at $\theta=\unit{45}{\degree}$ (Figure \[fitting\_pre\_edge\]b). This is in agreement with the expected angular evolution of $d_{z^2}$ and $d_{xy}$ final states from spherical tensor analysis (Figure 12b), which contribute to the DOS in the spectral region of peak A2 (Figure \[DOS\_anatase\]c). It indicates that, although the amplitude of A2 is underestimated in the calculation, the consensus that A2 originates from undercoordinated Ti atoms and disordered samples may be more subtle because of the involvement of a quadrupolar transition in the perfect crystal; this is discussed in the next section. From this combined experimental and theoretical analysis, we emphasize that consecutive peaks in the pre-edge of a-TiO$_2$ are not simply due to the energy splitting between t$_{2g}$ and e$_g$ as previously invoked [@Cabaret:2010fp]. This splitting is more complicated than the usual octahedral crystal-field splitting because of the strong hybridization between $p$ and $d$ orbitals in a lowered-symmetry environment, which affects the relative ordering of the transitions. The consistent results between experiment, calculations and spherical tensor analysis show the reliability of the assignment provided in this work. Table \[literature\_assignment\] compares our results with previous assignments of peaks A1 to B. ![Overlap between the angular evolution of the amplitudes for peaks a) A1, b) A2 in the theory (lines) and the experiment (circles with error bars). The error bars represent the 95% confidence interval for the fitting of the amplitude.[]{data-label="fitting_pre_edge"}](anatase_yz_fitting_A1_amplitude.eps "fig:") ![Overlap between the angular evolution of the amplitudes for peaks a) A1, b) A2 in the theory (lines) and the experiment (circles with error bars).
The error bars represent the 95% confidence interval for the fitting of the amplitude.[]{data-label="fitting_pre_edge"}](anatase_yz_fitting_A2_amplitude.eps "fig:")

Discussion
==========

Local versus non-local character of the pre-edge transitions
------------------------------------------------------------

Pre-edge transitions can originate either from on-site (localized) transitions or from off-site transitions involving neighbour atoms of the absorber. Off-site transitions are dipole allowed due to the strong $p-d$ orbital hybridization [@Yamamoto:2008bi]. This effect has been shown for a charge-transfer insulator, for which the K-edge transition to $3d$ orbitals of the majority spin of the absorber is only possible between sites due to the magnetic ordering [@Gougoussis:2009kr]. Hence, to disentangle the local from the non-local character of the pre-edge transitions in a-TiO$_2$, we performed calculations on clusters with an increasing number of neighbour shells, starting from an octahedral TiO$_6$ cluster with the same geometry and bond distances as in the bulk. The results are shown in Figure 8 with two orthogonal electric field orientations, along $[001]$ ($\theta=\unit{0}{\degree}$) and $[010]$ ($\theta=\unit{90}{\degree}$). The calculation for the smallest cluster (green curve) shows only A1, meaning that it is mostly an on-site transition. The absence of peaks A3 and B suggests that they are mostly non-local transitions, in agreement with Ref. [@Cabaret:2010fp]. Increasing the cluster size to include the second shell of Ti ions generates most of the A3 amplitude. This shows that, similarly to the charge-transfer insulator above, an energy gap opens between the on-site and off-site transitions to $3d$ orbitals, and that A3 is mostly dipolar and strongly influenced by the intersite $3d-4p$ hybridization. Peak B is missing for this cluster size, which shows that it is due to a longer-range interaction; it can be reconstructed with a cluster including the next shell of neighbour atoms.
A similar trend in the local or non-local character of the pre-edge transitions is observed at the metal K-edge of $3d$ transition-metal oxides, which has to do with the degree of $p-d$ orbital mixing [@Wu:2004bs].

Origin of peak A2 {#A2_origin}
-----------------

The experimental $\phi$ and $\theta$ angular evolution of the A2 amplitude (Figures \[fitting\_pre\_edge\]b and 6a) matches a quadrupolar transition, qualitatively consistent with the dominant quadrupolar cross-section obtained from calculations in the region of peak A2 (Figure \[experiment\_and\_theory\]b, inset, and Figure \[experiment\_and\_theory\_mu\]b). Recent calculations accounting for the electron-hole interaction in the Bethe-Salpeter equation have reproduced peak A2, although with an underestimated amplitude as in our calculations [@Vorwerk:2017gs]. Peaks A1 and A2 are found to exhibit their maximum amplitude when the electric field is parallel to the $(\textbf{a},\textbf{b})$ plane and the $\textbf{c}$ axis, respectively. This is in agreement with our measurement for peak A1 (Figure \[fitting\_pre\_edge\]a) as a result of the coupling between the $3d$ states of Ti and the $p_{x,y}$ states. For peak A2, we observe a dominant quadrupolar evolution with a deviation from the ideal behavior showing the presence of $p_z$ states, which increase their contribution to the transition when $\theta\rightarrow\unit{0}{\degree}$. Although peaks A1 and A2 both show *p-d* orbital mixing, this mixing is clearly stronger for the A1 peak, where the dipole contribution becomes dominant over the quadrupolar one, in contrast to the A2 peak. It shows that the amount of *p-d* orbital mixing differs for these transitions, which can be explained by the $\sim$100 times lower $p_z$ DOS in the region of peak A2 than the $p_{x,y}$ DOS in the region of peak A1 (Figure \[DOS\_anatase\]b).
The underestimated amplitude of peak A2 in our calculation is likely due to the lack of explicit treatment of the electron-hole interaction, which would improve the agreement in energy and amplitude for peaks A1 and A2 without resorting to changes in the screening constants of the $3d$ electrons as in our study. A parallel can be made between the energy splitting of peaks A1 and A2, which contain quadrupolar localized components, and the splitting of the bound excitons of a-TiO$_2$ observed in the optical range, where the $(\textbf{a},\textbf{b})$-plane exciton has a larger binding energy than the $\textbf{c}$ exciton [@Chiodo:2010eh; @Kang:2010bj; @Baldini:2016ua]. While we show that the presence of the A2 peak can be explained by the electronic structure of a-TiO$_2$, a number of previous studies have concluded that A2 is related to lattice defects [@Luca:2009bj; @Chen:1997by; @Luca:1998dn; @Hanley:2002go; @RittmannFrank:2014fu; @Santomauro:2015er; @Budarz:2017iu]. The question arises as to the connection between the A2 peak and the lattice defects, if any. Oxygen vacancies are native defects in a-TiO$_2$ [@Morgan:2009cd]. The occurrence of an oxygen vacancy in the vicinity of a Ti atom will further lower the D$_{2d}$ symmetry and introduce $p-d$ orbital mixing in the pentacoordinated Ti atom, increasing the transition amplitude while broadening the transition due to the inhomogeneous contribution of the vacancy distribution [@Zhang:2008gt; @Triana:2016fi]. In order to check the effect of an oxygen vacancy on the spectrum of a Ti atom in its vicinity, *ab-initio* calculations are performed at the K-edge of Ti atoms with a doubly ionized oxygen vacancy at the apical or equatorial position in a supercell of 768 atoms. The calculations are performed with a bulk a-TiO$_2$ $4\times4\times4$ superlattice structure from which one oxygen atom is removed in the center, and neighbouring titanium atoms are moved along the broken Ti–O bond to simulate lattice relaxation.
We have taken the local structural relaxation reported in another work based on hybrid-functional calculations, where the titanium atoms move away from the vacancy both in the equatorial plane and in the apical position [@Finazzi:2008ff]. The results, depicted in Figure \[defect\_simulation\], show a chemical shift of A1 into the region between peaks A1 and A3 of the perfect lattice, where peak A2 is expected, while peaks A3 and B remain essentially unaffected by the vacancy. The blue shift of peak A1 is explained by the essentially local character of its final state, which involves $3d$ orbitals in the quadrupolar transition and is sensitive to the core hole. The vacancy generates a redistribution of the electrons among the three nearest Ti atoms, which better screen the core-hole charge and lead to a blue shift of the transition. Since peaks A3 and B involve the first and second coordination shells of Ti atoms, the effect of the vacancy is likely negligible on the corresponding final states. Peak A2 can thus be viewed as a peak-A1 replica undergoing a blue shift under the influence of an oxygen vacancy. Similar results have been obtained in rutile, for which doubly ionized oxygen vacancies introduced a blue shift of the $3d$-related features [@Vasquez:2016jz]. A similar effect is present in a-TiO$_2$ at the O K-edge, where the asymmetry of the so-called $t_{2g}$ and $e_g$ peaks, with a tail on the high-energy side, cannot be reproduced in calculations with a bulk structure [@Kruger:2017bl]. The amplitude of these peaks increases upon heavy-ion irradiation, compatible with the formation of more oxygen vacancies [@Thakur:2011fq]. Hence, we find that the occurrence of a transition corresponding to undercoordinated Ti atoms in the region of peak A2 is a coincidence.
The experimental spectrum of a-TiO$_2$ nanoparticles with defects would be a linear combination of the defect spectra (red and blue curves in Figure \[defect\_simulation\]) and the spectrum of hexacoordinated Ti atoms in the bulk (black curve in Figure \[defect\_simulation\]), with weights that depend on the amount of vacancies in the system. However, this study shows that peak A2 is expected to be present even in crystalline a-TiO$_2$ nanoparticles because the intrinsic quadrupolar transition is likely dominant over the defect contribution. The large spectral-weight transfer from peak A1 to a transition in the region of peak A2 for pentacoordinated Ti atoms is fully compatible with our recent studies on photoexcited a-TiO$_2$ nanoparticles [@RittmannFrank:2014fu; @Santomauro:2015er; @Budarz:2017iu]. We therefore conclude that the intensity enhancement at peak A2 originates from an essentially quadrupolar transition in the regular lattice and from a spectral shift of peak A1 into the region of peak A2 for pentacoordinated Ti atoms with an oxygen vacancy. ![Effect of an oxygen vacancy introduced at the equatorial (blue) or apical (red) position of a TiO$_6$ octahedron on the spectrum of a $4\times4\times4$ a-TiO$_2$ supercell including lattice relaxation. The spectrum at the K-edge for the perfect supercell is shown in black. The calculation is angle-averaged (no specific orientation taken for the crystal and the incident X-ray beam).[]{data-label="defect_simulation"}](relaxed_unrelaxed_vacancy.eps)

Conclusion
==========

In summary, a complementary approach using experimental measurements at the K-edge of a-TiO$_2$, *ab-initio* calculations and spherical tensor analysis provides an unambiguous assignment of the pre-edge features. We show that A1 is mainly due to a dipolar transition to on-site hybridized $4p_{x,y}-3d_{xz},3d_{yz}$ final states, which give a strong dipolar character to the transition, with a weak quadrupolar component from $(3d_{xz},3d_{yz},3d_{x^2-y^2})$ states.
The A3 peak is due to a mixture of dipolar transitions to hybridized $4p_{x,y,z}-(3d_{xy},3d_{z^2})$ final states, as a result of strong hybridization with the $3d$ orbitals of the nearest neighbours, with a small quadrupolar component. The B peak is purely dipolar ($4p$ orbitals in the final state) and is an off-site transition (the electron final state is delocalized around the absorbing atom). The distinction between on-site and off-site transitions is possible using different cluster sizes in the calculations. The linear dichroism is visible well above the absorption edge due to the strong $p$-orbital polarization in a-TiO$_2$, which affects the amplitude of the above-edge features. Surprisingly, a quadrupolar angular evolution of peak A2 is observed for the first time with a narrow bandwidth, showing that it is an intrinsic transition of the single crystal. A connection is made between the unexpectedly large experimental amplitude of this peak in nanoparticles and oxygen vacancies forming pentacoordinated Ti atoms. Crude FDMNES calculations show that A2 may be viewed as an A1 peak undergoing a blue shift because of the change in core-hole screening due to oxygen vacancies. This explains the relatively intense A2 peak in amorphous samples [@Zhang:2008gt] or upon electron trapping at defects after photoexcitation of anatase or rutile [@Budarz:2017iu; @RittmannFrank:2014fu; @Santomauro:2015er]. The unprecedented quantitative agreement provided in this work is made possible by the continued improvement of computational codes including full potentials [@Joly:1999iq; @Joly:2001fu; @Joly:2009ha] and the more accurate description of the core-hole interaction in Bethe-Salpeter calculations [@Shirley:2004id; @Vorwerk:2017gs]. Experiments are ongoing to extend this work to rutile. The present results and analysis should be cast in the context of ongoing ultrafast X-ray spectroscopy studies at Free Electron Lasers [@Abela:2017jr; @Chergui:2016hba].
For materials such as a-TiO$_2$, the increased degree of detail that can be gathered from such sources was nicely illustrated in a recent paper by Obara et al. [@Obara:2017bq] on a-TiO$_2$, showing that the temporal response of the pure electronic feature (at the K-edge) was much faster than the response of structural features such as the pre-edge and the above-edge region. The present work shows that, by exploiting the angular dependence of some of the features, even well above the edge, one could obtain finer details about the structural dynamics, in particular about non-equivalent displacements of nearest neighbours. We thank Yves Joly and Christian Brouder for fruitful discussions and Hengzhong Zhang for providing the input files. We also thank Beat Meyer and Mario Birri of the microXAS beamline for their technical support, as well as the Bernina station staff of the SwissFEL for lending us the goniometer stage. This work was supported by the Swiss NSF via the NCCR:MUST and grants 200020\_169914 and 200021\_175649 and the European Research Council Advanced Grant H2020 ERCEA 695197 DYNAMOX. G. F. M. and C. B. were supported via the InterMUST Women Fellowship. [^1]: For a spectrum measured well above the absorption edge, the atomic background absorption converges for any incident polarization and can also be used in principle to renormalize the spectra. [^2]: The normalization energy is at
--- abstract: 'We investigate phase shifts in the strong coupling regime of single-atom cavity quantum electrodynamics (QED). On the light transmitted through the system, we observe a phase shift associated with an antiresonance and show that both its frequency and width depend solely on the atom, despite the strong coupling to the cavity. This shift is optically controllable and reaches 140$^\circ$ – the largest ever reported for a single emitter. Our result offers a new technique for the characterization of complex integrated quantum circuits.' author: - 'C. Sames, H. Chibani, C. Hamsen, P. A. Altin, T. Wilk and G. Rempe' bibliography: - 'antiresonance.bib' title: Antiresonance phase shift in strongly coupled cavity QED --- The strongly coupled atom-cavity system plays a central role in research on fundamental quantum optics. Important achievements to date include the creation of single photon sources [@Kuhn:2002; @McKeever:2004] and non-classical microwave states [@Deleglise:2008; @Guerlin:2007], single-atom squeezing [@Ourjoumtsev:2011], the observation of novel photon statistics [@Birnbaum:2005; @Kubanek:2008; @Koch:2011] and the nondestructive detection of microwave and optical photons [@Nogues:1999; @Reiserer:2013]. More complex interacting systems based on this basic element are now attracting much attention in quantum information and simulation. Recent achievements in this direction include the coupling of a single qubit to two cavities [@Kirchmair:2013], the interaction of multiple qubits with a single cavity bus [@Majer:2007; @Mariantoni:2011], and the exchange of quantum states between single qubits in remote cavities [@Ritter:2012; @Nolleke:2013]. Integrated quantum circuits are promising candidates for on-chip quantum computation [@Ladd:2010; @Monroe:2013; @Awschalom:2013; @Politi:2008; @Benson:2011] and large strongly coupled networks have been proposed for simulating quantum phase transitions [@Greentree:2006; @Hartmann:2006; @Houck:2012]. 
However, in such strongly interacting systems, the couplings no longer represent merely a perturbation of the subsystem dynamics, necessitating a holistic analysis of the coupled system. This makes the characterization of strongly coupled quantum circuits a challenging task [@Devoret:2013; @Nigg:2012]. In this Letter, we propose a new technique for characterizing complex quantum circuits, which emerges from an analysis of the phase of light transmitted through a strongly coupled single-atom–cavity system. In particular, we report on the observation of an antiresonant phase shift caused by destructive interference between the coherent drive and the field radiated by the atom. The signature of the antiresonance is a large negative phase shift which depends solely on the atom, despite the strong coupling to the resonator. This is in sharp contrast to the normal modes [@Boca:2004; @Maunz:2005], which depend on properties of both atom and cavity as well as the coupling strength [@Raizen:1989]. Our measurement paves the way for individual components of strongly interacting quantum systems to be characterized via measurements performed only on the overall coupled system. Previous work on phase spectroscopy in cavity QED has focused on the so-called “bad-cavity” limit in which the cavity decay rate exceeds the coupling strength, $\kappa \gtrsim g$, and only modest phase shifts were observed [@Turchette:1995; @Fushman:2008]. Phase changes due to strongly coupled atoms were seen in Ref. [@Mabuchi:1999], but the antiresonance phase shift was not observed. The presence of a transmission dip at the atomic frequency (associated with the antiresonance) was noted in theoretical work in the intermediate-coupling limit [@Rice:1996] [^1]. In contrast, in a strongly interacting system the coupling exceeds all decay rates, such that excitations are coherently exchanged between atom and cavity, leading to the formation of a new set of eigenstates. 
In this limit, the antiresonance occurs far from these new eigenstates, which impedes its observation via the intensity transmitted through the cavity [@Boca:2004; @Maunz:2005]. Here we clearly reveal the antiresonant behavior through a measurement of phase. In the limit of low atomic excitation, the expectation value of the cavity field (represented by the photon annihilation operator $\hat{a}$) can be straightforwardly calculated within the framework of the Jaynes-Cummings model, extended to take into account driving and dissipation: $$\langle\hat{a}\rangle = \frac{ \eta ( \Delta_{pa} + i\gamma ) } {( \Delta_{pa} + i\gamma ) ( \Delta_{pc} + i\kappa ) - g^2} \,,$$ where $\gamma$ denotes the atomic dipole decay rate, $\eta$ is the strength of the coherent drive, and $\Delta_{pa} = \omega_p - \omega_a$ and $\Delta_{pc} = \omega_p - \omega_c$ represent the probe-atom and probe-cavity frequency detunings, respectively. The antiresonance phase shift in this system occurs when the numerator of Eq. (1) is minimized, at $\Delta_{pa} = 0$. Remarkably, this depends only on atomic parameters; the antiresonance occurs at exactly the resonance frequency $\omega_a$ of the uncoupled atom, and has a width equal to the bare atomic linewidth $\gamma$, despite the strong coupling between atom and cavity. If the roles of atom and cavity are exchanged by driving the atom at the empty-cavity resonance, the steady-state light field in the cavity reaches a magnitude equal to the drive, such that the atom then remains in its ground state [@Alsing:1992; @Zippilli:2004]. Our strongly coupled system consists of a single $^{85}$Rb atom ($\gamma/2\pi = 3.0$MHz) in a high-finesse ($\mathcal{F} = 195,000$) Fabry-Perot cavity of length 260$\mu$m ($\kappa/2\pi = 1.5$MHz). An atom-cavity coupling constant of $g_0/2\pi = 16$MHz at an antinode of the cavity field puts the system well into the strong coupling regime of cavity QED, $g \gg (\gamma,\kappa)$. 
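Eq. (1) can be evaluated numerically to see both behaviors at once: transmission maxima at the two normal modes, split by roughly $2g$, and a transmission minimum (the antiresonance) at the bare atomic frequency. In the sketch below, $g$, $\kappa$ and $\gamma$ are the values quoted in the text, while the drive strength and the atom-cavity detuning are illustrative assumptions:

```python
import numpy as np

# Parameters in MHz (units of 2*pi); strong coupling: g >> (kappa, gamma)
g, kappa, gamma = 16.0, 1.5, 3.0
d_ac = -3.0   # atom-cavity detuning (illustrative)
eta = 1.0     # drive strength (arbitrary units)

def cavity_field(d_pc):
    """Steady-state <a> of Eq. (1) versus probe-cavity detuning."""
    d_pa = d_pc - d_ac  # probe-atom detuning
    return eta * (d_pa + 1j * gamma) / ((d_pa + 1j * gamma) * (d_pc + 1j * kappa) - g**2)

d_pc = np.linspace(-30.0, 30.0, 6001)
a = cavity_field(d_pc)
power, phase = np.abs(a) ** 2, np.angle(a)

# Normal modes: the two transmission maxima, split by roughly 2*g
upper = d_pc[np.argmax(np.where(d_pc > 0, power, 0.0))]
lower = d_pc[np.argmax(np.where(d_pc < 0, power, 0.0))]
splitting = upper - lower

# Antiresonance: the transmission minimum sits near the bare atomic
# frequency, i.e. at d_pc close to d_ac (Delta_pa = 0), regardless of g
antires = d_pc[np.argmin(power)]
```

Sweeping `d_ac` (as the ac-Stark shift does in the experiment) moves the antiresonance with the atom while the phase winds by $\pi$ across each normal mode.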
A circularly polarized laser beam at 785nm serves as an intra-cavity dipole trap for single atoms and is used to actively stabilize the cavity length, which matches a frequency $\omega_c$ blue detuned by 40MHz from the $5S_{1/2}, F=3\rightarrow5P_{3/2},F=4$ cycling transition. The ac-Stark shift caused by the dipole trap then results in an atom-cavity detuning $\Delta_{ac}=\omega_a - \omega_c$ of only a few MHz. Heterodyne detection is used to probe the magnitude and phase of the light field transmitted through the coupled system (see Figure 1). ![Experimental setup. The circularly polarized probe beam transmitted through the cavity is changed to linear before being overlapped on polarizing beam splitter PBS1 with a local oscillator of orthogonal polarization. The subsequent $\lambda/2$-waveplate rotates the polarization of both beams by $45^\circ$ so that they are split equally at PBS2 before photodetection (PD). The difference of the photocurrents from the two arms is digitized by an analog-to-digital converter (ADC) and fed into a field-programmable gate array (FPGA), which reconstructs the amplitude and phase information. The dipole-trap and probe beams are merged and separated by dichroic mirrors (DM). A typical trace is shown in the inset for an atom held in the intracavity dipole trap for 180ms. The drop in the field intensity heralds the presence of an atom in the cavity.[]{data-label="fig:figure1"}](figure1.pdf) The atom is loaded into the cavity by means of a pulsed atomic fountain. A drop in the amplitude of a resonant probe beam ($\Delta_{pc} = 0$) heralds the arrival of an atom, triggering an increase in the power of the dipole laser and thus capturing the atom. In order to prepare the system reliably in the strong coupling regime, probe intervals are interleaved with cooling intervals, in which cavity and feedback cooling are applied [@Maunz:2004; @Kubanek:2009; @Koch:2010]. 
During probe intervals, the feedback algorithm is disabled and the trap is kept at a constant value. The frequency of the probe beam is then changed to the value under study and its intensity decreased to avoid heating and saturation of the atom [^2]. Theoretical amplitude and phase spectra for our atom-cavity parameters are shown in Figure 2a,b. The black lines represent the frequency response of the empty cavity. The response changes significantly when an atom is strongly coupled to the cavity mode, resulting in the appearance of normal modes (denoted $| 1,- \rangle$ and $| 1,+ \rangle$) where the excitation is shared between the atom (green) and the cavity (red). In a logarithmic plot of the cavity excitation, a dip (the antiresonance) becomes evident at the resonance frequency of the bare atom. No feature is apparent in either the atomic excitation or phase at this frequency, which demonstrates that the effect is not merely interference between the two normal modes. ![Phase spectroscopy. (a,b) Theoretical excitation probability (a) and phase (b) of the empty cavity (black) and the two constituents of the strongly coupled system, i.e. the atom (green) and the cavity (red), versus the probe-cavity detuning $\Delta_{pc}$ for our parameters. The experimentally measured phase shift induced by the empty cavity with respect to the driving field is also shown in (b), as a histogram color-coded from white (no events) to black, which is normalized to the maximum number of events for each frequency setting. Vertical dashed lines mark the frequencies of the two normal modes and the dashed line at $-3$MHz marks the frequency of the bare atom. (c) Histogram of the additional phase shift caused by an atom strongly coupled to the cavity, referenced to the empty cavity. (d) Measured overall phase shift of the coupled system, derived by adding (b) and (c). 
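The amplitude and phase reconstruction performed by the FPGA can be illustrated with a toy digital IQ demodulation of the heterodyne beat note; the sample rate, beat frequency and signal values below are made-up numbers, not the actual experimental settings:

```python
import numpy as np

fs = 200e6    # ADC sample rate (assumption)
f_if = 25e6   # heterodyne beat frequency (assumption)
t = np.arange(4096) / fs   # an integer number of beat periods

# Simulated balanced-detector difference signal with amplitude A and phase phi
A_true, phi_true = 0.7, 0.9
sig = A_true * np.cos(2 * np.pi * f_if * t + phi_true)

# Mix with quadrature references and average (acts as a low-pass filter)
i = 2 * np.mean(sig * np.cos(2 * np.pi * f_if * t))
q = -2 * np.mean(sig * np.sin(2 * np.pi * f_if * t))

amplitude = np.hypot(i, q)     # recovers A_true
phase = np.arctan2(q, i)       # recovers phi_true
```

Averaging over an integer number of beat periods makes the double-frequency mixing term vanish exactly, so the recovered quadratures are unbiased.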
The red line is the result of a numerical simulation.[]{data-label="fig:figure2"}](figure2.pdf) Phase spectra recorded by heterodyne detection are shown in Figure 2b,c. The phase shift acquired by light transmitted through the empty cavity is overlaid onto the theoretical plot in Fig. 2b, and shows the expected arctangent behavior, increasing by $\pi$ as the probe laser is scanned over the resonance. Figure 2c shows the additional phase shift induced by a strongly coupled atom. The sum of the two is the overall phase shift of the coupled system, shown in Fig. 2d. Instead of a histogram as in Fig. 2c, the data is here shown as points representing the mean phase shift deduced from fitting to the data a Gaussian distribution that is periodic in phase. The error bars represent the geometric mean of the standard error in the mean and the uncertainty of the mean phase obtained from the fit. The solid red line is the result of a numerical simulation based on Eq. (1) which includes effects due to residual atomic motion. The normal-mode resonances can be clearly identified by sharp increases in phase. Between the normal modes, at the antiresonance, an inverse behavior is apparent, with the phase shift exhibiting a negative slope which is maximal at the frequency of the uncoupled atom. ![Tuning of the antiresonance phase shift via the ac-Stark effect. (a) Theoretical phase shift of light transmitted through the strongly coupled system as a function of the probe-cavity $\Delta_{pc}$ (horizontal axis) and atom-cavity $\Delta_{ac}$ (vertical axis) detuning. The diagonal black line indicates where the probe beam is on resonance with the atom. The horizontal dotted lines show the atom-cavity detuning for the scans depicted in the lower plots. Vertical arrows indicate the frequencies of the antiresonances. 
(b-d) Measured phase shift of the transmitted light for atom-cavity detunings of 12MHz (b), $-5$MHz (c) and $-14$MHz (d), corresponding to dipole-trap laser powers of 1400nW, 950nW, and 700nW, respectively. The solid lines are numerical simulations of the phase shift for each dipole trap laser power.[]{data-label="fig:figure3"}](figure3.pdf) The ac-Stark shift induced by the dipole trap light provides a simple way of altering the atom’s resonance frequency. In order to verify the behavior depicted in Fig. 2, we perform phase measurements across the normal modes for different ac-Stark shifts (i.e. different dipole-trap intensities). Figure 3a shows a contour plot of the expected phase as a function of the probe-cavity $\Delta_{pc}$ and atom-cavity $\Delta_{ac}$ detunings. The diagonal line indicates where the probe is resonant with the atom. The horizontal dotted lines mark the atom’s detuning at different dipole-trap intensities. The subplots (b-d) show the corresponding measured phase of the light transmitted through the strongly coupled system. The atom is red-detuned from the cavity resonance in (b) and (c), whereas blue detuning is shown in (d). In all scans, the two normal modes are recognizable as positive slopes in the phase on either side of $\Delta_{pc}=0$. The interesting feature, however, is the negative slope of the antiresonance phase shift in between, which always occurs at the atom’s resonance frequency (marked with a vertical arrow). This shows that the phase shift indeed directly reflects the frequency of the uncoupled atom. ![The phase shift induced by a single atom as a function of probe-atom detuning $\Delta_{pa}$. The probe beam is kept on resonance with the cavity ($\Delta_{pc}$) while the atomic resonance frequency is tuned via the ac-Stark effect induced by the intracavity dipole trap. The dipole trap power is shown on the upper axis. This plot corresponds to a vertical scan in Fig. 3 (a). 
In the central region, error bars are small and omitted for clarity. The larger error bars for $\Delta_{pa}<0$ are caused by the blue detuning of the atom with respect to the cavity, which causes cavity heating [@Maunz:2004]. The red line shows an arctangent fit, with a measured width of $3.2\pm0.3$MHz that corresponds to the bare-atom decay rate.[]{data-label="fig:figure4"}](figure4.pdf) Since the frequency of the antiresonance is exclusively determined by the atom, the ac-Stark shift induced by the dipole trap can be used to optically control the corresponding phase shift. We demonstrate this by measuring the phase shift of the probe light as the dipole power is varied between 450nW and 1700nW (Fig. 4), with the probe laser kept resonant to the empty cavity ($\Delta_{pc} = 0$). As the atom moves across the cavity resonance, we observe a phase shift of 140$^\circ$. This is the largest shift yet observed from a single emitter [@Turchette:1995; @Fushman:2008; @Aljunid:2009; @Pototschnig:2011; @Jechow:2013]. The theoretical maximum for our system, assuming no atomic motion and maximal coupling to the cavity, is 150$^\circ$. An arctangent fit to the experimental data yields a width of $(3.2 \pm 0.3)$MHz, which is in good agreement with the bare atomic decay rate of 3.0MHz. This verifies that the atom alone, despite its strong coupling to the cavity, determines the characteristics of the antiresonance phase shift. Moreover, our measurement demonstrates a large ($\sim\pi$) and optically controllable (by means of the dipole trap power) phase shift induced on a single-mode light beam by a single atom. We now propose to use antiresonance phase shifts for the characterization of complex quantum circuits. Their utility stems from the general result that antiresonances represent what the resonances of the system *would* be if the driven component were held unexcited [@Wahl:1999]. 
This explains why the phase shift in our system has the frequency and width of the atomic resonance, as we drive the cavity mode. Consider a system of resonators and qubits coupled together in some arbitrary topology (Fig. 5a). The excitation spectrum of such a system exhibits distinct resonance-antiresonance behavior under driving (Fig. 5b). The resonances depend on properties of all components and their couplings, and are independent of which is driven. The antiresonances, however, depend on everything *except* the component being driven, and therefore provide information about how it affects the total system. By driving each component in turn, information about all of the individual subsystems can be obtained, despite the couplings between them. As a simple example of this principle, let us suppose that one subsystem exhibits a much larger dissipation than the others, and it is desired to find the lossy component. The system resonances are of no help; their linewidths are an average of the decay rates of all components in the circuit, regardless of which we choose to drive. However, the antiresonances display properties of only the undriven components. Therefore, when the offending component is driven the antiresonances become suddenly narrower, allowing it to be easily identified. ![Antiresonance characterization of complex coupled systems. (a) A notional integrated quantum circuit: the red dots represent circuit components (e.g. qubits or cavities) and the blue lines show the couplings. (b) When driving different components, the system’s resonances remain fixed while the positions and widths of the antiresonances change. 
Measuring the antiresonance phase shifts under different driving conditions therefore facilitates the characterization of the circuit.[]{data-label="fig:figure5"}](figure5.pdf) In conclusion, the experimental study carried out here demonstrates a powerful spectroscopic technique that should prove useful in future experiments with interacting quantum systems. In addition, many other potential applications of antiresonances in quantum systems can be envisaged. First, the ability to measure the properties of a single constituent in a strongly coupled system will be valuable in situations where probing the constituents in isolation is impractical, e.g. in solid-state cavity QED systems where the emitter and cavity are physically inseparable. Second, the grossly imbalanced distribution of energy among the system constituents at the antiresonance frequency could be useful for cavity cooling of molecules [@Horak:1997; @Vuletic:2000; @Morigi:2007], since driving the molecules at the empty-cavity resonance frequency would limit their excitation and thus prevent optical pumping into unwanted molecular states. Third, using an emitter with a narrow linewidth may render the antiresonance phase shift useful for optical clock experiments, as it is immune to fluctuations of the cavity. Fourth, nonlinear effects like electromagnetically induced transparency could be incorporated in order to remove the opacity [@Mucke:2010; @Kampschulte:2010]. The huge phase shift that can be imparted on a light beam by a single emitter might then find an application in quantum-information-processing devices [@Turchette:1995]. Finally, our simulations predict giant intensity fluctuations at the cavity-driven antiresonance. One can thus expect large dipole fluctuations for an atom-driven antiresonance. It would be interesting to further explore the connection between these fluctuations and the anomalous atomic momentum diffusion noted by Murr [*et al.*]{} [@Murr:2006]. P. A. would like to thank G. R. 
Dennis for useful discussions. C. S. acknowledges financial support from the Bavarian Ph.D. program of excellence QCCC, and P. A. from the Alexander von Humboldt foundation and the EU through the ITN-project CCQED. [^1]: The work of Ref. [[@Rice:1996]]{} likened the transmission dip in the intermediate-coupling regime to electromagnetically induced transparency. However, we note that the phase shift studied in this work cannot be observed on the light transmitted through an EIT medium, despite the apparent similarity between Eq. (1) and the EIT susceptibility $\chi$. This is because the phase acquired by light passing through an EIT medium is proportional to $\Re(\chi)$ (the refractive index), not to its argument. Here, we measure directly the phase of [$\langle \hat{a} \rangle$]{}. [^2]: See Supplemental Material for details of cooling and confinement, heterodyne detection, phase shift measurement and numerical simulation.
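A footnote above stresses that the measured quantity is the phase of [$\langle \hat{a} \rangle$]{} itself, not a refractive index. For readers who want to reproduce the qualitative lineshape, the sketch below evaluates the standard low-excitation steady-state intracavity field of a driven atom-cavity system, which has the same resonance/antiresonance structure as Eq. (1). The rates $g$ and $\kappa$ are hypothetical illustration values (the text quotes only the bare atomic decay rate of about 3MHz), so this is a generic sketch, not a fit to the experiment.

```python
import numpy as np

# Hypothetical half-widths in angular-frequency units (2*pi*MHz); only
# gamma (~3 MHz) is taken from the text, g and kappa are assumptions.
g     = 2 * np.pi * 16.0
kappa = 2 * np.pi * 4.0
gamma = 2 * np.pi * 3.0

def field_phase(d_pc, d_pa):
    """arg <a> of the weakly driven atom-cavity steady state:
    <a> ~ (gamma - i*d_pa) / [(kappa - i*d_pc)(gamma - i*d_pa) + g^2]."""
    num = gamma - 1j * d_pa
    den = (kappa - 1j * d_pc) * (gamma - 1j * d_pa) + g**2
    return np.angle(num / den)

# Scan the probe with atom and cavity degenerate (d_pa = d_pc = d):
# the phase rises through the normal modes near d = +-g and shows a
# narrow negative slope of width ~gamma at d = 0, the antiresonance.
d = np.linspace(-3 * g, 3 * g, 2001)
phi = field_phase(d, d)
```

Setting instead $\Delta_{pc}=0$ and scanning the atomic detuning gives the arctangent-like curve of width set by the atomic decay rate, as in Fig. 4.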
--- abstract: 'We study the excitation of spin waves in magnetic insulators by the current-induced spin-transfer torque. We predict preferential excitation of surface spin waves induced by an easy-axis surface anisotropy with critical current inversely proportional to the penetration depth and surface anisotropy. The surface modes strongly reduce the critical current and enhance the excitation power of the current-induced magnetization dynamics.' author: - 'Jiang Xiao (萧江)' - 'Gerrit E. W. Bauer' bibliography: - 'all.bib' title: 'Spin wave excitation in magnetic insulators by spin-transfer torque' --- [UTF8]{}[gbsn]{} Spintronics is all about manipulation and transport of the spin, the intrinsic angular momentum of the electron [@zutic_spintronics:_2004]. These two tasks are incompatible, since manipulation requires strong coupling of the spin with the outside world, which perturbs transport over long distances. In normal metals spin can be injected and read out easily, but the spin information is lost over short distances [@bass_spin-diffusion_2007]. In spin-based interconnects, transporting spins over longer distances is highly desirable [@khitun_non-volatile_2011]. The long-range transport of spin information can be achieved by encoding the information into spin waves that are known to propagate coherently over centimeters [@serga_yig_2010]. It has been demonstrated in Refs. for the magnetic insulator Yttrium-Iron-Garnet (YIG) that spin waves can be actuated electrically by the spin-transfer torque [@slonczewski_current-driven_1996; @berger_emission_1996] and detected by spin pumping [@tserkovnyak_enhanced_2002] at a distant contact. In the experiment by Kajiwara [*et al.*]{} [@kajiwara_transmission_2010], Pt was used as spin current injector and detector, making use of the (inverse) spin Hall effect [@saitoh_conversion_2006]. In a $d = 1.3~{\mu}$m-thick YIG film spin waves were excited by a threshold charge current of $J_c {\sim} 10^9$ A/m$^2$. 
This value is much less than expected for the bulk excitation that in a linear approximation corresponds to the macrospin mode and is estimated as $J_c = (1/{\theta}_H) e{\alpha}{\omega}M_sd/ {\gamma}{\hbar}{\sim} 10^{11{\sim}12}$ A/m$^2$, where $e$ and ${\gamma} $ are the electron charge and gyromagnetic ratio, respectively, and we used the parameter values in Table \[tab:param\] for the ferromagnetic resonance frequency ${\omega}$, the spin Hall angle of Pt ${\theta}_H$, magnetic damping ${\alpha}$, and saturation magnetization $M_s$. In this Letter, we address this large mismatch between observed and expected critical currents by studying the threshold current and excitation power of current-induced spin wave excitations. We present a possible answer to the conundrum by proving that the threshold current is strongly decreased in the presence of an easy-axis surface anisotropy (EASA). Simultaneously, EASA increases the power of the spin wave excitation by at least two orders of magnitude. We study a structure as depicted in , where a non-magnetic (N) metallic thin film of thickness $t$ is in contact with a ferromagnetic insulator (FI), whose equilibrium magnetization is along the $z$-direction. The spin current injected into the ferromagnetic insulator is polarized transverse to the magnetization $\JJ_s = J_s \mm {\times} \hzz {\times} \mm$. The bulk magnetization is described by the Landau-Lifshitz-Gilbert (LLG) equation: $$ \label{eqn:llg} \dmm = -{\gamma}\mm{\times}\midb{\HH_0 + (A_{\rm ex}/ {\gamma}){\nabla}^2\mm + \hh} + {\alpha}\mm{\times}\dmm, $$ where $\HH_0$ includes the external and internal magnetic field, $A_{\rm ex}$ is the exchange constant, and $\hh$ is the dipolar field that satisfies Maxwell’s equations. In the quasistatic approximation, [*i.e.*]{} disregarding retardation in the electromagnetic waves, ${\nabla}{\times}\hh = 0$ and ${\nabla}{\cdot}\bb = {\nabla}{\cdot}(\hh+{\mu}_0M_s\mm) = 0$. All quantities are position and time dependent. 
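The order-of-magnitude estimate for the bulk excitation quoted above is easy to reproduce. The sketch below plugs the Table \[tab:param\] values into $J_c = (1/{\theta}_H) e{\alpha}{\omega}M_sd/ {\gamma}{\hbar}$; the choice ${\omega} = \sqrt{{\omega}_0({\omega}_0+{\omega}_M)}$ is our assumption for the resonance frequency (any value of order ${\omega}_M$ gives the same order of magnitude), and ${\theta}_H = 0.01$ is a low-end literature value for Pt.

```python
import math

# Table values (SI units; frequencies in rad/s).
gamma_g = 1.76e11          # gyromagnetic ratio, 1/(T s)
Ms      = 1.56e5           # saturation magnetization, A/m
omegaM  = gamma_g * 4e-7 * math.pi * Ms   # gamma * mu0 * Ms ~ 3.45e10
omega0  = 0.5 * omegaM
alpha   = 6.7e-5           # Gilbert damping
d       = 1.3e-6           # YIG thickness in the experiment, m
e, hbar = 1.602e-19, 1.055e-34
theta_H = 0.01             # assumed spin Hall angle of Pt

# Assumed resonance frequency (Kittel-like value, of order omega_M).
omega = math.sqrt(omega0 * (omega0 + omegaM))

Jc = e * alpha * omega * Ms * d / (gamma_g * hbar * theta_H)
print(f"bulk estimate J_c ~ {Jc:.1e} A/m^2")   # a few 1e11, as quoted
```

With ${\theta}_H$ in the literature range $0.01{\sim}0.08$ the result stays within the quoted $10^{11{\sim}12}$ A/m$^2$, two to three orders of magnitude above the measured threshold.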
In the absence of pinning, the total torque vanishes at the interface [@gurevich_magnetization_1996]: $$A_{\rm ex}\mm{\times}{{\partial}\mm\ov{\partial}\nb} - {2{\gamma}K_s\ov M_s}(\mm{\cdot}\nb)\mm{\times}\nb + {{\gamma}J_s\ov M_s}\mm{\times}\hzz{\times}\mm = 0, \label{eqn:bc}$$ where $\nb$ is the outward normal as seen from the ferromagnet. The first term in is the surface exchange torque, the second term the torque due to a perpendicular uniaxial surface anisotropy $\HH_a = {2K_1\ov M_s}(\mm{\cdot}\nb)\nb$ and $K_s = {\int}dx K_1$ across the surface, and the last term is the current-induced spin-transfer torque [@stiles_anatomy_2002]. We parameterize the surface anisotropy and spin current as wave numbers $k_s = 2{\gamma}K_s/A_{\rm ex}M_s$ and $k_j = {\gamma}J_s/A_{\rm ex}M_s$. The dipolar fields $h_{y,z}$ and $b_x$ are continuous across the interface. in combination with Maxwell’s equations describe the low energy magnetization dynamics and can be transformed into a 6th-order differential equation for the scalar potential ${\psi}$ with $\hh = - {\nabla}{\psi}$ [@de_wames_dipole-exchange_1970; @hillebrands_spin-wave_1990]. ![(Color online) An electrically insulating magnetic film of thickness $d$ with magnetization $\mm$ ($\|\hzz$ at equilibrium) in contact with a normal metal. A spin current $J_s\|\hzz$ is generated in the normal metal and absorbed by the ferromagnet.[]{data-label="fig:NF"}](NMFM){width="40.00000%"}

  Parameter                         YIG                        Unit
  --------------------------------- ------------------------   ---------
  ${\gamma}$                        $1.76{\times}10^{11}$      1/(T s)
  $M_s$                             $^a1.56{\times}10^5$       A/m
  ${\omega}_M={\gamma}{\mu}_0M_s$   $34.5$                     GHz
  $A_{ex}$                          $4.74{\times}10^{-6}$      m$^2$/s
  ${\alpha}$                        $^a6.7{\times}10^{-5}$     -
  ${\omega}_0 = {\gamma}H_0$        $0.5{\omega}_M$            GHz
  $K_s$                             $^b5{\times}10^{-5}$       J/m$^2$

  : Parameters for YIG. $^a$Ref. , $^bK_s$ ranges $0.01 {\sim} 0.1$ erg/cm$^2$ or $10^{-5} {\sim} 10^{-4}$ J/m$^2$, Ref. . 
[]{data-label="tab:param"} The method described above extends a previous study by Hillebrands [@hillebrands_spin-wave_1990] by including the current-induced spin-transfer torque. We predict the critical conditions under which magnetization dynamics becomes amplified by the current-induced driving torque. We start with the limiting case of $d\ra {\infty}$ (semi-infinite ferromagnet). After linearization and Fourier transformation in both time and space domains, reduces to a 4th-order differential equation in ${\psi}$. Focusing for simplicity first on the case of vanishing in-plane wave-vector $\qq = (q^y, q^z) = 0$, the scalar potential can be written as: ${\psi}(\rr) = {\sum}_{j=1}^2a_je^{iq_jx}e^{i{\omega}t}$ with $$q_j({\omega}) = -i\smlb{{\omega}_0+\half {\omega}_M {\pm} \sqrt{{\omega}^2+\quarter{\omega}_M^2} + i{\alpha}{\omega}\over A_{\rm ex}}^{\half}$$ and $\abs{q_1}{\gg} \abs{q_2}$ when ${\omega} {\sim} {\omega}_0$. Imposing the boundary condition in , up to the first order in $k_j$: $$ 0 = 2q_1q_2(q_1+q_2) +ik_s\midb{(q_1+q_2)^2+{{\omega}_M\ov A_{\rm ex}}} + 4k_j{\omega}. \label{eqn:det}$$ The solutions of are the [*complex*]{} eigen-frequencies ${\omega}$, whose real part represents the energy and imaginary part the inverse lifetime. To 0th-order in dissipation, [*i.e.*]{} with vanishing bulk damping (${\alpha} = 0$) and spin current injection ($k_j = 0$), and using $\abs{q_1} {\gg} \abs{q_2}$, simplifies to $k_s = iq_2/[1+{\omega}_M/(Aq_1^2)]$, which has no non-trivial solution for $k_s \le 0$. The single real solution for $k_s > 0$ obeys ${\omega} < \sqrt{{\omega}_0({\omega}_0+{\omega}_M)}$ such that both $q_{1,2}$ are negative imaginary: $q_1 {\simeq} -i\sqrt{(2{\omega}_0+{\omega}_M)/A_{\rm ex}}, q_2 {\simeq} -ik_s{\omega}_0/(2{\omega}_0+{\omega}_M) + O(k_s^2)$, [*i.e.*]{} a surface spin wave induced by the easy-axis surface anisotropy. 
With the criteria Im$~{\omega} < 0$ and to leading order in $0 < k_s{\ll} q_1$, leads to the critical current: $$k_j^c {\approx} -{{\alpha}\ov k_s}{ ({\omega}_0+{\omega}_M/2)^2\ov A_{\rm ex}{\omega}_0} + {\alpha} {{\omega}_0+2{\omega}_M\ov 4{\omega}_0}\sqrt{2{\omega}_0+{\omega}_M\ov A_{\rm ex}}. \label{eqn:kjc}$$ When there is no surface anisotropy ($k_s \ra 0$), the critical current diverges because the macrospin mode cannot be excited in a semi-infinite film. Using the parameters given in Table \[tab:param\] in , we estimate the critical current for exciting the EASA induced surface wave (at $\qq = 0$) to be $k_j^c = -0.08k_c$, where $k_c = {\alpha}({\omega}_0+{\omega}_M/2)d/A_{\rm ex}$ is the critical current for bulk excitation in a YIG thin film of thickness $d = 0.61~{\mu}$m (used below). EASA pulls down a surface spin wave for the following reason: when $k_j = J_s = 0$, the boundary condition in requires cancellation between the exchange and surface anisotropy torques: ${\partial}_xm_x - k_sm_x = {\partial}_xm_y = 0$. The exchange torque depends on the magnetization derivative in the normal direction, and can only take one sign in the whole film, and $m_{x,y} \ra 0$ as $x\ra -{\infty}$, therefore $(1/m_x){\partial}_x m_x > 0$. Torque cancellation (for a non-trivial solution) is therefore possible only for $k_s > 0$. The surface spin wave induced by EASA ($k_s > 0$) for the [*in-plane*]{} magnetized film ($m_z {\sim} 1$) discussed in this Letter is analogous to the surface spin waves for the [*perpendicular*]{} magnetized film ($m_x {\sim} 1$) induced by easy-plane surface anisotropy ($k_s < 0$) studied before in YIG films [@puszkarski_surface_1973; @wigen_microwave_1984; @patton_magnetic_1984; @kalinikos_theory_1986; @gurevich_magnetization_1996]. For perpendicular magnetization, a different boundary condition: ${\partial}_xm_{y,z}+k_sm_{y,z} = 0$ results in a surface wave for $k_s < 0$. 
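As a numerical cross-check, the surface-wave critical current can be evaluated with the Table \[tab:param\] parameters. The sketch below does this directly; the exact ratio to $k_c$ depends on the rounding of the material parameters, but it comes out at roughly $-0.1k_c$, consistent in sign and magnitude with the quoted $-0.08k_c$.

```python
import math

# Table parameters (SI units; frequencies in rad/s).
gamma_g = 1.76e11
Ms      = 1.56e5
omegaM  = gamma_g * 4e-7 * math.pi * Ms
omega0  = 0.5 * omegaM
alpha   = 6.7e-5
Aex     = 4.74e-6          # exchange constant, m^2/s
Ks      = 5e-5             # easy-axis surface anisotropy, J/m^2
d       = 0.61e-6          # film thickness used in the text, m

ks = 2 * gamma_g * Ks / (Aex * Ms)   # ~ 2.4e7 /m, i.e. ~24 per micron

# Critical current for the EASA surface wave, leading order in ks:
kjc = (-(alpha / ks) * (omega0 + omegaM / 2)**2 / (Aex * omega0)
       + alpha * (omega0 + 2 * omegaM) / (4 * omega0)
         * math.sqrt((2 * omega0 + omegaM) / Aex))

# Bulk-excitation scale for comparison.
kc = alpha * (omega0 + omegaM / 2) * d / Aex

print(f"k_j^c / k_c = {kjc / kc:+.2f}")   # roughly -0.1
```

Note the $1/k_s$ dependence of the leading term: a larger surface anisotropy directly lowers the threshold.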
We now include all ingredients: finite thickness ($d = 0.61~{\mu}$m), surface anisotropy, intrinsic magnetic damping, spin current injection, exchange coupling, and dipolar fields. We calculate numerically the complex eigen-frequencies ${\omega}(\qq, k_j)$ as a function of the in-plane wave-vector $\qq$ and the applied spin current at the surface $k_j$. Im$~{\omega}$, the effective dissipation, can be either positive (damping) or negative (amplification) when driven by the spin-transfer torque. ![(Color online) Spin wave band structure and magnetization profiles in YIG for $d = 0.61~{\mu}$m without surface anisotropy: $k_s = 0$ at ${\theta} = {\angle}(\mm,\qq) = 90^o$. Top (from left to right): $\re{{\omega}/{\omega}_M}$ vs. $qd$, $\im{{\omega}/{\omega}_M}$ at $k_j = 0$, $\im{{\omega}/{\omega}_M}$ at $k_j = -0.2k_c$. Bottom: $m_x$ of the same 6 modes for $qd = 0.09$ and $3.74$ indicated by the dashed vertical lines in the top panels. The colors label different bands. The amplitude of the green mode (shaded/yellow panel) is amplified.[]{data-label="fig:ks0"}](ks0band "fig:"){width="48.00000%"} ![(Color online) Spin wave band structure and magnetization profiles in YIG for $d = 0.61~{\mu}$m without surface anisotropy: $k_s = 0$ at ${\theta} = {\angle}(\mm,\qq) = 90^o$. Top (from left to right): $\re{{\omega}/{\omega}_M}$ vs. $qd$, $\im{{\omega}/{\omega}_M}$ at $k_j = 0$, $\im{{\omega}/{\omega}_M}$ at $k_j = -0.2k_c$. Bottom: $m_x$ of the same 6 modes for $qd = 0.09$ and $3.74$ indicated by the dashed vertical lines in the top panels. The colors label different bands. The amplitude of the green mode (shaded/yellow panel) is amplified.[]{data-label="fig:ks0"}](ks0wave "fig:"){width="50.00000%"} First, we disregard the surface anisotropy: $K_s = k_s = 0$. With ${\theta}$ the angle between $\qq$ and $\mm$, the results for ${\theta} = 90^o$ are shown in . 
In the top left panels Re$~{\omega}$, the magnetostatic surface wave (MSW) is seen to cross the flat bulk bands [@de_wames_dipole-exchange_1970]. When no spin current is applied ($k_j = 0$), the dissipative part Im$~{\omega} {\sim} {\alpha}({\omega}_0+{\omega}_M/2) > 0$, as shown in the top middle panels. At a spin current that is 20% of that required for bulk excitation: $k_j = 0.2k_c$, the dissipative part Im$~{\omega}$ (top right panel) decreases while Re$~{\omega}$ remains unchanged because the spin-transfer torque as magnetic (anti-)damping mainly affects Im$~{\omega}$. Negative effective dissipation implies spin wave amplification. This happens for the 5th (green) band at $qd {\in} [2,6.5]$, which corresponds to a (chiral) MSW (mixed with bulk modes) formed near the interface (shaded/yellow panel). On the other hand, for ${\theta} = -90^o$ (not shown), the magnetostatic surface wave at the opposite surface to vacuum ($x = -d$) is only weakly affected by the spin current injection at $x = 0$. We now turn on EASA: $k_s = 25.0/{\mu}$m (or $K_s = 5{\times}10^{-5}$J/m$^2$) at the top surface ($x = 0$). shows the results for ${\theta} = 90^o$. The changes of Re$~{\omega}$ and Im$~{\omega}$ at $k_j = 0$ are modest (), but an additional band (black) appears, [*viz*]{}. the surface spin wave band induced by EASA. The spin-transfer torque strongly affects this mode because of its strong surface localization [@sandweg_enhancement_2010]. As seen in the top right panel, almost the whole band is strongly amplified by a spin current injection of $k_j = 0.2k_c$. Inspecting the spin wave profiles at two different $q$ values, we observe a surface spin wave near $x = 0$ for the black band at small $q$ (shaded/yellow panel in the middle row in ). At larger $q$, the 1st (black) band loses its surface wave features to the 5th (red) band (see top right panel in ). 
The red band mode starts out as a magnetostatic surface spin wave, but the EASA enhances its surface localization by hybridization with the black mode to become strongly amplified by the spin current at higher $q$. Also in the lower panel of we observe that the red band has acquired the surface character. ![(Color online) Same as but with $k_s = 25/{\mu}$m.[]{data-label="fig:ks1"}](ks25band "fig:"){width="48.00000%"} ![(Color online) Same as but with $k_s = 25/{\mu}$m.[]{data-label="fig:ks1"}](ks25wave "fig:"){width="50.00000%"} ![(Color online) Top: Power spectrum (resolution ${\delta}{\omega}/{\omega}_M = 0.01$) at various current levels ($k_j = 0.2k_c$ from the top decreasing by ${\Delta}k_j = 0.01k_c$) without (left: $k_s = 0$) and with (right: $k_s = 25.0$/${\mu}$m) surface anisotropy. Inset: the integrated power versus $k_j$. []{data-label="fig:Pw"}](Pw){width="50.00000%"} We introduce an approximate power spectrum () that summarizes all information about the mode-dependent current-induced amplification: $$P({\omega}) = {\sum}_n{\int}_{\rm Im~{\omega}_{\it n}<0} \abs{\rm Im~{\omega}_{\it n}(\qq)} {\delta}[{\omega}-{\rm Re}~{\omega}_{\it n}(\qq)]d\qq \label{eqn:Pw}$$ where $n$ is the band index; $P({\omega})$ is the density of states at frequency ${\omega}$, weighted by the amplification of each mode. Without surface anisotropy, only a few modes are excited even at a relatively large current ($k_j = 0.2k_c$). However, when $k_s = 25$/${\mu}$m, the excitation is strongly enhanced by more than two orders of magnitude due to the easily excitable surface spin wave modes. Furthermore, we observe broadband excitation over a much larger range of frequencies. This power spectrum is rather smooth, while the experiments by Kajiwara [*et al*]{}. [@kajiwara_transmission_2010] show a large number of closely spaced peaks. 
The latter fine structure is caused by size quantization of spin waves due to the finite lateral extension of the sample that has not been taken into account in our theory since it complicates the calculations without introducing new physics. The envelope of the experimental power spectrum compares favorably with the present model calculations. The insets in show the integrated power and allow the following conclusions: 1) the excitation power is enhanced by at least two orders of magnitude by the EASA; 2) the critical current for magnetization dynamics is $k_j {\sim} 0.08k_c$ for $k_s = 25/{\mu}$m, which agrees very well with the estimates from . This critical current is about one order of magnitude smaller than that for the bulk excitation ($k_c$), and about half of that for MSW without surface anisotropy ($k_j = 0.16k_c$). For $k_s = 25/{\mu}$m, it corresponds to $J_c = 3{\times}10^{10}$A/m$^2$ for ${\theta}_H = 0.01$ [@mosendz_quantifying_2010] and $3.8{\times}10^9$A/m$^2$ for ${\theta}_H = 0.08$ [@ando_electric_2008; @liu_spin-torque_2011]. These values are calculated for a film thickness of $d = 0.61~{\mu}$m, but should not change much for $d = 1.3~{\mu}$m corresponding to the experiment [@kajiwara_transmission_2010], because the excited spin waves are localized at the interface. Compared to the original estimate $J_c {\sim} 10^{11{\sim}12}$A/m$^2 $, the critical current for a surface spin wave excitation is much closer to the experimental value of $J_c {\sim} 10^9$A/m$^2$ [@kajiwara_transmission_2010] (although these experiments report a very inefficient spin wave absorption in contrast to the present model assumption). According to , critical current (excitation power) would be further reduced (increased) by a larger EASA. Ref. reports an enhancement of the YIG surface anisotropies for capped as compared to free surfaces. A Pt cover on a YIG surface [@kajiwara_transmission_2010] may enhance the surface anisotropy as well. 
As seen from , the surface mode (black band) has group velocity ${\partial}{\omega}/{\partial}\qq$ comparable to that of the MSW. The excited surface spin waves therefore propagate and can be used to transmit spin information over long distances at a much lower energy cost than the bulk spin waves. In conclusion, we predict that an easy-axis surface anisotropy gives rise to a surface spin wave mode, which reduces the threshold current required to excite the spin waves and dramatically increases the excitation power. Multiple spin wave modes can be excited simultaneously at different frequencies and wave-vectors, thereby explaining recent experiments. Surface spin wave excitations could be useful in future low-power spintronics-magnonics hybrid circuits. This work was supported by the National Natural Science Foundation of China (Grant No. 11004036), the special funds for the Major State Basic Research Project of China (No. 2011CB925601), the FOM foundation, DFG Priority Program SpinCat, and EG-STREP MACALO. J. X. acknowledges the hospitality of the G. B. Group at the Kavli Institute of NanoScience in Delft.
--- abstract: 'We use a method from descriptive set theory to investigate the two precomplete clones above the unary clone on a countable set.' address: | Discrete Mathematics and Geometry\ Algebra Research Group\ TU Wien\ Wiedner Hauptstra[ß]{}e 8–10/104 1040 Wien, Austria (Europe) author: - Martin Goldstern bibliography: - 'listb.bib' - 'listx.bib' - 'goldstrn.bib' - 'other.bib' date: '2004-04-11' title: Analytic Clones --- Introduction. Known results =========================== An “operation” on a set $X$ is a function $f:X^n\to X$, for some $n\in {{\mathbb N}}\setminus\{0\}$. If $f$ is such an “$n$-ary operation”, $g_1$, …, $g_n$ are $k$-ary, then the “composition” $f(g_1,\ldots, g_n)$ is defined naturally: $$f(g_1,\ldots, g_n)(\vec x) = f(g_1(\vec x), \ldots, g_n(\vec x) ) \ \ \mbox{ for all $\vec x \in X^k$}$$ A clone on a set $X$ is a set ${\mathscr{C}}$ of operations which contains all the projections and is closed under composition. (Alternatively, ${\mathscr{C}}$ is a clone on $X$ [if]{} ${\mathscr{C}}$ is the set of term functions of some universal algebra over $X$.) The family of all clones forms a complete algebraic lattice $Cl(X)$ with greatest element ${{\mathscr O}}= \bigcup_{n=1}^\infty {{{\mathscr O}}^{(n)}} $, where ${{{\mathscr O}}^{(n)}} = X^{X^n} $ is the set of all $n$-ary operations on $X$. (In this paper, the underlying set $X$ will always be the set ${{\mathbb N}}= \{0,1,2,\ldots \} $ of natural numbers.) The coatoms of this lattice $Cl(X) $ are called “precomplete clones” or “maximal clones” on $X$. For any set ${\mathscr{C}}\subseteq {{\mathscr O}}$ we write $\langle {\mathscr{C}}\rangle$ for the smallest clone containing ${\mathscr{C}}$. In particular, $\langle {{{\mathscr O}}^{(1)}} \rangle$ is the set of all functions $\pi\circ f$, where $f:X\to X$ is arbitrary and $\pi:X^n\to X$ is a projection to one coordinate. 
However, to lighten the notation we will identify ${{{\mathscr O}}^{(1)}} $ (the set of all unary functions) with $\langle {{{\mathscr O}}^{(1)}} \rangle$ (the set of all “essentially” unary functions). For singleton sets $X$ the lattice $Cl(X)$ is trivial; for $|X|=2$ the lattice $Cl(X)$ is countable, and well understood (“Post’s lattice”). For $|X|\ge 3$, $Cl(X)$ is uncountable. For infinite $X$, $Cl(X)$ has $2^{2^{|X|}}$ elements, and there are even $2^{2^{|X|}}$ precomplete clones on $X$. In this paper we are interested in the interval $[{{{\mathscr O}}^{(1)}}, {{\mathscr O}}]$ of the clone lattice on a countable set $X$. It will turn out that methods from descriptive set theory are useful to describe the complexity of several interesting clones in this interval, and also the overall structure of the interval. For simplicity we concentrate on binary clones, i.e., clones generated by binary functions. Equivalently, we can define a binary clone to be a set ${\mathscr{C}}$ of functions $f:{{\mathbb N}}^2\to {{\mathbb N}}$ which contains the two projections and is closed under composition: if $f,g,h\in {\mathscr{C}}$, then also the function $f(g,h)$ (mapping $(x,y)$ to $ f(g(x,y),h(x,y))$) is in ${\mathscr{C}}$. The set of binary clones, $Cl^{(2)}(X)$, also forms a complete algebraic lattice. Occasionally we will remark on how to modify the definitions or theorems for the case of “full” clones, i.e., for clones that are not necessarily generated by binary functions. (In some cases this generalization is trivial, in other cases it is nontrivial but known, and in some cases it is still open.) By [@Gavrilov:1965] (see also [@GoSh:737]), we know that there are exactly 2 precomplete binary clones above ${{{\mathscr O}}^{(1)}} $, which we call ${{\sf T}}_1$ and ${{\sf T}}_2$ (see below). 
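The definition of a binary clone, closure under the composition $f(g,h)(x,y) = f(g(x,y),h(x,y))$ together with the two projections, can be stated executably. The following is only an illustration of the definition, checked on a small finite grid; the operation $\max$ is an arbitrary example, not taken from the text.

```python
# Binary operations on N are modelled as Python functions f(x, y).

def pi1(x, y):
    """First projection."""
    return x

def pi2(x, y):
    """Second projection."""
    return y

def compose(f, g, h):
    """The clone composition f(g, h): (x, y) -> f(g(x,y), h(x,y))."""
    return lambda x, y: f(g(x, y), h(x, y))

# Example: starting from max, the clone generated by it contains e.g.
# max(max, pi2), which here collapses back to max itself.
m = compose(max, max, pi2)
assert all(m(x, y) == max(x, y) for x in range(6) for y in range(6))

# Composing a projection with anything just selects a coordinate:
assert compose(pi1, max, pi2)(3, 7) == max(3, 7)
```

The clone generated by a set of binary operations is the closure of that set, together with `pi1` and `pi2`, under `compose`.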
It is known that the interval $[{{{\mathscr O}}^{(1)}}, {{{\mathscr O}}^{(2)}} ] $ of binary clones is dually atomic, so it can be written $$[{{{\mathscr O}}^{(1)}}, {{{\mathscr O}}^{(2)}} ] = [{{{\mathscr O}}^{(1)}} , {{\sf T}}_1] \cup [{{{\mathscr O}}^{(1)}} , {{\sf T}}_2]\cup \{{{{\mathscr O}}^{(2)}} \},$$ i.e., every binary clone above ${{{\mathscr O}}^{(1)}} $ other than ${{{\mathscr O}}^{(2)}} $ itself is contained in ${{\sf T}}_1$ or in ${{\sf T}}_2$. So we will have to investigate the intervals $ [{{{\mathscr O}}^{(1)}}, {{\sf T}}_1]$ and $[{{{\mathscr O}}^{(1)}}, {{\sf T}}_2]$. We will see that these two structures are very different, and that this difference can be traced back to a difference in “complexity” of the two binary clones ${{\sf T}}_1$ and ${{\sf T}}_2$. More precisely, ${{\sf T}}_1$ is a Borel set, while ${{\sf T}}_2$ is a complete coanalytic set. We will see that ${{\sf T}}_1$ is finitely generated over ${{{\mathscr O}}^{(1)}}$, but ${{\sf T}}_2$ is not countably generated over ${{{\mathscr O}}^{(1)}} $. A function $f:{{\mathbb N}}\times {{\mathbb N}}\to {{\mathbb N}}$ is called “almost unary”, if at least one of the following holds:

1.  There is a function $F:{{\mathbb N}}\to {{\mathbb N}}$ such that $\forall x\,\forall y: f(x,y)\le F(x)$.

2.  There is a function $F:{{\mathbb N}}\to {{\mathbb N}}$ such that $\forall x\,\forall y: f(x,y)\le F(y)$.

We let ${{\sf T}}_1$ be the set of all binary functions which are almost unary. It is easy to see that ${{\sf T}}_1$ is a binary clone containing ${{{\mathscr O}}^{(1)}} $. Let $B \subseteq {{{\mathscr O}}^{(2)}}$. The set ${{\rm Pol}}(B)$ is defined as $$\bigcup_{k=1 }^\infty \{f\in {{{\mathscr O}}^{(k)}} : \forall g_1,\ldots, g_k\in B: f(g_1,\ldots, g_k)\in B\}$$ (Background and a more general definition of ${{\rm Pol}}$ can be found in [@PK:1979].) ${{\rm Pol}}(B)$ is a clone. If $B$ is a binary clone, then ${{\rm Pol}}(B) \cap {{{\mathscr O}}^{(2)}} = B$. 
Let $\Delta:= \{(x,y)\in {{\mathbb N}}\times {{\mathbb N}}: x > y\}$, $\nabla:= \{(x,y): x < y\}$. For $S_1,S_2\subseteq {{\mathbb N}}$ we let $\Delta_{S_1,S_2}: =\Delta\cap (S_1\times S_2)$. We define $\nabla_{S_1,S_2}$ similarly. If $S_1,S_2$ are infinite subsets of ${{\mathbb N}}$, and $g: \Delta_{S_1,S_2}\to {{\mathbb N}}$ or $g:\nabla_{S_1,S_2} \to {{\mathbb N}}$, then we say that $g$ is “canonical” iff one of the following holds: 1. $g$ is constant 2. There is a 1-1 function $G:S_1\to {{\mathbb N}}$ such that $$\forall (x,y)\in {{\rm dom}}(g): g(x,y) = G(x)$$ 3. There is a 1-1 function $G:S_2\to {{\mathbb N}}$ such that $$\forall (x,y)\in {{\rm dom}}(g): g(x,y) = G(y)$$ 4. $g$ is 1-1. The “type” of $g$ is one of the labels “constant”, “x”, “y”, or “1-1”, respectively. Let $f:{{\mathbb N}}\times {{\mathbb N}}\to {{\mathbb N}}$. We say that $f$ is canonical on $S_1\times S_2$ iff both functions $f{{\upharpoonright}}\Delta_{S_1,S_2}$ and $f{{\upharpoonright}}\nabla_{S_1,S_2}$ are canonical (but not necessarily of the same type), and moreover: > Either the ranges of $f{{\upharpoonright}}\Delta_{S_1,S_2}$ and $f{{\upharpoonright}}\nabla_{S_1,S_2}$ are disjoint,\ > or $S_1=S_2$, and $f(x,y)=f(y,x)$ for all $x,y\in S_1$. The following fact is a consequence of Ramsey’s theorem, see [@GoSh:737]. It was originally proved in a slightly different formulation already in [@Gavrilov:1965]. \[ramsey\] Let $f:{{\mathbb N}}\times {{\mathbb N}}\to {{\mathbb N}}$. Then there are infinite sets $S_1$, $S_2$ such that $f$ is canonical on $S_1\times S_2$. Moreover, for any infinite sets $S_1, S_2$ we can find infinite $S'_1\subseteq S_1 $, $S'_2\subseteq S_2$ such that $f$ is canonical on $S'_1\times S'_2$. Let $f:{{\mathbb N}}\times {{\mathbb N}}\to {{\mathbb N}}$. We say that $f$ is “nowhere injective”, if: > whenever $f$ is canonical on $S_1\times S_2$, then neither $f{{\upharpoonright}}\Delta_{S_1,S_2}$ nor $f{{\upharpoonright}}\nabla_{S_1,S_2}$ is 1-1. 
We let ${{\sf T}}_2$ be the set of all nowhere injective functions. Using fact \[ramsey\], it is easy to check that ${{\sf T}}_2$ is a binary clone; clearly ${{\sf T}}_2$ contains ${{{\mathscr O}}^{(1)}}$. (More precisely, ${{\sf T}}_2$ contains $\langle {{{\mathscr O}}^{(1)}} \rangle\cap {{{\mathscr O}}^{(2)}}$.) ${{\sf T}}_1$ and ${{\sf T}}_2$ are precomplete binary clones, and every binary clone containing ${{{\mathscr O}}^{(1)}}$ is either contained in one of ${{\sf T}}_1$, ${{\sf T}}_2$, or equal to the clone of all binary functions. \[For the non-binary case: ${{\rm Pol}}({{\sf T}}_1)$ and ${{\rm Pol}}({{\sf T}}_2)$ are precomplete clones, and every clone $\supseteq {{{\mathscr O}}^{(1)}}$ is either $={{\mathscr O}}$, or $\subseteq {{\rm Pol}}({{\sf T}}_1)$, or $\subseteq {{\rm Pol}}({{\sf T}}_2)$.\] We will prove the following: - (Section \[t1\]) ${{\sf T}}_1$ is finitely generated over ${{{\mathscr O}}^{(1)}} $, so the interval $[{{{\mathscr O}}^{(1)}}, {{\sf T}}_1]$ in the lattice of binary clones is dually atomic.\ In fact, the interval contains a unique coatom: ${{\sf T}}_1\cap {{\sf T}}_2$. - (Section \[t2\]) ${{\sf T}}_2$ is neither finitely nor countably generated over ${{{\mathscr O}}^{(1)}} $.\ ${{\sf T}}_1\cap {{\sf T}}_2$ is a coatom in the interval $[{{{\mathscr O}}^{(1)}}, {{\sf T}}_2]$. (Easy)\ Any clone which is a Borel set (or even an analytic set) cannot be a coatom in this interval. I am grateful to J. Jezek, K. Kearnes, R. Pöschel, A. Romanowska, A. Szendrei and R. Willard for inviting me to Bela Csakany’s birthday conference (Szeged, 2002), at which I first presented the main ideas from this paper. Descriptive Set Theory {#dst} ====================== We collect a few facts and notions from descriptive set theory. (For motivation, history, details and proofs see the textbooks by Moschovakis [@Moschovakis:1980] or Kechris [@Kechris:1995].) 
Let $X$ be a countable set (usually $X={{\mathbb N}}$, or $X={{\mathbb N}}^k$), and $Y$ a finite or countable set, $|Y|\ge 2$ (usually $Y={{\mathbb N}}$, or $Y= 2:= \{0,1\}$). $Y^X$ is the space of all functions from $X$ to $Y$. We equip $Y$ with the discrete topology, $Y^X$ and $Y^{X^n}$ with the product topology, and $\bigcup_{n=1}^\infty Y^{X^n}$ with the sum topology. All these spaces are “Polish spaces”, i.e., they are separable and carry a (natural) complete metric. The family of Borel sets is the smallest family $B$ that contains all open sets and is closed under complements and countable unions (equivalently: contains all open sets and all closed sets, and is closed under countable unions and countable intersections). A function $f$ between two topological spaces is called a Borel function iff the preimage of any Borel set under $f$ is again a Borel set. A [*finite sequence on $Y$*]{} is a tuple $(a_0,\ldots, a_{n-1})\in Y^n$. If $s\in Y^k$ and $t\in Y^n$ are finite sequences, $k<n$, then we write $s\vartriangleleft t$ iff $s$ is an initial segment of $t$. We write $Y^{<\omega} := \bigcup_{n\in {{\mathbb N}}} Y^n$ for the set of all finite sequences on $Y$. If $Y$ is countable, then also $Y^{<\omega}$ is countable. We can identify ${{\mathscr P}}(Y^{<\omega})$, the power set of $Y^{<\omega}$, with the set $2^{Y^{<\omega}}$ of all characteristic functions, so also ${{\mathscr P}}(Y^{<\omega})$ carries a natural topology. A “tree on $Y$” is a set $T \subseteq \bigcup_{n\in {{\mathbb N}}} Y^n$ of finite sequences which is downward closed, i.e., > whenever $t\in T$, $s\vartriangleleft t$, then also $s\in T$ The set of all trees is easily seen to be a closed subset of ${{\mathscr P}}(Y^{<\omega})$. For any tree $T$ on $Y$ we call $f\in Y^{{\mathbb N}}$ a [*branch*]{} of $T$ iff $\forall n: f{{\upharpoonright}}n \in T$. (Here we write $f{{\upharpoonright}}n$ for $(f(0), \ldots, f(n-1))$.) We write $[T]$ for the set of all branches of $T$. 
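These notions are easy to render concretely for finite fragments. The following sketch (ours; the helper names are invented) represents a finite fragment of a tree on $\mathbb N$ as a set of tuples, checks downward closure, and computes restrictions $f{\upharpoonright}n$ of a candidate branch.

```python
# Sketch: finite fragments of trees on N, represented as sets of tuples.
def is_tree(T):
    """Downward closed: every initial segment of each t in T is in T."""
    return all(t[:k] in T for t in T for k in range(len(t)))

# A tree carrying the branch 0, 0, 0, ... (truncated here at depth 5):
T = {(0,) * k for k in range(6)} | {(0, 1), (2,)}
assert is_tree(T)
assert not is_tree({(0, 1)})    # missing the initial segments () and (0,)

def restriction(f, n):
    """f|n = (f(0), ..., f(n-1)) for f given as a Python function."""
    return tuple(f(i) for i in range(n))

# The constant-0 function is a branch of T as far as this fragment extends:
assert all(restriction(lambda i: 0, n) in T for n in range(6))
```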
It is easy to see that $[T]$ is always a closed set in $Y^{{\mathbb N}}$, and that every closed set $\subseteq Y^{{\mathbb N}}$ is of the form $[T]$ for some tree $T$. We call a tree $T$ [*well-founded*]{} if $[T] = \emptyset$, i.e., if there is no sequence $s_0 \vartriangleleft s_1 \vartriangleleft \cdots$ of elements of $T$. We write ${{{\bf WF}}}$ for the set of all well-founded trees. The class of [*analytic sets*]{} is a proper extension of the class of Borel sets. There are several possible equivalent definitions of “analytic”, for example one could choose the equivalence (1)$\Leftrightarrow$(3) in fact \[def.ana\] as the definition of “analytic”. \[def.ana\] Let ${{\mathscr X}}$ be a Polish (=complete metric separable) topological space, $A \subseteq {{\mathscr X}}$, $C:= {{\mathscr X}}\setminus A$. Then the following are equivalent: 1. $A$ is analytic 2. $C$ is coanalytic 3. $A=\emptyset$, or there is a continuous function $f:{{\mathbb N}}^{{\mathbb N}}\to {{\mathscr X}}$ with $A=f[{{\mathbb N}}^{{\mathbb N}}]$ 4. There is a Borel set $B \subseteq {{\mathbb N}}^{{\mathbb N}}$ and a continuous function $f:{{\mathbb N}}^{{\mathbb N}}\to{{\mathscr X}}$ with $A=f[B]$ 5. There is a continuous function $f:{{\mathscr X}}\to {{\mathscr P}}({{\mathbb N}}^{<\omega})$ such that $C = f^{-1}[ {{{\bf WF}}}] $. 6. (Assuming ${{\mathscr X}}= Y^{{\mathbb N}}$.) There is a set $R \subseteq Y^{<\omega}\times {{\mathbb N}}^{<\omega}$ such that $$A = \{ f\in Y^{{\mathbb N}}: \ \exists g\in {{\mathbb N}}^{{\mathbb N}}\,\forall n\,\, (f{{\upharpoonright}}n, g{{\upharpoonright}}n) \in R \}$$ The coanalytic sets are just the sets whose complement is analytic. Borel sets are of course both analytic and coanalytic, and the “Separation theorem” states that the converse is true: Let $A\subseteq {{\mathscr X}}$ be both analytic and coanalytic. Then $A$ is a Borel set. Analytic sets have the following closure properties: \[closure\] 1. All Borel sets are analytic (and coanalytic). 2.
The countable union or intersection of analytic sets is again analytic. Similarly, the countable union or intersection of coanalytic sets is again coanalytic. 3. The continuous preimage of an analytic set is analytic. The continuous preimage of a coanalytic set is coanalytic. 4. The continuous image of an analytic set is analytic. (Note that the continuous image of a Borel set is in general not Borel.) 5. In particular, if $C \subseteq {{\mathbb N}}^{{\mathbb N}}\times {{\mathbb N}}^{{\mathbb N}}$ is a Borel set, then the set $\{f\in {{\mathbb N}}^{{\mathbb N}}: \exists g\in {{\mathbb N}}^{{\mathbb N}}\, (f,g)\in C\}$ is analytic, and the set $\{f\in {{\mathbb N}}^{{\mathbb N}}: \forall g\in {{\mathbb N}}^{{\mathbb N}}\, (f,g)\in C\}$ is coanalytic. However, while the Borel sets are closed under complements, the analytic sets are not. There are coanalytic sets which are not analytic, for example the set ${{{\bf WF}}}$. We call a set $D \subseteq Y^X$ “complete coanalytic” iff 1. $D$ is coanalytic 2. For any coanalytic set $C\subseteq Y^X$ there is a continuous function $F:Y^X\to Y^X$ with $C= F^{-1}[D]$. It is known that the set ${{\bf WF}}$ is complete coanalytic. In fact, ${{\bf WF}}$ is the “typical” coanalytic set: Let $D$ be coanalytic. Then $D$ is complete coanalytic iff there is a continuous function $F:{{\mathscr P}}({{\mathbb N}}^{<\omega})\to Y^X$ with ${{\bf WF}}= F^{-1}[D]$. Equivalently, $D$ is complete coanalytic iff there is a function as above which is defined only on the set of trees. The existence of coanalytic sets which are not analytic easily implies that a complete coanalytic set can never be analytic. The following theorem should be read as “analytic sets can never reach $ \omega_1$.” \[bound\] 1. Every coanalytic set is the union of an increasing $\omega_1$-chain of Borel sets. 2.
Let ${{\bf WF}}= \bigcup_{\alpha \in \omega_1} {{\bf WF}}_\alpha$ be an increasing union of Borel sets, and let $A \subseteq {{\bf WF}}$ be Borel (or even analytic).\
Then there is $\alpha\in \omega_1 $ such that $A\subseteq {{\bf WF}}_\alpha$. Clones below ${{\sf T}}_1$ {#t1} ========================== We fix a 1-1 function $p$ from ${{\mathbb N}}\times {{\mathbb N}}$ onto $ {{\mathbb N}}\setminus \{0\}$. Let $\chi_\Delta$ and $\chi_\nabla $ be the characteristic functions of $\Delta$ and $\nabla$, and let $p_\Delta:= p \cdot \chi_\Delta$, i.e., $p_\Delta(x,y) = p(x,y)$ for $x>y$, and $=0$ otherwise. Similarly, let $p_\nabla:= p\cdot \chi_\nabla$. The following is clear: - $\chi_\nabla$ and $\chi_\Delta$ are canonical, and in ${{\sf T}}_1\cap {{\sf T}}_2$. - $p_\Delta $ and $p_\nabla$ are in ${{\sf T}}_1\setminus {{\sf T}}_2$, and are canonical. - $p\notin {{\sf T}}_1\cup {{\sf T}}_2$. In fact, the only clone containing ${{{\mathscr O}}^{(1)}} \cup \{p\}$ is ${{\mathscr O}}$ itself. ${{\sf T}}_1$ is generated by $\{p_\Delta\}\cup {{{\mathscr O}}^{(1)}}$. Let ${\mathscr{C}}$ be the binary clone generated by $\{p_\Delta\}\cup {{{\mathscr O}}^{(1)}}$. We will first find a function $q\in {\mathscr{C}}$ satisfying 1. $q$ is 1-1 on $\Delta$ 2. $q(x,y) = Q(x)$ on $\nabla$, for some 1-1 function $Q$ 3. $q[\Delta]\cap q[\nabla] = \emptyset$. Note that any two functions $q$, $q'$ satisfying these properties will be equivalent, in the sense that there is a unary function $u$ with $q(x,y)=u(q'(x,y))$ for all $(x,y)\in \Delta\cup \nabla$. *(Figure: properties of $q$, simplified — on $\Delta$ the value of $q$ codes $p(x,y)$; on $\nabla$ it depends only on $x$.)* Let $P(x) = \max \{p(x,y): y\le x\}+1$, and let $$q(x,y):= p_\Delta(P(x), p_\Delta(x,y)).$$ Note that this actually means $q(x,y) = p(P(x), p_\Delta(x,y))$, as $P(x)>p_\Delta(x,y)$ for all $x,y$.
So, $$q(x,y) = \KNUTHcases{ p(P(x), p(x,y)) & for $x>y$\cr p(P(x),0) & for $x \le y$,\cr}$$ So $ q$ satisfies (1)–(3), and $q\in {\mathscr{C}}$. We now consider an arbitrary almost unary function $f$, say $f(x,y) < F(x)$ for all $x,y$. Wlog we assume $f(x,y)>0$ for all $(x,y)$. Let $p':{{\mathbb N}}\times {{\mathbb N}}\to {{\mathbb N}}$ be a 1-1 function satisfying $p'(x,y)>x$ for all $x,y$. Define $$\begin{aligned} f_1(x,y) &= \KNUTHcases{ p'(F(x), f(x,y)) & for $x> y$\cr F(x) & for $x\le y$\cr }\\ f_2(x,y) &= \KNUTHcases{\rlap{0} \hphantom{ p'(F(x), f(x,y)) } & for $x> y$\cr f(x,y) & for $x\le y$\cr } \end{aligned}$$ *(Figure: definitions of $f_1$ and $f_2$, simplified — $f_1$ equals a 1-1 code $p'(F(x),f(x,y))$ (always $>x$) for $x>y$ and $F(x)$ for $x\le y$; $f_2$ equals $0$ for $x>y$ and $f(x,y)$ for $x\le y$.)* Then $f_1(x,y) = u_1(q(x,y))$ for some unary $u_1$, and $f_2(x,y) = u_2(p_\Delta(y+1, x))$ for some unary $u_2$. So $f_1,f_2\in {\mathscr{C}}$. Let $f'(x,y):= p_\Delta(f_1(x,y),f_2(x,y))$. Now $f_2(x,y)< F(x) \le f_1(x,y)$ for all $x,y$, so $f'(x,y) = p(f_1(x,y),f_2(x,y))$. As $f(x,y)$ can be recovered from the pair $(f_1(x,y), f_2(x,y))$, and hence also from $f'(x,y)$, we conclude that $f(x,y) = v(f'(x,y))$ for some unary $v$. Hence $f\in {\mathscr{C}}$. If ${\mathscr{C}}\subseteq {{\sf T}}_1$ is a binary clone containing ${{{\mathscr O}}^{(1)}}$, then either ${\mathscr{C}}= {{\sf T}}_1$, or ${\mathscr{C}}\subseteq {{\sf T}}_2$. Hence: ${{\sf T}}_1\cap {{\sf T}}_2$ is the unique coatom in the interval $[{{{\mathscr O}}^{(1)}}, {{\sf T}}_1]$ of binary clones, and every binary clone in this interval (except for ${{\sf T}}_1$ itself) is included in $ {{\sf T}}_1\cap {{\sf T}}_2$. Assume $ {{{\mathscr O}}^{(1)}} \subseteq {\mathscr{C}}\subseteq {{\sf T}}_1$, but ${\mathscr{C}}\not \subseteq {{\sf T}}_2$.
So let $f\in {\mathscr{C}}\setminus {{\sf T}}_2$. Then there are 1-1 unary functions $u$ and $v$ such that $f(u(x), v(y))$ is canonical and 1-1 on $\Delta$ (or on $\nabla$). So wlog $f$ is canonical and 1-1 on $\Delta$. Moreover, either $f$ is symmetric, or ${{\rm ran}}(f{{\upharpoonright}}\nabla)\cap {{\rm ran}}(f{{\upharpoonright}}\Delta)= \emptyset$. In the first case, the function $f'(x,y):=f(2x, 2y+1)$ is 1-1 on all of ${{\mathbb N}}\times {{\mathbb N}}$, so $\langle \{f'\}\cup {{{\mathscr O}}^{(1)}}\rangle = {{\mathscr O}}$, which contradicts our assumption ${\mathscr{C}}\subseteq {{\sf T}}_1$. In the second case, we can find a unary function $u$ such that $$\forall x,y: \ u(f(2x,2y+1)) = p_\Delta(x,y),$$ so $p_\Delta\in{\mathscr{C}}$, i.e., ${\mathscr{C}}={{\sf T}}_1$. Pinsker [@Pinsker:2004a] has analyzed the interval $({{\sf T}}_1,{{\rm Pol}}({{\sf T}}_1))$ of (full) clones, and shown the following: Let ${\min^+}_n(x_1,\ldots, x_n):= x_{\pi (2)}$, where $\pi$ is any permutation such that $x_{\pi(1)}\le x_{\pi(2)}\le \cdots \le x_{\pi(n)}$.\
(So ${\min^+}_2(x,y)=\max(x,y)$, and ${\min^+}_3(x,y,z)$ is the median of $x,y,z$.) Then the clones ${{\mathscr M}}_n:=\langle {{\sf T}}_1\cup \{{\min^+}_n\}\rangle$ are all distinct, $${{\sf T}}_1 \subseteq \cdots \subsetneq {{\mathscr M}}_5 \subsetneq {{\mathscr M}}_4 \subsetneq {{\mathscr M}}_3 = {{\rm Pol}}({{\sf T}}_1) \subsetneq {{\mathscr M}}_2 = {{\mathscr O}},$$ and every clone in the interval $[{{\sf T}}_1,{{\rm Pol}}({{\sf T}}_1)]$ is equal to some ${{\mathscr M}}_n$. So ${{\mathscr M}}_4$ is a coatom in the interval $[{{{\mathscr O}}^{(1)}} , {{\rm Pol}}({{\sf T}}_1)]$ in the lattice of all clones. It is also easy to see that $ {{\rm Pol}}({{\sf T}}_1)\cap {{\rm Pol}}({{\sf T}}_2) = {{\rm Pol}}({{\sf T}}_1\cap {{\sf T}}_2) $ is another coatom. ${{\sf T}}_1$ is a Borel set.
The set ${{\sf T}}_1^{\rm x}:= \{f\in{{{\mathscr O}}^{(2)}}: \exists F \, \forall x,y:f(x,y)\le F(x)\}$ is apparently only $\Sigma_1^1$, but we can rewrite it as $$\begin{split} {{\sf T}}_1^{\rm x} &= \{ f\in{{{\mathscr O}}^{(2)}}: \forall x\,\exists z\, \forall y: f(x,y)\le z \} \\ &= \bigcap_{x\in {{\mathbb N}}} \, \bigcup_{z\in {{\mathbb N}}} \, \bigcap_{y\in {{\mathbb N}}} \, \bigcup_{t\le z} \, \{f\in {{{\mathscr O}}^{(2)}}: f(x,y) = t \}, \end{split}$$ which is $F_{\sigma\delta}$. ${{\sf T}}_1^{\rm y}$ can be defined similarly, and ${{\sf T}}_1 = {{\sf T}}_1^{\rm x}\cup {{\sf T}}_1^{\rm y}$. Clearly, ${{\rm Pol}}({{\sf T}}_1)$ is coanalytic. (See \[closure\](5).) By Pinsker’s theorem, ${{\rm Pol}}({{\sf T}}_1) = \langle {{\sf T}}_1\cup \{{\min^+}_3\}\rangle$ is finitely generated over ${{\sf T}}_1$, hence analytic and therefore even Borel. An explicit Borel description can be found in [@Pinsker:2004a]. Clones below ${{\sf T}}_2$ {#t2} ========================== In the previous section we have seen: ${{\sf T}}_1 = \langle {{{\mathscr O}}^{(1)}} \cup \{p_\Delta\}\rangle$. Thus, ${{\sf T}}_1$ is finitely generated over ${{{\mathscr O}}^{(1)}} $. The next theorem and its corollaries show that ${{\sf T}}_2$ is not finitely generated over ${{{\mathscr O}}^{(1)}} $. Let $B \subseteq {{\mathscr O}}$ be a Borel or analytic set. Then $\langle B\rangle$ is analytic. Similarly, if $B \subseteq {{{\mathscr O}}^{(2)}} $ is a Borel or analytic set, then $\langle B\rangle_{{{{\mathscr O}}^{(2)}}}$ (the binary clone generated by $B$) is analytic. Is there a Borel set $B$ (perhaps even a closed set? a countable set? a set of the form ${{{\mathscr O}}^{(1)}} \cup \{f_1,\ldots, f_n\}$?) such that $\langle B\rangle$ is not Borel? ${{\sf T}}_2$, ${{\rm Pol}}({{\sf T}}_2)$, ${{\sf T}}_1\cap {{\sf T}}_2$ and ${{\rm Pol}}({{\sf T}}_1\cap {{\sf T}}_2)$ are complete coanalytic sets. 
We will define a continuous map $F$ from the set of all trees $T \subseteq {{\mathbb N}}^{<\omega}$ into ${{\sf T}}_1$ such that > for all $T $: $T$ is well-founded iff $F(T)\in {{\sf T}}_2$. Let $\{s_n:n\in {{\mathbb N}}\}$ enumerate all finite sequences of natural numbers, with $ s_k\vartriangleleft s_n \Rightarrow k<n$. For any tree $T \subseteq \{s_n:n\in {{\mathbb N}}\}$ let $F(T)$ be defined as follows: $$F(T)(k,n) = \KNUTHcases { p(k,n) & if $k<n$ and $s_k,s_n\in T$, $s_k\vartriangleleft s_n$\cr 0 & otherwise\cr }$$ Now if $s_{n_1}\vartriangleleft s_{n_2}\vartriangleleft \cdots$ is an infinite branch in $T$ and $A:=\{n_1,n_2,\ldots\}$, then $F(T){{\upharpoonright}}\nabla_{A,A} $ is 1-1. Conversely: Assume $A = \{n_1<n_2< \cdots \}$, $B= \{m_1<m_2<\cdots \}$, and $F(T){{\upharpoonright}}\nabla_{A,B}$ is 1-1.\
We claim that $s_{n_1}\vartriangleleft s_{n_2}$. Indeed, for any large enough $k$ we have $F(T)(n_1,m_k)\not=0$, so $s_{n_1}\vartriangleleft s_{m_k}$, and similarly $s_{n_2}\vartriangleleft s_{m_k}$. So $s_{n_1}\vartriangleleft s_{n_2}$.\
Similarly we get $s_{n_1}\vartriangleleft s_{n_2} \vartriangleleft s_{n_3}\vartriangleleft \cdots $. ${{\rm Pol}}({{\sf T}}_2)$, ${{\sf T}}_2$, ${{\sf T}}_2\cap {{\sf T}}_1$ are not countably generated over ${{{\mathscr O}}^{(1)}}$. If $C$ is a countable set, then $C\cup {{{\mathscr O}}^{(1)}} $ is Borel, so $\langle C\cup {{{\mathscr O}}^{(1)}} \rangle$ is analytic, hence not complete coanalytic. The well-known analysis of coanalytic sets now gives the following: There is a sequence $(C_i:i\in \omega_1)$ of Borel clones such that: $i<j$ implies $C_i\subsetneq C_j$, $\bigcup_{i\in \omega_1} C_i = {{\sf T}}_2$, and:\
For every analytic clone $C\subseteq {{\sf T}}_2$ there is $i< \omega_1$ such that $C\subseteq C_i$. In other words: There is an increasing family of $\aleph_1$ many analytic clones below ${{\sf T}}_2$ such that every analytic clone below ${{\sf T}}_2$ is covered by a clone from the family.
A similar representation can be found for ${{\rm Pol}}({{\sf T}}_2)$, ${{\sf T}}_1\cap {{\sf T}}_2$, etc. By \[bound\](1), we can find an increasing family of Borel sets $(B_i:i<\omega_1)$ such that ${{\sf T}}_2=\bigcup_i B_i$. Clearly each clone $\langle B_i\rangle$ is analytic. By the boundedness theorem (\[bound\](2)) we know that for all $i$ there is $j$ with $\langle B_i\rangle \subseteq B_j$. Let $h:\omega_1\to \omega_1$ be continuous and strictly increasing such that $\forall i: \langle B_i\rangle \subseteq B_{h(i)}$. Now the family $\{B_i: h(i)=i\}$ is as desired. Find a nice cofinal family in $\{{\mathscr{C}}: {\mathscr{C}}\subsetneq {{\sf T}}_2\}$. I.e., a nice family ${\mathscr{F}}$ such that $\forall {\mathscr{C}}\subsetneq {{\sf T}}_2$ there is ${\mathscr{C}}'\in {\mathscr{F}}$ with ${\mathscr{C}}\subseteq {\mathscr{C}}'$. Since we already have a family that covers all analytic clones, this question really asks: which nonanalytic clones are there below ${{\sf T}}_2$? Can we get a family $B_i$ as in the theorem where each $B_i$ is generated by a single function? Analyze the interval $[ {{\sf T}}_2, {{\rm Pol}}({{\sf T}}_2)]$.
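The encoding $F(T)$ used in the completeness proof for ${{\sf T}}_2$ can be experimented with on finite fragments. The sketch below is ours: the enumeration of sequences and the pairing $p$ are hypothetical concrete choices satisfying the stated requirements (shorter-first enumeration, so $s_k\vartriangleleft s_n$ implies $k<n$; $p$ 1-1 into $\mathbb N\setminus\{0\}$), and a finite check can of course only exhibit injectivity along an initial piece of a branch.

```python
# Sketch: a finite rendition of the map T |-> F(T) from the proof.
from itertools import product

def seqs(max_len, max_val):
    """Finite sequences over {0..max_val-1}, shorter ones first, so that
    s_k a proper initial segment of s_n implies k < n."""
    out = []
    for length in range(max_len + 1):
        out.extend(product(range(max_val), repeat=length))
    return out

def p(x, y):
    """A 1-1 pairing into N \\ {0} (shifted Cantor pairing)."""
    return (x + y) * (x + y + 1) // 2 + y + 1

def F(T, s):
    """F(T)(k, n): p(k, n) if k < n and s_k, s_n in T with s_k a proper
    initial segment of s_n; 0 otherwise."""
    def f(k, n):
        prefix = len(s[k]) < len(s[n]) and s[n][:len(s[k])] == s[k]
        if k < n and s[k] in T and s[n] in T and prefix:
            return p(k, n)
        return 0
    return f

s = seqs(3, 2)
T = {(0,) * k for k in range(4)}          # tree with the branch 0,0,0,...
f = F(T, s)
branch = [s.index((0,) * k) for k in range(4)]
vals = [f(branch[i], branch[j]) for i in range(4) for j in range(i + 1, 4)]
# Along the branch indices, F(T) is nonzero and injective:
assert 0 not in vals and len(set(vals)) == len(vals)
```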
--- abstract: 'We present a catalog of near-infrared surveys of young ($\la$ a few 10$^6$yr) stellar groups and clusters within 1 kpc from the Sun, based on an extensive search of the literature from the past ten years. We find 143 surveys from 69 published articles, covering 73 different regions. The number distribution of stars in a region has a median of 28 and a mean of 100. About 80% of the stars are in clusters with at least 100 members. By a rough classification of the groups and clusters based on the number of their associated stars, we show that most of the stars form in large clusters. The spatial distribution of cataloged regions in the Galactic plane shows a relative lack of observed stellar groups and clusters in the range 270$^\circ< {\it l}<$ 60$^\circ$ of Galactic longitude, reflecting our location between the Local and Sagittarius arms. This compilation is intended as a useful resource for future studies of nearby young regions of multiple star formation.' author: - 'Alicia Porras, Micol Christopher, Lori Allen, James Di Francesco, S. Thomas Megeath, and Philip C. Myers' title: A Catalog of Young Stellar Groups and Clusters Within 1 kpc of the Sun --- Introduction ============ There is much evidence to support the idea that many, and perhaps most, stars form in multiple systems (e.g. Lada et al. 1991, Carpenter 2000). The degree of observed clustering varies greatly between star-forming regions, ranging from the low-density Taurus-Auriga complex, where localized stellar densities are typically of order 1-2 pc$^{-3}$ (Jones & Herbig 1979, Gómez et al. 1993, Larson 1995), to the rich Orion Nebula cluster, where the central stellar density approaches $2\times10^4$ pc$^{-3}$ (Hillenbrand & Hartmann 1998). How this range of clustering takes place is still under study, and clearly there is much to be learned about the processes governing star formation through the examination of young multiple systems (Elmegreen et al. 2000, Meyer et al. 2000).
Additionally, in recent years, the evolution in infrared (IR) array size and sensitivity has led to a larger sample of young multiple systems. Therefore, the goal of this paper is to draw together a useful sample of the closest embedded stellar groups and clusters (within 1 kpc from the Sun), as an aid for more detailed comparative studies. This paper extends two recent catalogs: (1) the catalog of infrared clusters of Bica, Dutra & Barbuy (2003), which covers clusters out to distances $\sim$15 kpc using NIR data as in the present study; and (2) the catalog of Lada & Lada (2003), which includes clusters within $\sim$2 kpc of the Sun with 35 or more stars. The present catalog covers a smaller distance from the Sun, but gives more entries within that distance than do the catalogs of Bica et al. (2003) and Lada & Lada (2003). Many of the regions presented here are also observed in the molecular gas ($^{13}$CO and C$^{18}$O) survey by Ridge et al. (2003). We consider a “young” region of multiple star formation to be one where the stars are still “embedded” in or associated with substantial molecular gas. The ages of these stars are typically about a few Myr or less, according to their positions on the Hertzsprung-Russell diagram (Lada & Lada 2003, Hartmann 2001). Our sample is limited to regions of multiple star formation within 1 kpc since at greater distances, surveys begin to suffer significant stellar incompleteness due to increasing extinction, poor resolution, and poor sensitivity. Also, we chose to focus on infrared observations because young groups and clusters are often heavily obscured by the gas and dust in their parent molecular clouds, and extinction due to interstellar dust is approximately ten times less at 2 $\mu$m than in the visual V band. Our catalog contains information which we think will be relevant to future studies of young stellar groups and clusters. We provide a list of regions imaged, cross-indexed to their published surveys.
We also include areas covered by the surveys, their depth, and, where available, completeness, distances to the regions, and the number of stars detected. For simplicity, in this paper we adopt the term “regions of multiple star formation” or “regions” for short, to refer to all stellar groupings which meet the criteria of §2.1. Then we classify these “regions” based on the number of their associated stars in §3. Catalog of Regions of Multiple Star Formation ============================================= Scope of the Catalog -------------------- In constructing the catalog, we wished to compile a resource that would be helpful in future research. Consequently, we restricted our catalog to surveys meeting certain criteria. The survey must: - contain observations in the near-infrared in at least one of these bands: J (1.25 $\mu$m), H (1.65 $\mu$m), K (2.2 $\mu$m), K$^{\prime}$ (2.11 $\mu$m), or K$_s$ (2.2 $\mu$m). - be centered on a young star forming region in association with molecular gas and/or exhibiting other signposts of star formation, such as the presence of Herbig Ae/Be stars, outflows, HII regions or reflection nebulae at an estimated distance within $\sim$1 kpc of the Sun. - indicate clustering on at least a minimal scale, considered in this work to be at least 5 “associated” stars. - have been designed as a survey of an area for near-IR sources (as opposed to searches for binaries or other phenomena). - have been conducted within approximately the last 10 years (up to November 2002). - have been published in any of the following astronomical publications searched by ADS: ApJ, ApJS, AJ, MNRAS, PASP, A&A and A&AS. As described above, near-infrared wavelengths were selected because of their ability to penetrate the extinction caused by dust in star forming regions. Regions within 1 kpc, owing to their distance, have been surveyed to much greater depths (i.e. fainter stars have been recorded), allowing for greater completeness in the stellar samples. 
Likewise, developments in observational instruments and techniques have allowed recent surveys to obtain deeper and wider coverage than older surveys, while also confirming the information from these previous studies. Therefore, no information is lost by restricting our catalog to the more recent surveys. Under these conditions, we expect most large clusters to be included. Our surveys catalog contains 143 entries compiled from 69 different papers and covering 73 star forming regions. Twenty-five of the entries in our catalog were recorded in the Hodapp (1994) survey of IRAS sources with molecular outflows and 14 are in recent surveys of Herbig Ae/Be stars by Testi et al. (1997; 1998; 1999). Forty-seven of the 73 entries in our catalog of groups and clusters also appear in the catalog of Bica, Dutra & Barbuy (2003) and 25 clusters in the catalog of Lada & Lada (2003). Twenty-seven of the 30 regions studied by Ridge et al. (2003) are coincident with our catalog. Construction of the Catalog --------------------------- The first step in constructing the catalog was to identify all regions of nearby star formation. This was done through a combination of [*a priori*]{} knowledge and searches on the NASA Astrophysical Data System (ADS)[^1]. Once this list of regions was compiled, ADS searches were conducted on each region, searching for articles containing “infrared” and the regions’ names in the titles. For most regions this produced a number of journal articles. Relevant information on the surveys described in each of these articles was recorded. In addition, any surveys referenced in each article were noted and subsequently investigated. This search method produced 109 of the 143 entries recorded in the catalog. In addition, SIMBAD[^2] searches were conducted on these regions, either by searching on an approximate central celestial coordinate or on stars known within the region. 
In addition, ADS author searches were conducted for authors whose other surveys were already included in the catalog. SIMBAD and author searches combined to add another 2 entries. Finally, this preliminary catalog was sent to 41 scientists involved in near-infrared research with requests for their input regarding additional surveys that they had conducted, or of which they were aware. Input was received from approximately 30 people and resulted in 8 entries in our catalog. More recently, 24 entries were added based only on ADS searches under “JHK photometry” and “young stellar clusters”, bringing the total number of surveys to 143, up to November 2002. Seventeen of these 24 entries were studies based on the point source catalog of the 2 Micron All Sky Survey (2MASS, Beichman et al. 1998) data. In the future, it should be possible to extend this catalog based on new data from 2MASS[^3] and SIRTF observations. Description of the Catalog -------------------------- Table \[tbl-1\] contains the catalog of near-IR surveys of star-forming regions. We include in the table all relevant information about each survey that could aid in selection for future studies, particularly studies of the spatial distribution of stars within clusters. Because there is considerable overlap between many of the surveys, we present in Table \[tbl-2\], a catalog of the groups and clusters themselves. For this reason, columns 1 and 2 in Table \[tbl-1\] contain survey and group or cluster identification numbers. These are also included in Table \[tbl-2\] to facilitate cross-indexing between the tables. Columns 3 and 4 in Table \[tbl-1\] contain the approximate center position of each survey, arranged in order of increasing RA. These centers were either listed by the authors or were calculated from the RA and Dec coordinates given for the entire survey by averaging the minimum and maximum coordinates observed. Columns 5 and 6 give the spatial extent of each survey, in arc minutes. 
The range denotes the full extent of the survey and not the maximum offset from the center position. Column 7 presents the name of the star-forming region as listed by the author. Columns 8 - 10 give the quoted completeness limits of each survey in J, H, and K bands respectively. Since completeness estimates are reported in various ways, the notes to Table 1 contain explanatory comments. Finally, Column 11 contains the references used to compile the catalog. Some articles, most notably Hodapp (1994) and Testi et al. (1997, 1998, 1999) contain information on surveys of multiple star-forming regions. Each individual region meeting the criteria in §2.1 is listed separately in the catalog, hence the repetition of Hodapp and Testi et al. references. Table \[tbl-2\] contains the regions covered by the surveys listed in Table \[tbl-1\]. Columns 1 and 2 are the same as in Table 1 and column 3 gives the most commonly used name (or names) for the cluster. Columns 4 and 5 give the cluster RA and Dec. In cases where more than one survey was made of a region, the coordinates listed in Table \[tbl-2\] correspond to the center positions of the survey which covered the largest area. We do not attempt to refine the central positions, however in some cases, more detailed estimates of the coordinates and sizes of regions have been done by Bica, Dutra & Barbuy (2003). Column 6 gives the distance to the group or cluster, as given by the authors. In cases where more than one survey has covered the region and the distances differ, we have adopted the value more generally accepted. Column 7 contains the number of stars ($A$) that are associated with the star-forming region according to the original authors. When there are several estimates in the literature of the number of associated stars in one region, we adopt values that seem to be more complete, according to the size of the surveyed area and the sensitivity of the photometric measurements. 
In some cases, we adopt the $I_c$ value defined by Testi et al. (1999) as “the integral over distance of the source surface density profile subtracted by the average source density measured at the edge of each field”. Note that we include only regions with $A\ge$ 5 (§2.1). Finally, the last three columns list the references for the adopted values of coordinates, distance and associated stars. These numbers are the same as in references at the end of Table \[tbl-1\]. We note that there are some cases in which the values of distance, for groups and clusters in the same cloud or region, may change from author to author. Such is the case of NGC 2023, NGC 2024, NGC 2068 and NGC 2071, which are all associated with the Orion B molecular cloud at a distance of $\sim$400-500 pc, but values listed in Table \[tbl-2\] differ because they correspond to estimates given by different authors. A similar situation occurs with regions XY Per and LkH$\alpha$ 101, and with regions IRAS 06046-0603, Mon R2, GGD 12-15, and GGD 17. Discussion ========== Classification -------------- The groups and clusters in Table \[tbl-2\] differ greatly in their properties. However, a very crude classification based on the number of associated stars in the regions is possible, despite the differences in observational sensitivity, resolution, and wavelength coverage from region to region. The number of apparently associated stars is given by the original authors for 76 of the 77 (99%) groups or clusters listed in Table \[tbl-2\]. We make no attempt to improve the quality of these estimates, but take them as given. We adopt the term “region of multiple star formation” to mean a stellar concentration with at least five members. We use this general term to include “clusters” and “groups”. We follow the standard usage where a “cluster” has a larger spatial extent and/or a greater surface density than a “group”.
The number of members which divides a cluster from a group differs somewhat from author to author, and here we adopt 30 members, the approximate number of stars required for a typical open cluster to survive against evaporation (Binney & Tremaine 1987, Adams & Myers 2001, Lada & Lada 2003). For convenience we further divide “small” and “large” clusters at 100 members. Thus, we call a region of multiple star formation a “group” if it has 5-30 members, a “small cluster” if it has 31-100 members, and a “large cluster” if it has more than 100 members. Statistics ---------- Considering the number distribution of 7202 associated stars in Table \[tbl-2\], the median number of members is 28 and the mean is 100. Nearly all the stars are in the most massive clusters: 80% are in 17 clusters with at least 100 members, and about 50% are in the 5 clusters with at least 345 members. Considering “groups”, “small clusters”, and “large clusters”, we note that the choice of dividing lines between these categories is arbitrary, but this particular choice gives a substantial number of regions (38, 18, and 16) in each category. We depict the number of associated stars in a histogram (see Fig. \[fig1\]). Fig. \[fig1\] shows that the number of groups exceeds the number of large clusters, but as expected most of the stars are contained in the large clusters. In other words, the fraction of the associated stars (8%, 12%, 80%) in groups, small and large clusters, trends inversely with the fraction of these regions (53%, 25%, 22%) in the solar neighborhood. This result reinforces a point made by many investigators (Lada et al. 1991, Carpenter et al. 2000, Lada & Lada 2003), that most stars form in large clusters. Spatial Distribution -------------------- To convey an idea of the Galactic area covered by the surveys in this catalog, Fig. \[fig2\] shows a projected view of the spatial distribution of the young regions. 
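The three-way classification adopted above is a simple thresholding rule on the number of associated stars $A$. A minimal sketch of that rule (the example member counts are hypothetical, not taken from the catalog):

```python
from collections import Counter

def classify_region(n_members):
    """Classify a region of multiple star formation by its number of
    associated stars A, following the (arbitrary) dividing lines in
    the text: 5-30 -> group, 31-100 -> small cluster, >100 -> large cluster."""
    if n_members < 5:
        raise ValueError("fewer than 5 members: not a region of multiple star formation")
    if n_members <= 30:
        return "group"
    if n_members <= 100:
        return "small cluster"
    return "large cluster"

# Illustrative tally over hypothetical member counts:
counts = Counter(classify_region(a) for a in [7, 12, 28, 45, 80, 150, 345, 1000])
```

Applied to the actual catalog counts, this rule yields the 38/18/16 split quoted in the Statistics subsection.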
A different symbol size is used to show the rough classification of the content of stars: small circles for 5 $\leq A \leq$ 30, medium size circles for 30 $< A \leq$ 100, and large circles for $A >$ 100. A single dot shows the position of the one region without an estimate of associated stars. As can be seen, there is no clear correlation between the number of stars in the regions ($A$) and their distance from the Sun. Also, from the K$_{lim}$ and the distance values, we conclude that the clusters with the most associated stars do not come preferentially from the deeper surveys. It is striking that the spatial distribution of regions of multiple star formation is non-uniform, with a much lower surface density of observed groups and clusters in the Galactic longitude range 270$^\circ< {\it l}<$ 60$^\circ$ than elsewhere. This deficiency can be understood from the absence of molecular clouds in the inter-arm region between the Local spiral arm and the Sagittarius arm, which lies towards the fourth quadrant at $\sim$1700 pc from the Sun (Dame et al. 1987). More systematic observations of young clusters, for example via 2MASS and SIRTF, will help to greatly extend the type of analysis given here, to gain insight into the processes which make groups and clusters, as discussed in Meyer et al. (2001), Clarke et al. (2000), and Adams & Myers (2001). Conclusions =========== Our main conclusions are: 1. Most stars are in a relatively few large clusters. About 80% of the stars in the sample are in large clusters with more than 100 members. These large clusters represent 22% of the regions of multiple star formation. 2. Most regions of multiple star formation are small groups, whose total population of stars is relatively small. Groups with 5-30 members represent 53% of the regions of multiple star formation, yet the total number of stars in such groups is only about 8% of the stars in the sample. 3. 
The spatial distribution of regions of multiple star formation follows roughly the distribution of molecular gas, and shows an asymmetry between northern and southern latitudes expected from the placement of the nearest Galactic spiral arms. We thank Tom Dame for helpful discussion, Charles Lada for an advance copy of Lada & Lada (2003), and an anonymous referee for useful comments and suggestions. We thank those who responded to our inquiries regarding additional NIR surveys. Part of this work was undertaken at the Smithsonian Astrophysical Observatory Research Experience for Undergraduates (REU) program. A. P. acknowledges support from the SIRTF Legacy Program via the University of Texas contract UTA 02-370 to SAO. Adams, F. C., & Myers, P. C., 2001, , 553, 744 Ali, B., & DePoy, D. L. 1995, , 109, 709 Allen, L. E., Myers, P. C., Di Francesco, J., Mathieu, R., Chen, H., & Young, E. 2002, , 566, 993 Alves, J. F., & Yun, J. L. 1995, , 438, L107 Aspin, C., & Barsony, M. 1994, , 288, 849 Aspin, C., & Sandell, G. 1997, , 289, 1 Aspin, C., Sandell, G., & Russell, A. P. G. 1994, , 106, 165 Aspin, C., & Walther, D. M. 1990, , 235, 387 Barsony, M., Burton, M. G., Russell, A. P. G., Carlstrom, J. E., & Garden, R. 1989, , 346, L93 Barsony, M., Kenyon, S. J., Lada, E. A., & Teuben, P. J. 1997, , 112, 109 Beichman, C. A., Chester, T. J., Skrutskie, M., Low, F. J., & Gillett, F. 1998, , 110, 480 Bica, E., Dutra, C. M., & Barbuy, B., 2003, , 397, 177 Binney, J., & Tremaine, S., “Galactic dynamics”, Princeton, NJ, Princeton University Press, 1987, p. 187 Burkert, A., Stecklum, B., Henning, T., & Fischer, O. 2000, , 353, 153 Cambresy, L., Copet, E., Epchtein, N., de Batz, B., Borsenberger, J., Fouque, P., Kimeswenger, S., & Tiphene, D. 1998, , 338, 977 Cambresy, L., Epchtein, N., Copet, E., de Batz, B., Kimeswenger, S., Le Bertre, T., Rouan, D., & Tiphene, D. 1997, , 324, L5 Carpenter, J. M., 2000, , 120, 3139 Carpenter, J. M., Heyer, M. H., & Snell, R. 
L., 2000, , 130, 381 Carpenter, J. M., Meyer, M. R., Dougados, C., Strom, S. E., & Hillenbrand, L. A. 1997, , 114, 198 Chen, H., Tafalla, M., Greene, T. P., Myers, P. C., & Wilner, D. J. 1997, , 475, 163 Chen, H., & Tokunaga, A. T. 1994, , 90, 149 Clarke, C. J., Bonnell, I. A., & Hillenbrand, L. A., 2000, in Protostars and Planets IV, ed. V. Mannings, A. P. Boss, & S. S. Russell (Tucson: Univ. Arizona Press), 151 Comeron, F., Rieke, G. H., Burrows, A., & Rieke, M. J. 1993, , 416, 185 Comeron, F., Rieke, G. H., & Neuhauser, R. 1999, , 343, 477 Comeron, F., Rieke, G. H., & Rieke, M. J. 1996, , 473, 294 Dame, T. M., Ungerechts, H., Cohen, R. S., de Geus, E. J., Grenier, I. A., May, J., Murphy, D. C., Nyman, L.-Å., & Thaddeus, P., 1987, , 322, 706 DePoy, D. L., Lada, E. A., Gatley, I., & Probst, R. 1990, , 356, L55 Eiroa, C., & Casali, M. M. 1992, , 262, 468 Elmegreen, B. G., Efremov, Y., Pudritz, R. E., Zinnecker, H., 2000. Observations and Theory of Star Cluster Formation. In: Protostars and Planets IV, Tucson: University of Arizona Press; eds. Mannings, V., Boss, A. P. and Russell, S. S., p. 179 Evans II, N. J., Mundy, L. G., Kutner, M. L., & DePoy, D. L. 1989, , 346, 212 Giovannetti, P., Caux, E., Nadeau, D., & Monin, J.-L. 1998, , 330, 990 Gómez, M., Hartmann, L., Kenyon, S. J., & Hewett, R. 1993, , 105, 1927 Gómez, M., & Kenyon, S. J. 2000, , 121, 974 Greene, T. P., & Young, E. T. 1992, , 395, 516 Haisch, K. E., Lada, E. A., & Lada, C. J. 2000, , 120, 1396 Hartmann, L. W. 2001, , 121, 1030 Herbig, G. H., & Dahm, S. E. 2002, , 123, 304 Hillenbrand, L. A., & Carpenter, J. M. 2000, , 540, 236 Hillenbrand, L. A., & Hartmann, L. W. 1998, , 492, 540 Hillenbrand, L. A., Meyer, M. R., Strom, S. E., & Skrutskie, M. F. 1995, , 109, 280 Hodapp, K.-W. 1994, , 94, 615 Hodapp, K.-W., & Deane, J. 1993, , 88, 119 Hodapp, K.-W., & Rayner, J. 1991, , 102, 1108 Howard, E. M., Pipher, J. L., & Forrest, W. J. 1994, , 425, 707 Jones, B., & Herbig, G. H. 1979, , 84, 1872 Jones, T. 
J., Mergen, J., Odewahn, S., Gehrz, R. D., Gatley, I., Merrill, K. M., Probst, R., & Woodward, C. E. 1994, , 107, 2120 Kaas, A. A., 1999, , 118, 558 Kenyon, S. J., Lada, E. A., & Barsony, M. 1998, , 115, 252 Lada, C. J., Alves, J., & Lada, E. A. 1996, , 111, 1964 Lada, C. J., & Lada, E. A., 2003, , 41, 57 Lada, C. J., Muench, A. A., Haisch, K. E., Lada, E. A., Alves, J. F., Tollestrup, E. V., & Willner, S. P. 2000, , 120, 3162 Lada, C. J., Young, E. T., & Greene, T. P. 1993, , 408, 471 Lada, E. A., DePoy, D. L., Evans II, N. J., & Gatley, I. 1991, , 371, 171 Lada, E. A., & Lada, C. J. 1995, , 109, 1682 Larson, R. 1995, , 272, 213 Li, W., Evans II, N. J., & Lada, E. A. 1997, , 488, 277 Liseau, R., Lorenzetti, D., Nisini, B., Spinoglio, L., & Moneti, A. 1992, , 265, 577 Lorenzetti, D., Spinoglio, L., & Liseau, R. 1993, , 275, 489 Luhman, K. L. 2001, , 560, 287 Luhman, K. L., Rieke, G. H., Young, E. T., Cotera, A. S., Chen, H., Rieke, M., Schneider, G., & Thompson, R. I. 2000, , 540, 1016 Massi, F., Lorenzetti, D., Giannini, T., Vitali, F., 2000, , 353, 598 Massi, F., Giannini, T., Lorenzetti, D., Liseau, R., Moneti, A., & Andreani, P. 1999, , 136, 471 McCaughrean, M. J., & Stauffer, J. R. 1994, , 108, 1382 Meyer, M. R., Adams, F. C., Hillenbrand, L. A., Carpenter, J. M., & Larson, R. B., 2000. The Stellar Initial Mass function: Constraints from Young Clusters and Theoretical Perspectives. In: Protostars and Planets IV, Tucson: University of Arizona Press; eds. Mannings, V., Boss, A. P. and Russell, S. S., p. 121 Muench, A. A., Lada, E. A., Lada, C. J. & Alves, J. 2002, , 573, 366 Nakajima, Y., Tamura, M., Oasa, Y., & Nakajima, T. 2000, , 119, 873 Oasa, Y., Tamura, M., & Sugitani, K. 1999, , 526, 336 Persi, P., Marenzi, A. R., Kaas, A. A., Olofsson, G., Nordh, L., & Roth, M. 1999, , 117, 439 Petr, M. G., du Foresto, V. C., Beckwith, S. V. W., Richichi, A., & McCaughrean, M.J. 1998, , 500, 825 Rebull, L. M., Makidon, R. B., Strom, S. E., Hillenbrand, L. 
A., Birmingham, A., Patten, B. M., Jones, B. F., Yagi, H., & Adams, M. T. 2002, , 123, 1528 Ridge, N. A., Wilson, T. L., Megeath, S. T., Allen, L. E., & Myers, P. C., 2003, accepted in (http://arxiv.org/abs/astro-ph/0303401) Simon, M., Close, L. M., & Beck, T. L. 1999, , 117, 1375 Sogawa, H., Tamura, M., Gatley, I., & Merrill, K. M. 1997, , 113, 1057 Strom, K. M., Kepner, J., & Strom, S. E. 1995, , 438, 813 Strom, K. M., Strom, S. E., & Merrill, K. M. 1993, , 412, 233 Sugitani, K., Fukui, Y., & Ogura, K. 1991, , 77, 59 Sugitani, K., Tamura, M., & Ogura, K. 1995, , 455, L39 Tapia, M., Persi, P., Bohigas, J., & Ferrari-Toniolo, M. 1997, , 113, 1769 Tej, A., Sahu, K. C., Chandrasekhar, T., & Ashok, N. M. 2002, , 578, 523 Testi, L., Palla, F., & Natta, A. 1998, , 133, 81 Testi, L., Palla, F., & Natta, A. 1999, , 342, 515 Testi, L., Palla, F., Prusti, T., Natta, A., & Maltagliati, S. 1997, , 320, 159 Thompson, R. I., Corbin, M. R., Young, E., & Schneider, G. 1998, , 492, L177 Tuthill, P. G., Monnier, J. D., Danchi, W. C., Hale, D. D. S., & Townes, C. H., 2002, , 577, 826 Wilking, B. A., Greene, T. P., Lada, C. J., Meyer, M. R., & Young, E. T. 1992, , 397, 520 Wilking, B. A., McCaughrean, M. J., Burton, M. G., Giblin, T., Rayner, J.-T., & Zinnecker, H. 1997, , 114, 2029 Yao, Y., Hirata, N., Ishii, M., Nagata, T., Ogawa, Y., Sato, S., Watanabe, M., & Yamashita, T. 1997, , 490, 281 [^1]: http://adswww.harvard.edu [^2]: http://simbad.harvard.edu [^3]: available at http://www.ipac.caltech.edu/2mass/releases/allsky
PM–97–01 February 1997 **SUSY HIGGS BOSON DECAYS** ABDELHAK DJOUADI *Laboratoire de Physique Mathématique et Théorique, UPRES–A 5032,* *Université de Montpellier II, F–34095 Montpellier Cedex 5, France.* E-mail: djouadi@lpm.univ-montp2.fr Introduction ============ In the Minimal Supersymmetric extension of the Standard Model$^1$ (MSSM), the Higgs sector$^2$ is extended to comprise three neutral particles, $h$ and $H$ (CP=+) and $A$ (CP=–), and a pair of charged scalar particles $H^\pm$. The Higgs sector is highly constrained since there are only two free parameters at tree–level: a Higgs mass parameter \[generally $M_A$\] and the ratio of the vacuum expectation values of the two doublet fields responsible for the symmetry breaking, ${\mbox{tg}\beta}$ \[which in Grand Unified Supersymmetric models with $b$–$\tau$ Yukawa coupling unification$^3$ is forced to be either small, ${\mbox{tg}\beta}\sim 1.5$, or large, ${\mbox{tg}\beta}\sim 50$\]. After the inclusion of the large radiative corrections$^4$, the lightest Higgs boson $h$ is predicted to have a mass $M_h \lesssim 130$ GeV, while the $H,A$ and $H^\pm$ states are expected to have masses of the order of a few hundred GeV. 
The decay pattern of the MSSM Higgs bosons is determined to a large extent by their couplings to fermions and gauge bosons, which in general depend strongly on ${\mbox{tg}\beta}$ and the mixing angle $\alpha$ in the CP–even sector. The pseudoscalar and charged Higgs boson couplings to down (up) type fermions are (inversely) proportional to ${\mbox{tg}\beta}$; the pseudoscalar $A$ has no tree level couplings to gauge bosons. For the CP–even Higgs bosons, the couplings to down (up) type fermions are enhanced (suppressed) compared to the SM Higgs couplings \[${\mbox{tg}\beta}>1$\]; the couplings to gauge bosons are suppressed by $\sin/\cos(\beta-\alpha)$ factors \[see Table 1.\] For large values of ${\mbox{tg}\beta}$ the pattern is simple, a result of the strong enhancement of the Higgs couplings to down–type fermions. The neutral Higgs bosons will decay into $b\bar{b}$ ($\sim 90\%$) and $\tau^+ \tau^-$ ($\sim 10\%)$ pairs, and $H^\pm$ into $\tau \nu_\tau$ pairs below and $tb$ pairs above the top–bottom threshold. For the CP–even Higgs bosons $h$ and $H$, only when $M_h$ approaches its maximal value is this simple rule modified: in this decoupling limit, the $h$ boson is SM–like and decays into charm and gluons with a rate similar to the one for $\tau^+ \tau^- $ \[$\sim 5\%$\] and in the high mass range, $M_h \sim 130$ GeV, into $W$ pairs with one of the $W$ bosons being virtual; the $H$ boson will mainly decay into $hh$ and $AA$ final states. 
  $\Phi$   $g_{\Phi \bar{u}u}$                                     $g_{\Phi \bar{d}d}$                                   $g_{\Phi VV}$
  -------- ------------------------------------------------------- ----------------------------------------------------- ------------------------------------
  $h$      $\cos\alpha/\sin\beta \rightarrow 1$                    $-\sin\alpha/\cos\beta \rightarrow 1$                 $\sin(\beta-\alpha) \rightarrow 1$
  $H$      $\sin\alpha/\sin\beta \rightarrow 1/{\mbox{tg}\beta}$   $\cos\alpha/\cos\beta \rightarrow {\mbox{tg}\beta}$   $\cos(\beta-\alpha) \rightarrow 0$
  $A$      $1/{\mbox{tg}\beta}$                                    ${\mbox{tg}\beta}$                                    $0$

[Table 1: Higgs couplings to fermions and gauge bosons normalized to the SM Higgs couplings, and their limit for $M_A \gg M_Z$.]{} For small values of ${\mbox{tg}\beta}\sim 1$ the decay pattern of the heavy neutral Higgs bosons is much more complicated. The $b$ decays are in general not dominant any more; instead, cascade decays to pairs of light Higgs bosons and mixed pairs of Higgs and gauge bosons are important and decays to $WW/ZZ$ pairs will play a role. For very large masses, they decay almost exclusively to top quark pairs. The decay pattern of the charged Higgs bosons for small ${\mbox{tg}\beta}$ is similar to that at large ${\mbox{tg}\beta}$ except in the intermediate mass range where cascade decays to $Wh$ are dominant. 
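The entries of Table 1 can be checked numerically. The sketch below evaluates the tree-level normalized couplings and verifies that the $h$ boson becomes SM-like in the decoupling limit; the decoupling value of the mixing angle, $\alpha \rightarrow \beta - \pi/2$, is the standard tree-level relation and is an assumption not spelled out in the table itself:

```python
import math

def mssm_couplings(alpha, beta):
    """Tree-level MSSM Higgs couplings to up-type fermions, down-type
    fermions and gauge bosons, normalized to the SM Higgs couplings
    (the entries of Table 1)."""
    return {
        "h": {"uu":  math.cos(alpha) / math.sin(beta),
              "dd": -math.sin(alpha) / math.cos(beta),
              "VV":  math.sin(beta - alpha)},
        "H": {"uu":  math.sin(alpha) / math.sin(beta),
              "dd":  math.cos(alpha) / math.cos(beta),
              "VV":  math.cos(beta - alpha)},
        "A": {"uu": 1.0 / math.tan(beta),
              "dd": math.tan(beta),
              "VV": 0.0},
    }

# Decoupling limit M_A >> M_Z corresponds to alpha -> beta - pi/2:
beta = math.atan(30.0)                       # large tg(beta) = 30
g = mssm_couplings(beta - math.pi / 2, beta)
# g["h"] -> all three normalized couplings tend to 1 (SM-like h);
# g["H"]["VV"] and the A couplings follow the last column of Table 1.
```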
When the decays into supersymmetric particles are kinematically allowed, as should be the case at least for the heavy CP–even, CP–odd and charged Higgs bosons, the pattern becomes even more complicated since the decay channels into charginos, neutralinos and squarks will play a non–negligible role. In the following, I will discuss three topics related to the decay modes of the Higgs particles in the MSSM: (a) the QCD corrections to the hadronic decay modes$^5$, (b) the below–threshold three–body decays$^6$ and (c) the decays into SUSY particles$^{7}$ of the heavy $H,A$ and $H^\pm$ bosons, including the QCD corrections to the squark decay modes$^8$. I will then briefly introduce a Fortran code$^9$ which calculates the various decay branching ratios. For more details and for a complete list of references, see the original papers, Refs. \[5–9\]. Hadronic Decay Modes: QCD Corrections$^5$ ========================================= The partial width for decays to massless $b,c$ quarks directly coupled to the Higgs particle is given, up to ${\cal O}(\alpha_{s}^{2})$ QCD corrections \[the effect of the electroweak radiative corrections on the branching ratios is negligible\], by the well-known expression $$\begin{aligned} \Gamma [\Phi {\rightarrow}Q{\overline{Q}}] = \frac{3 G_F M_\Phi } {4\sqrt{2}\pi} g^2_{\Phi QQ} \overline{m}_Q^2(M_\Phi) \left[ 1 + 5.67 \frac{\alpha_s} {\pi} + (35.94 - 1.36 N_F) \frac{\alpha_s^2}{\pi^2} \right] \end{aligned}$$ in the ${\overline{\rm MS}}$ renormalization scheme; the running quark mass and the QCD coupling are defined at the scale of the Higgs mass, absorbing in this way any large logarithms. The quark masses can be neglected in general, except for top quark decays where this approximation holds only sufficiently far above threshold. 
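Eq. (1) is straightforward to evaluate numerically. A minimal sketch, where the value of $G_F$ and the inputs $\alpha_s(100~{\rm GeV}) \simeq 0.116$ are assumptions on my part (the running $b$ mass is the Table 2 value for $\alpha_s(M_Z)=0.118$):

```python
import math

G_F = 1.16637e-5  # Fermi constant in GeV^-2 (assumed standard value)

def gamma_qq(m_phi, g_phi_qq, mq_run, alpha_s, n_f=5):
    """Partial width Gamma[Phi -> Q Qbar] of eq. (1) in GeV: MS-bar
    scheme, massless-quark approximation, with the O(alpha_s^2) QCD
    correction factor. mq_run is the running quark mass at M_Phi."""
    a = alpha_s / math.pi
    qcd = 1.0 + 5.67 * a + (35.94 - 1.36 * n_f) * a * a
    return (3.0 * G_F * m_phi / (4.0 * math.sqrt(2.0) * math.pi)
            * g_phi_qq**2 * mq_run**2 * qcd)

# SM-like h of 100 GeV decaying to b quarks (g_{hbb} -> 1 in the
# decoupling limit, m_bar_b(100 GeV) = 2.92 GeV from Table 2):
width_bb = gamma_qq(m_phi=100.0, g_phi_qq=1.0, mq_run=2.92, alpha_s=0.116)
```

The resulting width is of order an MeV, and the bracketed QCD factor enhances the Born result by roughly 20%.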
Since the relation between the charm pole mass $M_{c}$ and the ${\overline{\rm MS}}$ mass evaluated at the pole mass ${\overline{m}}_{c}(M_{c})$ is badly convergent, one can adopt the running quark masses ${\overline{m}}_{Q}(M_{Q})$ \[which have been extracted directly from QCD sum rules evaluated in a consistent ${\cal O} (\alpha_{s})$ expansion\] as starting points. The evolution from $M_{Q}$ to a scale $\mu \sim M_\Phi$ is given by: $$\begin{aligned} {\overline{m}}_{Q} (\mu )={\overline{m}}_{Q}\,(M_{Q}) \, c[\alpha_{s}(\mu)/\pi ] / c [\alpha_{s}(M_{Q})/\pi] \nonumber \end{aligned}$$ $$\begin{aligned} c(x) &=& (25/6x)^{12/25} \, [1+1.014x+1.39x^{2}] \ \ {\rm for} \ M_{c}<\mu <M_{b} \nonumber \\ c(x) &=&(23/6 x)^{12/23} \, [1+1.175x+1.50 x^{2}] \ \ {\rm for} \ M_{b}<\mu \end{aligned}$$ Typical values of the running $b,c$ masses at the scale $\mu = 100$ GeV, characteristic for $M_\Phi$, are displayed in Table 2, with the evolution calculated for $\alpha_{s}(M_{Z})=0.118\,\pm \, 0.006$; $M_{Q}^{\rm pt2}$ are the quark pole masses. 
        $\alpha_{s}(M_{Z})$   ${\overline{m}}_{Q}(M_{Q})$    $M_{Q}\,=\,M_{Q}^{\rm pt2}$    ${\overline{m}}_{Q}\,(\mu\,=\,100~{\rm GeV})$
  ----- --------------------- ------------------------------ ------------------------------ -----------------------------------------------
  $b$   $0.112$               $(4.26 \pm 0.02)~{\rm GeV}$    $(4.62 \pm 0.02)~{\rm GeV}$    $(3.04 \pm 0.02)~{\rm GeV}$
        $0.118$               $(4.23 \pm 0.02)~{\rm GeV}$    $(4.62 \pm 0.02)~{\rm GeV}$    $(2.92 \pm 0.02)~{\rm GeV}$
        $0.124$               $(4.19 \pm 0.02)~{\rm GeV}$    $(4.62 \pm 0.02)~{\rm GeV}$    $(2.80 \pm 0.02)~{\rm GeV}$
  $c$   $0.112$               $(1.25 \pm 0.03)~{\rm GeV}$    $(1.42 \pm 0.03)~{\rm GeV}$    $(0.69 \pm 0.02)~{\rm GeV}$
        $0.118$               $(1.23 \pm 0.03)~{\rm GeV}$    $(1.42 \pm 0.03)~{\rm GeV}$    $(0.62 \pm 0.02)~{\rm GeV}$
        $0.124$               $(1.19 \pm 0.03)~{\rm GeV}$    $(1.42 \pm 0.03)~{\rm GeV}$    $(0.53 \pm 0.02)~{\rm GeV}$

[Table 2: The running $b$ and $c$ quark masses in the $\overline{\rm MS}$ scheme at a scale $\mu=100$ GeV.]{} The decay of the Higgs bosons to gluons is, to a good approximation, mediated by heavy top quark loops; the partial decay width, including QCD radiative corrections which are built up by the exchange of virtual gluons and the splitting of a gluon into two gluons or into $N_F$ massless quark–antiquark pairs, is given by \[$\mu \sim M_{\Phi}$\] $$\begin{aligned} \Gamma^{N_F} [ \phi {\rightarrow}gg+..] &=& \frac{G_{F} g^2_{\phi tt} \alpha_{s}^2 M_{\phi}^{3}} {36 \sqrt{2} \pi^{3}} \left[ 1+ \frac{\alpha_s}{\pi} \left( \frac{95}{4} -\frac{7}{6}N_{F} +\frac{33-2N_{F}} {6}\log \frac{\mu ^{2}}{M_{\phi}^{2}} \right) \right] {\nonumber}\\ \Gamma^{N_F} [ A {\rightarrow}gg+..] &=& \frac{G_{F} g^2_{A tt} \alpha_{s}^2 M_{A}^{3}} {16 \sqrt{2} \pi^{3}} \left[ 1+ \frac{\alpha_s}{\pi} \left( \frac{97}{4} -\frac{7}{6}N_{F} +\frac{33-2N_{F}} {6}\log \frac{\mu ^{2}}{M_{A}^{2}} \right) \right]\end{aligned}$$ with $\phi = h,H$ and $\alpha_s\equiv \alpha_s^{N_F}(\mu^2)$. The radiative corrections are very large, nearly doubling the partial width. 
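The mass evolution of eq. (2) can be sketched numerically and compared with Table 2. The two $\alpha_s$ values fed in below are assumptions on my part, roughly consistent with $\alpha_s(M_Z) = 0.118$; only the evolution function $c(x)$ and the starting mass are taken from the text:

```python
import math

def c_evol(x, n_f):
    """Evolution function c(x) of eq. (2), x = alpha_s/pi, for
    N_F = 4 (M_c < mu < M_b) or N_F = 5 (mu > M_b)."""
    if n_f == 4:
        return (25.0 * x / 6.0) ** (12.0 / 25.0) * (1.0 + 1.014 * x + 1.39 * x * x)
    if n_f == 5:
        return (23.0 * x / 6.0) ** (12.0 / 23.0) * (1.0 + 1.175 * x + 1.50 * x * x)
    raise ValueError("eq. (2) only covers N_F = 4 or 5")

def run_mass(m_at_mq, alpha_s_mu, alpha_s_mq, n_f):
    """m_bar(mu) = m_bar(M_Q) * c(alpha_s(mu)/pi) / c(alpha_s(M_Q)/pi)."""
    return m_at_mq * c_evol(alpha_s_mu / math.pi, n_f) / c_evol(alpha_s_mq / math.pi, n_f)

# b mass run from M_b = 4.62 GeV up to mu = 100 GeV, starting from
# m_bar_b(M_b) = 4.23 GeV (Table 2, alpha_s(M_Z) = 0.118); the assumed
# couplings are alpha_s(M_b) ~ 0.217 and alpha_s(100 GeV) ~ 0.116:
mb_100 = run_mass(4.23, alpha_s_mu=0.116, alpha_s_mq=0.217, n_f=5)
```

With these inputs the result lands close to the $\overline{m}_b(100~{\rm GeV}) \simeq 2.92$ GeV entry of Table 2.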
The final states $\Phi {\rightarrow}b{\overline{b}}g$ and $c{\overline{c}}g$ are also generated through processes in which the $b,c$ quarks are coupled to the Higgs boson directly. Gluon splitting $g{\rightarrow}b{\overline{b}}$ in $\Phi {\rightarrow}gg$ increases the inclusive decay probabilities $\Gamma(\Phi {\rightarrow}b\bar{b}+ \dots)$ [*etc.*]{} Since $b$ quarks, and eventually $c$ quarks, can in principle be tagged experimentally, it is physically meaningful to consider the particle width of Higgs decays to gluon and light $u,d,s$ quark final jets separately. The contribution of $b,c$ quark final states to the coefficient in front of $\alpha_s$ in eq. (3) is: $$-\frac{7}{3} + \frac{1}{3} [\log \frac{M_{\Phi}^{2}} {M_{b}^{2}}+\log \frac{M_{\Phi}^{2}} {M_{c}^{2}} ]$$ Instead of naively subtracting this contribution, it may be noticed that the mass logarithms can be absorbed by changing the number of active flavors from $N_{F}=5$ to $N_{F}=3$ in the QCD coupling $\alpha_{s}^{(N_F)}$. The subtracted parts may be added to the partial decay widths into $c$ and $b$ quarks. The numerical analysis of the branching ratios for the lightest CP–even Higgs decays in the decoupling limit where $h$ is SM–like, with the quark masses and QCD couplings given above and a top mass $M_{t}=(176 \pm 11)$ [GeV]{}, is shown in Fig. 1. To estimate systematic uncertainties, the variation of the $c$ mass has been stretched over $2\sigma$ and the uncertainty of the $b$ mass to 0.05 [GeV]{}. However, the dominant error in the predictions is due to the uncertainty in $\alpha_{s}$ and the errors in the prediction for the charm and gluon branching ratios are very large. Nevertheless, the expected hierarchy of the Higgs decay modes is clearly visible despite these uncertainties. Similar results hold for the heavy CP–even and CP–odd Higgs decays. [Fig. 
1: Branching ratios of the $h$ boson in the decoupling limit, including the uncertainties from the quark masses and the QCD coupling $\alpha_s$ \[shaded bands\].]{} Three Body decay modes$^6$ ========================== Besides these two–body decays, below–threshold modes can play an important role. It is well–known that SM Higgs decays into real and virtual $Z$ pairs are quite substantial: the suppression by the off–shell propagator and the additional $Zff$ coupling is at least partly compensated by the large Higgs coupling to the $Z$ bosons. For the same reason, three–body decays of MSSM Higgs particles mediated by gauge bosons, heavy Higgs bosons and top quarks are of physical interest. Important three-body decays for the $H,A$ and $H^\pm$ bosons are $[V=W,Z]$: $$\begin{aligned} H &{\rightarrow}& VV^* {\rightarrow}V f \bar{f}^{(')} \ , \ AZ^* {\rightarrow}A f \bar{f} \ , \ H^\pm W^{\mp *} {\rightarrow}H^\pm f \bar{f}' \ , \ \bar{t}t^* {\rightarrow}\bar{t} b W^+ \\ A & {\rightarrow}& hZ^* {\rightarrow}h f \bar{f} \ , \ \bar{t}t^* {\rightarrow}\bar{t} bW^+ \\ H^\pm & {\rightarrow}& hW^* {\rightarrow}h f \bar{f}' \ , \ AW^* {\rightarrow}A f \bar{f}' \ , \ \bar{b}t^* {\rightarrow}\bar{b}bW\end{aligned}$$ For the lightest Higgs boson $h$, the only relevant below–threshold decay mode is $h {\rightarrow}W^* W^*$ for $M_h \sim 130$ GeV. In this case, both $W$'s have to be taken off–shell. The branching ratios for $h,H,A$ and $H^\pm$ decays are shown in Fig. 2 for ${\mbox{tg}\beta}=1.5$, in the case where the mixing in the stop sector is neglected. For the heavy Higgs boson $H$, the decay $H{\rightarrow}hh$ is the dominant channel, superseded by $t\bar{t}$ decays above threshold \[for the latter, the inclusion of the three–body modes provides a smooth transition from below to above threshold\]. 
This rule is only broken for Higgs masses of about 140 GeV where an accidentally small value of the $\lambda_{Hhh}$ coupling allows the $b \bar{b}$ and $WW^*$ decay modes to become dominant. Important channels in general, below the $t\bar{t}$ threshold, are decays to pairs of gauge bosons and $b\bar{b}$ decays. In a restricted range of $M_H$, below–threshold $AZ^*$ and $H^{\pm}W^{\mp *}$ also play a non–negligible role. In the case of the pseudoscalar $A$, the dominant modes are the $A{\rightarrow}b\bar{b}$ and $A {\rightarrow}t\bar{t}$ decays below the $hZ$ and $t\bar{t}$ thresholds respectively; in the intermediate mass region, $M_A=200$ to $300$ GeV, the decay $A {\rightarrow}hZ^*$ \[which reaches $\sim 1\%$ already at $M_A =130$ GeV\] dominates. The gluonic decays are significant around the $t\bar{t}$ threshold. For the charged Higgs boson, the inclusion of the three–body decay modes will reduce the branching ratio for the $\tau\nu$ channel quite significantly. Indeed, this decay does not overwhelm all the other modes since the three–body decay channels $H^+ {\rightarrow}hW^*$ as well as $H^+{\rightarrow}AW^*$ in the low mass range and $H^+ {\rightarrow}bt^*$ in the intermediate mass range have appreciable branching ratios. The total widths of the Higgs bosons are in general considerably smaller than for the SM Higgs due to the absence or the suppression of the decays to $W/Z$ bosons which grow as $M_H^3$. The dominant decays are built-up by top quarks so that the widths rise only linearly with $M_\Phi$. However, for large ${\mbox{tg}\beta}$ values, the decay widths scale in general like ${\rm tg}^2\beta$ and can become experimentally significant, for ${\mbox{tg}\beta}{\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~}{\cal O}(30)$ and for large $M_\Phi$. [Fig. 
2: Branching ratios for the CP–even, the CP–odd and the charged MSSM Higgs bosons, including the three–body decays, for ${\mbox{tg}\beta}=1.5$ and no stop mixing.]{} SUSY Decay modes$^7$ ==================== In the previous discussion, we have assumed that decay channels into neutralinos, charginos and sfermions are shut. However, these channels could play a significant role, since some of these particles can have masses in the ${\cal O}(100$ GeV) range or less. To discuss these decays, we will restrict ourselves to the MSSM constrained by minimal Supergravity, in which the SUSY sector is described in terms of five universal parameters at the GUT scale: the common scalar mass $m_0$, the common gaugino mass $M_{1/2}$, the trilinear coupling $A$, the bilinear coupling $B$ and the higgsino mass $\mu$. These parameters evolve according to the RGEs, forming the supersymmetric particle spectrum at low energy. The requirement of radiative electroweak symmetry breaking further constrains the SUSY spectrum, since the minimization of the one–loop Higgs potential specifies the parameter $\mu$ \[to within a sign\] and also $B$. The unification of the $b$ and $\tau$ Yukawa couplings gives another constraint: in the $\lambda_t$ fixed–point region, the value of ${\mbox{tg}\beta}$ is fixed by the top quark mass through: $m_t \simeq (200~{\rm GeV}) \sin\beta$, leading to ${\mbox{tg}\beta}\simeq 1.75$. There also exists a high–${\mbox{tg}\beta}$ \[$\lambda_b$ and $\lambda_\tau$ fixed–point\] region for which ${\mbox{tg}\beta}\sim$ 50–60. If one also notes that moderate values of the trilinear coupling $A$ have little effect on the resulting spectrum, then the whole SUSY spectrum will be a function of ${\mbox{tg}\beta}$ which we take to be ${\mbox{tg}\beta}=1.75$ and 50, the sign of $\mu$, $m_0$ which in practice we replace with $M_A$ taking the two illustrative values $M_A =300$ and 600 GeV, and the common gaugino mass $M_{1/2}$ that we will freely vary. 
The decay widths of the heavy CP-even, the CP–odd and the charged Higgs bosons, $H,A$ and $H^\pm$, into pairs of neutralinos and charginos \[dashed lines\], squarks \[long–dashed lines\] and sleptons \[dot–dashed lines\], as well as the total \[solid lines\] and non–SUSY \[dotted lines\] decay widths, are shown in Fig. 3 for ${\mbox{tg}\beta}=1.75$, $\mu>0$ and two values of $M_A=300$ \[left curves\] and $600$ GeV \[right curves\]. For $M_A=300$ GeV, i.e. below the $t\bar{t}$ threshold, the widths of the $H$ decays into inos and sfermions are much larger than those of the non–SUSY decays. In particular, squark \[in fact $\tilde{t}$ and $\tilde{b}$ only\] decays are almost two orders of magnitude larger when kinematically allowed. The situation changes dramatically for larger $M_A$ when the $t\bar{t}$ channel opens up: only the decays into $\tilde{t}$ pairs, when allowed, are competitive with the dominant $H {\rightarrow}t\bar{t}$ channel. Nevertheless, the decays into inos are still substantial, having branching ratios at the level of 20%; the decays into sleptons never exceed a few percent. In the case of the pseudoscalar $A$, because of CP–invariance and the fact that sfermion mixing is small except in the stop sector, only the decays into inos and $A {\rightarrow}\tilde{t}_1 \tilde{t}_2$ decays are allowed. For these channels, the situation is quite similar to the case of $H$: below the $t\bar{t}$ threshold the decay width into ino pairs is much larger than the non–SUSY decay widths \[here $\tilde{t}_2$ is too heavy for the $A {\rightarrow}\tilde{t}_1 \tilde{t}_2$ decay to be allowed\], but above $2m_t$ only the $A {\rightarrow}\tilde{t}_1 \tilde{t}_2$ channel competes with the $t\bar{t}$ decays. 
For the charged Higgs boson $H^\pm$, only the decay $H^+ {\rightarrow}\tilde{t}_1 \tilde{b}_1$ \[when kinematically allowed\] competes with the dominant $H^+{\rightarrow}t\bar{b}$ mode, yet the $\tilde{\chi}^+ \tilde{ \chi}^0$ decays have a branching ratio of a few tens of percent; the decays into sleptons are at most of ${\cal O}(\%)$. In the case where $\mu<0$, the situation is quite similar to the above. For large ${\mbox{tg}\beta}$ values, ${\mbox{tg}\beta}\sim 50$, all gauginos and sfermions are very heavy and therefore kinematically inaccessible, except for the lightest neutralino and the $\tau$ slepton. Moreover, the $b\bar{b}/\tau \tau$ and $t\bar{b}$/$\tau \nu$ decays \[for the neutral and charged Higgs bosons respectively\] are enhanced so strongly that they leave no chance for the SUSY decay modes to be significant. Therefore, for large ${\mbox{tg}\beta}$, the simple pattern of $bb/\tau\tau$ and $tb$ decays for heavy neutral and charged Higgs bosons still holds true even when the SUSY decays are allowed. [Fig. 3: Decay widths for the SUSY decay modes of the heavy CP–even, CP–odd and charged Higgs bosons, for ${\mbox{tg}\beta}=1.75$. The total and the non–SUSY widths are also shown.]{} Since the decays into stop and sbottom squarks can be dominant when kinematically allowed, QCD corrections must be incorporated in order to have full control of the decay widths and to make a reliable comparison with the standard (non–SUSY) decay channels. The QCD corrections to the decays of the heavy CP–even, CP–odd and charged MSSM Higgs bosons into stop and sbottom squarks have been recently calculated$^8$. These corrections are found to be rather large, enhancing or suppressing the widths by amounts up to 50% and in some cases even more. The QCD corrections depend strongly on the gluino mass; however, for very heavy gluinos, they are only logarithmically dependent on $m_{\tilde{g}}$. 
Contrary to the case of Higgs boson decays into light quark pairs, these large corrections cannot be absorbed into running squark masses since the latter are expected to be of the same order as the Higgs masses. The program HDECAY$^9$ ====================== Finally, let me briefly describe the fortran code HDECAY, which calculates the various decay widths and the branching ratios of Higgs bosons in the SM and the MSSM and which includes: \(a) All decay channels that are kinematically allowed and which have branching ratios larger than $10^{-4}$, including the loop-mediated and three-body decay modes and, in the MSSM, the cascade and the supersymmetric decay channels. \(b) All relevant two-loop QCD corrections to the decays into quark pairs and to the quark loop mediated decays into gluons are incorporated in the most complete form; the small leading electroweak corrections are also included. \(c) Double off–shell decays of the CP–even Higgs bosons into massive gauge bosons which then decay into four massless fermions, and all important below–threshold three–body decays discussed previously. \(d) In the MSSM, the complete radiative corrections in the effective potential approach with full mixing in the stop/sbottom sectors; it uses the renormalisation group improved values of the Higgs masses and couplings and the relevant leading next–to–leading–order corrections are also implemented. \(e) In the MSSM, all the decays into SUSY particles (neutralinos, charginos, sleptons and squarks including mixing in the stop, sbottom and stau sectors) when they are kinematically allowed. The SUSY particles are also included in the loop mediated $\gamma \gamma$ and $gg$ decay channels. The basic input parameters, fermion and gauge boson masses and total widths, coupling constants and, in the MSSM, soft–SUSY breaking parameters can be chosen from an input file. 
In this file several flags allow one to switch on/off or change some options \[[*e.g.*]{} choose a particular Higgs boson, include/exclude the multi–body or SUSY decays, or include/exclude specific higher–order QCD corrections\]. The results for the many decay branching ratios and the total decay widths are written to several output files with headers indicating the processes and giving the input parameters. Acknowledgments: ================ I thank the organisers of this workshop, in particular Bernd Kniehl, for the nice and stimulating atmosphere of the meeting. The work discussed here was done in enjoyable collaborations with Abdeslam Arhrib, Wolfgang Hollik, Jan Kalinowski, Patrick Janot, Christoph Jünger, Paul Ohmann, Michael Spira and Peter Zerwas. References ========== [9]{} For a review, see H. Haber and G. Kane, Phys. Rep. 117 (1985) 75. For a review, see J.F. Gunion, H.E. Haber, G.L. Kane and S. Dawson, [*The Higgs Hunter’s Guide*]{}, Addison–Wesley, Reading 1990. For a recent summary, see M. Carena and P. Zerwas \[conv.\] et al., [*Higgs Physics at LEPII*]{}, CERN yellow report CERN-96-01, edited by G. Altarelli, T. Sjostrand and F. Zwirner. Y. Okada, M. Yamaguchi and T. Yanagida, Prog. Theor. Phys. 85 (1991) 1; H. Haber and R. Hempfling, Phys. Rev. Lett. 66 (1991) 1815; J. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett. 257B (1991) 83; R. Barbieri, F. Caravaglios and M. Frigeni, Phys. Lett. 258B (1991) 167. A. Djouadi, M. Spira and P. Zerwas, Z. Phys. C70 (1996) 427. A. Djouadi, J. Kalinowski and P. Zerwas, Z. Phys. C70 (1996) 435. A. Djouadi, J. Kalinowski, P. Ohmann and P. Zerwas, hep-ph/9605339 (Z. Phys. C to appear); A. Djouadi, P. Janot, J. Kalinowski and P. Zerwas, Phys. Lett. B376 (1996) 220. A. Bartl et al., hep-ph/9701398; A. Arhrib, A. Djouadi, W. Hollik and C. Jünger, hep-ph/9702426. A. Djouadi, J. Kalinowski and M. Spira, PM–97–02, to appear. 
The program can be obtained by sending an E-mail to one of the authors: djouadi@lpm.univ-montp2.fr, kalino@x4u2.desy.de or spira@cern.ch or directly from the WWW at http://www.lpm.univ-montp2.fr/ djouadi/program.html or http://wwwcn.cern.ch/ mspira/ . [^1]: Talk given at the Ringberg Workshop [*the Higgs Puzzle*]{}, Ringberg Castle, Tegernsee, Germany, December 8–13 1996; to appear in the proceedings.
--- abstract: | We have investigated experimentally the electronic transport properties of a two-dimensional electron gas (2DEG) present in an AlSb/InAs/AlSb quantum well, where part of the top layer has been replaced by a superconducting Nb strip, with an energy gap $\Delta_0$. By measuring the lateral electronic transport underneath the superconductor, and comparing the experimental results with a model based on the Bogoliubov-de Gennes equation and the Landauer-Büttiker formalism, we obtain a decay length $\xi_{\text{Sm}} \approx 100~\text{nm}$ for electrons. This decay length corresponds to an interface transparency $T_{\text{SIN}}=0.7$ between the Nb and InAs. Using this value, we infer an energy gap in the excitation spectrum of the SQW of $\Delta_{\text{eff}} = 0.97 \Delta_0 = 0.83~\text{meV}$. address: - | Department of Applied Physics and Material Science Centre, University of Groningen,\ Nijenborgh 4, 9747 AG Groningen, The Netherlands. - 'Interuniversity Micro Electronics Centre, Kapeldreef 75, B-3030, Leuven, Belgium' author: - 'P. H. C. Magnée, B. J. van Wees, and T. M. Klapwijk' - 'W. van de Graaf, and G. Borghs' title: 'Experimental determination of the quasi-particle decay length $\xi_{\text{Sm}}$ in a superconducting quantum well.[^1]' --- A superconducting quantum well (SQW) can be defined as a system in which one of the barriers of a quantum well, in our case InAs in between AlSb barriers, is replaced by a superconductor, here Nb. In a quantum well, particles are confined by normal reflections at the boundaries. In a SQW, Andreev reflection [@And64] at the superconducting barrier can also occur, changing the confinement. Due to this superconducting barrier, an energy gap $\Delta_{\text{eff}}$ appears in the excitation spectrum of the two dimensional electron gas (2DEG) present in the SQW. 
[@VMvWK94] The magnitude of this gap depends on the interface transparency $T_{\text{SIN}}$ between the superconductor and the InAs, and the superconducting energy gap $\Delta_0$. In the limit $T_{\text{SIN}}\ll 1$, Volkov [*et al.*]{} [@VMvWK94] have shown that the SQW can be described as a two-dimensional superconductor with an effective order parameter $\Delta_{\text{eff}}\,e^{i\phi}$ ($\Delta_{\text{eff}} \ll \Delta_0$), where $\phi$ is the macroscopic phase of the superconductor on top. The physics of the SQW is of importance to understand transport in co-planar super-normal-superconductor (SNS) junctions. From a technological point of view there are two kinds of SNS junctions, sandwich-type (inline planar) and co-planar junctions. [@Lik79] In sandwich-type junctions, the junction length $L$ is well defined. In co-planar structures, however, electrons can travel a certain distance underneath the superconductor before being Andreev reflected, thus effectively enlarging the junction length. The distance electrons penetrate underneath the superconductor can be associated with a decay length $\xi_{\text{Sm}}$, similar to the superconducting coherence length $\xi_0 = \hbar v_{\text{F}} / \Delta$. It is important to have a good estimate of the actual junction length, $L_{\text{eff}} = L + 2 \xi_{\text{Sm}}$, because it is a relevant parameter in calculations for the critical current $I_c$ in SNS junctions. [@Lik76; @KL88] This was also appreciated recently by Nguyen [*et al.*]{}, [@NKH92; @NKH94] who measured Nb-InAs-Nb junctions with varying length, using a transmission line model (TLM). When plotting the resistance versus the junction length, they obtained a straight line, intersecting the length axis at a negative value. They interpreted this length to be the average distance $x_{\text{A}} = 1.5~\mu$m an electron needs to travel underneath the superconductor before it is Andreev reflected. 
The system under study is a 15 nm InAs layer sandwiched between a 2 $\mu$m AlSb layer and a superconductor, Nb, see Fig. \[fig:sample\]b. The 2DEG present in the InAs has a high electron mobility, resulting in a long elastic mean free path $\ell$. [@DHvWetal95] The ballistic regime is therefore easily accessible. Furthermore, the absence of a Schottky barrier in metal-InAs contacts enables one to make highly transparent interfaces. At the InAs-AlSb interface, the barrier for electrons is assumed to be infinite; at the Nb-InAs interface a $\delta$-function potential barrier is present, characterized by a dimensionless parameter $Z = 2p_{\text{F}}H/\mu = H/\hbar v_{\text{F}}$ as introduced by Blonder [*et al.*]{} [@BTK82] Due to the high interface transparency in our system the limit $\Delta_{\text{eff}} \ll \Delta_0$, used by Volkov [*et al.*]{}, [@VMvWK94] no longer applies; therefore the quasi-particle decay length is also expected to be different from $\hbar v_{\text{F}} / \Delta_{\text{eff}}$. The classical description used by Nguyen [*et al.*]{} [@NKH92; @NKH94] takes into account multiple Andreev reflections, but ignores phase coherence between these multiple reflections. This approach will in general not lead to an exponential decay of the laterally transmitted wave functions in the SQW. Here we will calculate both the energy gap and the decay length of quasi-particles at the Fermi-level in the SQW, using a quantum mechanical description. We will assume that only the lowest 2D subband in the QW is filled, which is the case in our samples. 
In order to calculate the wave functions we have to solve the Bogoliubov-de Gennes equation, [@dGen66] $$\left[ \begin{array}{lr} {\cal H} &\Delta({\bf r})\\ \Delta^{\ast}({\bf r})&-{\cal H} \end{array} \right] \left[ \begin{array}{c} u({\bf r})\\ v({\bf r}) \end{array} \right] =E \left[ \begin{array}{c} u({\bf r})\\ v({\bf r}) \end{array} \right]\, , \label{eq:BdG}$$ where the Hamiltonian $\cal H$ is defined as $${\cal H}=-\frac{\hbar^2}{2 m^{\ast}}\nabla^2+U({\bf r})-\mu\, . \label{eq:ham}$$ For the effective mass we assume $m^{\ast}=m_0$, the free electron mass, in the Nb, and $m^{\ast}=0.023~m_0$ in the InAs. For the potential $U({\bf r})$ we take $$\begin{aligned} U({\bf r}) = U(z) &=& H \delta(z) - E_{\text{F, Nb}} \theta(-z) \nonumber\\ &-& E_{\text{F,InAs}} \theta(z)\theta(L-z) + V_0 \theta(z-L)\, , \label{eq:pot}\end{aligned}$$ where the Fermi energies are $E_{\text{F, Nb}} = 5.3~\text{eV}$ and $E_{\text{F, InAs}} = 0.11~\text{eV}$. The potential barrier at the InAs-AlSb interface, $V_0$, is assumed to go to infinity. In Eq.(\[eq:BdG\]) the pair potential $\Delta({\bf r})$ is assumed to be $\Delta_0$ in the Nb $(z<0)$, and zero elsewhere. The solutions to the Bogoliubov-de Gennes equation, Eq.(\[eq:BdG\]), are given by electron- and hole wave functions in the InAs-quantum well $(0<z<L)$, $$\Psi({\bf r})= \left[ \begin{array}{c} u({\bf r})\\ v({\bf r}) \end{array} \right] = \left\{ u(z) \left[ \begin{array}{c} 1\\ 0 \end{array} \right] + v(z) \left[ \begin{array}{c} 0\\ 1 \end{array} \right] \right\} e^{i(k_xx+k_yy)}\, ,$$ and by mixed quasi-particle wave functions in the Nb superconductor $(z<0)$, with $u^2 = 1 - v^2 = \frac{1}{2} (1+\Omega/E)$. $\Omega^2 = E^2 - \Delta^2$, $E$ being the energy with respect to the Fermi-level. These wave functions and their derivatives have to be matched at the boundaries $z=0$ and $z=L$, see Ref.  for a detailed analysis. Numerical solutions of Eq.(\[eq:BdG\]) are given in Fig. \[fig:matrix\]. 
For $k_z$ real, where $k_z$ is the wave vector in the $z$-direction, there are no solutions of $E(k)$ with $|E|$ smaller than an effective energy gap $|\Delta_{\text{eff}}|$. This $|\Delta_{\text{eff}}|$ is calculated as a function of the transparency $T_{\text{SIN}} = 1/(1+Z^2)$ of the Nb-InAs interface, by finding the minimum of $E(k)$, Fig. \[fig:matrix\]a. At low transparency it can be shown [@VMvWK94] that $\Delta_{\text{eff}}$ depends linearly on $T_{\text{SIN}}$: $\Delta_{\text{eff}} \approx \frac{1}{4} T_{\text{SIN}} E_0$, where $E_0 = \frac{\hbar^2}{2 m^{\ast}} \left(\frac{\pi}{L}\right)^2$ is the confinement energy in the QW. At $E=0$ there are only solutions of Eq.(\[eq:BdG\]) for complex $k_z$. The total energy $\frac{\hbar^2}{2 m^{\ast}} (k_z^2 + k_{\parallel}^2)$ must be real, hence $k_{\parallel} = \sqrt{k_x^2 + k_y^2}$ has an imaginary part. By taking the $y$-direction along the boundary of the SQW, wave function matching requires $\text{Im}(k_y) = 0$. For the decay length we can thus write $\text{Im}(k_x)^{-1} = -\text{Re}(k_x) / \text{Re}(k_z)\text{Im}(k_z) \approx -k_{\text{F}} \cos(\alpha) / \text{Re}(k_z)\text{Im}(k_z) = \xi_{\text{Sm}} \cos(\alpha)$, where $\alpha$ is the angle of incidence. The decay length $\xi_{\text{Sm}}$ is plotted in Fig. \[fig:matrix\]b. The decay length $\xi = \hbar^2 k_{\text{F}} / m^{\ast} \Delta_{\text{eff}}$, analogous to the expression used for the decay length of quasi-particles in a superconductor, is shown for comparison. As can be seen, at high transparency, there is a substantial difference between the two decay lengths. We will show that the former is the relevant one in the SQW. Samples are based on a 15 nm InAs quantum well with AlSb barriers. Prior to any processing the top AlSb layer is removed. The parameters of the quantum well with an exposed InAs surface are (measured at 4.2 K): $n_{\text{S}} = 1.1 \times 10^{16}~\text{m}^{-2}$, $\mu_e = 2.2~\text{m}^2/\text{Vs}$ and $\ell = 380~\text{nm}$. 
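The quoted sample parameters can be cross-checked numerically, together with the naive decay length $\xi = \hbar^2 k_{\text{F}} / m^{\ast} \Delta_{\text{eff}}$ used for comparison above. A minimal Python sketch (the input numbers are the ones quoted in the text; $\Delta_{\text{eff}} = 0.83$ meV anticipates the value inferred later from the experiment):

```python
import math

hbar = 1.054571817e-34   # J s
m0 = 9.1093837e-31       # electron mass, kg
e = 1.602176634e-19      # elementary charge, C

# Quantum-well parameters quoted in the text (4.2 K, exposed InAs surface)
n_s = 1.1e16             # carrier density, m^-2
mu_e = 2.2               # mobility, m^2/Vs
m_eff = 0.023 * m0       # InAs effective mass
Delta_eff = 0.83e-3 * e  # induced gap, 0.83 meV in J

# 2DEG Fermi wave vector (spin degeneracy 2): n_s = k_F^2 / (2 pi)
k_F = math.sqrt(2.0 * math.pi * n_s)

# Consistency check: elastic mean free path l = hbar k_F mu_e / e
ell = hbar * k_F * mu_e / e
print(f"l  = {ell * 1e9:.0f} nm")   # text quotes 380 nm

# Naive decay length xi = hbar^2 k_F / (m* Delta_eff); at high
# transparency this comes out much longer than xi_Sm ~ 100 nm,
# illustrating the "substantial difference" noted above
xi = hbar**2 * k_F / (m_eff * Delta_eff)
print(f"xi = {xi * 1e6:.2f} um")
```

The mean free path reproduces the quoted 380 nm, and the naive $\xi$ comes out of order 1 $\mu$m, an order of magnitude above the measured $\xi_{\text{Sm}}$.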
We want to measure the laterally transmitted signal through a SQW, depending on the width $d$. For this purpose, the Nb pattern is defined, using electron beam lithography (EBL), as a narrow strip, either $d = 100$ or 200 nm, with probes at either side at distances of 200 and 300 nm, see Fig. \[fig:sample\]a. Inspection with the electron microscope shows $d = 100$ and 236 nm. Prior to the Nb deposition, the InAs surface is cleaned using low energy Ar-sputtering. This can reduce the thickness of the quantum well by a maximum of 2 nm, and might alter the carrier density $n_{\text{S}}$, and mobility $\mu_e$. To define the width of the InAs channel, a mesa-etch is performed in the 200 nm strip sample, $W = 0.9~\mu$m. This was not done for the sample with a 100 nm strip, which means that in this sample the junctions to the SQW have a width equal to the length of the strip, 3.5 $\mu$m. Evidently some parallel conductance will be present. Prior to the measurements we checked the continuity of the strips, together with the critical temperature $T_c$, by measuring the resistance through contacts 3 and 4 (Fig. \[fig:sample\]a). Measurements are done using a standard 4-point lock-in technique, at 1.3 K. By applying an ac modulation current on top of a dc-bias, we can measure the energy dependence of transport in the SQW. [@footnote1] Transport through the SQW can be modelled in the spirit of the Landauer-Büttiker formalism, [@BILP85; @Lam91] using normal- and Andreev reflection probabilities, $R_{e\rightarrow e}$ and $R_{e\rightarrow h}$, and transmission probabilities $T_{e\rightarrow e}$ and $T_{e\rightarrow h}$, see Fig. \[fig:sample\]b. Conservation of particles requires $1 = R_{e\rightarrow e} + R_{e\rightarrow h} + T_{e\rightarrow e} + T_{e\rightarrow h}$. By expressing the current in terms of these reflection and transmission probabilities, we can translate the region underneath the strip into a schematic resistor network, shown in Fig. 
\[fig:sample\]c, where $$R = \frac{1}{G_{\text{S}}} \frac{1}{2 (R_{e\rightarrow h} + T_{e\rightarrow e})}\, ,$$ $$R_C = \frac{1}{G_{\text{S}}} \frac{(T_{e\rightarrow e} - T_{e\rightarrow h})}{4 (R_{e\rightarrow h} + T_{e\rightarrow h}) (R_{e\rightarrow h} + T_{e\rightarrow e})}\, .$$ Here $G_{\text{S}} = \frac{2e^2}{h} \frac{W}{\frac{1}{2} \lambda_{\text{F}}}$ is the Sharvin conductance. [@BvH91a] Although these resistors do not have physical relevance, they are useful for calculating the various measurable quantities: $$\begin{aligned} &&\frac{\partial V_1}{\partial I_1} \approx \frac{\partial V_2}{\partial I_2} \approx \frac{R_{\parallel}}{R_{\parallel} + R + R_C} (R + R_C) = \frac{R_{\parallel}}{R_{\parallel} + R + R_C}\nonumber\\ &&\times \frac{1}{G_{\text{S}}} \frac{2 (R_{e\rightarrow h} + T_{e\rightarrow h}) + (T_{e\rightarrow e} - T_{e\rightarrow h})}{4 (R_{e\rightarrow h} + T_{e\rightarrow h}) (R_{e\rightarrow h} + T_{e\rightarrow e})} \label{eq:dv1di1}\, ,\end{aligned}$$ $$\begin{aligned} &&\frac{\partial V_2 / \partial I_1}{\partial V_1 / \partial I_1} \approx \frac{R_{\parallel}}{R_{\parallel} + R + R_C} \frac{R_C}{R + R_C}= \frac{R_{\parallel}}{R_{\parallel} + R + R_C}\nonumber\\ &&\times \frac{(T_{e\rightarrow e} - T_{e\rightarrow h})}{2 (R_{e\rightarrow h} + T_{e\rightarrow h}) + (T_{e\rightarrow e} - T_{e\rightarrow h})}\label{eq:dv2dv1}\, ,\end{aligned}$$ where $R_{\parallel}$ accounts for any parallel conductance that might be present. For small $(T_{e\rightarrow e} - T_{e\rightarrow h})$, the angular distribution of incoming electrons is taken into account by calculating the following integral: $$\begin{aligned} T_{e\rightarrow e} &-& T_{e\rightarrow h} = \frac{|\Psi(x=d)|^2}{|\Psi(x=0)|^2} \nonumber\\ &=& \frac{1}{2} \int_{-\pi/2}^{\pi/2} \cos(\alpha) \exp\left(\frac{-2d}{\xi_{\text{Sm}}\cos(\alpha)}\right)\, d\alpha\, . \label{eq:Tee}\end{aligned}$$ In Fig. \[fig:exp\] the experimental data are shown. 
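The angular integral in Eq. (\[eq:Tee\]) has no closed form, but it is straightforward to evaluate numerically. A minimal sketch using trapezoidal quadrature (the integrand vanishes at $\alpha = \pm\pi/2$, so the endpoints can be skipped); the value $\xi_{\text{Sm}} = 100$ nm is the one inferred later in the text, and the two widths are the measured strip widths:

```python
import math

def transfer(d, xi_sm, n=20000):
    """Evaluate Eq. (Tee):
    T_ee - T_eh = (1/2) Int_{-pi/2}^{pi/2} cos(a) exp(-2d/(xi_sm cos(a))) da,
    by the trapezoidal rule. The integrand vanishes at a = +-pi/2, so the
    endpoint terms (which would divide by cos(a) = 0) are omitted."""
    h = math.pi / n
    total = 0.0
    for i in range(1, n):  # interior points only
        a = -math.pi / 2 + i * h
        total += math.cos(a) * math.exp(-2.0 * d / (xi_sm * math.cos(a)))
    return 0.5 * total * h

# Wide strip (d = 236 nm): order 4e-3, comparable to the measured 0.0035
print(transfer(236e-9, 100e-9))
# Narrow strip (d = 100 nm): order 1e-1, an order of magnitude larger
print(transfer(100e-9, 100e-9))
```

With $\xi_{\text{Sm}} = 100$ nm the wide-strip value comes out at the same order as the measured $(T_{e\rightarrow e} - T_{e\rightarrow h}) \approx 0.0035$ quoted below.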
For the wide strip sample, $d = 236~\text{nm}$, we measure junction resistances of $\partial V_1 / \partial I_1 |_{V_1=0} = 425~\Omega$ and $\partial V_2 / \partial I_2 |_{V_2=0} = 625~\Omega$, whereas from the Sharvin conductance, with $W = 0.9~\mu$m, we would expect $1/G_{\text{S}} \simeq 180~\Omega$, which is approximately a factor of 3 smaller. This can be explained by elastic scattering, present in the samples. In the limit $T_{e\rightarrow e},\, T_{e\rightarrow h} \ll 1$, which we assume to be the case, we can write $\partial V_1/\partial I_1 \approx \frac{1}{G_{\text{S}}} \frac{1}{2 R_{e\rightarrow h}}$, Eq.(\[eq:dv1di1\]). Using $\partial V_{1(2)} / \partial I_{1(2)} \approx 525~\Omega$ we get $R_{e\rightarrow h} \approx 0.17$, or $R_{e\rightarrow e} \approx (1 - R_{e\rightarrow h}) \approx 0.83$. This value for $R_{e\rightarrow h}$ is used in further analysis. For the narrow strip sample, $d=100~\text{nm}$, the junction width is $W=3.5~\mu$m. By scaling $\partial V_{1(2)} / \partial I_{1(2)}$ from the wide strip sample, we expect $\partial V_{1(2)} / \partial I_{1(2)} \approx 135~\Omega$. The measured values are $\partial V_1 / \partial I_1 |_{V_1=0} = 75~\Omega$ and $\partial V_2 / \partial I_2 |_{V_2=0} = 68~\Omega$. These lower values are explained by a parallel resistance of $R_{\parallel} \approx 150~\Omega$, which is in agreement with expectation on geometrical grounds. We will first focus on the zero voltage bias transfer signal $\partial V_2 / \partial V_1$. From the wide strip, $d = 236~\text{nm}$, we obtain $\partial V_2 / \partial V_1|_{V_1=0} = \partial V_1 / \partial V_2|_{V_2=0} \approx 0.01$ which, with the aid of Eq.(\[eq:dv2dv1\]) leads to $(T_{e\rightarrow e} - T_{e\rightarrow h}) \approx 0.0035$. From Eq.(\[eq:Tee\]) we can then calculate $\xi_{\text{Sm}} \approx 100~\text{nm}$. 
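The arithmetic of this paragraph can be retraced explicitly: the Sharvin resistance for the wide strip follows from the quoted carrier density, and $R_{e\rightarrow h}$ then follows from the measured $\approx 525~\Omega$ junction resistance via $\partial V_1/\partial I_1 \approx \frac{1}{G_{\text{S}}}\frac{1}{2 R_{e\rightarrow h}}$. A minimal sketch (all input numbers are the ones quoted above):

```python
import math

h_planck = 6.62607015e-34  # J s
e = 1.602176634e-19        # C

# Fermi wavelength from the quoted 2DEG density n_s = k_F^2 / (2 pi)
n_s = 1.1e16                      # m^-2
k_F = math.sqrt(2.0 * math.pi * n_s)
lam_F = 2.0 * math.pi / k_F       # ~ 24 nm

def sharvin_resistance(W):
    """1/G_S with G_S = (2e^2/h) * W / (lambda_F / 2), Eq. above."""
    G_S = (2.0 * e**2 / h_planck) * W / (0.5 * lam_F)
    return 1.0 / G_S

R_sh = sharvin_resistance(0.9e-6)  # wide strip, W = 0.9 um; ~180 Ohm quoted

# Limit T_ee, T_eh << 1: dV1/dI1 ~ (1/G_S) / (2 R_eh), measured ~525 Ohm
R_eh = R_sh / (2.0 * 525.0)        # ~0.17, as stated in the text
print(R_sh, R_eh)
```

This reproduces both the $\simeq 180~\Omega$ Sharvin estimate and $R_{e\rightarrow h} \approx 0.17$.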
Using again Eq.(\[eq:Tee\]) and (\[eq:dv2dv1\]) for the narrow strip, $d = 100~\text{nm}$, we expect $\partial V_2 / \partial V_1 \approx 0.097$, which is in reasonable agreement with the observed values $\partial V_2 / \partial V_1|_{V_1=0} \approx 0.04$ and $\partial V_1 / \partial V_2|_{V_2=0} \approx 0.05$. The discrepancy is mainly due to the fact that we assume $R_{e\rightarrow h}$ to be equal for both samples, whereas in general it will depend on $d$. Using $\xi_{\text{Sm}} \approx 100~\text{nm}$ we can estimate from Fig. \[fig:matrix\]b that the transparency $T_{\text{SIN}} = 0.7$ for the Nb-InAs interface. At finite energy, $\xi_{\text{Sm}} \propto \text{Im}(k_x)^{-1}$ will increase with $E$, until $E \geq \Delta_{\text{eff}}$, where the decay length will diverge; for $E>\Delta_{\text{eff}}$ there will be propagating states. Therefore the transfer signal $\partial V_2 / \partial V_1 (V_1)$ is expected to increase with increasing bias voltage $V_1$. In both samples we observe a dip at low voltage bias, with a width of approximately 0.8 mV. From the estimated interface transparency $T_{\text{SIN}} = 0.7$ we infer an induced energy gap $\Delta_{\text{eff}} = 0.97 \Delta_0 = 0.83~\text{meV}$, where $\Delta_0 = 0.86~\text{meV}$ is the superconducting energy gap of the strip, which is very close to the width of the observed dip. This dip is, however, only observed when the current is injected from one probe; when the current is injected from the opposite probe, there is no dip present in the transfer signal, see Fig. \[fig:exp\]. This indicates that we cannot understand the voltage dependence within the presented model. When the elastic scattering length is small, $\ell \leq \xi_{\text{Sm}}$, we do not measure $\xi_{\text{Sm}}$, but $\xi_{\text{eff}} = \sqrt{\xi_{\text{Sm}} \ell}$. This will lead to a somewhat larger value for $\xi_{\text{Sm}}$. 
In the InAs used, $\ell$ is reduced due to the Ar-sputtering, and we expect that this will especially influence the voltage dependent signal. Furthermore, we see a small dip at $V = 2~\text{mV} \approx (\Delta_{0,probe} + \Delta_{0,strip}) / e$. This is not expected from our model, but is probably related to the fact that we do not inject electrons from a normal reservoir, but use a superconducting injector instead. [@footnote1] Nguyen [*et al.*]{} [@NKH94] obtained an Andreev transfer length $x_{\text{A}} = 1.5~\mu$m, which is equivalent to a decay length for the wave functions of 3 $\mu$m. Comparing this value to the decay length obtained from our experiment, $\xi_{\text{Sm}} \approx 100~\text{nm}$, we see a huge difference. Nguyen [*et al.*]{} obtain their $x_{\text{A}}$ by extrapolating resistances of long Nb-InAs-Nb junctions (20 to 200 $\mu$m) to lower junction lengths. This is, however, not allowed, since a junction of zero length still has a finite resistance, of the order of the Sharvin resistance. [@BvH91a] In conclusion, we have investigated lateral transport in a Nb-InAs-AlSb superconducting quantum well. At zero voltage bias we can describe the transport properties in terms of transmission and reflection probabilities. For quasi-particles in the SQW, a decay length of $\xi_{\text{Sm}} \approx 100~\text{nm}$ is inferred from the experiments. This $\xi_{\text{Sm}}$ corresponds to an interface transparency between the Nb and InAs of $T_{\text{SIN}}=0.7$, which, according to calculations, results in an induced energy gap in the excitation spectrum of the Nb-InAs SQW of $\Delta_{\text{eff}} = 0.97 \Delta_0$. From the experiment there are some indications for the presence of this gap, although the exact voltage dependence of the transfer signal cannot be understood within the presented model. We believe that the transfer signal at finite bias is greatly influenced by elastic scattering. This work was supported by the Dutch Science Foundation NWO/FOM. B. J. 
van Wees acknowledges support from the Royal Dutch Academy of Sciences (KNAW). [10]{} A. F. Andreev, Zh. Eksp. Teor. Fiz. [**46**]{}, 1823 (1964), \[Sov. Phys. JETP 19, 1228 (1964)\]. A. F. Volkov, P. H. C. Magnée, B. J. van Wees, and T. M. Klapwijk, to appear in Physica C (unpublished). K. K. Likharev, Rev. Mod. Phys. [**51**]{}, 101 (1979). K. K. Likharev, Pis’ma Zh. Tekh. Fiz. [**2**]{}, 29 (1976), \[Sov. Techn. Phys. Lett. 2, 12 (1976)\]. M. Y. Kupriyanov and V. F. Lukichev, Zh. Eksp. Teor. Fiz. [**94**]{}, 139 (1988), \[Sov. Phys. JETP 67, 1163 (1988)\]. C. Nguyen, H. Kroemer, and E. L. Hu, Phys. Rev. Lett. [**69**]{}, 2847 (1992). C. Nguyen, H. Kroemer, and E. L. Hu, Appl. Phys. Lett. [**65**]{}, 103 (1994). A. Dimoulas [*et al.*]{}, Phys. Rev. Lett. [**74**]{}, 602 (1995). G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B [**25**]{}, 4515 (1982). P. G. de Gennes, [*Superconductivity of metals and alloys*]{} (Benjamin, New York, 1966). Due to the highly non-uniform density of states in a superconductor, the energy distribution of electrons injected from a superconductor will in general be non-linear in voltage bias. M. Büttiker, Y. Imry, R. Landauer, and S. Pinhas, Phys. Rev. B [**31**]{}, 6207 (1985). C. J. Lambert, J. Phys.: Cond. Matter [**3**]{}, 6579 (1991). C. W. J. Beenakker and H. van Houten, Solid State Phys. [**44**]{}, 1 (1991). [^1]: Submitted to Phys. Rev. B, Rap. Comm.
--- abstract: 'We apply the statefinder hierarchy plus the fractional growth parameter to explore the extended Ricci dark energy (ERDE) model, in which there are two independent coefficients $\alpha$ and $\beta$. By adjusting them, we plot evolution trajectories of some typical parameters, including the Hubble expansion rate $E$, deceleration parameter $q$, the third- and fourth-order hierarchy $S_3^{(1)}$ and $S_4^{(1)}$ and fractional growth parameter $\epsilon$, respectively, as well as several combinations of them. For the case of variable $\alpha$ and constant $\beta$, in the low-redshift region the evolution trajectories of $E$ are highly degenerate and those of $q$ separate somewhat. However, the $\Lambda$CDM model is confounded with ERDE in both of these cases. $S_3^{(1)}$ and $S_4^{(1)}$, especially the former, perform much better. In the low-redshift region they can differentiate well among the various cases within ERDE, but not ERDE from $\Lambda$CDM. For the high-redshift region, the combinations $\{S_n^{(1)},\epsilon\}$ can break the degeneracy. Both $\{S_3^{(1)},\epsilon\}$ and $\{S_4^{(1)},\epsilon\}$ can discriminate ERDE with $\alpha=1$ from $\Lambda$CDM, a degeneracy that cannot be broken by any of the aforementioned parameters. For the case of variable $\beta$ and constant $\alpha$, $S_3^{(1)}(z)$ and $S_4^{(1)}(z)$ can only discriminate ERDE from $\Lambda$CDM. Only the pairs $\{S_3^{(1)},\epsilon\}$ and $\{S_4^{(1)},\epsilon\}$ can discriminate not only within ERDE but also ERDE from $\Lambda$CDM. Finally, we find that, surprisingly, $S_3^{(1)}$ rather than $S_4^{(1)}$ is the better choice to discriminate both within ERDE itself and between ERDE and $\Lambda$CDM.' 
author: - 'Fei Yu[^1]' - 'Jing-Lei Cui' - 'Jing-Fei Zhang' - Xin Zhang title: Statefinder hierarchy exploration of the extended Ricci dark energy --- Introduction ============ Data from a series of astronomical observations over more than a decade have shown that the universe is undergoing an epoch of accelerated expansion [@Riess:1998AnJ1009; @Perlmutter:1999ApJ565; @Spergel:2007ApJS377; @Adelman:2008ApJS297]. The most likely explanation for this cosmic acceleration is that the universe is currently dominated by an exotic component, named [*dark energy*]{} (DE), which exerts repulsive gravity. To explain the origin and physical properties of dark energy, numerous theoretical/phenomenological models have been proposed [@Bamba:2012ASS155]. Among these models, the most successful one is the $\Lambda$CDM model (which mainly includes the cosmological constant $\Lambda$ and cold dark matter), because it is simple yet provides a very good fit to the currently available observational data. The cosmological constant is equivalent to the vacuum energy density with $w=-1$. For a time-dependent equation-of-state parameter (EOS) $w$, there are many models, such as quintessence [@Steinhardt:1999PRD123504], Chaplygin gas [@Kamenshchik:2001PLB265], holographic dark energy [@LM:2004PLB1], and so on. In this paper, we study the model inspired by the holographic principle of quantum gravity. The holographic principle was motivated by the quantum properties of black holes [@Bekenstein:1973PRD2333; @Bousso:1999JHEP07004] and later extended to string theory [@Susskind:1994JMP6377]. According to the work of Cohen et al. [@Cohen:1999PRL4971], when $\rho_{\rm de}$ is taken as the quantum zero-point energy density caused by a short distance cut-off, the total energy in a region of size $L$ should not be more than the mass of a black hole of the same size, i.e., $L^3\rho_{\rm de} \leqslant LM_{\rm p}^2$. 
The saturated form of this inequality, which is equivalent to the largest $L$ allowed, leads to the energy density of the holographic dark energy, $\rho_{\rm de}=3c^2M_{\rm p}^2L^{-2}$, where $c$ is a numerical constant introduced and $M_{\rm p}$ is the reduced Planck mass with $M_{\rm p}^2=(8\pi G)^{-1}$. For the model setting, the choice of the infrared (IR) cut-off $L$ is crucial. After the Hubble scale [@Hsu:2004PLB13] and the particle horizon [@Bousso:1999JHEP07004; @Fischler:9806039] were rejected as IR cut-offs for their failure to give rise to cosmic acceleration, Li chose the future event horizon instead, with the expected success [@LM:2004PLB1]. But the adoption of the future event horizon indicates that the history of dark energy depends on the future evolution of the scale factor $a(t)$, which violates causality [@CRG:2007PLB228]. Then the agegraphic dark energy model [@CRG:2007PLB228; @WH:2008PLB113] and the Ricci dark energy (RDE) model [@GCJ:2009PRD043511] emerged to avoid the violation of causality. The former is characterized by the age of the universe as the length measure while the latter takes the average radius of Ricci scalar curvature $|R|^{-1/2}$ as the IR cut-off. Further, the RDE model was extended to a more general form by liberating the coefficients of the two terms of the energy density [@Nojiri:2006GRG1285; @Granda:2008PLB275] $$\label{erde} \rho_{\rm de}=3M_{\rm p}^2(\alpha H^2+\beta\dot{H}),$$ where $\alpha$ and $\beta$ are constants to be determined and the dot denotes a derivative with respect to time. Since the Ricci-type holographic DE models are determined by the local Ricci scalar curvature rather than the global future event horizon, they are naturally free of the causality problem. The model is called the [*extended Ricci dark energy*]{} (ERDE) model, with the special case $\alpha=2\beta$ reducing to the RDE model. With the increasing number of DE models, diagnostics aiming to differentiate them are needed. 
So far several methods have appeared. They are the well-known statefinder [@Sahni:2003JETPL201; @Alam:2003MNRAS1057], $Om$ [@Sahni:2008PRD103502] and the growth rate of perturbations [@Acquaviva:2008PRD043514; @Acquaviva:2010PRD082001; @WLM:1998ApJ483]. The statefinder is a sensitive and robust geometrical diagnostic of DE, which uses both the second and the third derivatives of $a(t)$. Recently, Arabsalmani and Sahni further extended the statefinder to higher-order derivatives of $a(t)$, and called such a diagnostic “statefinder hierarchy” [@Arabsalmani:2011PRD043501]. The statefinder diagnostic has been applied to various DE models [@ZX:2005PLB1; @ZX:2005IJMPD1597; @ZX:2006JCAP01003; @Setare:2007JCAP03007; @ZJF:2008PLB26; @FCJ:2008PLB231; @TML:2009PRD023503; @ZL:2010IJMPD21; @YF:2013CTP243; @CJL:2014EPJC2849; @Sahni:2014ApJL40], but sometimes we do need the diagnostic with higher-order derivatives of $a(t)$. For instance, when diagnosing the new agegraphic DE model, the original statefinder (second and third derivatives) cannot differentiate this model with different parameter values [@CJL:2014EPJC2849], but the hierarchy (further higher-order derivatives) is capable of breaking the degeneracy [@CJL:2014EPJC3100]. Here we study the ERDE model with the statefinder hierarchy, supplemented by the growth rate of perturbations, to explore what the behaviors are like when ERDE takes different parameter values and what the difference is between ERDE and $\Lambda$CDM. We note that in our previous work [@YF:2013CTP243] we diagnosed, with the original statefinders, the ERDE model both with and without interaction between DE and matter. The results therein seem satisfactory, since no degeneracy appears for ERDE with various parameter values. 
But from the aspect of completeness of the theory, we neglected another evolution tendency of ERDE, mentioned in some papers [@ZX:2009PRD103509; @Granda:09100778; @Mathew:2013IJMPD1350056]: values of $\alpha$ larger than 1 enable the ERDE model to exhibit another orientation of evolution, symmetrical to that plotted in Ref. [@YF:2013CTP243]. As a matter of fact [@Granda:09100778], $\alpha>1$ makes ERDE behave like quintessence [@Steinhardt:1999PRD123504] ($w>-1$), while $\alpha<1$ makes it behave like quintom [@FB:2005PLB35] ($w$ evolves across the cosmological-constant boundary $w=-1$). We will expand on this theme in a later section. In Sect. 2, the ERDE model is presented. In Sect. 3, we introduce the diagnostic tools of statefinder hierarchy and growth rate of perturbations. ERDE is then explored in Sect. 4. Finally, Sect. 5 gives the conclusion.

The ERDE model
==============

We consider a flat universe with DE and matter, namely,
$$\label{fdm} 3M_{\rm p}^2H^2=\rho_{\rm de}+\rho_{\rm m},$$
where $\rho_{\rm de}$ and $\rho_{\rm m}$ are, respectively, the energy densities of DE and matter, and $\rho_{\rm de}$ takes the form of ERDE described by Eq. (\[erde\]). We get
$$\label{eq4} E^2=\frac{H^2}{H_0^2}=\Omega_{\rm m0}e^{-3x}+\alpha E^2+\frac{\beta}{2}\frac{dE^2}{dx},$$
where $E=H/H_0$ is the dimensionless Hubble expansion rate, $x=\ln a$, and the subscript "0" denotes present values of physical quantities. The solution of Eq. (\[eq4\]) is
$$\label{eq7} E^2=\Omega_{\rm m0}e^{-3x}+\frac{3\beta-2\alpha}{2\alpha-3\beta-2}\Omega_{\rm m0}e^{-3x}+f_0e^{\frac{2}{\beta}(1-\alpha)x},$$
where
$$f_0=1+\frac{2}{2\alpha-3\beta-2}\Omega_{\rm m0},$$
under the initial condition $E_0=E(x=0)=1$.
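As a quick numerical cross-check of the analytic solution above, the following sketch (our own; the function name and default parameter values are illustrative choices, not from the paper) evaluates $E^2(x)$ of Eq. (\[eq7\]) and verifies that the initial condition $E^2(0)=1$ holds for any $(\alpha,\beta)$:

```python
import numpy as np

def E2(x, alpha=0.9, beta=0.5, Om0=0.27):
    """Dimensionless Hubble rate squared E^2(x), x = ln(a),
    from the analytic ERDE solution, Eq. (eq7)."""
    D = 2.0 * alpha - 3.0 * beta - 2.0
    f0 = 1.0 + 2.0 * Om0 / D      # integration constant fixed by E(0) = 1
    return (Om0 * np.exp(-3.0 * x)
            + (3.0 * beta - 2.0 * alpha) / D * Om0 * np.exp(-3.0 * x)
            + f0 * np.exp(2.0 * (1.0 - alpha) / beta * x))

print(E2(0.0))              # ~ 1.0 (up to rounding) for the default parameters
print(E2(0.0, alpha=1.2))   # ~ 1.0 again: the condition holds for any alpha
```

The cancellation at $x=0$ is exact by construction of $f_0$, so the check is parameter-independent.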
The fractional density and EOS of ERDE are given by
$$\begin{aligned} \Omega_{\rm de} &=& \frac{1}{E^2}\frac{\rho_{\rm de}}{\rho_0} \nonumber \\ &=& \frac{1}{E^2}\left(\frac{3\beta-2\alpha}{2\alpha-3\beta-2}\Omega_{\rm m0}e^{-3x}+f_0e^{\frac{2}{\beta}(1-\alpha)x}\right),\label{omegade} \\ w &=& \frac{\frac{2\alpha-3\beta-2}{3\beta}f_0e^{\frac{2}{\beta}(1-\alpha)x}}{\frac{3\beta-2\alpha}{2\alpha-3\beta-2}\Omega_{\rm m0}e^{-3x}+f_0e^{\frac{2}{\beta}(1-\alpha)x}}.\label{w}\end{aligned}$$

Statefinder hierarchy and growth rate of matter perturbations
=============================================================

3.1 The statefinder hierarchy {#the-statefinder-hierarchy .unnumbered}
-----------------------------

The primary aim of the statefinder hierarchy is to single the $\Lambda$CDM model out from evolving DE models [@Arabsalmani:2011PRD043501]. Since it has the convenient property that all members of the statefinder hierarchy can be expressed in terms of elementary functions (such as the deceleration parameter $q$, the EOS $w$ or the fractional density $\Omega$), even the Chaplygin gas, interacting dark energy, and modified gravity models have already been explored in this way [@LJ:2014JCAP12043; @YL:150308948; @Myrzakulov:2013JCAP10047]. As a brief review, we explain only the basic principle of the statefinder hierarchy. Because in Ref. [@Arabsalmani:2011PRD043501] the EOS of DE, $w$, is a constant in the hierarchy expressions, we here generalize $w$ to be time-dependent and are interested only in the final expressions of the statefinder hierarchy members in terms of elementary functions. Later we will see that these elementary functions are the fractional density and EOS of ERDE already derived above, Eqs. (\[omegade\]) and (\[w\]).
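For illustration, a small numerical sketch (our own naming and parameter choices) of Eqs. (\[omegade\]) and (\[w\]); evaluating the EOS in the far future reproduces the quintessence/quintom dichotomy in $\alpha$ discussed later in the text:

```python
import numpy as np

def erde_background(x, alpha, beta, Om0=0.27):
    """Return (E^2, Omega_de, w) for ERDE at x = ln(a),
    from Eqs. (eq7), (omegade) and (w)."""
    D = 2.0 * alpha - 3.0 * beta - 2.0
    f0 = 1.0 + 2.0 * Om0 / D
    de = ((3.0 * beta - 2.0 * alpha) / D * Om0 * np.exp(-3.0 * x)
          + f0 * np.exp(2.0 * (1.0 - alpha) / beta * x))
    E2 = Om0 * np.exp(-3.0 * x) + de
    w = (D / (3.0 * beta)) * f0 * np.exp(2.0 * (1.0 - alpha) / beta * x) / de
    return E2, de / E2, w

# alpha > 1: quintessence-like (w stays above -1);
# alpha < 1: quintom-like (w falls below -1 in the future);
# alpha = 1: w -> -1, a de Sitter phase in the far future.
for a_ in (0.8, 1.0, 1.2):
    print(a_, erde_background(5.0, a_, 0.5)[2])
```

In the far future the DE term dominates and $w\to(2\alpha-3\beta-2)/(3\beta)$, which is below $-1$ for $\alpha<1$, equal to $-1$ for $\alpha=1$, and above $-1$ for $\alpha>1$ (with $\beta=0.5$).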
We Taylor-expand the scale factor $a(t)$ around the present epoch $t_0$:
$$\frac{a(t)}{a_0}=1+\sum\limits_{n=1}^{\infty}\frac{A_n(t_0)}{n!}[H_0(t-t_0)]^n,$$
where
$$A_n=\frac{a^{(n)}}{aH^n},~~~n\in N;$$
$a^{(n)}$ is the $n$th derivative of $a(t)$ with respect to time. The familiar term $A_2=-q$ represents the deceleration parameter, while $A_3$ is the very original statefinder "$r$" [@Sahni:2003JETPL201]. $A_4$ has been referred to as the snap "$s$" [@Visser:2004CQG2603] and $A_5$ as the lerk "$l$" [@Dabrowski:2005PLB184]. For $\Lambda$CDM ($w=-1$),
$$\label{eq13} \begin{split} &A_2=1-\frac{3}{2}\Omega_{\rm m},\\ &A_3=1,\\ &A_4=1-\frac{3^2}{2}\Omega_{\rm m},~~~~~{\rm etc}, \end{split}$$
where $\Omega_{\rm m}=\frac{2}{3}(1+q)$, which means that for $\Lambda$CDM the elementary functions are the deceleration parameter $q$ or the fractional density parameter of matter $\Omega_{\rm m}$, because $w$ is constant. The [*statefinder hierarchy*]{} $S_n$ can then be defined as [@Arabsalmani:2011PRD043501]:
$$\label{eq14} \begin{split} &S_2=A_2+\frac{3}{2}\Omega_{\rm m},\\ &S_3=A_3,\\ &S_4=A_4+\frac{3^2}{2}\Omega_{\rm m},~~~~~{\rm etc}. \end{split}$$
Comparing Eq. (\[eq14\]) with Eq. (\[eq13\]), one sees the essential feature of this diagnostic: all the $S_n$ parameters stay pegged at unity for $\Lambda$CDM during the entire course of cosmic expansion,
$$S_n|_{\Lambda\rm{CDM}}=1.$$
In fact, this is precisely why the $S_n$ are defined in this way: to distinguish $\Lambda$CDM from both other constant-$w$ DE models and evolving ones. Recall that in Ref. [@Sahni:2003JETPL201] there is a statefinder pair $\{r,s\}$, where $r$ is $S_3$ and $s \equiv \frac{r-1}{3(q-1/2)}$. $s$ also belongs to the third-derivative hierarchy and serves the aim of breaking some of the degeneracy present in $r$.
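The defining property $A_3=1$ for $\Lambda$CDM can also be verified numerically. The sketch below (our own construction, with $H_0=1$ and an assumed $\Omega_{\rm m0}=0.27$) builds the time derivatives of $a$ through $d/dt = aH\,d/da$ and finite differences:

```python
import numpy as np

Om0 = 0.27  # assumed present matter density; units with H0 = 1

def H(a):
    """LCDM Hubble rate (H0 = 1)."""
    return np.sqrt(Om0 * a**-3 + (1.0 - Om0))

def ddda(f, a, h=1e-5):
    """Central-difference derivative df/da."""
    return (f(a + h) - f(a - h)) / (2.0 * h)

# Time derivatives via the chain rule d/dt = a H(a) d/da
adot  = lambda a: a * H(a)
addot = lambda a: adot(a) * ddda(adot, a)
a3dot = lambda a: adot(a) * ddda(addot, a)

a = 1.0
A3 = a3dot(a) / (a * H(a)**3)
print(A3)   # ~ 1 for LCDM, as stated in Eq. (eq13)
```

The same construction at any other epoch (e.g. $a=0.5$) again yields $A_3\simeq1$, illustrating that the statefinder stays pegged throughout the expansion history.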
To unify the notation, Arabsalmani and Sahni introduced a general pair $\{S_n^{(1)},S_n^{(2)}\}$ [@Arabsalmani:2011PRD043501]
$$\begin{split} &S_3^{(1)}=A_3,\\ &S_4^{(1)}=A_4+3(1+q),~~~~~{\rm etc}, \end{split}$$
and
$$S_n^{(2)}=\frac{S_n^{(1)}-1}{\gamma\left(q-\frac{1}{2}\right)},$$
where $\gamma$ is an arbitrary constant and the superscript "(1)" is used for discriminating not only between the original hierarchy $S_n$ and $S_n^{(1)}$, but also between $S_n^{(1)}$ and its derivative $S_n^{(2)}$. Therefore, $\{S_n^{(1)},S_n^{(2)}\}=\{1,0\}$ for $\Lambda$CDM, and $\{r,s\}$ is just $\{S_3^{(1)},S_3^{(2)}\}$ with $\gamma=3$. In this paper we use only the $S_n^{(1)}$ series as follows:
$$\begin{aligned} q= &\frac{1}{2}+\frac{3}{2}w\Omega_{\rm de}, \label{q} \\ S_3^{(1)}= &1+\frac{9}{2}\Omega_{\rm de}w(1+w)-\frac{3}{2}\Omega_{\rm de}w', \label{s31} \\ S_4^{(1)}= &1-\frac{27}{2}\Omega_{\rm de}w(1+w)\left(\frac{7}{6}+w\right)-\frac{27}{4}\Omega_{\rm de}^2w^2(1+w) \nonumber \\ &+\frac{3}{2}\Omega_{\rm de}\left[\left(\frac{13}{2}+9w+\frac{3}{2}w\Omega_{\rm de}\right)w'-w''\right], \label{s41}\end{aligned}$$
where the prime denotes the derivative with respect to $x=\ln a$. ![(color online). The evolution trajectories of the equation of state $w$ versus redshift $z$ of ERDE for variable $\alpha$ with $\beta=0.5$.
Herein $\Omega_{\rm m0}=0.27$.[]{data-label="fig1"}](wz-b.pdf) ![image](Ez-b.pdf) ![image](qz-b.pdf) ![image](S31z-b.pdf) ![image](S41z-b.pdf)

3.2 The growth rate of perturbations {#the-growth-rate-of-perturbations .unnumbered}
------------------------------------

The fractional growth parameter $\epsilon(z)$ [@Acquaviva:2008PRD043514; @Acquaviva:2010PRD082001] can also supplement the statefinders as a null diagnostic; it is defined as
$$\epsilon(z):=\frac{f(z)}{f_{\Lambda{\rm CDM}}(z)},$$
where $f(z)=d\ln\delta/d\ln a$ represents the growth rate of linearized density perturbations [@WLM:1998ApJ483],
$$f(z) \simeq \Omega_{\rm m}^\gamma(z),$$
$$\gamma(z)=\frac{3}{5-\frac{w}{1-w}}+\frac{3(1-w)\left(1-\frac{3}{2}w\right)}{125\left(1-\frac{6}{5}w\right)^3}(1-\Omega_{\rm m}),$$
where $w$ is either constant or varies slowly with time. Combining the fractional growth parameter $\epsilon(z)$ with the statefinder hierarchy, one can define a [*composite null diagnostic*]{} (CND): $\{S_n,\epsilon\}$ [@Arabsalmani:2011PRD043501]. For $\Lambda$CDM, $\gamma\simeq0.55$ and $\epsilon=1$ [@WLM:1998ApJ483; @Linder:2005PRD043529]; therefore $\{S_n,\epsilon\}=\{1,1\}$.

Exploring ERDE with statefinder hierarchy
=========================================

![image](S31e-b.pdf) ![image](S41e-b.pdf) ![(color online). A comparison of the evolution trajectories of the fractional growth parameter $\epsilon$ versus redshift $z$ of ERDE for $\alpha=1$ with $\beta=0.5$ and that of the $\Lambda$CDM model. Herein $\Omega_{\rm m0}=0.27$.[]{data-label="fig4"}](epz-b.pdf) ![image](Ez-a.pdf) ![image](qz-a.pdf) ![image](S31z-a.pdf) ![image](S41z-a.pdf) ![image](S31e-a.pdf) ![image](S41e-a.pdf)

First, it is necessary to clarify how the considerations of this work differ from those of our previous work [@YF:2013CTP243] mentioned above, which helps in choosing proper typical parameter values here.
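Before turning to the parameter scans, the growth-rate quantities of Sect. 3.2 can be sketched as follows (our own naming; for $\Lambda$CDM the growth index comes out near the quoted $\gamma\simeq0.55$):

```python
def gamma_growth(w, Om):
    """Growth index gamma for constant or slowly varying w
    (Wang & Steinhardt parametrization quoted in the text)."""
    return (3.0 / (5.0 - w / (1.0 - w))
            + 3.0 * (1.0 - w) * (1.0 - 1.5 * w)
              / (125.0 * (1.0 - 1.2 * w)**3) * (1.0 - Om))

def growth_rate(w, Om):
    """f = dln(delta)/dln(a), approximated by Om**gamma."""
    return Om ** gamma_growth(w, Om)

# LCDM today (w = -1, Om ~ 0.27): gamma ~ 0.55, and epsilon = f/f_LCDM = 1
g = gamma_growth(-1.0, 0.27)
print(round(g, 3))   # -> 0.554
```

The composite null diagnostic is then just the pair $\{S_n^{(1)},\,f_{\rm model}/f_{\Lambda\rm CDM}\}$ evaluated along the evolution.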
In the former, we used an imposed condition $w_0=-1$ to reduce by one the degrees of freedom of the parameters $\alpha$ and $\beta$, which are both a priori arbitrary. But in this paper, from a purely theoretical perspective, we let $w$ remain free so as to investigate how strongly the ERDE model depends on $\alpha$ and $\beta$ by adjusting them. Therefore, we explore two cases. The first is adjusting $\alpha$ with a constant $\beta$; the second, conversely, is adjusting $\beta$ with a constant $\alpha$. For constant $\beta$, we take $\beta=0.5$ as a typical value, close to the observational constraints [@GCJ:2009PRD043511; @WYT:2010PRD083523; @Malekjani:2011ASS515]. We fix $\Omega_{\rm m0}=0.27$ throughout the paper. Figure \[fig1\] exhibits the phenomenon we mentioned above, namely, "[*$\alpha>1$ makes ERDE behave like quintessence while $\alpha<1$ like quintom*]{}". It is obvious that $\alpha$ plays a key role in the evolution of ERDE. When $\alpha>1$, the EOS evolves in the range $-1<w<0$. When $\alpha<1$, the EOS evolves from the region $w>-1$ to that of $w<-1$, i.e., the model exhibits a quintom-like evolution behavior. In particular, the boundary case $\alpha=1$ also makes the model behave like quintessence, but the universe will ultimately enter the de Sitter phase in the far future. It is worth emphasizing that these features also hold for the RDE model [@ZX:2009PRD103509]. In our previous work [@YF:2013CTP243], although $\beta$ took the values 0.3, 0.4, 0.5, and 0.6, under the condition of the present EOS value $w_0=-1$ all the values of $\alpha$ obtained there were less than 1. Precisely for this reason, and without loss of generality, in this paper we explore the ERDE model comprehensively without missing any possibilities. Thus, around the boundary value 1, we take $\alpha$ to be 0.8, 0.9, 1.0, 1.1, and 1.2.
Figure \[fig2\] shows the evolution of the various-order derivatives of the scale factor $a$ versus redshift $z$, from the first to the fourth, for the ERDE model. They are $E$, $q$, $S_3^{(1)}$ and $S_4^{(1)}$, respectively, and they are also compared with the $\Lambda$CDM model. It can be seen that for $E(z)$, according to Eq. (\[eq7\]), in the low-redshift region ($z \lesssim 1$) the curves of the model with various parameter values, even together with that of $\Lambda$CDM, are highly degenerate. Although the degeneracy is broken in the high-redshift region, the well-known observational data lie mainly within the low-redshift region $z \lesssim 1$. For instance, for some supernova samples [@Conley:2011ApJS1] the majority of the redshifts are in the range $z<1$, while only a few reach the higher range $1<z<1.4$. Therefore, current observations of $E(z)$ are of little help here. If the next-generation Extremely Large Telescopes, with their high resolution, were to observe high-redshift QSOs ($2<z<5$) [@Liske:2008MNRAS1192], the evolution of $E(z)$ could become an effective discriminator. For $q(z)$, according to Eq. (\[q\]), in the low-redshift region the degeneracy that exists in the $E(z)$ case is broken to some extent, but the trends of these curves are quite close to one another, including that of $\Lambda$CDM. The cases of $S_3^{(1)}$ and $S_4^{(1)}$ perform better, especially the former. As the $S_3^{(1)}(z)$ plot in Fig. \[fig2\] shows, the degeneracy is perfectly broken in the region $z<1$. The two symmetrical orientations of evolution due to different $\alpha$, which were inferred earlier from the $w(z)$ plot of Fig. \[fig1\], appear clearly. When $\alpha>1$, i.e. when $w$ always stays larger than $-1$, $S_3^{(1)}$ decreases away from 1; when $\alpha<1$, i.e. when $w$ can evolve across $-1$, $S_3^{(1)}$ increases away from 1.
But unlike $S_3^{(1)}$, $S_4^{(1)}$ does not show this symmetry; its curves all lie on the same side, although they separate well in the low-redshift region. Both plots share a common feature: when $\alpha=1$, we get $S_3^{(1)}=S_4^{(1)}=1$, the same as for $\Lambda$CDM. So in the $S_3^{(1)}(z)$ and $S_4^{(1)}(z)$ plots there are two shortcomings. One is the high degeneracy still existing in the high-redshift region. The other is that the $\alpha=1$ curves of ERDE coincide with that of $\Lambda$CDM. Faced with these, a purely geometrical diagnostic alone is not sufficient. Instead, we combine it with the fractional growth parameter, i.e. the CND, trying to find a better way.

  $\alpha$                        0.8      0.9      1.0      1.1      1.2
  ------------------------------- -------- -------- -------- -------- --------
  $\beta$                         0.5
  $S_{30}^{(1)}$                  2.088    1.464    1        0.696    0.552
  $S_{40}^{(1)}$                  2.806    1.492    1        1.043    1.332
  $\epsilon_0$                    1.0034   0.9974   0.9902   0.9813   0.9702
  $\bigtriangleup S_{30}^{(1)}$   1.536
  $\bigtriangleup S_{40}^{(1)}$   1.806
  $\bigtriangleup\epsilon_0$      0.0332

  : The present values of the statefinders and fractional growth parameter, $S_{30}^{(1)}$, $S_{40}^{(1)}$ and $\epsilon_0$, and the differences of them, $\bigtriangleup S_{30}^{(1)}$, $\bigtriangleup S_{40}^{(1)}$ and $\bigtriangleup \epsilon_0$. For each case, $\bigtriangleup S_{30}^{(1)}=S_{30}^{(1)}({\rm max})-S_{30}^{(1)}({\rm min})$, $\bigtriangleup S_{40}^{(1)}=S_{40}^{(1)}({\rm max})-S_{40}^{(1)}({\rm min})$ and $\bigtriangleup \epsilon_0=\epsilon_0({\rm max})-\epsilon_0({\rm min})$.[]{data-label="table1"}

Interestingly, when using the CND pairs $\{S_3^{(1)},\epsilon\}$ and $\{S_4^{(1)},\epsilon\}$ of Fig. \[fig3\], we find that the degeneracy in the high-redshift region can be broken clearly, with $\{S_3^{(1)},\epsilon\}$ performing far better.
As for the second shortcoming, in both the $S_3^{(1)}$-$\epsilon$ and $S_4^{(1)}$-$\epsilon$ plots, ERDE with $\alpha=1$ exhibits a short line segment while $\Lambda$CDM is just the point $\{1,1\}$. The reason can be seen at a glance from Fig. \[fig4\]: over the evolution history the fractional growth parameter $\epsilon(z)$ approaches 1 ever more closely from past to present, but does not reach it. Since the present values of physical parameters are significant in cosmological research, Table \[table1\] shows the present values of the parameters $S_{30}^{(1)}$, $S_{40}^{(1)}$ and $\epsilon_0$, and the differences of them, for each case $\bigtriangleup S_{30}^{(1)}=S_{30}^{(1)}({\rm max})-S_{30}^{(1)}({\rm min})$, and likewise for $\bigtriangleup S_{40}^{(1)}$ and $\bigtriangleup\epsilon_0$. We can see $\bigtriangleup S_{40}^{(1)}>\bigtriangleup S_{30}^{(1)}$; that is, the fourth derivative of $a$, compared with the third derivative, alleviates the degeneracy of present values. But even so, the comparison of either the $S_3^{(1)}$-$z$ and $S_4^{(1)}$-$z$ plots in Fig. \[fig2\] or the $S_3^{(1)}$-$\epsilon$ and $S_4^{(1)}$-$\epsilon$ plots in Fig. \[fig3\] shows $S_3^{(1)}$ to perform much better during the evolution process than $S_4^{(1)}$. This runs counter to the habitual expectation that the higher the order of the derivative, the better the diagnostic performs [@CJL:2014EPJC3100; @LJ:2014JCAP12043; @YL:150308948; @Myrzakulov:2013JCAP10047].
  $\alpha$                        0.9
  ------------------------------- -------- -------- -------- -------- --------
  $\beta$                         0.35     0.40     0.45     0.50     0.55
  $S_{30}^{(1)}$                  1.580    1.538    1.499    1.464    1.433
  $S_{40}^{(1)}$                  1.629    1.578    1.532    1.492    1.457
  $\epsilon_0$                    0.9923   0.9945   0.9961   0.9974   0.9984
  $\bigtriangleup S_{30}^{(1)}$   0.147
  $\bigtriangleup S_{40}^{(1)}$   0.173
  $\bigtriangleup\epsilon_0$      0.0061

  : The present values of the statefinders and fractional growth parameter, $S_{30}^{(1)}$, $S_{40}^{(1)}$ and $\epsilon_0$, and the differences of them, $\bigtriangleup S_{30}^{(1)}$, $\bigtriangleup S_{40}^{(1)}$ and $\bigtriangleup \epsilon_0$. For each case, $\bigtriangleup S_{30}^{(1)}=S_{30}^{(1)}({\rm max})-S_{30}^{(1)}({\rm min})$, $\bigtriangleup S_{40}^{(1)}=S_{40}^{(1)}({\rm max})-S_{40}^{(1)}({\rm min})$ and $\bigtriangleup \epsilon_0=\epsilon_0({\rm max})-\epsilon_0({\rm min})$.[]{data-label="table2"}

For constant $\alpha$, although it can be either larger or smaller than 1, which leads to reverse orientations of the evolution, we take only $\alpha=0.9$ as a typical value in this exploration, according to the best-fit values for $\alpha$ from some of the recent constraints [@WYT:2010PRD083523; @Malekjani:2011ASS515; @Lepe:2010EPJC575]. To obtain feasible evolutions, we take 0.35, 0.4, 0.45, 0.5, and 0.55 for $\beta$. Likewise, we first explore the four parameters $E$, $q$, $S_3^{(1)}$ and $S_4^{(1)}$ in Fig. \[fig5\], and again make a comparison with the $\Lambda$CDM model. We can see that the ERDE model is insensitive to the parameter $\beta$. For the first- and second-order hierarchy, $E(z)$ and $q(z)$, high degeneracy appears, even together with $\Lambda$CDM. In the third- and fourth-order cases, the ERDE curves for various parameter values are highly degenerate among themselves, but the $\Lambda$CDM model can be discriminated perfectly from ERDE in the low-redshift region. Let us then observe the CND of Fig. \[fig6\].
The $S_3^{(1)}$-$\epsilon$ and $S_4^{(1)}$-$\epsilon$ curves look as good as in the above case of $\beta=0.5$ in Fig. \[fig3\]. In both plots the evolution trajectories separate quite well, but the combination $\{S_3^{(1)},\epsilon\}$ is slightly better than $\{S_4^{(1)},\epsilon\}$ because of the more legible separation between the curves in the high-redshift region. In the same way, we show in Table \[table2\] the present values of $S_{30}^{(1)}$, $S_{40}^{(1)}$ and $\epsilon_0$, and the differences of them, for $\alpha=0.9$. Likewise, the relation $\bigtriangleup S_{40}^{(1)}>\bigtriangleup S_{30}^{(1)}$ demonstrates once again that the fourth-order hierarchy can help to alleviate the degeneracy of present values when compared with the third. But for the same reason as in the comparison of the $S_3^{(1)}$-$\epsilon$ and $S_4^{(1)}$-$\epsilon$ plots in Fig. \[fig6\], we find that for the ERDE model $\{S_3^{(1)},\epsilon\}$ is a more efficient diagnostic pair than $\{S_4^{(1)},\epsilon\}$, as already concluded in the above case of $\beta=0.5$.

Conclusion
==========

In this paper, we explore the extended Ricci dark energy model with the statefinder hierarchy supplemented by the growth rate of perturbations. Since in ERDE there are two independent parameters, $\alpha$ and $\beta$, we vary each in turn, keeping the other fixed, so as to investigate the effects of $\alpha$ and $\beta$ on this model. First, a feature of the holographic Ricci-type dark energy models is corroborated again, namely, $\alpha>1$ makes them behave like quintessence while $\alpha<1$ makes them behave like quintom. For the ERDE model with $\beta=0.5$, letting $\alpha$ vary around 1, we conclude that the evolutions of the Hubble expansion rate $E$ are highly degenerate in the low-redshift region $z \lesssim 1$; and because the observational data come mainly from this low-redshift region, the degeneracy breaking at high redshift is of little practical use.
The evolution of the deceleration parameter $q$ is no longer degenerate in the low-redshift region $z \lesssim 1$. However, in both the $E$ and $q$ plots, the evolution of $\Lambda$CDM cannot easily be singled out. The situations of $S_3^{(1)}$ and $S_4^{(1)}$, which contain the third and fourth derivatives of the scale factor, respectively, turn out to be better. In the region $z<1$, $S_3^{(1)}(z)$ evolves along two mutually symmetrical orientations depending on $\alpha$: when $\alpha>1$ it decreases away from 1; when $\alpha<1$ it increases away from 1. Although $S_4^{(1)}$ seems featureless in contrast with $S_3^{(1)}(z)$, a comparison of the present-value differences of the parameters $S_{3}^{(1)}$ and $S_{4}^{(1)}$ ($\bigtriangleup S_{30}^{(1)}=1.536$, $\bigtriangleup S_{40}^{(1)}=1.806$) shows that $S_4^{(1)}$ is capable of alleviating the degeneracy present in the other statefinder parameters for ERDE. There remain two unsolved problems: high degeneracy still exists in the high-redshift region, and the $\alpha=1$ case is degenerate with $\Lambda$CDM. For these, the combination of the statefinder hierarchy $S_n$ and the fractional growth parameter $\epsilon$ (the CND) can help. In both the $S_3^{(1)}$-$\epsilon$ and $S_4^{(1)}$-$\epsilon$ plots, the degeneracy in the high-redshift region is largely broken, and ERDE with $\alpha=1$ exhibits a short line segment whereas $\Lambda$CDM is just the point $\{1,1\}$. Nevertheless, the fact that the $S_3^{(1)}$-$z$ and $S_3^{(1)}$-$\epsilon$ planes display a more legible and regular pattern of evolution with $\alpha$ than the $S_4^{(1)}$-$z$ and $S_4^{(1)}$-$\epsilon$ planes reveals that the third-order statefinder hierarchy $S_3^{(1)}$ is more meaningful for ERDE than $S_4^{(1)}$. For the ERDE model with $\alpha=0.9$, we find that, in contrast with $\alpha$, $\beta$ has a weak influence.
The evolution trajectories of $E$, $q$, $S_3^{(1)}$ and $S_4^{(1)}$ with respect to redshift $z$ are highly degenerate within the ERDE model. The degeneracy of ERDE with $\Lambda$CDM still exists for $E(z)$ and $q(z)$, but it is perfectly broken in the low-redshift region for both $S_{3}^{(1)}(z)$ and $S_{4}^{(1)}(z)$. As for the high-redshift region, the use of the CND can break the degeneracy there; in particular, the $\{S_3^{(1)},\epsilon\}$ pair performs more efficiently than the $\{S_4^{(1)},\epsilon\}$ pair, although $\bigtriangleup S_{30}^{(1)}=0.147<\bigtriangleup S_{40}^{(1)}=0.173$. $\Lambda$CDM can also be discriminated from ERDE by the CND. From all of the above, we find that, although the higher-order statefinder hierarchy, supplemented with the growth rate of perturbations, can differentiate the ERDE model both among its various parameter values and from the $\Lambda$CDM model, the interesting finding is that the third-order statefinder hierarchy is actually a better choice than the fourth-order hierarchy for the ERDE model.

This work was supported by the National Natural Science Foundation of China under Grant No. 11175042, the Provincial Department of Education of Liaoning under Grant No. L2012087, and the Fundamental Research Funds for the Central Universities under Grants No. N140505002, No. N140506002, and No. N140504007.

A.G. Riess et al., Supernova search team collaboration, Astron. J. [**116**]{} (1998) 1009. \[astro-ph/9805201\] S. Perlmutter et al., Supernova cosmology project collaboration, Astrophys. J. [**517**]{} (1999) 565. \[astro-ph/9812133\] D.N. Spergel et al., WMAP collaboration, Astrophys. J. Suppl. [**170**]{} (2007) 377. \[astro-ph/0603449\] J.K. Adelman-McCarthy et al., SDSS collaboration, Astrophys. J. Suppl. [**175**]{} (2008) 297. K. Bamba, S. Capozziello, S. Nojiri, S. D. Odintsov, Astrophys. Space Sci. [**342**]{} (2012) 155. arXiv:1205.3421 \[gr-qc\] P.J. Steinhardt, L.M. Wang, I. Zlatev, Phys. Rev.
D [**59**]{} (1999) 123504. \[astro-ph/9812313\] A. Kamenshchik, U. Moschella, V. Pasquier, Phys. Lett. B [**511**]{} (2001) 265. M. Li, Phys. Lett. B [**603**]{} (2004) 1. J.D. Bekenstein, Phys. Rev. D [**7**]{} (1973) 2333. R. Bousso, JHEP [**9907**]{} (1999) 004. L. Susskind, J. Math. Phys. (N.Y.) [**36**]{} (1994) 6377. A. Cohen, D. Kaplan, A. Nelson, Phys. Rev. Lett. [**82**]{} (1999) 4971. S.D.H. Hsu, Phys. Lett. B [**594**]{} (2004) 13. W. Fischler, L. Susskind, arXiv:hep-th/9806039. R.G. Cai, Phys. Lett. B [**657**]{} (2007) 228. H. Wei, R.G. Cai, Phys. Lett. B [**660**]{} (2008) 113. arXiv:0708.0884 \[astro-ph\] C. Gao, F. Wu, X. Chen, Y.-G. Shen, Phys. Rev. D [**79**]{} (2009) 043511. arXiv:0712.1394 \[astro-ph\] S. Nojiri, S. D. Odintsov, Gen. Rel. Grav. [**38**]{} (2006) 1285. \[hep-th/0506212\] L.N. Granda, A. Oliveros, Phys. Lett. B [**669**]{} (2008) 275. V. Sahni, T.D. Saini, A.A. Starobinsky, U. Alam, JETP Lett. [**77**]{} (2003) 201. \[astro-ph/0201498\] U. Alam, V. Sahni, T.D. Saini, A.A. Starobinsky, Mon. Not. Roy. Astron. Soc. [**344**]{} (2003) 1057. \[astro-ph/0303009\] V. Sahni, A. Shafieloo, A.A. Starobinsky, Phys. Rev. D [**78**]{} (2008) 103502. arXiv:0807.3548 \[astro-ph\] V. Acquaviva, A. Hajian, D.N. Spergel, S. Das, Phys. Rev. D [**78**]{} (2008) 043514. arXiv:0803.2236 \[astro-ph\] V. Acquaviva, E. Gawiser, Phys. Rev. D [**82**]{} (2010) 082001. arXiv:1008.3392 \[astro-ph.CO\] L. Wang, P.J. Steinhardt, Astrophys. J. [**508**]{} (1998) 483. \[astro-ph/9804015\] M. Arabsalmani, V. Sahni, Phys. Rev. D [**83**]{} (2011) 043501. arXiv:1101.3436 \[astro-ph.CO\] X. Zhang, Phys. Lett. B [**611**]{} (2005) 1. \[astro-ph/0503075\] X. Zhang, Int. J. Mod. Phys. D [**14**]{} (2005) 1597. \[astro-ph/0504586\] X. Zhang, F.-Q. Wu, J. Zhang, J. Cosmol. Astropart. Phys. [**01**]{} (2006) 003. \[astro-ph/0411221\] M.R. Setare, J. Zhang, X. Zhang, J. Cosmol. Astropart. Phys. [**03**]{} (2007) 007. \[gr-qc/0611084\] J. Zhang, X. Zhang, H. Liu, Phys. 
Lett. B [**659**]{} (2008) 26. arXiv:0705.4145 \[astro-ph\] C.-J. Feng, Phys. Lett. B [**670**]{} (2008) 231. arXiv:0809.2502 \[hep-th\] M.L. Tong, Y. Zhang, Phys. Rev. D [**80**]{} (2009) 023503. arXiv:0906.3646 \[gr-qc\] L. Zhang, J. Cui, J. Zhang, X. Zhang, Int. J. Mod. Phys. D [**19**]{} (2010) 21. arXiv:0911.2838 \[astro-ph.CO\] F. Yu, J.-F. Zhang, Commun. Theor. Phys. [**59**]{} (2013) 243. arXiv:1305.2792 \[astro-ph.CO\] J.-L. Cui, J.-F. Zhang, Eur. Phys. J. C [**74**]{} (2014) 2849. arXiv:1402.1829 \[astro-ph.CO\] V. Sahni, A. Shafieloo, A.A. Starobinsky, Astrophys. J. [**793**]{} (2014) no. 2, L40. arXiv:1406.2209 \[astro-ph.CO\] J.-F. Zhang, J.-L. Cui, X. Zhang, Eur. Phys. J. C [**74**]{} (2014) 3100. arXiv:1409.6562 \[astro-ph.CO\] X. Zhang, Phys. Rev. D [**79**]{} (2009) 103509. arXiv:0901.2262 \[astro-ph.CO\] L.N. Granda, W. Cardona, A. Oliveros, arXiv:0910.0778 \[hep-th\] T.K. Mathew, J. Suresh, D. Divakaran, Int. J. Mod. Phys. D [**22**]{} (2013) 1350056. B. Feng, X.L. Wang, X.M. Zhang, Phys. Lett. B [**607**]{} (2005) 35. \[astro-ph/0404224\] J. Li, R. Yang, B. Chen, J. Cosmol. Astropart. Phys. [**12**]{} (2014) 043. arXiv:1406.7514 \[gr-qc\] L. Yin, L.-F. wang, J.-L. Cui, Y.-H. Li, X. Zhang, arXiv:1503.08948 \[astro-ph.CO\] R. Myrzakulov, M. Shahalam, J. Cosmol. Astropart. Phys. [**10**]{} (2013) 047. arXiv:1303.0194 \[gr-qc\] M. Visser, Class. Quant. Grav. [**21**]{} (2004) 2603. \[gr-qc/0309109\] M.P. Dabrowski, Phys. Lett. B [**625**]{} (2005) 184. \[gr-qc/0505069\] E.V. Linder, Phys. Rev. D [**72**]{} (2005) 043529. \[astro-ph/0507263\] Y. Wang, L. Xu, Phys. Rev. D [**81**]{} (2010) 083523. M. Malekjani, A. Khodam-Mohammadi, N. Nazari-pooya, Astrophys. Space Sci. [**332**]{} (2011) 515. A. Conley et al., Astrophys. J. Suppl. [**192**]{} (2011) 1. arXiv:1104.1443 \[astro-ph.CO\] J. Liske et al., Mon. Not. Roy. Astron. Soc. [**386**]{} (2008) 1192. arXiv:0802.1532 \[astro-ph\] S. Lepe, F. Peña, Eur. Phys. J. C [**69**]{} (2010) 575. 
[^1]: Corresponding author
---
abstract: 'The modified local spin density functional and the related local potential for excited states is tested by employing the ionization potential theorem. The functional is constructed by splitting $k$-space. Since its functional derivative cannot be obtained easily, the corresponding potential is given by analogy to its ground-state counterpart. Further, to calculate the highest occupied orbital energy $\epsilon_{max}$ accurately, the potential is corrected for its asymptotic behavior by employing the van Leeuwen and Baerends correction to it. $\epsilon_{max}$ so obtained is then compared with the $\Delta$SCF ionization energy calculated using the MLSD functional. It is shown that the two match quite accurately.'
address: 'Department of Physics, Indian Institute of Technology, Kanpur 208 016, India'
author:
- 'M. Hemanadhan, Md. Shamim and Manoj K. Harbola'
title: ' Testing excited-state energy density functional and potential with the ionization potential theorem '
---

Introduction {#sec:introd}
============

Ground-state density functional theory (gDFT) is the most widely used theory for electronic structure calculations [@book:Parr-Yang:1989; @book:Dreizler-Gross:1990; @book:March:1992; @book:Engel-Dreizler:2011]. The key to its success has been the accurate exchange-correlation functionals $E_{xc}$ developed over the past few decades [@becke:1988; @perdew-burke-ernzerhof:1996; @Tau-Perdew-etal:2003]. The exchange-correlation potential $v_{xc}$ required for the self-consistent calculations (SCF) is then obtained either by taking the functional derivative of $E_{xc}$ or, in some cases, by using model potentials [@leeuwen-baerends:1994; @Umezawa:2006; @becke-johnson:2006]. It is then natural to ask whether the ground-state theory can be extended to excited states, i.e., whether self-consistent Kohn-Sham calculations can be performed for the density and total energy of excited states.
Although time-dependent density functional theory (TDDFT) is now routinely used for calculations of excitation energies and the corresponding oscillator strengths, the theory has its limitations [@book:Ullrich:2012]. On the other hand, the progress of time-independent excited-state DFT (eDFT) has been slow. Some of the earlier work includes the extension of ground-state theory to the lowest energy states of a given symmetry by Gunnarsson and Lundqvist [@gunnarsson-lundqvist:1976; @gunnarsson-lundqvist:1976:err:1977], Ziegler et al. [@ziegler-rauk-baerends:1977] and von Barth [@barth:1979]. Subsequent works include the development of ensemble theory for excited states by Theophilou [@theophilou:1979] and by Gross, Oliveira and Kohn [@gross-oliveira-kohn:1988; @oliveira-gross-kohn:1988], and its application to the study of transition energies of atoms by Nagy [@nagy:1996]. Recently, the work by G[ö]{}rling [@gorling:1999] and by Levy and Nagy [@levy-nagy:1999; @nagy-levy:2001], both based on the constrained-search approach [@Levy:1979], rekindled interest in eDFT. Following this, Samal and Harbola explored density-functional theory for excited states further [@harbola:2002; @harbola:2004; @samal-harbola:2005; @samal-harbola:2006; @samal-harbola:2006b]. A crucial requirement for implementing eDFT is appropriate functionals for excited states. These functionals should be as easy to use as the ground-state functionals and be such that improved functionals can be built upon them. For the ground state, such a functional is provided by the local-density approximation (LDA), which is based on the homogeneous electron gas (HEG). Motivated by this, we have proposed an LDA-like functional for excited states [@samal-harbola:2005]. This functional is also obtained using the homogeneous electron gas. The spin-generalization of the functional, the modified local spin-density (MLSD) functional, has been shown to lead to accurate transition energies [@samal-harbola:2005].
Encouraged by this, we have been subjecting our method of constructing the functional to more and more severe tests [@hemanadhan-harbola:2010; @hemanadhan-harbola:2012; @shamim-harbola:2010]. With this in mind, we test our method in this paper for the satisfaction of the ionization potential (IP) theorem. According to the IP theorem for the ground state [@PhysRevLett.49.1691; @Levy-Perdew-Sahni:1984; @Katriel-Davidson:1980] or an excited state, the highest occupied Kohn-Sham orbital energy ($\epsilon_{max}$) of a system is equal to the negative of the ionization potential $I$ [@PhysRevLett.49.1691]. Thus
$$\epsilon_{max} = -I(N) \equiv E(N) - E(N-1) \label{eq:ip}$$
where $E(N)$ and $E(N-1)$ are the energies of the $N$- and $(N-1)$-electron systems such that $I(N)$ is smallest. The difference of these energies for the $N$- and $(N-1)$-electron systems calculated self-consistently is referred to as $\Delta$SCF. The relationship of Eq. (\[eq:ip\]) arises because the asymptotic decay of the electronic density of a system is related to its ionization potential; on the other hand, for a Kohn-Sham system it is governed by $\epsilon_{max}$, thereby relating the two quantities. Thus, if the exact functionals were known, the corresponding Kohn-Sham calculation would give $\epsilon_{max}$, $E(N)$ and $E(N-1)$ such that Eq. (\[eq:ip\]) is satisfied. However, this is not the case when approximate functionals are used. For instance, when ground-state calculations are done using the LDA, the $\Delta$SCF values are accurate, but $\epsilon_{max}$ is roughly $50\%$ of the $\Delta$SCF energy or the experimental value [@w2012crc]. This is due to the fact that the LDA potential decays exponentially rather than correctly as $-1/r$ for $r \rightarrow \infty$; it is therefore insufficiently binding for the outermost electrons. For the ground state, it is seen that if the asymptotic behavior of the potential is improved, $\epsilon_{max}$ becomes close to $E(N)-E(N-1)$.
Two ways of making such a correction are the van Leeuwen and Baerends (LB) method [@leeuwen-baerends:1994] and the range-separated hybrid (RSH) methods [@inbook:savin:chong:1995; @leininger-stoll-werner-savin:1997; @iikura-tsundea-yanai-hirao:2001; @yanai-tew-handy:2004; @baer-neuhauser:2005; @kronik-tamar-abramson-baer:2012]. In the LB method, a correction term is added to the LDA potential to make the effective potential go as $-1/r$ asymptotically, while in the RSH approach the Coulomb term is split into long-range (LR) and short-range (SR) parts. Thus, $r^{-1}$ is written as $ r^{-1} \operatorname{erf}(\gamma r) + r^{-1} \operatorname{erfc}(\gamma r )$, where $\gamma$ is a parameter [@inbook:savin:chong:1995; @leininger-stoll-werner-savin:1997; @iikura-tsundea-yanai-hirao:2001; @yanai-tew-handy:2004; @baer-neuhauser:2005; @kronik-tamar-abramson-baer:2012]. Here the first term is long-range and approaches $2\gamma/\sqrt{\pi}$ as $r\rightarrow 0$, while the second term is close to $\exp(-\gamma r)/r$ [@Bohm-Pines:1953c] and is short-range. In the RSH approach, the long-range part is treated exactly and the short-range part within the LDA. Recently, Stein et al. [@stein-eisenberg-kronik-baer:2010] have applied this idea to study the band gaps of a wide range of systems; in their work $\gamma$ is fixed by the satisfaction of the IP theorem. Motivated by their work, we study the IP theorem using the LB potential. The line of our investigation is as follows. We first show that the LB potential leads to the satisfaction of the IP theorem for ground states to a high degree of accuracy. We then ask: does our approach of constructing excited-state energy functionals give the same level of accuracy for the IP theorem for excited states when applied with the LB potential? This provides a test for our approach, and the positive results of our calculations point to the correctness of our method of dealing with excited-state functionals.
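The range-separation identity and its two limits can be verified directly; a minimal sketch (the value of $\gamma$ is an arbitrary choice for illustration):

```python
import math

gamma = 0.5                                   # arbitrary range-separation parameter

lr = lambda r: math.erf(gamma * r) / r        # long-range part, treated exactly in RSH
sr = lambda r: math.erfc(gamma * r) / r       # short-range part, treated within LDA

r = 3.0
split_error = abs(lr(r) + sr(r) - 1.0 / r)    # erf + erfc = 1, so the split is exact
lr_origin = lr(1e-8)                          # LR part stays finite: -> 2*gamma/sqrt(pi)
```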
The LB correction to the LDA potential is given as $$- \beta \rho_{\sigma}^{1/3}(\mathbf{r}) \frac{x^2_{\sigma}}{1+3\beta x_{\sigma} \sinh^{-1}(x_{\sigma})} \label{eq:lbgradient}$$ where $x_{\sigma} = \frac{|\nabla \rho_{\sigma}|}{\rho_{\sigma}^{4/3}}$ is a dimensionless ratio, and the parameter $\beta$ was originally fixed ($\beta=0.05$) by fitting the LB potential so that it closely resembles the exact potential of the beryllium atom. In the present paper, the parameter $\beta$ is instead chosen to satisfy the IP theorem, in the spirit of the work of Stein et al. [@stein-eisenberg-kronik-baer:2010]. The difference from Ref. [@stein-eisenberg-kronik-baer:2010] is that in the present work the potential is given entirely in terms of the density, whereas the RSH functional is written using both the wavefunction and the density. We note that the LB potential has recently also been applied to calculate, quite satisfactorily, the band gaps of a wide variety of bulk systems [@Prashant-Harbola-etal:2013]. In the following, we present in Section \[sec:gr-theory\] the results of applying the LB potential to the ground states of several atoms. It is shown that, with the help of the parameter $\beta$, the LB potential can be optimized to satisfy the IP theorem to a very high degree. The ground-state results set the standard against which the excited-state results, obtained with the functional and the corresponding potential proposed for excited states, are to be judged. After this we study the IP theorem for excited states using the LB correction in conjunction with the modified LDA potential based on the idea [@inbook:Harbola-etal:Ghosh-Chattaraj:2013] of splitting the $k$-space for excited states. It is shown that the IP theorem is satisfied more accurately with the modified LDA potential than with the ground-state LDA expression for the potential.
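The $-1/r$ asymptotic behavior produced by the correction of Eq. \eqref{eq:lbgradient} can be checked on a model density; a hedged sketch (the exponential density $\rho_\sigma = e^{-2r}$ is our illustrative stand-in for an atomic density, not taken from the text):

```python
import math

beta = 0.05  # original LB value

def lb_correction(rho, grad_rho):
    """LB gradient correction of Eq. (eq:lbgradient) at a single point."""
    x = grad_rho / rho ** (4.0 / 3.0)
    return -beta * rho ** (1.0 / 3.0) * x * x / (1.0 + 3.0 * beta * x * math.asinh(x))

r = 50.0                                  # far asymptotic region (a.u.)
rho = math.exp(-2.0 * r)                  # model exponentially decaying spin density
v_corr = lb_correction(rho, 2.0 * rho)    # |grad rho| = 2*rho for this density
# v_corr * r tends to -1, i.e. v_corr -> -1/r; the approach is only logarithmic,
# so at r = 50 the product is near, but not exactly, -1.
```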
In addition, the modified potential has the proper structure at the minima of the radial density, in contrast to the ground-state LDA potential, which has undesirable features at these points [@cheng-Wu-Voorhis:2008]. Results for the ground-state IP theorem using LB potential {#sec:gr-theory} ========================================================== In this section, we first present the ground-state exchange-only $\epsilon_{max}$ and $\Delta$SCF energies obtained with the LDA and the LB potential for a few atoms. Following that, we present the results with a correlation functional included. The ground-state results are not entirely new in light of earlier work [@Banerjee-Harbola:1999], but it is necessary to give them here to put the new excited-state results in proper perspective. The LDA exchange energy functional $E_{x}$ [@dirac30] is given by $$E^{LDA}_{x}[\rho(\mathbf{r})] = -\frac{3}{4} \left(\frac{3}{\pi} \right)^{1/3} \int \rho^{4/3} (\mathbf{r}) d\mathbf{r} \label{eq:Ex-LDA}$$ and the corresponding potential $v^{LDA}_{x}$ required for self-consistent calculations is $$v_{x}^{LDA} = - \left( \frac{6\rho(\mathbf{r})}{\pi} \right)^{1/3} \label{eq:vx-LDA}$$ The spin generalization of the expression of Eq. \eqref{eq:Ex-LDA}, the local spin-density approximation (LSD), is obtained by using $$E_x^{LSD} [\rho_{\alpha},\rho_{\beta}] = \frac{1}{2} E_x [2\rho_{\alpha}] + \frac{1}{2} E_x [2\rho_{\beta}].$$ In Table \[tab:gr-x-IP\], the $\epsilon_{max}$ and $\Delta$SCF obtained using the spin-generalized LDA exchange functional of Eq. \eqref{eq:Ex-LDA} and its potential of Eq. \eqref{eq:vx-LDA} are shown. As is well known and noted earlier, the LSD underestimates the highest occupied orbital energy (HO) by roughly $50\%$, due to the incorrect asymptotic exponential behavior of the LDA exchange potential of Eq. \eqref{eq:vx-LDA}. The $\Delta$SCF energies, however, are close to the HF values.
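If the potential of Eq. \eqref{eq:vx-LDA} is read per spin channel (with density $\rho_\sigma$), it coincides with $\delta E_x^{LSD}/\delta\rho_\sigma$ obtained from the spin-scaling relation; a finite-difference sketch of this consistency (the test densities are arbitrary values of ours):

```python
import math

def e_x_lda(rho):
    """Integrand of Eq. (eq:Ex-LDA): the Dirac exchange energy density."""
    return -0.75 * (3.0 / math.pi) ** (1.0 / 3.0) * rho ** (4.0 / 3.0)

def e_x_lsd(rho_a, rho_b):
    """Spin-scaling relation defining the LSD exchange energy density."""
    return 0.5 * e_x_lda(2.0 * rho_a) + 0.5 * e_x_lda(2.0 * rho_b)

def v_x_sigma(rho_s):
    """Eq. (eq:vx-LDA) read per spin channel: -(6 rho_sigma / pi)^(1/3)."""
    return -(6.0 * rho_s / math.pi) ** (1.0 / 3.0)

rho_a, rho_b, h = 0.3, 0.1, 1.0e-6   # arbitrary spin densities and step (a.u.)
# functional derivative w.r.t. rho_a by central finite difference
fd = (e_x_lsd(rho_a + h, rho_b) - e_x_lsd(rho_a - h, rho_b)) / (2.0 * h)
```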
As stated in the previous section, $\epsilon_{max}$ and the $\Delta$SCF energy become consistent with each other if the potential goes correctly as $-1/r$ asymptotically. The van Leeuwen and Baerends (LB) potential does that. The LB potential $v_x^{LB}$ [@leeuwen-baerends:1994] is obtained by adding the LB correction of Eq. \eqref{eq:lbgradient} to the LSD potential and is given as $$v_{x,\sigma}^{LB}(\mathbf{r}) = v_{x,\sigma}^{LSD} - \beta \rho_{\sigma}^{1/3}(\mathbf{r}) \frac{x^2_{\sigma}}{1+3\beta x_{\sigma} \sinh^{-1}(x_{\sigma})} \label{eq:vx-LB}$$ where $v_{x,\sigma}^{LSD} = \frac{ \delta E_x^{LSD}}{\delta \rho_{\sigma}}$. In the original LB potential, the parameter $\beta=0.05$. In the present work, in addition to using this value of $\beta$, we also optimize it through the satisfaction of the IP theorem. In the latter calculation, $\beta$ is varied until $\epsilon_{max}$ and the $\Delta$SCF energy match, i.e. $$\epsilon^{\beta}_{max} = E(N,\beta) - E(N-1,\beta)$$ Here, $\epsilon^{\beta}_{max}$ is the highest-occupied eigenvalue for a specific choice of $\beta$. The price for employing the asymptotically corrected model exchange potential is that the corresponding exchange functional is not known. Although in the past the Levy-Perdew relation [@levy-perdew:1985] has been used to obtain the corresponding exchange energies from the potential [@Banerjee-Harbola:1999], this may not always be correct [@Gaiduk-Chulkov-Staroverov:2009]. In this section we therefore use the potential above in the KS calculations but employ the LSD exchange functional for calculating the energies. Presented in Table \[tab:gr-x-IP\] are the results for the $\epsilon_{max}$ and $\Delta$SCF energies using the LSD potential, the LB potential with $\beta=0.05$, and the LB potential with optimized $\beta$. As mentioned above, the exchange energy functional used is the LSD functional itself for all three potentials. Also shown are the $\Delta$SCF energies obtained from HF calculations.
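The optimization of $\beta$ amounts to a one-dimensional root search on the IP-theorem condition; a hedged sketch in which the two toy functions below are invented stand-ins for full self-consistent Kohn-Sham calculations (their forms and coefficients are ours, chosen only so that a crossing exists):

```python
# Toy illustration of the beta-optimization loop: beta is varied until the
# highest-occupied eigenvalue equals the Delta-SCF energy difference.
def eps_max(beta):
    return -(0.15 + 0.50 * beta)      # toy model: eigenvalue deepens with beta

def delta_scf(beta):
    return -(0.18 + 0.10 * beta)      # toy model: E(N, beta) - E(N-1, beta)

def mismatch(beta):
    return eps_max(beta) - delta_scf(beta)   # zero when the IP theorem holds

lo, hi = 0.0, 0.5                     # bracketing interval for beta
for _ in range(60):                   # plain bisection on the sign change
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
beta_opt = 0.5 * (lo + hi)            # beta at which eps_max = Delta-SCF
```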
Comparing the results of the LSD and LB calculations with the corresponding Hartree-Fock numbers, it is evident that (i) the $\Delta$SCF values given by the LSD functional are reasonably close to the corresponding HF values, and (ii) making the potential correct in the asymptotic region improves $\epsilon_{max}$ substantially, bringing it close to the $\Delta$SCF values. Interestingly, the match between the $\epsilon_{max}$ obtained with the LB potential and the $\Delta$SCF values is better than that in Hartree-Fock theory. Next, motivated by the work of Ref. [@stein-eisenberg-kronik-baer:2010], we tune the parameter $\beta$ in the LB potential so that $\epsilon_{max}$ matches the $\Delta$SCF energy. The optimized $\beta$ and the corresponding energies are also shown in Table \[tab:gr-x-IP\]. As is evident from the table, choosing $\beta$ through the IP theorem improves the highest orbital energy $\epsilon_{max}$. We note that according to Koopmans' theorem [@Koopmans:1934], the orbital energy $\epsilon_{i}$ is close to the removal energy of an electron from that orbital; however, we find that the DFT results are better in this regard. The results of Table \[tab:gr-x-IP\] are depicted in Fig. \[fig:ip-gr-x\], where we plot the $\Delta$SCF results against $-\epsilon_{max}$ for the LSD, LB and HF theories. We see that the LB results are closest to the $\Delta$SCF$=-\epsilon_{max}$ line. Having presented our results for the exchange-only calculations, we next include correlation using the LDA, with the correlation functional parametrized by Vosko, Wilk and Nusair [@vosko-wilk-nusair:1980]. The orbital energies $\epsilon_{max}$ and the $\Delta$SCF energies for the LB and the $\beta$-optimized LB potentials are presented in Table \[tab:gr-xc-IP\] together with the experimental results [@w2012crc]. We see from Table \[tab:gr-xc-IP\] that with the asymptotically corrected LB potential, the IP theorem is satisfied remarkably well.
The parameter $\beta$ in the LB potential is tuned to satisfy the IP theorem of Eq. \eqref{eq:ip}; the $\epsilon_{max}$ so obtained matches experiment much better. The radial density and the exchange potential of the Li ground state obtained using the LDA and the LB potentials are shown in Fig. \[fig:vx-Li-ground\]. Also shown in Fig. \[fig:vx-Li-ground\], for comparison, is the KLI potential [@krieger-li-iafrate:1992a], which is essentially the exact exchange potential. It is evident that from about $r=0.2$ a.u. onwards, the LB potentials with both $\beta=0.05$ and the optimized $\beta$ are quite close to the KLI potential; the discrepancy of the LB potential for $r<0.2$ a.u. corresponds to the non-zero $\beta$. Furthermore, all three potentials go as $-1/r$ in the asymptotic region, whereas the LSD potential underestimates the exact potential everywhere. The bump in the potential for Li occurs at the minimum of the radial density [@lindgren:1971]. Having given the results for the ground states, we now turn our attention to excited states and show that the exchange functional and potential constructed for these states by splitting the $k$-space of the HEG give results of similar accuracy. Split $k$-space method for constructing excited-state energy functionals and excited-state potential {#sec:excited} ==================================================================================================== In eDFT, we have put forth the idea that excited-state energies be calculated using the modified local spin density (MLSD) functional developed over the past few years [@samal-harbola:2005; @shamim-harbola:2010; @hemanadhan-harbola:2010; @hemanadhan-harbola:2012]. The basis of the MLSD exchange energy functional is the split $k$-space method [@inbook:Harbola-etal:Ghosh-Chattaraj:2013], in which the $k$-space is split in accordance with the orbital occupation of a given excited state. In Fig.
\[fig:k-space\], we show an excited state in which some orbitals (core) are occupied, followed by vacant (unocc) orbitals, and then again occupied orbitals (shell). To construct excited-state functionals, the density at each point is mapped onto the $k$-space of an HEG. The corresponding split $k$-space, also shown in Fig. \[fig:k-space\], is constructed according to the orbital occupation, i.e. the $k$-space is occupied from $0$ to $k_1$, vacant from $k_1$ to $k_2$, and again occupied from $k_2$ to $k_3$, where $k_1, k_2, k_3$ are given by $$\begin{aligned} k_{1}^{3}(\mathbf{r}) &= 3\pi^{2}\rho_{c}(\mathbf{r}) \label{eq:k1} \\ k_{2}^{3}(\mathbf{r})-k_{1}^{3}(\mathbf{r}) &= 3\pi^{2}\rho_{v}(\mathbf{r}) \label{eq:k2} \\ k_{3}^{3}(\mathbf{r})-k_{2}^{3}(\mathbf{r}) &= 3\pi^{2}\rho_{s}(\mathbf{r}) \label{eq:k3}\end{aligned}$$ in terms of $\rho_{c}$, $\rho_{v}$ and $\rho_{s}$, the electron densities of the core, vacant (unoccupied) and shell orbitals, respectively. Further, $$\begin{aligned} \rho_{c}(\mathbf{r}) &= \sum\limits^{n_1}_{i=1} {\left| \phi_{i}^{core}(\mathbf{r})\right|}^{2} \\ \rho_{v}(\mathbf{r}) &= \sum\limits^{n_2}_{i=n_1+1} {\left| \phi_{i}^{unocc}(\mathbf{r})\right|}^{2} \\ \rho_{s}(\mathbf{r}) &= \sum\limits^{n_3}_{i=n_2+1} {\left| \phi_{i}^{shell}(\mathbf{r})\right|}^{2}\end{aligned}$$ where the first $n_1$ orbitals are occupied, orbitals $n_1+1$ to $n_2$ are vacant, and these are followed by occupied orbitals from $n_2+1$ to $n_3$. The total electron density $\rho({\bf r})$ is given as $$\begin{aligned} \rho({\mathbf{r}})=\rho_{c}({\mathbf{r}})+\rho_{s}({\mathbf{r}}) \\ \textrm{or} \hspace{2em} \rho({\mathbf{r}})=\rho_1({\mathbf{r}}) -\rho_2({\mathbf{r}}) + \rho_3({\mathbf{r}})\end{aligned}$$ with $\rho_1=\rho_c,\rho_2=\rho_c+\rho_v $ and $\rho_3=\rho_c+\rho_v+\rho_s$.
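Eqs. \eqref{eq:k1}-\eqref{eq:k3} translate directly into code; a minimal sketch (the density values are arbitrary illustrative numbers of ours):

```python
import math

def split_k_space(rho_c, rho_v, rho_s):
    """Wavevectors of Eqs. (eq:k1)-(eq:k3): occupied 0..k1, vacant k1..k2,
    occupied k2..k3, at a single point with the given partial densities."""
    c = 3.0 * math.pi ** 2
    k1 = (c * rho_c) ** (1.0 / 3.0)
    k2 = (c * (rho_c + rho_v)) ** (1.0 / 3.0)
    k3 = (c * (rho_c + rho_v + rho_s)) ** (1.0 / 3.0)
    return k1, k2, k3

rho_c, rho_v, rho_s = 0.2, 0.05, 0.1     # illustrative core/vacant/shell densities
k1, k2, k3 = split_k_space(rho_c, rho_v, rho_s)
# the shell volumes in k-space reproduce the partial densities by construction
```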
Using this idea, we have constructed kinetic [@hemanadhan-harbola:2010] and exchange-energy functionals [@samal-harbola:2005] for excited states, and shown that these functionals lead to accurate kinetic, exchange, and transition energies. We point out that applying the ground-state LSD functional to excited states generally leads to poor results. Extending this idea to construct energy functionals for other classes of systems also leads to accurate energies [@shamim-harbola:2010; @hemanadhan-harbola:2012]. Encouraged by these studies, we now subject the method to the test of the IP theorem. For this study, we consider the class of excited systems shown in Fig. \[fig:k-space\], for which the MLSD functional is given by [@samal-harbola:2005] $$\begin{aligned} E_X^{MLDA}[\rho] & = \int \rho(\mathbf{r}) \left[ \epsilon(k_3)-\epsilon(k_2)+\epsilon(k_1) \right] d\mathbf{r} + \frac{1}{8\pi^3} \int \left(k_3^2-k_1^2 \right)^2 \ln \left( \frac{k_3+k_1}{k_3-k_1}\right) d\mathbf{r} \nonumber \\ & - \frac{1}{8\pi^3} \int \left(k_3^2-k_2^2 \right)^2 \ln \left( \frac{k_3+k_2}{k_3-k_2}\right) d\mathbf{r} + \frac{1}{8\pi^3} \int \left(k_2^2-k_1^2 \right)^2 \ln \left( \frac{k_2+k_1}{k_2-k_1}\right) d\mathbf{r} \label{eq:xmlda}\end{aligned}$$ where $\epsilon(k_i)=\frac{-3k_i}{4\pi}$ is the exchange energy per particle of the ground state of the HEG with Fermi wavevector $k_i$. As for the ground-state functional, the modified local spin density (MLSD) functional is given as $$E_X^{MLSD}[\rho] = \frac{1}{2} E_X^{MLDA}[2 \rho_{\alpha}] + \frac{1}{2} E_X^{MLDA}[2 \rho_{\beta}] \label{eq:exmlsd}$$ The corresponding potential $v^{MLSD}_x$ is given as $$v_{x,\sigma}^{MLSD}(\textbf{r})= \frac{\delta E_{X}^{MLSD}[\rho]}{\delta\rho_{\sigma}(\mathbf{r})} \label{xp}$$ However, it has not been possible to obtain a workable analytical expression for $ v_{x}^{MLSD}({\bf r})$ from Eqs. \eqref{eq:exmlsd} and \eqref{xp}.
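A useful sanity check on Eq. \eqref{eq:xmlda} is its no-gap limit: for $\rho_v=0$ (so $k_1=k_2$) the expression must collapse to the ground-state Dirac/LDA exchange energy density. A hedged per-point sketch of this check (the limit-handling guard and the test densities are ours):

```python
import math

def eps(k):
    """HEG exchange energy per particle, eps(k) = -3k/(4*pi)."""
    return -3.0 * k / (4.0 * math.pi)

def log_term(a, b):
    """(a^2-b^2)^2 * ln((a+b)/(a-b)), with the a -> b limit (which vanishes) handled."""
    return 0.0 if a == b else (a * a - b * b) ** 2 * math.log((a + b) / (a - b))

def e_x_mlda(rho_c, rho_v, rho_s):
    """Per-point integrand of Eq. (eq:xmlda) for the one-gap configuration."""
    c = 3.0 * math.pi ** 2
    k1 = (c * rho_c) ** (1.0 / 3.0)
    k2 = (c * (rho_c + rho_v)) ** (1.0 / 3.0)
    k3 = (c * (rho_c + rho_v + rho_s)) ** (1.0 / 3.0)
    rho = rho_c + rho_s                      # total density excludes the vacant band
    return (rho * (eps(k3) - eps(k2) + eps(k1))
            + (log_term(k3, k1) - log_term(k3, k2) + log_term(k2, k1))
              / (8.0 * math.pi ** 3))

rho = 0.3                                    # illustrative total density (a.u.)
dirac = -0.75 * (3.0 / math.pi) ** (1.0 / 3.0) * rho ** (4.0 / 3.0)
no_gap = e_x_mlda(0.2, 0.0, 0.1)             # rho_v = 0: same total density, no gap
```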
Therefore, on the basis of arguments drawn from ground-state theory, we model the potential. For completeness, we note the earlier attempts to construct accurate excited-state potentials by Gáspár [@gaspar:1974] and Nagy [@nagy:1990], who gave an ensemble-averaged exchange potential for excited states and used it to calculate excitation energies for single-electron excitations. In the next section we propose an excited-state LDA-like exchange potential based on the split $k$-space. This potential is analogous to its ground-state LDA counterpart, and we refer to it as the MLSD potential. We further correct the potential for its asymptotic behavior with the LB correction, and show that with the asymptotically corrected MLSD potential the IP theorem for excited states is satisfied to good accuracy. Generalization of Dirac exchange potential for excited-states using split $k$-space ----------------------------------------------------------------------------------- The Hartree-Fock exchange potential for a system of electrons is given by $$v_{x,i}^{HF}=v_{x}(\phi_{i})=-\sum_{j}\int\frac{\phi^{\ast}_{j}({\bf r'})\phi_{i}({\bf r'})\phi_{j}({\bf r})} {\phi_{i}({\bf r})|{\bf r}- {\bf r'}|}d{\bf r'} \label {hfx}.$$ For the homogeneous electron gas, the wavefunction is given by $$\phi_{{\bf k}}({\bf r}) = \frac{1}{{\sqrt V}} e^{ \left (i{\bf k}\cdot {\bf r}\right )} \label {wf}.$$ where $V$ is the volume of the system. Using this wavefunction in Eq. \eqref{hfx}, the exchange potential for the one-gap systems of Fig. \[fig:k-space\] is, for the orbital $\phi_k(\mathbf{r})$, $$\begin{aligned} v_{x}(k)=&-\frac{1}{\pi} \left [k_{1}-k_{2}+k_{3} + \frac {k_{1}^{2}-k^{2}}{2k}\ln\left|\frac{k+k_{1}}{k-k_{1}}\right| \right. - \left.
\frac {k_{2}^{2}-k^{2}}{2k}\ln\left|\frac{k+k_{2}}{k-k_{2}}\right|+\frac {k_{3}^{2}-k^{2}}{2k}\ln\left|\frac{k+k_{3}}{k-k_{3}}\right|\right ] \label{eq:hfxpi}\end{aligned}$$ where $k_1, k_2$, and $k_3$ are given by Eqs. \eqref{eq:k1}-\eqref{eq:k3}. This potential is orbital-dependent. To make it orbital-independent, we draw an analogy with the ground-state exchange potential, where the exact LDA potential equals the HF potential for the highest occupied molecular orbital (HOMO): $$v^{HF}_{x,i}(\mathbf{r})|_{i=max} = \frac{\delta E_x^{LDA}[\rho(\mathbf{r})]}{\delta \rho(\mathbf{r})} \label{eq:hflda}$$ We therefore take the potential for the electron in the HOMO as the exchange potential for all the electrons. Setting $k=k_{3}$ in Eq. \eqref{eq:hfxpi}, we obtain the following expression for the MLSD exchange potential $$\begin{aligned} v_{x}^{MLSD}= & -\frac{k_{3}}{\pi}\left [1-x_{2}+x_{1}-\frac{1}{2}(1-x_{1}^{2})\ln\left|\frac{1+x_{1}}{1-x_{1}}\right| \right. \left. +\frac{1}{2}(1-x_{2}^{2})\ln\left|\frac{1+x_{2}}{1-x_{2}}\right|\right ] \label {eq:vxmlsd}\end{aligned}$$ where $ x_{1}=\frac{k_{1}}{k_{3}}$ and $x_{2}=\frac{k_{2}}{k_{3}} $. The MLSD potential of Eq. \eqref{eq:vxmlsd} is also obtained by taking the functional derivative of the exchange functional $E_x^{MLDA}$ of Eq. \eqref{eq:xmlda} with respect to $\rho_{3}(\mathbf{r})$, the density corresponding to the largest wavevector in the $k$-space. Thus, we reach the same result by two different paths, which in some sense assures us of the correctness of the approach taken. When this potential is corrected for its asymptotic behavior by adding the LB correction, we obtain the modified LB (MLB) potential. In the following section, we test the MLB potential using the IP theorem for excited states and show that it satisfies the theorem as accurately as the LB potential does for the ground states.
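Eq. \eqref{eq:vxmlsd} can likewise be checked in its no-gap limit $k_1=k_2$, where the two logarithmic terms cancel and the ground-state value $-k_3/\pi$ must be recovered; a minimal sketch (the wavevector values are arbitrary illustrative numbers of ours):

```python
import math

def v_x_mlsd(k1, k2, k3):
    """MLSD exchange potential of Eq. (eq:vxmlsd) at a single point."""
    x1, x2 = k1 / k3, k2 / k3

    def t(x):
        # (1-x^2) ln|(1+x)/(1-x)|, with the x -> 1 limit (which vanishes) handled
        return 0.0 if x == 1.0 else (1.0 - x * x) * math.log(abs((1.0 + x) / (1.0 - x)))

    return -(k3 / math.pi) * (1.0 - x2 + x1 - 0.5 * t(x1) + 0.5 * t(x2))

k3 = 1.7                                 # illustrative Fermi wavevector (a.u.)
v_no_gap = v_x_mlsd(0.9, 0.9, k3)        # k1 = k2: the bracket collapses to 1
v_gap = v_x_mlsd(0.5, 0.9, k3)           # with a gap, the potential differs
```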
On the other hand, the LB potential does not lead to as accurate a satisfaction of the IP theorem, thereby indicating that the potential derived by splitting the $k$-space is more appropriate for excited-state calculations. Results for excited-states {#sec:ex-result} ========================== The MLSD potential is the excited-state counterpart of the ground-state LSD potential. To correct it in the asymptotic region, we include the LB gradient term of Eq. \eqref{eq:lbgradient}, evaluated with the density corresponding to the largest wavevector $k_3$, and obtain the MLB potential $$v_{x,\sigma}^{MLB} = v_x^{MLSD} - \beta \rho_{3,\sigma}^{1/3}(\mathbf{r}) \frac{x^2_{3,\sigma}}{1+3\beta x_{3,\sigma} \sinh^{-1}(x_{3,\sigma})} \label{eq:vxmlb}$$ In performing self-consistent calculations, it is this potential that is employed as the exchange potential in the excited-state Kohn-Sham equations. Our calculations are performed in the central-field approximation [@Slater:1929], whereby the potential is taken to be spherically symmetric. Having obtained the orbitals, the exchange energy is calculated using the MLSDSIC functional [@samal-harbola:2005], given as $$E_{X}^{MLSDSIC}=E_{X}^{MLSD}-\sum_{i}^{rem}{E_i}^{SIC}-\sum_{i}^{add}{E_i}^{SIC} \label{eq:mlsdsic}$$ where $$E_{i}^{SIC}\left[\phi_i\right]= \int\int\frac{|\phi_{i}(\mathbf{r}_{1})|^{2}|\phi_{i} ({\bf r}_{2})|^{2}}{|{\bf r}_{1}-{\bf r}_{2}|}d{\bf r}_{1}d{\bf r}_{2} +E^{LSD}_{X}\left[\rho\left(\phi_i\right)\right] \label{eq:sic}$$ Here the summation index $i$ in Eq. \eqref{eq:mlsdsic} runs over the orbitals from which electrons are removed to create a gap, and over the orbitals to which electrons are added; $E^{LSD}_{X}\left[\rho\left(\phi_i\right)\right]$ is the exchange energy of the orbital $\phi_i$ in the LSD approximation. Using the $\Delta$SCF energies obtained from these calculations and the eigenvalues from the Kohn-Sham calculations, we study the IP theorem.
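The first (self-Hartree) term of Eq. \eqref{eq:sic} can be evaluated on a radial grid; a hedged sketch for a hydrogenic $1s$ orbital, $\phi_{1s}=(Z^3/\pi)^{1/2}e^{-Zr}$ (the orbital choice, grid, and simple rectangle-rule quadrature are ours; the analytic value of the double integral for this orbital is $5Z/8$):

```python
import math

Z, n, h = 1.0, 20000, 1.0e-3       # nuclear charge, grid points, spacing (a.u.)
r = [(i + 1) * h for i in range(n)]
rho = [Z**3 / math.pi * math.exp(-2.0 * Z * ri) for ri in r]   # |phi_1s|^2

# radial Hartree potential: v_H(r) = q(<r)/r + 4*pi * int_r^inf rho(r') r' dr'
q_inside, inner = 0.0, []
for ri, di in zip(r, rho):
    q_inside += 4.0 * math.pi * di * ri * ri * h   # charge enclosed within r
    inner.append(q_inside)
v_h, outer = [0.0] * n, 0.0
for i in range(n - 1, -1, -1):                     # accumulate the outer tail
    v_h[i] = inner[i] / r[i] + outer
    outer += 4.0 * math.pi * rho[i] * r[i] * h

# self-repulsion double integral of Eq. (eq:sic) = int rho(r) v_H(r) d^3r
u_self = sum(4.0 * math.pi * rho[i] * r[i] * r[i] * v_h[i] * h for i in range(n))
```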
For our study, we have considered systems for which both the atomic excited state and the corresponding ionic state can be represented by a single Slater determinant; this is because the LSD/MLSD is accurate for such states [@barth:1979]. Presented in Table \[tab:ex-x-IP\] are the $\epsilon_{max}$ and $\Delta$SCF energies for different excited states obtained using the LB potential of Eq. \eqref{eq:vx-LB} and the excited-state MLB potential of Eq. \eqref{eq:vxmlb}. In both the LB and the MLB potentials, we have used $\beta = 0.05$, and the energies for both potentials are calculated using the MLSDSIC exchange energy functional. The HF $\epsilon_{max}$ and $\Delta$SCF values are also shown in Table \[tab:ex-x-IP\] for comparison. The results of Table \[tab:ex-x-IP\] are shown graphically in Fig. \[fig:ip-ex-x\]. It is evident from the figure that the MLB potential satisfies the IP theorem accurately, while both the LB and HF results deviate from it. Thus, accounting for the occupation of orbitals in the $k$-space gives better results for the theorem. Let us next check how the MLB potential compares with the KLI potential for excited states. Plotted in Fig. \[fig:vx-Li-excited-3s1\] are the radial density and the corresponding excited-state exchange potential of Li $(3s^1 \ ^2S)$ within the LB and the MLB approximations. Also shown in the figure is the exact exchange potential, obtained through the KLI method [@Nagy:1997]. It is clear from the figure that the split-$k$-space-based MLB potential has a structure resembling the KLI potential for the excited state, lying very close to it in the inter-shell region from about 0.1 a.u. onwards. This is similar to the relation between the LB potential and the KLI potential for the ground states. The LB potential for the excited states, on the other hand, is not close to the exact potential and has undesirable features at the minima of the radial density, which are absent in the MLB potential. Similar non-smooth behavior is observed [@cheng-Wu-Voorhis:2008] in the LSD potential.
In addition, the MLB potential is closer to the KLI potential in the interstitial and asymptotic regions, similar to what the LB potential achieves for the ground states. The discrepancy between the potentials near the nucleus that was present for the ground states is also present here. Nonetheless, it is clear that the exchange potentials obtained on the basis of the split $k$-space give a much better description of an excited state than the ground-state LB potential. To sum up, we have shown that the excited-state energy functional and its asymptotically corrected potential based on the split $k$-space satisfy the IP theorem to great accuracy in the exchange-only limit. This can be improved further by optimizing $\beta$. In Table \[tab:ex-x-IP\], we also present the results obtained by varying the parameter $\beta$ in the excited-state MLB potential until $\epsilon_{max}$ matches the $\Delta$SCF energy. The $\epsilon_{max}$ so obtained using the excited-state potential is close to the HF values. For B $(3p^1 \ ^2P)$ we are unable to tune $\beta$ using the MLB potential. We now wish to include correlation and compare our results with experiments. The lack of a correlation potential for excited states forces us to rely on the ground-state potential. Shown in Table \[tab:ex-xc-IP\] are calculations performed using the ground-state VWN potential. It is seen that, as for the ground states, the $\Delta$SCF energies obtained with the split $k$-space functional are close to the experimental values. Also shown in the table are the $\beta$-tuned energies satisfying the IP theorem. By imposing the IP theorem, $\epsilon_{max}$ improves over the $\beta=0.05$ values and is closer to the experimental values for all atoms. Concluding Remarks {#sec:conclusion} ================== To conclude, we have shown that splitting the $k$-space according to the occupation of the Kohn-Sham orbitals is a good way of constructing the excited-state potential.
The potential so constructed, when corrected for its long-range behavior, gives highly accurate eigenvalues for the uppermost orbital in the sense of the IP theorem: the eigenvalues and the $\Delta$SCF energies obtained from the energy functional constructed by splitting the $k$-space agree with one another to a great degree. This shows that the split $k$-space method could be the proper path to follow for constructing excited-state energy functionals. Acknowledgments =============== M. Hemanadhan wishes to thank the Council of Scientific and Industrial Research (CSIR), New Delhi, for financial support. [10]{} R. G. Parr and W. Yang, [*Density-Functional Theory of Atoms and Molecules*]{}, Vol. 16 of [*International Series of Monographs on Chemistry*]{} (Oxford University Press, New York, 1989). R. M. Dreizler and E. K. U. Gross, [*Density-Functional Theory: An Approach to the Quantum Many-Body Problem*]{} (Springer-Verlag, New York, 1990). N. H. March, [*Electron Density Theory of Atoms and Molecules*]{}, [*Theoretical Chemistry Series*]{} (Academic Press, London, 1992). E. Engel and R. M. Dreizler, [*Density Functional Theory: An Advanced Course*]{}, [*Theoretical and Mathematical Physics*]{} (Springer-Verlag, Berlin Heidelberg, 2011). A. D. Becke, Phys. Rev. A [**38**]{}, 3098 (1988). J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996), errata: Phys. Rev. Lett. [**78**]{}, 1396 (1997). J. Tao, J. P. Perdew, V. N. Staroverov, and E. Scuseria, Phys. Rev. Lett. [**91**]{}, 146401 (2003). R. van Leeuwen and E. J. Baerends, Phys. Rev. A [**49**]{}, 2421 (1994). N. Umezawa, Phys. Rev. A [**74**]{}, 032505 (2006). A. D. Becke and E. R. Johnson, J. Chem. Phys. [**124**]{}, 221101 (2006). C. A. Ullrich, [*Time-Dependent Density-Functional Theory: Concepts and Applications*]{}, [*Oxford Graduate Texts*]{} (Oxford University Press Inc., New York, 2012). O. Gunnarsson and B. I. Lundqvist, Phys. Rev. B [**13**]{}, 4274 (1976). O. Gunnarsson and B. I. Lundqvist, Phys. Rev.
B [**15**]{}, 6006 (1977). T. Ziegler, A. Rauk, and E. J. Baerends, Theor. Chim. Acta [**43**]{}, 261 (1977). U. von Barth, Phys. Rev. A [**20**]{}, 1693 (1979). A. K. Theophilou, J. Phys. C Solid State Phys. [**12**]{}, 5419 (1979). E. K. U. Gross, L. N. Oliveira, and W. Kohn, Phys. Rev. A [**37**]{}, 2809 (1988). L. N. Oliveira, E. K. U. Gross, and W. Kohn, Phys. Rev. A [**37**]{}, 2821 (1988). Á. Nagy, Phys. Rev. A [**53**]{}, 3660 (1996). A. Görling, Phys. Rev. A [**59**]{}, 3359 (1999). M. Levy and Á. Nagy, Phys. Rev. Lett. [**83**]{}, 4361 (1999). Á. Nagy and M. Levy, Phys. Rev. A [**63**]{}, 052502 (2001). M. Levy, Proc. Natl. Acad. Sci. USA [**76**]{}, 6062 (1979). M. K. Harbola, Phys. Rev. A [**65**]{}, 052504 (2002). M. K. Harbola, Phys. Rev. A [**69**]{}, 042512 (2004). P. Samal and M. K. Harbola, J. Phys. B: At. Mol. Opt. Phys. [**38**]{}, 3765 (2005). P. Samal and M. K. Harbola, Chem. Phys. Lett. [**419**]{}, 217 (2006), errata: Chem. Phys. Lett. [**422**]{}, 586 (2006). P. Samal and M. K. Harbola, J. Phys. B: At. Mol. Opt. Phys. [**39**]{}, 4065 (2006). M. Hemanadhan and M. K. Harbola, J. Mol. Struct. Theochem [**943**]{}, 152 (2010). M. Hemanadhan and M. Harbola, Eur. Phys. J. D [**66**]{}, 1 (2012). Md. Shamim and M. K. Harbola, J. Phys. B: At. Mol. Opt. Phys. [**43**]{}, 215002 (2010). J. P. Perdew, R. G. Parr, M. Levy, and J. L. Balduz, Phys. Rev. Lett. [**49**]{}, 1691 (1982). J. Katriel and E. R. Davidson, Proc. Natl. Acad. Sci. USA [**77**]{}, 4403 (1980). M. Levy, J. P. Perdew, and V. Sahni, Phys. Rev. A [**30**]{}, 2745 (1984). W. Haynes, D. R. Lide, and T. Bruno, [*CRC Handbook of Chemistry and Physics 2012-2013*]{}, [*CRC Handbook of Chemistry & Physics*]{} (CRC Press, Boca Raton, Florida, 2012). A. Savin, in [*Recent advances in density functional methods: Part 1*]{}, [*Recent Advances in Computational Chemistry, Vol 1, Part 1*]{}, edited by D. Chong (World Scientific Publishing Company Incorporated, Singapore, 1995). T. Leininger, H.
Stoll, H.-J. Werner, and A. Savin, Chem. Phys. Lett. [**275**]{}, 151 (1997). H. Iikura, T. Tsuneda, T. Yanai, and K. Hirao, J. Chem. Phys. [**115**]{}, 3540 (2001). T. Yanai, D. P. Tew, and N. C. Handy, Chem. Phys. Lett. [**393**]{}, 51 (2004). R. Baer and D. Neuhauser, Phys. Rev. Lett. [**94**]{}, 043002 (2005). L. Kronik, T. Stein, S. Refaely-Abramson, and R. Baer, J. Chem. Theory Comput. [**8**]{}, 1515 (2012). D. Bohm and D. Pines, Phys. Rev. [**92**]{}, 609 (1953). T. Stein, H. Eisenberg, L. Kronik, and R. Baer, Phys. Rev. Lett. [**105**]{}, 266802 (2010). P. Singh, M. K. Harbola, B. Sanyal, and A. Mookerjee, Phys. Rev. B [**87**]{}, 235110 (2013). M. K. Harbola, M. Hemanadhan, Md. Shamim, and P. Samal, in [*Concepts and Methods in Modern Theoretical Chemistry, Electronic Structure and Reactivity*]{}, edited by S. K. Ghosh and P. K. Chattaraj (Taylor & Francis Group, Boca Raton, Florida, 2013). C.-L. Cheng, Q. Wu, and T. Van Voorhis, J. Chem. Phys. [**129**]{}, 124112 (2008). A. Banerjee and M. K. Harbola, Phys. Rev. A [**60**]{}, 3599 (1999). P. A. M. Dirac, Proc. Cambridge Phil. Soc. [**26**]{}, 376 (1930). M. Levy and J. P. Perdew, Phys. Rev. A [**32**]{}, 2010 (1985). A. P. Gaiduk, S. K. Chulkov, and V. N. Staroverov, J. Chem. Theory Comput. [**5**]{}, 699 (2009). T. C. Koopmans, Physica [**1**]{}, 104 (1934). S. H. Vosko, L. Wilk, and M. Nusair, Can. J. Phys. [**58**]{}, 1200 (1980). J. B. Krieger, Y. Li, and G. J. Iafrate, Phys. Rev. A [**45**]{}, 101 (1992). I. Lindgren, Int. J. Quan. Chem. [**5**]{}, 411 (1971). R. Gáspár, Acta Phys. Hung. [**35**]{}, 213 (1974). Á. Nagy, Phys. Rev. A [**42**]{}, 4388 (1990). J. C. Slater, Phys. Rev. [**34**]{}, 1293 (1929). Á. Nagy, Phys. Rev. A [**55**]{}, 3465 (1997). A. Kramida, Y. Ralchenko, J. Reader, and NIST ASD Team, NIST Atomic Spectra Database (version 5.0), 2012. ![Plot of $\epsilon_{max}$ vs.
$\Delta$SCF energies using different exchange-only potentials.[]{data-label="fig:ip-gr-x"}](fig_IP_ground_x.ps){width="4.5in" height="6.5in"} ![Ground-state radial density and the exchange potential of Li ($2s^1 \ ^2S$) for the up spin obtained using different approximations for the potential.[]{data-label="fig:vx-Li-ground"}](pl-gr-Li.ps){height="5.5in" width="3.8in"} ![(a) Orbital and (b) the corresponding $k$-space occupation in the excited-state configuration of a homogeneous electron gas (HEG).[]{data-label="fig:k-space"}](kfig.eps){height="3.5in" width="4.5in"} ![Plot of $\epsilon_{max}$ vs. $\Delta$SCF energies using the exchange-only potentials of LB, MLB and HF.[]{data-label="fig:ip-ex-x"}](fig_IP_excited_x.ps){width="4.5in" height="6.5in"} ![Excited-state radial density and the exchange potential of Li ($3s^1 \ ^2S$) for the up spin obtained using different approximations for the potential.[]{data-label="fig:vx-Li-excited-3s1"}](pl-ex-Li.ps){height="5.5in" width="3.8in"} ----------------------- ---------------------------- ------------- ---------------------------- ------------- --------- --------------------------------------------------- ---------------------------- ------------- Atoms/ion LSD: $-\epsilon_{\textrm{max}}$, $\Delta$SCF LB ($\beta=0.05$): $-\epsilon_{\textrm{max}}$, $\Delta$SCF LB (optimized): $\beta$, $-\epsilon_{\textrm{max}} (=\Delta \textrm{SCF})$ HF: $-\epsilon_{\textrm{max}}$, $\Delta$SCF He($1s^2 \ ^1S$) 0.517 0.811 0.794 0.810 0.064 0.809 0.918 0.862 Li($2s^1 \ ^2S$) 0.100 0.185 0.175 0.182 0.073 0.182 0.196 0.196 Be($2s^2 \ ^1S$) 0.170 0.281 0.282 0.278 0.043 0.278 0.309 0.296 B($2s^2 2p^1 \ ^2P$) 0.120 0.278 0.263 0.274 0.075 0.273 0.310 0.291 C($2s^2 2p^2 \ ^3P$) 0.196 0.396 0.366 0.394 0.104 0.392 0.433 0.396 N($2s^2 2p^3 \ ^4S$) 0.276 0.515 0.476 0.513 0.112 0.511 0.568 0.513 O($2s^2 2p^4 \ ^3P$) 0.210 0.436 0.448 0.431 0.035 0.432 0.632 0.437 F($2s^2 2p^5 \ ^2P$) 0.326 0.597 0.585 0.594 0.060 0.594 0.730 0.578 Ne($2s^2 2p^6 \ ^1S$) 0.443 0.754 0.724 0.751 0.077
0.749 0.850 0.729 ----------------------- ---------------------------- ------------- ---------------------------- ------------- --------- --------------------------------------------------- ---------------------------- ------------- ----------------------- ---------------------------- ------------- ------------------- --------------------------------------------------- ------- Atom Expt. [@w2012crc] $-\epsilon_{\textrm{max}}$ $\Delta$SCF $\beta$ $-\epsilon_{\textrm{max}} (=\Delta \textrm{SCF})$ He($1s^2 \ ^1S$) 0.851 0.892 0.106 0.890 0.904 Li($2s^1 \ ^2S$) 0.193 0.198 0.066 0.198 0.198 Be($2s^2 \ ^1S$) 0.320 0.329 0.072 0.329 0.342 B($2s^2 2p^1 \ ^2P$) 0.296 0.312 0.086 0.311 0.305 C($2s^2 2p^2 \ ^3P$) 0.401 0.431 0.115 0.430 0.414 N($2s^2 2p^3 \ ^4S$) 0.511 0.550 0.117 0.548 0.534 O($2s^2 2p^4 \ ^3P$) 0.516 0.506 0.041 0.507 0.501 F($2s^2 2p^5 \ ^2P$) 0.647 0.661 0.065 0.660 0.640 Ne($2s^2 2p^6 \ ^1S$) 0.782 0.813 0.082 0.811 0.792 ----------------------- ---------------------------- ------------- ------------------- --------------------------------------------------- ------- ------------------------- ---------------------------- ------------- ---------------------------- ------------- --------- ---------------------------------------------------- ---------------------------- ------------- -- -- Atom $-\epsilon_{\textrm{max}}$ $\Delta$SCF $-\epsilon_{\textrm{max}}$ $\Delta$SCF $\beta$ $-\epsilon_{\textrm{max}}(={\Delta \textrm{SCF})}$ $-\epsilon_{\textrm{max}}$ $\Delta$SCF Li($2p^1 \ ^2P$) 0.117 0.109 0.096 0.114 0.300 0.114 0.129 0.129 B($2s^1 2p^2 \ ^2D$) 0.226 0.175 0.166 0.185 0.120 0.183 0.276 0.192 C($2s^1 2p^3 \ ^3D$) 0.279 0.202 0.200 0.215 0.090 0.213 0.402 0.233 N($2s^1 2p^4 \ ^4P$) 0.328 0.227 0.232 0.242 0.070 0.241 0.522 0.241 O($2s^1 2p^5 \ ^3P$) 0.466 0.362 0.387 0.368 0.035 0.370 0.601 0.364 F($2s^1 2p^6 \ ^2S$) 0.601 0.543 0.533 0.539 0.055 0.538 0.703 0.497 Ne$^+$($2s^12p^6\ ^2S$) 1.429 1.369 1.339 1.370 0.075 1.369 1.553 1.334 Li($3s^1 \ ^2S$) 
0.076 0.085 0.069 0.072 0.080 0.073 0.074 0.074 Li($4s^1 \ ^2S$) 0.042 0.052 0.035 0.051 0.122 0.038 0.038 0.038 B($3s^1 \ ^2S$) 0.107 0.139 0.108 0.122 0.200 0.121 0.114 0.114 B($3p^1 \ ^2P$) 0.078 0.096 0.067 0.088 - - 0.079 0.079 Be($2s^1 3s^1 \ ^3S$) 0.095 0.116 0.094 0.101 0.100 0.102 0.100 0.100 ------------------------- ---------------------------- ------------- ---------------------------- ------------- --------- ---------------------------------------------------- ---------------------------- ------------- -- -- --------------------------- ---------------------------- ------------- --------- --------------------------------------------------- --------------- Atom $-\epsilon_{\textrm{max}}$ $\Delta$SCF $\beta$ $-\epsilon_{\textrm{max}} (=\Delta \textrm{SCF})$ Expt. [@NIST] Li($2p^1 \ ^2P$) 0.110 0.128 0.230 0.127 0.130 B($2s^1 2p^2 \ ^2D$) 0.214 0.252 0.600 0.247 0.257 C($2s^1 2p^3 \ ^3D$) 0.262 0.300 0.500 0.295 0.318 N($2s^1 2p^4 \ ^4P$) 0.308 0.344 0.350 0.339 0.348 O($2s^1 2p^5 \ ^3P$) 0.453 0.441 0.040 0.442 0.471 F($2s^1 2p^6 \ ^2S$) 0.594 0.604 0.056 0.600 0.623 Ne$^+$($2s^1 2p^6 \ ^2S$) 1.409 1.444 0.080 1.443 1.442 Li($3s^1 \ ^2S$) 0.079 0.081 0.060 0.081 0.074 Li($4s^1 \ ^2S$) 0.042 0.046 0.122 0.046 0.039 B($3s^1 \ ^2S$) 0.123 0.136 0.200 0.136 0.122 B($3p^1 \ ^2P$) 0.079 0.099 - - 0.083 Be($2s^1 3s^1 \ ^3S$) 0.107 0.112 0.082 0.112 0.105 --------------------------- ---------------------------- ------------- --------- --------------------------------------------------- ---------------
--- abstract: 'We discuss the value of the cosmological constant as recovered from CMB and LSS data and the robustness of the results when general isocurvature initial conditions are allowed for, as opposed to purely adiabatic perturbations. The Bayesian and frequentist statistical approaches are compared. It is shown that pre-WMAP CMB and LSS data tend to be incompatible with a vanishing cosmological constant, regardless of the type of initial conditions and of the statistical approach. The non-adiabatic contribution is constrained to be $\leq 40\%$ ($2\sigma$ c.l.).' address: 'Département de Physique Théorique, Université de Genève, 24 quai Ernest Ansermet, CH–1211 Genève 4, Switzerland' author: - Roberto Trotta title: The cosmological constant and the paradigm of adiabaticity --- Cosmic microwave background, cosmological constant, initial conditions 98.70Vc, 98.80Hw, 98.80Cq Introduction ============ There are now at least 5 completely independent observations which consistently point toward a majority of the energy-density of the Universe being in the form of a “cosmological constant”, ${\Omega_{\Lambda}}$. Those observations are: cosmic microwave background anisotropies (CMB), large scale structure (LSS), supernovae type Ia, strong and weak gravitational lensing. The very nature of this mysterious component remains unknown, and the so-called “smallness problem” (i.e. why ${{\mathcal O}}({{\Omega_{\Lambda}}}) \sim 1$ and not ${\Omega_{\Lambda}}\gsim 10^{58}$ as expected from particle physics arguments) is still unsolved. It is therefore important to test the robustness of results indicating a non-vanishing cosmological constant with respect to non-standard physics. One possible extension of the “concordance model” is given by non-adiabatic initial conditions for the cosmological perturbations, [i.e. ]{}isocurvature modes. Another test is the use of a different statistical approach than the usual Bayesian one, namely the frequentist method. 
We discuss these points in the next section, and present their application to the cosmological constant problem in section 3. Section 4 is dedicated to our conclusions. Testing the assumption of adiabaticity ====================================== Statistics ---------- Most of the recent literature on cosmological parameter estimation uses [*Bayesian inference*]{}: the Maximum Likelihood (ML) principle states that the best estimate for the unknown parameters is the one which maximizes the likelihood function. Therefore, in the grid-based method, one usually minimizes the $\chi^2$ over the parameters which one is not interested in. Then one defines $1 {\sigma}$, $2 {\sigma}$ and $3 {\sigma}$ likelihood contours around the best fit point, as the locus of models within $\Delta \equiv \chi^2 - \chi^2_{\rm ML} = 2.30$, $6.18$, $11.83$ away from the ML value for the joint likelihood in two parameters, $\Delta = 1$, $4$, $9$ for the likelihood in only one parameter. Based on Bayes’ Theorem, likelihood intervals measure our degree of belief that the particular set of observations used in the analysis is generated by a parameter set belonging to the specified interval [@statistics]. Since Bayesian likelihood contours are drawn with respect to the ML point, if the best fit value for the $\chi^2$ is much lower than what one would expect statistically for Gaussian variables ([i.e. ]{}$\chi^2/F \approx 1$, where $F$ denotes the number of degrees of freedom, dof), Bayesian contours will underestimate the real errors. The grid-based parameter estimation method can however be used for a determination of the true exclusion regions ([*frequentist approach*]{}). The Bayesian and frequentist methods can give quite different errors on the parameters, since the meaning of the confidence intervals is different. The frequentist approach answers the question: What is the probability of obtaining the experimental data at hand, if the Universe has some given cosmological parameters? 
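As an illustration of the two recipes (a sketch of the standard statistics, not code from the original analysis), the $\Delta \equiv \chi^2 - \chi^2_{\rm ML}$ contour levels quoted above and the frequentist chi-square quantiles can be reproduced with standard-library Python; the chi-square CDF has a closed form for two (and any even number of) degrees of freedom.

```python
import math

def delta_chi2_2params(p):
    """Bayesian Delta chi^2 threshold enclosing probability p for a joint
    likelihood in two parameters: the chi-square CDF with 2 dof is
    1 - exp(-x/2), so its quantile is -2 ln(1 - p)."""
    return -2.0 * math.log(1.0 - p)

def chi2_sf(x, dof):
    """Frequentist survival function P(chi^2 > x), in closed form for an
    even number of degrees of freedom F = 2k."""
    k = dof // 2
    term = total = 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def chi2_quantile(p, dof):
    """Invert chi2_sf by bisection: the x with P(chi^2 > x) = 1 - p."""
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi2_sf(mid, dof) > 1.0 - p else (lo, mid)
    return 0.5 * (lo + hi)

# The 1-3 sigma coverages of a Gaussian reproduce the quoted contour levels:
coverage = [math.erf(n / math.sqrt(2.0)) for n in (1, 2, 3)]
print([round(delta_chi2_2params(p), 2) for p in coverage])  # [2.3, 6.18, 11.83]
print([n ** 2 for n in (1, 2, 3)])                          # one parameter: [1, 4, 9]

# Frequentist one-tail exclusion threshold at 95%, e.g. for F = 100 dof:
print(round(chi2_quantile(0.95, 100), 1))
```

A model is then excluded at a given confidence level when its absolute $\chi^2$ exceeds the corresponding quantile, independently of where the ML point lies.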
To the extent that the $C_\ell$’s can be approximated as Gaussian variables, the quantity $\chi^2$ is distributed according to a chi-square probability distribution with $F = N - M$ dof, where $N$ is the number of independent (uncorrelated) experimental data points and $M$ is the number of fitted parameters. Since the chi-square distribution, $P^{(F)}$, is well known, one can readily estimate [*confidence intervals*]{} by finding the quantile of $P^{(F)}$ for the chosen (one-tail) confidence level. The exclusion regions so obtained do not rely on the ML point. On the other hand, they are rigorously correct only if the assumption of Gaussianity holds, and the number of dof is precisely known. In general one should keep in mind that frequentist contours are less stringent than likelihood (Bayesian) contours. Dependence on initial conditions -------------------------------- CMB anisotropies are sensitive not only to the matter-energy content of the universe, but also to the type of initial conditions (IC) for cosmological perturbations. Initial conditions are set at very early times, and determining them gives precious hints about the type of physical process which produced them. In the context of the inflationary scenario, the type of IC is related to the number of scalar fields in the very early universe and to their masses. For instance, the simplest inflationary model, namely with only one scalar field, predicts adiabatic (AD) initial conditions. In this case, the initial density contrast for all components (baryons, CDM, photons and neutrinos) is the same, up to a constant: $${\frac{\delta \rho_{b}}{\rho_{b}}} = {\frac{\delta \rho_{c}}{\rho_{c}}} = \frac{3}{4}{\frac{\delta \rho_{\gamma}}{\rho_{\gamma}}} = \frac{3}{4} {\frac{\delta \rho_{\nu}}{\rho_{\nu}}} \equiv \Delta_{AD} \qquad \text{(AD).}$$ This excites a cosine oscillatory mode in the photon-baryon fluid, which induces a first peak at $\ell \approx 220$ in the angular power spectrum for a flat universe. 
Another possibility is CDM isocurvature initial conditions. Then the total energy-density perturbation vanishes (setting ${\frac{\delta \rho_{b}}{\rho_{b}}}={\frac{\delta \rho_{\nu}}{\rho_{\nu}}}=0$ without loss of generality): $${\frac{\delta \rho_{{{\rm tot}}}}{\rho_{{{\rm tot}}}}} = {\frac{\delta \rho_{c}}{\rho_{c}}} + {\frac{\delta \rho_{\gamma}}{\rho_{\gamma}}} = 0 \qquad \text{(CDM ISO)}$$ and therefore the gravitational potential $\Psi$ is approximately zero as well (“isocurvature”). CDM isocurvature IC excite a sine oscillation, and the resulting first peak in the power spectrum is displaced to $\ell \approx 330$. Generation of isocurvature initial conditions requires the presence of (at least) a second light scalar field during inflation. The observation of the first peak at $\ell = 220.1 \pm 0.8$ [@Page03] has ruled out the possibility of pure CDM isocurvature initial conditions. However, a subdominant isocurvature contribution to the prevalent adiabatic mode cannot be excluded. Besides AD and CDM isocurvature, the complete set of IC for a fluid consisting of photons, neutrinos, baryons and dark matter in general relativity consists of three more modes [@BMT]. These are the baryon isocurvature mode (BI), the neutrino isocurvature density (NID) and neutrino isocurvature velocity (NIV) modes. These five modes are the only regular ones, [i.e. ]{}they do not diverge at early times. The NID mode can be understood as a neutrino entropy mode, while the NIV consists of vanishing density perturbations for all fluids but non-zero velocity perturbations between fluids. The CDM and BI modes are identical, and therefore it suffices to consider only one of them. In the most general scenario, one would expect all four modes to be present with arbitrary initial amplitude and arbitrary correlation or anti-correlation, with the restriction that their superposition must be a positive quantity. 
For simplicity we consider the case where all modes have the same spectral index, $n_{{\rm S}}$. The most general initial conditions are then described by the spectral index $n_{{\rm S}}$ and a positive semi-definite $4 \times 4$ matrix, which amounts to eleven parameters instead of two in the case of pure AD initial conditions. More details can be found in Refs. [@TRD1; @TRD2]. The CMB and matter power spectra for the different types of initial conditions are plotted in Fig. \[fig:power\_spectra\]. The matter power spectrum ------------------------- ![Joint Bayesian likelihood contours for the baryon density $\omega_b$ and the Hubble parameter $h$, using pre-WMAP CMB data only. The tighter contours (shades of green) assume purely AD initial conditions, the wider contours (yellow/shades of red) include general isocurvature IC (from Ref. [@TRD1]).[]{data-label="fig:TRD1"}](both_trd1_b.eps){width="5.5cm"} Inclusion of general initial conditions in the analysis can lead to very important degeneracies in the IC parameter space, which spoil the accuracy with which other cosmological parameters can be measured by CMB alone. This has been demonstrated in a striking way for the case of the Hubble parameter and the baryon density in Ref. [@TRD1], [cf]{} Fig. \[fig:TRD1\]. An effective way to break this degeneracy is achieved by the inclusion of large scale structure (LSS) data. The key point is that, once the corresponding CMB power spectrum amplitude has been COBE-normalized, the amplitude of the AD matter power spectrum is nearly two orders of magnitude larger than [*any*]{} of the isocurvature contributions ([cf]{} Fig. \[fig:power\_spectra\]). Therefore the matter power spectrum essentially measures the adiabatic part, and is nearly insensitive to isocurvature contributions. The argument holds true for observations of the matter spectrum on all scales, ranging from large scale structure to weak lensing and Lyman $\alpha$-clouds. 
In view of optimally constraining the isocurvature content, it is therefore essential to combine those observations with CMB data, in order to break the strong degeneracy among initial conditions which is present in the CMB power spectrum alone [@Tprep]. ![CMB (left) and matter (right) power spectra of the different auto- (odd panels) and cross-correlators (even panels) for the standard $\Lambda$CDM concordance model. The CMB power spectrum is COBE-normalized. The color and line style codes are as follows: in the odd panels, AD: solid/black line, CI: dotted/green line, NID: short-dashed/red line, NIV: long-dashed/blue line; in the even panels, AD: solid/black line (for comparison), $<{{\rm AD}},{{\rm CI}}>$: long-dashed/magenta line, $<{{\rm AD}},{{\rm NID}}>$: dotted/green line, $<{{\rm AD}},{{\rm NIV}}>$: short-dashed/red line, $<{{\rm CI}},{{\rm NID}}>$: dot-short dashed/blue line, $<{{\rm CI}},{{\rm NIV}}>$: dot-long dashed/light-blue line, and $<{{\rm NID}},{{\rm NIV}}>$: dot-short dashed/black line. []{data-label="fig:power_spectra"}](CMB_AUTOCORR.ps "fig:"){width="3.0cm"} ![CMB (left) and matter (right) power spectra of the different auto- (odd panels) and cross-correlators (even panels) for the standard $\Lambda$CDM concordance model. The CMB power spectrum is COBE-normalized. The color and line style codes are as follows: in the odd panels, AD: solid/black line, CI: dotted/green line, NID: short-dashed/red line, NIV: long-dashed/blue line; in the even panels, AD: solid/black line (for comparison), $<{{\rm AD}},{{\rm CI}}>$: long-dashed/magenta line, $<{{\rm AD}},{{\rm NID}}>$: dotted/green line, $<{{\rm AD}},{{\rm NIV}}>$: short-dashed/red line, $<{{\rm CI}},{{\rm NID}}>$: dot-short dashed/blue line, $<{{\rm CI}},{{\rm NIV}}>$: dot-long dashed/light-blue line, and $<{{\rm NID}},{{\rm NIV}}>$: dot-short dashed/black line. 
[]{data-label="fig:power_spectra"}](CMB_CROSSCORR.ps "fig:"){width="3.0cm"} ![CMB (left) and matter (right) power spectra of the different auto- (odd panels) and cross-correlators (even panels) for the standard $\Lambda$CDM concordance model. The CMB power spectrum is COBE-normalized. The color and line style codes are as follows: in the odd panels, AD: solid/black line, CI: dotted/green line, NID: short-dashed/red line, NIV: long-dashed/blue line; in the even panels, AD: solid/black line (for comparison), $<{{\rm AD}},{{\rm CI}}>$: long-dashed/magenta line, $<{{\rm AD}},{{\rm NID}}>$: dotted/green line, $<{{\rm AD}},{{\rm NIV}}>$: short-dashed/red line, $<{{\rm CI}},{{\rm NID}}>$: dot-short dashed/blue line, $<{{\rm CI}},{{\rm NIV}}>$: dot-long dashed/light-blue line, and $<{{\rm NID}},{{\rm NIV}}>$: dot-short dashed/black line. []{data-label="fig:power_spectra"}](PS_AUTOCORR.ps "fig:"){width="3.0cm"} ![CMB (left) and matter (right) power spectra of the different auto- (odd panels) and cross-correlators (even panels) for the standard $\Lambda$CDM concordance model. The CMB power spectrum is COBE-normalized. The color and line style codes are as follows: in the odd panels, AD: solid/black line, CI: dotted/green line, NID: short-dashed/red line, NIV: long-dashed/blue line; in the even panels, AD: solid/black line (for comparison), $<{{\rm AD}},{{\rm CI}}>$: long-dashed/magenta line, $<{{\rm AD}},{{\rm NID}}>$: dotted/green line, $<{{\rm AD}},{{\rm NIV}}>$: short-dashed/red line, $<{{\rm CI}},{{\rm NID}}>$: dot-short dashed/blue line, $<{{\rm CI}},{{\rm NIV}}>$: dot-long dashed/light-blue line, and $<{{\rm NID}},{{\rm NIV}}>$: dot-short dashed/black line. 
[]{data-label="fig:power_spectra"}](PS_CROSSCORR.ps "fig:"){width="3.0cm"} The cosmological constant and isocurvature IC ============================================= We apply the above statistical (Bayesian or frequentist) and physical (general initial conditions, matter power spectrum) considerations to the study of the cosmological constant problem from pre-WMAP data. We outline the method and the main results below (see Ref. [@TRD2] for more details) and comment at the end on the qualitative impact of the new WMAP data on those findings. Our analysis makes use of the COBE, BOOMERanG and Archeops data [@CMBdata], covering the range $3 \leq \ell \leq 1000$ in the CMB power spectrum. For the matter power spectrum, we use the galaxy-galaxy linear power spectrum from the 2dF data [@2dFdata], and we assume that light traces mass up to a (scale independent) bias factor $b$, over which we maximise. The main focus being on the type of initial conditions, we restrict our analysis to only 3 cosmological parameters: the scalar spectral index, $n_S$, the cosmological constant ${\Omega_{\Lambda}}$ in units of the critical density and the Hubble parameter, $H_0 = 100 \,h { {\;\mathrm{km}^{}} } { {\;\mathrm{s}^{-1}} } { {\;\mathrm{Mpc}^{-1}} }$. We consider flat universes only and neglect gravitational waves. When we set to zero the isocurvature modes, we recover the well-known results for purely AD perturbations. Because of the “geometrical degeneracy”, CMB alone cannot put very tight lower limits on ${\Omega_{\Lambda}}$ even if we allow only for flat universes. The degeneracy can be broken either by putting an external prior on $h$ or via the LSS spectrum, since $P_{{\rm m}}$ is mainly sensitive to the shape parameter $\Gamma \equiv {\Omega}_{{\rm m}}h$. 
Combination of CMB and LSS data yields the following likelihood (Bayesian) intervals for ${\Omega_{\Lambda}}$: $${\Omega_{\Lambda}}= 0.70 {_{-0.05}^{+0.05}} \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {_{-0.27}^{+0.15}} \mbox{ at $3 {\sigma}$}.$$ From the Bayesian analysis, one concludes that CMB and LSS together with purely AD initial conditions require a non-zero cosmological constant at very high significance, more than $7 {\sigma}$ for the points in our grid! However, our best fit has a reduced chi-square $\chi^2/F = 0.59$, significantly less than $1$. This leads to artificially tight likelihood regions: the part of parameter space that is truly excluded by the observations is less extended, and is given by the frequentist analysis. From the frequentist approach, we obtain instead the following confidence intervals: $$0.15 < {\Omega_{\Lambda}}< 0.90 \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {\Omega_{\Lambda}}< 0.92 \mbox{ at $3 {\sigma}$}.$$ ![Bayesian (dashed lines) and frequentist (solid, filled) joint $1 {\sigma}$, $2 {\sigma}$, $3 {\sigma}$ contours using pre-WMAP CMB and 2dF data. The left panel assumes purely adiabatic IC, the right panel includes general isocurvature IC.[]{data-label="fig:ADGI"}](AD.eps "fig:"){width="5.5cm"} ![Bayesian (dashed lines) and frequentist (solid, filled) joint $1 {\sigma}$, $2 {\sigma}$, $3 {\sigma}$ contours using pre-WMAP CMB and 2dF data. The left panel assumes purely adiabatic IC, the right panel includes general isocurvature IC.[]{data-label="fig:ADGI"}](AM.eps "fig:"){width="5.5cm"} When we enlarge the space of models by including all possible isocurvature modes, likelihood (Bayesian) and confidence (frequentist) contours widen along the ${\Omega_{\Lambda}}$, $h$ degeneracy, and this produces a considerable worsening of the likelihood limits. 
For general initial conditions we now find (Bayesian, CMB and LSS together): $${\Omega_{\Lambda}}= 0.70 {_{-0.10}^{+0.15}} \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {_{-0.48}^{+0.25}} \mbox{ at $3 {\sigma}$}.$$ Again, the frequentist statistics give less tight bounds: $${\Omega_{\Lambda}}< 0.90 \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {\Omega_{\Lambda}}< 0.95 \mbox{ at $3 {\sigma}$},$$ and in particular we cannot place any lower limit on the value of the cosmological constant. A complete discussion can be found in Ref. [@TRD2]. Joint likelihood contours for ${\Omega_{\Lambda}}$, $h$ with AD and general isocurvature initial conditions are plotted in Fig. \[fig:ADGI\] for both statistical approaches. From the frequentist point of view, the region in the ${\Omega_{\Lambda}}, h$ plane which is incompatible with data at more than $3 {\sigma}$ is nearly independent of the choice of initial conditions (compare the left and right panels of Fig. \[fig:ADGI\]). Enlarging the space of initial conditions thus does not appreciably improve the fit to pre-WMAP data, with or without a cosmological constant. In Fig. \[fig:AM\_OL0\] we plot the best fit model (which has $\chi^2/F = 0.67$) with general initial conditions and ${\Omega_{\Lambda}}= 0$. As a consequence of the red spectral index ($n_S=0.80$) and of the absence of the early Integrated Sachs-Wolfe effect (since ${\Omega_{\Lambda}}=0$), the best fit model has a very low first acoustic peak, even in the presence of isocurvature modes. This is compatible with the BOOMERanG and Archeops data only if the absolute calibration of the experiments is reduced by $28\%$ and $12\%$, respectively. Furthermore, this best fit model has a rather low value of the Hubble parameter, $h=0.35$, which is many sigmas away from the value obtained by the HST Key Project, namely $h=0.72 \pm 0.08$ [@HST]. 
We conclude that a good fit to the pre-WMAP CMB data combined with LSS measurements can only be obtained at the price of pushing the other parameters hard, even when general initial conditions are allowed for. ![Best fit with general IC and ${\Omega_{\Lambda}}= 0$, combining pre-WMAP CMB (left) and 2dF (right) data. In both panels solid/black is the total spectrum, long-dashed/red the purely AD contribution, short-dashed/green the sum of the pure isocurvature modes, dotted/magenta the sum of the correlators (multiplied by $-1$ in the left panel and in absolute value in the right panel).[]{data-label="fig:AM_OL0"}](AM_OL0_CMB.ps "fig:"){width="5.5cm"} ![Best fit with general IC and ${\Omega_{\Lambda}}= 0$, combining pre-WMAP CMB (left) and 2dF (right) data. In both panels solid/black is the total spectrum, long-dashed/red the purely AD contribution, short-dashed/green the sum of the pure isocurvature modes, dotted/magenta the sum of the correlators (multiplied by $-1$ in the left panel and in absolute value in the right panel).[]{data-label="fig:AM_OL0"}](AM_OL0_PS.ps "fig:"){width="5.5cm"} Finally, in order to constrain deviations from perfect adiabaticity, it is interesting to quantitatively limit the isocurvature contribution. To this end, one can phenomenologically quantify the isocurvature contribution to the CMB power by a parameter $0 \leq \beta \leq 1$, defined in Ref. [@TRD2], so that purely AD IC are characterized by $\beta=0$, while purely isocurvature IC correspond to $\beta=1$. In Fig. \[fig:BETA\_SHADE\] we plot the value of $\beta$ for the best fit models, with the frequentist exclusion regions superimposed. Within $2\sigma$ c.l. (frequentist), the isocurvature contribution to the IC is bounded to be less than $40\%$. ![Isocurvature content $0.0 \leq \beta \leq 1.0$ of best fit models with pre-WMAP CMB and 2dF data. The contours are for $\beta = 0.20, 0.40, 0.60, 0.80$ from the center to the outside. 
Shaded regions represent 1 to 3 $\sigma$ c.l.[]{data-label="fig:BETA_SHADE"}](BETA_SHADE_lr.eps){width="5.5cm"} Although a quantitative analysis using the more precise WMAP data has not yet been carried out, some qualitative features of the expected results can be discussed. In particular, the first peak has been measured by WMAP to be 10% higher than in previous observations [@WMAP]. On the other hand, our work indicates that the first peak is very suppressed even in the presence of general IC for ${\Omega_{\Lambda}}=0$. Therefore one expects that WMAP data will exclude with much higher confidence a vanishing cosmological constant. In fact, our pre-WMAP best fit ${\Omega_{\Lambda}}=0$ model, when compared to the WMAP data [@WMAP], has $\chi^2_{WMAP}/F \approx 4.4$, and is therefore found to be totally incompatible with the new data. Furthermore, the constraints on non-adiabatic contributions should improve considerably, especially in view of the inclusion of polarization data [@BMTpol]. Conclusions =========== We have shown that the statistical approach (Bayesian or frequentist) can have an important impact on the determination of errors from CMB and LSS data. We found that structure formation data tend to prefer a non-zero cosmological constant even if general isocurvature initial conditions are allowed for. The isocurvature contribution is constrained to be $\leq 40\%$ at $2\sigma$ c.l. (frequentist). Acknowledgments {#aknowledgments .unnumbered} ============== It is a pleasure to thank Alessandro Melchiorri and all the organizers of the workshop. I am also grateful to Alain Riazuelo and Ruth Durrer for a most pleasant collaboration. RT is partially supported by the Schmidheiny Foundation, the Swiss National Science Foundation and the European Network CMBNET. [99]{} G.J. Feldman and R.D. Cousins, Phys. Rev. D [**57**]{}, 3873-3889 (1998); A.G. Frodesen, O. Skjeggestad and H. 
Tofte, [*Probability and Statistics in Particle Physics*]{} (Universitetsforlaget, Bergen-Oslo-Tromso 1979); M.G. Kendall, and A. Stuart, [*The advanced theory of statistics, Vol. 2*]{}, 4th ed. (High Wycombe, London, 1977). L. Page [[*et al.*]{}]{}, preprint [astro-ph/0302220]{} (2003). M. Bucher, K. Moodley, and N. Turok, Phys. Rev. D [**62**]{}, 083508 (2000). R. Trotta, A. Riazuelo, and R. Durrer, Phys. Rev. Lett. [**87**]{}, 231301 (2001). R. Trotta, A. Riazuelo, and R. Durrer, Phys. Rev. D [**67**]{}, 063520 (2003). R. Trotta [[*et al.*]{}]{}, in preparation. G.F. Smoot [[*et al.*]{}]{}, ApJ [**396**]{}, L1 (1992); C.L. Bennett [[*et al.*]{}]{}, ApJ [**430**]{}, 423 (1994); M. Tegmark and A.J.S. Hamilton, in [*18th Texas Symposium on relativistic astrophysics and cosmology*]{}, edited by A.V. Olinto [[*et al.*]{}]{}, pp 270 (World Scientific, Singapore, 1997); C.B. Netterfield [[*et al.*]{}]{}, ApJ [**571**]{}, 604 (2002); A. Benoît [[*et al.*]{}]{}, preprint [astro-ph/0210306]{}. M. Tegmark, A. Hamilton and Y. Xu, Month. Not. R. Astron.Soc. (accepted), preprint [astro-ph/0111575]{} (2001). W. Freedman [[*et al.*]{}]{}, ApJ [**553**]{}, 47 (2001). G. Hinshaw [[*et al.*]{}]{}, preprint [astro-ph/0302217]{} (2003); L. Verde [[*et al.*]{}]{}, preprint [astro-ph/0302218]{} (2003); A. Kogut [[*et al.*]{}]{}, preprint [astro-ph/0302213]{} (2003). M. Bucher, K. Moodley, and N. Turok, Phys. Rev. Lett. [**87**]{}, 191301 (2001); M. Bucher, K. Moodley, and N. Turok, Phys. Rev. D [**66**]{}, 023528 (2002).
--- abstract: | Pattern recognition and classification is a central concern for modern information processing systems. In particular, one key challenge to image and video classification has been that the computational cost of image processing scales linearly with the number of pixels in the image or video. Here we present an intelligent machine (the “active categorical classifier,” or ACC) that is inspired by the saccadic movements of the eye, and is capable of classifying images by selectively scanning only a portion of the image. We harness evolutionary computation to optimize the ACC on the MNIST hand-written digit classification task, and provide a proof-of-concept that the ACC works on noisy multi-class data. We further analyze the ACC and demonstrate its ability to classify images after viewing only a fraction of the pixels, and provide insight into future research paths to further improve upon the ACC presented here. categorical perception, attention-based processing, evolutionary computation, machine learning, supervised classification author: - 'Randal S. Olson' - 'Jason H. Moore' - Christoph Adami bibliography: - 'references.bib' title: Evolution of active categorical image classification via saccadic eye movement --- Introduction ============ Pattern recognition and classification is one of the most challenging ongoing problems in computer science, in which we seek to classify objects within an image into categories, typically with considerable variation among the objects within each category. With [*invariant*]{} pattern recognition, we seek to develop a model of each category that captures the essence of the class while compressing inessential variations. In this manner, invariant pattern recognition can tolerate (sometimes drastic) variations within a class, while at the same time recognizing differences across classes that can be minute but salient. 
One means of achieving this goal is through invariant feature extraction [@Trieretal1996], where the image is transformed into feature vectors that may be invariant with respect to a set of transformations, such as displacement, rotation, scaling, skewing, and lighting changes. This method can also be used in a hierarchical setting, where subsequent layers extract compound features from features already extracted in lower levels, such that the last layer extracts features that are essentially the classes themselves [@LeCun1989]. Most of these existing methods have one thing in common: they achieve invariance either by applying transformations to the image when searching for the best match, or by mapping the image to a representation that is itself invariant to such transformations. In contrast to these “passive” methods where transformations are applied to the image, we propose an active, attention-based method, where a virtual camera roams over and focuses on particular portions of the image, similar to how our own brain controls the focus of our attention [@Mnih2014]. In this case, the camera’s actions are guided by what the camera finds in the image itself: In essence, the camera searches the image to discover features that it recognizes, creating in the process a time series of experiences that guides further movements and eventually allows the camera to classify the image. We call this camera an “active categorical classifier,” or ACC for short. Broadly speaking, the problem of classifying a spatial pattern is transformed into one of detecting differences within and between time series, namely the temporal sequence that the virtual camera generates in its sensors as it navigates the image. 
The method we propose here is inspired by models of visual attention [@IttiKoch2001], where attention to “salient” elements of an image or scene is guided by the image itself, such that only a small part of the incoming sensory information reaches short-term memory and visual awareness. Thus, focused attention overcomes the information-processing bottleneck imposed by massive sensory input (which can easily be $10^7-10^8$ bits per second in parallel at the optic nerve [@IttiKoch2001]), and serializes this stream to achieve near-real-time processing with limited computational requirements. In previous work, we have shown that it is possible to evolve robust controllers that navigate arbitrary mazes with near-perfect accuracy [@Edlund2011] and simulate realistic animal behavior [@Olson2013PredatorConfusion]. Independently, we have shown that we can evolve simple spatial classifiers for hand-written numerals in the MNIST data set [@Chapmanetal2013]. Here we use the same technology to evolve active categorical classifiers that “forage” on images and respond to queries about what they saw in the image without needing to examine the image again. Methods ======= In this section, we describe the methods used to evolve the active categorical classifiers (ACCs). We begin by describing the simulation environment in which the ACC scans and classifies the images. Next, we outline the structure and underlying neural architecture of an ACC. Finally, we provide details on the evolutionary process that we used to evolve the ACCs and the experiments that we conducted to evaluate them. Simulation Environment ---------------------- We evaluate the ACC on the MNIST data set, which is a well-known set of hand-written digits commonly used in supervised image classification research [@LeCunetal1998]. 
The MNIST data set contains 28x28 pixel images of hand-written digits—all with corresponding labels indicating what digit the image represents (0–9)—and comes in two predefined sets of training and testing data (60,000 and 10,000 images, respectively). In this project, we binarize the images such that any pixels with a grayscale value $>127$ (out of the range \[0, 255\]) are assigned a value of 1, and all other pixels are assigned a value of 0. When we evaluate an ACC, we place it at a random starting point in the 28x28 image and provide it a maximum of 40 steps to scan the image and assign a classification. (The 40-step maximum is meant to limit each simulation to a reasonably short amount of time.) Every simulation step, the ACC decides 1) what direction to move, 2) what class(es) it currently classifies the image as, and 3) whether it has made its final classification and is ready to terminate the simulation early. The ACC is evaluated only on its final classification for each image in the training set, with a “fitness” score ($F_{\rm ind}$) assigned as: $$F_{\rm ind} = \frac{1}{1000} \times \sum_{i=1}^{1000} \frac{\rm CorrectClass_i}{\rm NumClassesGuessed_i} \label{eq:fitness}$$ where $i$ is the index of an individual image in the training set, ${\rm CorrectClass}_i=1$ if the correct class is among the ${\rm NumClassesGuessed}_i$ guesses that the ACC offers (it is allowed to guess more than one), and ${\rm CorrectClass}_i=0$ otherwise. Thus, an ACC can achieve a minimum fitness of 0.1 by guessing [*all*]{} classes for all images, but achieves the maximum fitness of 1.0 only by guessing exactly the correct class for every image. We note that due to computational limitations, we subset the MNIST training set to the first 100 images of each digit, such that we use only 1,000 training images in total (1/60th of the total set).
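The preprocessing and scoring just described can be sketched in a few lines (a minimal illustration; the function names are ours, not from the original implementation):

```python
def binarize(image, threshold=127):
    """Binarize a grayscale image: pixels above the threshold become 1."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def fitness(results):
    """Eq. (eq:fitness): `results` holds one (correct, num_guessed) pair
    per training image, where `correct` is True if the true class was
    among the `num_guessed` classes the ACC offered."""
    return sum((1.0 if correct else 0.0) / num_guessed
               for correct, num_guessed in results) / len(results)
```

Guessing all ten classes for every image yields the floor of 0.1, while guessing only the correct class for every image yields 1.0.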
![ [**Active categorical classifier (ACC) configuration.**]{} The ACC brain has 64 binary states that either fire or are quiescent, and represent sensory input from the image, internal memory, or decisions about how to interact with the image (described in the text).[]{data-label="fig:acc-config"}](figures/ACC-9){width="2in"} Active Categorical Classifier (ACC) ----------------------------------- We show in Fig. \[fig:acc-config\] the ACC in its natural habitat, roaming a digitized MNIST numeral. Each ACC has a brain that consists of 64 Markov neurons (“states”) that either fire (state = 1) or are quiescent (state = 0), and represent sensory input from the image, internal memory, and decisions about how to interact with the image. The ACC uses nine of these states to view nine pixels of the image in a 3x3 square, and four of the states to probe for activated pixels outside of its field of view with four raycast sensors that project across the image from the 0$^{\circ}$, 90$^{\circ}$, 180$^{\circ}$, and 270$^{\circ}$ angles of the 3x3 square (green squares in Fig. \[fig:acc-config\]). The raycast sensors activate only when they intersect with an activated pixel, and allow the ACC to find the numeral even if its starting position is far from it. We also provide the ACC two actuator states (“motor neurons”) that allow it to “saccade” three pixels up/down and left/right, or any combination thereof (red rectangles denoted as wheels in Fig. \[fig:acc-config\]). In addition, the ACC has 20 states dedicated to classifying the image: 10 states that can be activated to guess each digit class (blue squares), and 10 states to [*veto*]{} an activated guess for each digit class (purple squares), e.g., “this is definitely not a 4.” This configuration allows the ACC to guess multiple classes at once, and combine its internal logic to veto any of those guesses if it believes them to be incorrect. 
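The guess/veto combination amounts to masking the ten guess states with the ten veto states; a minimal sketch (the state ordering is our own convention):

```python
def final_guesses(guess_states, veto_states):
    """A digit is guessed only if its guess state fires (1) and its
    veto state stays quiescent (0); a firing veto overrides the guess."""
    return [digit for digit in range(10)
            if guess_states[digit] == 1 and veto_states[digit] == 0]
```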
Finally, the ACC has a “done” state (orange triangle), which allows it to end the simulation early if it has already decided on its final guess(es) for the current image. The remaining 28 neurons are “memory” states (black circles) used to process and store information, and integrate that information over time. The “artificial brain” for the ACC in these experiments is a [*Markov Network*]{} (MN, see, e.g., [@Edlund2011; @Marstalleretal2013; @Chapmanetal2013]) that deterministically maps the 64 states (described above) at time $t$ to a corresponding series of output states that we interpret to determine the ACC’s movement actions and classifications at time $t + 1$. The combination of output states and sensory inputs from time $t + 1$ is then used to determine the output states for the ACC at time $t + 2$, and so on. Every MN must therefore usefully combine the information provided over time in the 64 states to decide where to move, classify the image, and finally to decide when it has gathered enough information to make an accurate classification. Making all these decisions at once requires complex logic that is difficult to design. Optimization Process -------------------- In order to create the complex logic embodied by a Markov Network, we [*evolve*]{} the MNs to maximize classification accuracy on the training images. We use a standard Genetic Algorithm (GA) to stochastically optimize a population of byte strings [@Eiben2003], which deterministically map to the MNs that function as the ACC’s “artificial brains” in the simulation described above. Due to space limitations, we cannot describe MNs in full detail here; a detailed description of MNs and how they are evolved can be found in [@Olson2016SelfishHerd]. In our experiments, the GA maintains a population of 100 byte strings (“candidates”) of variable length (maximum = 10,000 bytes) and evaluates them according to the fitness function in Equation \[eq:fitness\].
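The generational loop, together with the tournament selection and mutation-only reproduction the text describes next, can be sketched as follows (the rates match those given in the text, but the duplicated and deleted chunk sizes are our own assumption, as is the tie-breaking rule):

```python
import random

def mutate(genome, point_rate=0.0005, dup_rate=0.05, del_rate=0.02,
           max_len=10_000):
    """Mutation-only reproduction: per-byte point mutations (0.05%),
    plus whole-chunk gene duplication (5%) and deletion (2%)."""
    g = [b if random.random() >= point_rate else random.randrange(256)
         for b in genome]
    if random.random() < dup_rate and len(g) < max_len:
        i = random.randrange(len(g))
        g[i:i] = g[i:i + random.randint(16, 256)]  # chunk size assumed
    if random.random() < del_rate and len(g) > 256:
        i = random.randrange(len(g))
        del g[i:i + random.randint(16, 256)]
    return g[:max_len]

def next_generation(population, fitnesses):
    """Shuffled pairwise tournaments: each winner contributes one exact
    copy and one mutated copy; the loser leaves no offspring."""
    order = list(range(len(population)))
    random.shuffle(order)
    offspring = []
    for a, b in zip(order[::2], order[1::2]):
        winner = population[a] if fitnesses[a] >= fitnesses[b] else population[b]
        offspring.append(list(winner))      # exact copy
        offspring.append(mutate(winner))    # mutated copy
    return offspring
```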
The GA selects the candidates to reproduce into the next generation’s population via tournament selection, where it shuffles the population and competes every byte string against only one other byte string. In each tournament, the byte string with the highest fitness produces one exact copy of itself as well as one mutated copy of itself into the next generation, while the “loser” produces no offspring. We note that the GA applies only mutations to the offspring (no crossover/recombination), with a per-byte mutation rate of 0.05%, a gene duplication rate of 5%, and a gene deletion rate of 2%. Experiments ----------- Through this evolutionary optimization process, the GA selects ACCs that are capable of spatio-temporal classification of MNIST digits. We first ran 30 replicates of the GA with random starting populations and distinct random seeds and allowed these replicates to run for 168 hours on a high-performance compute cluster. From those 30 replicates, we identified the highest-fitness ACC (the “elite”), and seeded another set of 30 replicates with mutants of the elite ACC. We allowed this second set of replicates to run for another 168 hours. In the following section, we report on the results of these experiments. ![ [**Fitness over time on the MNIST training set.**]{} Each line represents a replicate of the evolutionary process that trains the active categorical classifiers. The lines represent the highest-fitness individual every 1,000 generations, where the blue line traces the lineage that led to the highest-fitness individual out of all replicates.
After running all 30 replicates for one week, we took the best individual from the first set of runs and seeded another set of evolutionary runs with it, which is represented by the cluster of lines following the top lineage of the first set.[]{data-label="fig:edd-mnist-fitness-over-time"}](figures/edd-mnist-fitness-over-time){width="90.00000%"} ![ [**Active categorical classifier (ACC) accuracy on the binarized MNIST testing set.**]{} We report per-digit accuracy (labeled 0–9) of the ACC as well as the average accuracy across all digits (labeled “Overall”).[]{data-label="fig:edd-mnist-accuracy"}](figures/edd-mnist-accuracy){width="90.00000%"} Results ======= At the completion of the second set of replicates, the remaining active categorical classifiers (ACCs) had been optimized for 336 hours and roughly 250,000 generations. Shown in Fig. \[fig:edd-mnist-fitness-over-time\], the ACCs experienced the majority of their improvements within the first 150,000 generations, and minimal improvements occurred in the second set of replicates, indicating that the ACCs had reached a plateau—either because the scan pattern required to improve was too complex, or because improving the classification accuracy on poorly classified digits compromised the ability to classify those digits the ACC was already proficient at. Such trade-offs are likely due to insufficient brain size, and investigations with larger brains are currently underway. Instead of continuing the optimization process for a third set of replicates, we identified the highest-fitness ACC from replicate set 2 (highlighted in blue, Fig. \[fig:edd-mnist-fitness-over-time\]) and analyzed its spatio-temporal classification behavior to gain insights into its functionality. For the remainder of this section, we focus on the best ACC evolved in replicate set 2, which we will simply call “the ACC.” Shown in Fig.
\[fig:edd-mnist-accuracy\], the ACC achieved respectable but not state-of-the-art performance on the MNIST testing set: It managed to classify most of the 0s and 1s correctly, for example, but failed to classify many of the 2s. Overall, the ACC achieved a macro-averaged accuracy of 76%, which provides a proof-of-concept that the ACC works, but still has room for improvement on noisy multi-class data sets. We note that we have optimized ACCs on a set of hand-designed, non-noisy digits, where they managed to achieve 100% accuracy. Thus, it is clear that the ACC architecture requires additional experimentation to fully adapt to noisy data, much like other methods currently in use. ![ [**Analysis of informative pixels in the MNIST training set.**]{} Panel A shows the most informative pixels in the MNIST training set according to feature importance scores from a Random Forest (i.e., Gini importance [@GiniImportance]), whereas Panel B shows the pixels that the best active categorical classifier visited most frequently when classifying the MNIST data set. In both cases, darker colors represent higher values.[]{data-label="fig:edd-mnist-pixel-analysis"}](figures/edd-mnist-pixel-analysis){width="90.00000%"} ![ [**Example trajectories of the best active categorical classifier (ACC).**]{} The arrows indicate the direction that the ACC followed, whereas the dark grey areas indicate the pixels that it scanned. Although the ACC starts all evaluations at random spots in the grid, it aligns itself with the digit at a common starting point and executes an L-shaped scan of the digit. We note that we excluded an example of digit 2 because the ACC rarely classifies it correctly, although it follows a similar L-shaped trajectory.[]{data-label="fig:edd-mnist-agent-static"}](figures/edd-mnist-agent-static){width="75.00000%"} In Fig.
\[fig:edd-mnist-pixel-analysis\]B, we analyze the movement patterns of the ACC by counting how many times each pixel is viewed in the ACC’s 3x3 visual grid when classifying the MNIST data set. Even though the ACC always starts at a random location in the image, we find that it follows a stereotypical scanning pattern of the digits: the ACC lines itself up to the top-left of the digit, then executes an L-shaped scanning pattern. In contrast, Fig. \[fig:edd-mnist-pixel-analysis\]A depicts the most informative pixels for differentiating the classes in the binarized MNIST data set with a Random Forest classifier as implemented in scikit-learn [@scikit-learn]. Here, we find that the most informative pixels exist in the center of the images, with several less-informative pixels on the image edges. Importantly, we note that the ACC never scans some of the most informative pixels in the lower half of the MNIST images (Fig. \[fig:edd-mnist-pixel-analysis\]A vs. Fig. \[fig:edd-mnist-pixel-analysis\]B). We believe that this behavior is the reason that the ACC is rarely able to classify any of the 2s, for example, because some of the most critical pixels for differentiating 2s from the rest of the digits are never visited. We provide examples of the ACC scanning patterns in Fig. \[fig:edd-mnist-agent-static\]. Shown again is the stereotypical L-shaped scanning pattern starting at the upper-left corner of every digit. (We note that we trimmed the agent paths to only the final scanning pattern because the initial phase of ACC movement is simply lining up to the upper-left corner of the digit.) Interestingly, the ACC scans only a fraction of the available pixels to make each classification, and appears to be integrating information about the digit over space and time to identify distinctive sub-features of the digits.
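The visit-count analysis behind Panel B can be reproduced with a simple accumulator over the 3x3 visual grid (a sketch under our own conventions: each trajectory is a list of (row, col) positions, assumed to be the top-left corner of the grid):

```python
def visit_heatmap(trajectories, size=28):
    """Count how often each pixel falls inside the 3x3 visual grid
    along the given trajectories; (r, c) is assumed to be the
    top-left corner of the grid."""
    heat = [[0] * size for _ in range(size)]
    for path in trajectories:
        for r, c in path:
            for dr in range(3):
                for dc in range(3):
                    if 0 <= r + dr < size and 0 <= c + dc < size:
                        heat[r + dr][c + dc] += 1
    return heat
```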
Furthermore, the ACC completes the majority of its scans within 5–10 steps and then immediately activates the “done” state, indicating that the ACC also learned when it knows the correct digit. Discussion ========== The results presented here show that it is possible to optimize an active categorical classifier (ACC) that scans a small portion of an image, integrates that information over space and time, and proceeds to perform an accurate classification of the image. Although the ACC does not achieve competitive accuracy on the MNIST data set compared to many modern techniques (76% testing accuracy, Fig. \[fig:edd-mnist-accuracy\]), we believe that this result is due to the lack of training data rather than any particular limitation of ACCs: Due to computational limitations, we were only able to use a fixed set of 1,000 training images (100 of each class) to optimize the ACCs, while modern techniques use much larger training sets that even include additional variations of the training images [@Wan2013]. Indeed, when we trained a scikit-learn Random Forest with 500 decision trees [@scikit-learn] on the same binarized training set of 1,000 images, it achieved only 88.5% accuracy on the MNIST testing set, compared to 97.5% when trained on the full training set. Thus, in future work we will focus on integrating methods that expose the ACCs to all training images in an efficient manner. From the point of view of embodied artificial intelligence, the challenge presented to the ACC in the image classification task is remarkably difficult.
For one, these experiments challenged a single artificial brain to simultaneously perform several complex tasks, including lining itself up to a consistent starting point regardless of where it randomly starts in the image, deciding where it needs to move to complete the scan based on limited information about the image, determining what pixels are important to consider, [*and*]{} integrating that information over space and time to classify the image into 1 of 10 classes. We furthermore challenged the ACC to evolve something akin to a “theory of mind” such that it knows when it has guessed the correct class for the image and can end the simulation early. In future work, it will be illuminating to analyze the underlying neural architecture of the evolved ACCs to provide insight into the fundamentals of active categorical perception [@Beer2003]. Unlike many modern image classification techniques that must analyze an entire static image to determine an image’s class, the ACC instead integrates information from a small subset of the pixels over space and time. This method naturally lends itself to video classification, where feature compression will play a crucial role in overcoming the massive data size challenge for real-time classification of moving objects [@Mnih2014]. Lastly, recent work has shown that modern deep learning-based image classification techniques tend to be easily fooled because they are trained in a supervised, discriminative manner: They establish decision boundaries that appropriately separate the data they encounter in the training phase, but these decision boundaries also include (and thus mis-classify) many inappropriate data points never encountered during training [@Nguyen2015].
Although most deep learning researchers respond to this challenge by creating additional “adversarial” training images to train the deep neural networks [@Goodfellow2014], we believe that the findings in [@Nguyen2015] highlight a critical weakness in deep learning: the resulting networks are trained to precisely map inputs to corresponding target outputs, without generalizing far beyond the training data they are exposed to [@Szegedy2013]. Due to their nature, deep neural networks are highly dependent on the training data, and only generalize to new challenges if they are similar to those encountered in the training data [@Goodfellow2014]. In contrast, heuristic-based machines such as the ACC learn simple, generalizable heuristics for classifying images that encode the conceptual representation [@Marstalleretal2013] of the objects, and should not be so easily fooled. As such, even if the ACC in the present work does not achieve competitive accuracy when compared to modern deep learning techniques, we believe that further development of heuristic-based image classification machines will lead to robust classifiers that will eventually surpass deep neural networks in generalizability without the need for adversarial training images. We further believe that it is precisely those machines that carry with them complex representations of the world that will become the robust and sophisticated intelligent machines of the future. Whether the embodied evolutionary approach we describe here will succeed in this is, of course, an open problem. Acknowledgments =============== We thank David B. Knoester, Arend Hintze, and Jeff Clune for their valuable input during the development of this project. We also thank the Michigan State University High Performance Computing Center for the use of their computing resources. 
This work was supported in part by the National Science Foundation BEACON Center under Cooperative Agreement DBI-0939454, and in part by National Institutes of Health grants LM009012, LM010098, and EY022300.
--- abstract: 'A modified non-linear time series analysis technique, which computes the correlation dimension $D_2$, is used to analyze the X-ray light curves of the black hole system GRS 1915+105 in all twelve temporal classes. For four of these temporal classes $D_2$ saturates to $\approx 4-5$, which indicates that the underlying dynamical mechanism is a low dimensional chaotic system. Of the other eight classes, three show stochastic behavior while five show deviation from randomness. The light curves for the four classes which depict chaotic behavior have the smallest ratio of the expected Poisson noise to the variability ($ < 0.05$), while those for the three classes which depict stochastic behavior have the highest ($ > 0.2$). This suggests that the temporal behavior of the black hole system is governed by a low dimensional chaotic system, whose nature is detectable only when the Poisson fluctuations are much smaller than the variability.' author: - 'R. Misra, K.P. Harikrishnan, B. Mukhopadhyay, G. Ambika and A. K. Kembhavi' title: The chaotic behavior of the black hole system GRS 1915+105 --- Introduction {#sec: I} ============ Black hole X-ray binaries are variable on a wide range of timescales ranging from months to milli-seconds. A detailed analysis of their temporal variability is crucial to the understanding of the geometry and structure of these high energy sources. Such studies may eventually be used to test the relativistic nature of these sources and to understand the physics of the accretion process. The variability in different energy bands is generally quantified by computing the power spectrum, which is the amplitude squared of the Fourier transform. The power spectra give information about the characteristic frequencies of the system, which show up as either breaks or as near Gaussian peaks, i.e. Quasi-Periodic Oscillations (QPO), in the spectra (e.g. @Bel01 [@Tom01; @Rod02]).
The shape of the power spectra, combined with the observed frequency dependent time lags between different energy bands, have put constraints on the radiative mechanisms and geometry of emitting regions (e.g. @Now99 [@Mis00; @Cui99; @Pou99; @Cha00; @Nob01]). These results are based on the response of the system to temporal variations whose origin is not clear. Important insight into the origin can be obtained by the detection and quantification of the possible non-linear behavior of the fluctuations. For example, the presence of stochastic fluctuations would favor X-ray variations driven by variations of some external parameters (like the mass accretion rate), or the possibility that active flares occur randomly. On the other hand, if the fluctuations can be described as a deterministic chaotic system, then inner disk instability or coherent flaring activity models will be the likely origin. A quantitative description of the temporal behavior can also be compared with time dependent numerical simulations of the accretion process and will help examine the physical relevance of these simulations. The non-Gaussian and non-zero skewness values of the temporal variation of the black hole system Cygnus X-1 suggested that the variations are non-linear in nature [@Thi01; @Tim00; @Mac02]. More rigorous tests were applied to the AGN Ark 564 [@Gli02], which also suggested non-linear behavior. Nonlinear time series analysis (NLTS) seems to be the most convenient tool to check if the origin of the variability is chaotic, stochastic or a mixture of the two, and has been adopted in several disciplines to study complex systems (e.g. the human brain, weather) and predict their immediate future [@Sch99]. This technique has also been used earlier to analyze X-ray data of astrophysical sources. Based on a NLTS analysis of EXOSAT data, [@Vog87] claimed that the X-ray pulsar Her X-1 was a low dimensional chaotic system.
However, [@Nor89] pointed out problems with that analysis since the source has a strong periodicity and the data analyzed had low signal to noise ratio. [@Leh93] used the NLTS technique to analyze EXOSAT light curves of several AGN, and found that only one, NGC 4051, showed signs of low dimensional chaos. A similar analysis on the noise-filtered [*Tenma*]{} satellite data of Cyg X-1 suggested that the source may be a low dimensional chaotic system with large intrinsic noise [@Unn90]. These analyses were hampered by the small number of data points ($\simless 1000$) in the light curve and/or noise. Hence, the reported detection of low dimensional chaos was only possible by rather subjective comparison of the results of the data analysis with those from simulated data of chaotic systems with noise. The Galactic micro-quasar GRS 1915+105 is a highly variable black hole system. It shows a wide range of variability [@Che97; @Pau97; @Bel97a], which required @Bel00 to classify the behavior in no fewer than twelve temporal classes. In this work, our motivation is to determine the temporal properties of this source by using a modified nonlinear time series analysis for each of these twelve classes. The different kinds of variability and the source's brightness (the average RXTE PCA count rate ranges from $5000-32000$ counts/s) make it an ideal target in which to detect chaotic behavior. In the next section we describe the technique used to determine the correlation dimension. The results of the analysis are presented in §3, while in §4 the work is summarized and discussed.
The Non-Linear time series analysis =================================== The algorithm normally employed in this analysis [@Gra02] aims at creating an artificial or pseudo space of dimension $M$ with delay vectors constructed by splitting a scalar time series $s(t)$ with delay time $\tau$ as $$\vec{x}(t)=[s(t),s(t+\tau),\ldots,s(t+(M-1)\tau)]$$ The correlation sum or the correlation function is the average number of data points within a distance $R$ from a data point, $$C_M(R) \equiv \lim_{N \rightarrow \infty} {1\over N(N-1)} \sum_{i}^{N} \sum_{j, j \neq i }^{N}\hbox{H} (R-|\vec{x}_i -\vec{x}_j|)$$ where $\vec{x}_j$ is the position vector of a point belonging to the attractor in the M-dimensional space, $N$ is the number of reconstructed vectors and H is the Heaviside step function. The fractional dimension $D_2 (M)$ is defined as $$D_2 \equiv \lim_{R \rightarrow 0} d\hbox{log} C_M (R)/d\hbox{log} (R)$$ and is essentially the scaling index of the $C_M(R)$ variation with $R$. $D_2 (M)$ can be used to differentiate between different temporal behaviors since for an uncorrelated stochastic system, $D_2 \approx M$, while for a chaotic system, $D_2 (M) \approx$ constant for $M$ greater than a certain dimension $M_{max}$. For a finite duration light curve, there are two complications that hinder the successful computation of $D_2 (M)$. First, for small values of $R$, the number of pairs contributing to $C_M(R)$ is of order unity and the result there would be dominated by Poisson noise. Second, for large values of $R$, $C_M(R)$ will saturate to the total number of data points. Usually, these two effects are avoided in the log$C_M(R)$ versus log$R$ plot and the slope $D_2$ is obtained from the linear part of the curve. However, such an exercise is subjective, especially for high dimensions. Here, we use a numerical scheme to compute $D_2$, which takes into account the above effects and at the same time makes maximum use of the available data.
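The standard procedure in the three equations above can be sketched as follows (a minimal numpy illustration, not the authors' modified scheme; $D_2$ is estimated here as a finite-difference log-log slope rather than a true $R \rightarrow 0$ limit):

```python
import numpy as np

def delay_embed(s, M, tau):
    """Build M-dimensional delay vectors x(t) = [s(t), s(t+tau), ...]."""
    n = len(s) - (M - 1) * tau
    return np.column_stack([s[i * tau: i * tau + n] for i in range(M)])

def correlation_sum(X, R):
    """C_M(R): fraction of ordered point pairs closer than R."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    N = len(X)
    return (np.sum(d < R) - N) / (N * (N - 1))  # exclude i == j

def d2_slope(X, R1, R2):
    """Estimate D2 as the log-log slope of C_M(R) between R1 and R2."""
    c1, c2 = correlation_sum(X, R1), correlation_sum(X, R2)
    return (np.log(c2) - np.log(c1)) / (np.log(R2) - np.log(R1))
```

For points lying on a one-dimensional set, the slope comes out close to 1, as expected.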
The details of the method and several tests of its validity will be presented elsewhere ( Misra [[*et al.* ]{}]{}[*in preparation*]{}). Briefly, the technique involves converting the original light curve to a uniform deviate, and to redefine the correlation function $C_M(R)$ as the average number of data points within a M-cube (instead of a M-sphere) of length $R$ around a data point. Only those M-cubes are considered which are within the embedding space, ensuring that there are no edge effects due to limited data points. This imposes a maximum value of $R < R_{max}$ for which $C_M(R)$ can be computed. To avoid the Poisson noise dominated region, only results from $R$ greater than a $R_{min}$ are taken into consideration such that the average $ C (R_{min}) > 1$ where the Poisson noise would approximately be $1/\sqrt{N_c}$. Typically $C_M(R)$ is computed for ten different values of $R$ between $R_{min}$ and $R_{max}$ and the logarithmic slope for each point is computed and the average is taken to be $D_2 (M)$. The error on $D_2 (M)$ is estimated to be the mean standard deviation around this average. It should be noted that there often exists a critical $M_{cr}$ for which $R_{max} \approx R_{min}$ and no significant result can then be obtained for $M > M_{cr}$. Figure 1 (a) shows the $D_2 (M)$ curve for a time series generated from random numbers and for the well known analytical low dimensional chaotic system, the Lorenz system. The total number of data points used to generate both curves is 30000 and the number of random centers used is $N_c = 2000$. As expected the $D_2$ plot for the random data is consistent with the $D_2 = M$ curve, while the plot for the Lorenz system shows significant deviation and saturates at $M \approx 3$ to a $D_2 \approx 2$, which is close to the known value of $2.04$. The random data and the low dimensional chaotic system can clearly be distinguished in this scheme. 
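The two modifications described above, rank-transforming the light curve to a uniform deviate and counting neighbors inside M-cubes kept fully inside the embedding space, can be sketched as follows (our reading of the scheme; the convention of a cube with half-side $R$ and the unit-interval normalization are assumptions):

```python
import numpy as np

def uniform_deviate(s):
    """Rank-transform the series so its values are uniform on [0, 1]."""
    ranks = np.argsort(np.argsort(s))
    return ranks / (len(s) - 1)

def cube_correlation_sum(X, R, centers):
    """Average number of neighbors inside an M-cube of half-side R
    around each chosen center, using only cubes fully inside
    [0, 1]^M so that no edge corrections are needed."""
    counts = []
    for c in centers:
        if np.all(X[c] - R >= 0.0) and np.all(X[c] + R <= 1.0):
            inside = np.all(np.abs(X - X[c]) < R, axis=1)
            counts.append(inside.sum() - 1)  # exclude the center itself
    return float(np.mean(counts)) if counts else float("nan")
```

For uniformly distributed points the interior-cube count scales as $N(2R)^M$, which is the scaling the logarithmic slope picks up.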
Results ======= The temporal properties of GRS 1915+105 have been classified into twelve different classes by @Bel00, who also present the observational dates and identification number of the RXTE data they had used to make the classification. Here, we have chosen a representative data set for each class and extracted a few continuous data streams ($\approx 3000$ sec long) from it. The observational IDs of the data used in this work are tabulated in Table 1. The light curves were generated with a resolution of $0.1$ seconds, resulting in $\approx 30000$ data points for each of them and $\approx 1500$ counts per bin. Light curves with finer time resolution are Poisson noise dominated, while larger binning gives too few data points. In general, $D_2(M)$ is proportional to $\tau$ when $\tau$ is small and saturates (i.e. it is nearly invariant) for $\tau$ greater than a critical value, and it is this saturated value which is the correct estimate of $D_2(M)$. As an example, the $D_2(M)$ curves for different values of $\tau$ are plotted in Figure 1 (b), where it can be seen that the curve is similar within error bars for $\tau = 15$, $25$ and $100$ sec. For all the data analyzed here the critical $\tau < 5-20$ sec, and hence the saturated curve (typically for $\tau \approx 50$ sec) is considered. It has been verified that the $D_2 (M)$ curves for two separate light curves for the same class are similar to within the error bars. This shows that, as expected, the temporal behavior of the system is more or less stationary for the same class. Hence such curves can be averaged to obtain a statistically more significant result. Figure 2 shows the $D_2 (M)$ curves for seven temporal classes. For four classes ($\lambda$, $\kappa$, $\beta$ and $\mu$) the curves show clear deviation from random behavior. For $\lambda$ and $\kappa$ there is saturation of $D_2 \approx 5$ for $M > 8$. For $\beta$ and $\mu$, the increase in $D_2$ is less than one when $M$ increases from $8$ to $15$.
Thus these classes can be classified unambiguously as chaotic systems with correlation dimension less than 5, while the behavior of the class $\phi$ is identical to a stochastic light curve. The classes $\alpha$ and $\rho$ show some deviation from stochastic behavior and hence this behavior, which is also seen in the classes $\theta$, $\nu$ and $\delta$, is named “non-stochastic” in this work. As discussed below, these classes may be inferred to be low dimensional chaotic systems based on comparison with results from simulated data of chaotic systems with additional noise. Similar comparisons were made to infer the chaotic behavior of Cyg X-1 [@Unn90] and NGC 4051 [@Leh93]. We show in the last column of Table 1 the classification of all the twelve classes into one of these three categories, namely chaotic, non-stochastic and stochastic. We have listed in Table 1 the average counts $<S>$, the root mean square variation, the expected Poisson noise $<PN> \equiv \sqrt{<S>}$, and the ratio of the expected Poisson noise to the actual RMS value. It can be seen that there is a strong correlation between the inferred behavior of the system and the ratio of the expected Poisson noise to the rms values. This indicates that Poisson noise is affecting the analysis. To estimate the effect of Poisson noise, we consider the Lorenz system points $S_L(t)$ and rescale them as $S_{LR} = A S_L(t) + B$. A light curve is then simulated using $S_{LR}$ from the corresponding Poisson noise distributions. The constants $A$ and $B$ were chosen such that the simulated light curve had the same average count and rms variation as the two extreme cases for the GRS 1915+105 data for the $\beta$ and $\gamma$ classes.
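The rescale-and-resample test just described can be sketched as follows (a minimal illustration; the simple Euler integration of the Lorenz system and the way $A$ and $B$ are solved from the target mean and rms are our own choices):

```python
import numpy as np

def lorenz_x(n, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """x-component of the Lorenz system via simple Euler integration."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

def poissonify(s, mean_counts, rms_counts, rng=None):
    """Rescale s to S = A*s + B with the target mean and rms, then draw
    Poisson counts about S, mimicking counting noise in a light curve."""
    rng = rng or np.random.default_rng()
    a = rms_counts / s.std()
    b = mean_counts - a * s.mean()
    return rng.poisson(np.clip(a * s + b, 0, None))
```

The resulting series has the requested mean, while its rms picks up the extra Poisson variance $\sqrt{\rm rms^2 + \langle S \rangle}$; the noise-free and noisy series can then both be pushed through the $D_2(M)$ analysis for comparison.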
The results of the non-linear time series analysis are shown in Figure 3, where it can be seen that even for the $\beta$-like case, where the ratio of the expected Poisson noise to rms variation is only $4$%, the $D_2$ versus $M$ curve saturates at a higher value than that of the original no-noise data points. This implies that the correlation dimension of $\approx 4$ inferred from the analysis of the classes showing chaotic behavior (Figure 2) is an overestimation due to the inherent Poisson noise in the data. For larger Poisson noise fractions, the curve no longer saturates and becomes qualitatively similar to that obtained for the non-stochastic case. Discussion ========== The saturation of the correlation dimension $D_2 \approx 4-5$ for four of the temporal classes clearly indicates that the underlying dynamical mechanism that governs the variability of the black hole system is a low dimensional chaotic one. As indicated by simulations of the Lorenz system with noise, the effect of Poisson noise in the data is to increase the $D_2$ values. Hence the real dimension of the system is probably smaller than the $D_2 \approx 4-5$ that is obtained here. In fact it is possible that the temporal behavior of the black hole system is always governed by a low dimensional chaotic system, but is undetectable when Poisson noise affects the analysis. Alternatively, there may be a stochastic component to the variability which dominates for certain temporal classes. The two scenarios may be distinguished and better quantitative estimates of the correlation dimension may be obtained by either appropriate noise filtering of the data and/or appropriate averaging of the different light curves. Much longer ($\approx 30000$ sec long) continuous data streams sampled at $1$ second resolution would decrease Poisson noise and hence provide a better quantitative measure of $D_2$.
However, such long data streams are presently not available, and merging non-continuous light curves would require sophisticated gap filling techniques which might give rise to spurious results. The variability of GRS 1915+105 can be interpreted as transitions between three spectral states [@Bel00], one of which (the so-called soft state) is a long term canonical state observed in other black hole systems like Cygnus X-1 which do not show such high amplitude variability. It is attractive to identify these spectral states as fixed points which for GRS 1915+105 become unstable, giving rise to the observed chaotic behavior, which may also account for the ring-like movement of the system in color-color space [@Vil98]. The above hypothesis may be verified by future characterization of the chaos in GRS 1915+105. Note that GRS 1915+105 spends most of its time in the $\chi$ class, whose variability is similar to that observed in other black hole systems like Cygnus X-1. However, as shown in this work, Poisson noise affects the analysis for the $\chi$ class and the $D_2 (M)$ values reflect stochastic behavior. This may be the reason why earlier non-linear analyses of Cygnus X-1 data, while showing non-linearity [@Tim00; @Thi01], did not conclusively reveal chaotic behavior. The identification of the temporal behavior of the black hole system as a chaotic one has opened a new window toward the understanding of the origin and nature of their variability. The present analysis can be extended to characterize the chaotic behavior. Using the minimum required phase space dimension, the data can be projected into different $2$-dimensional planes, which will reveal the structure of the attractor and help to identify any possible centers of instability in the system. Further, dynamical invariants like the full Lyapunov spectrum, multi-fractal dimensions etc. can also be computed.
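To make the computation behind the $D_2(M)$ curves concrete, here is a minimal Grassberger-Procaccia sketch. It is illustrative only: it uses a time-delay embedding and evaluates the correlation sum at just two radii instead of fitting $\log C(r)$ versus $\log r$ over a full scaling range, as a real analysis would.

```python
import math

def correlation_dimension(series, m, delay=1, r_small=0.05, r_large=0.2):
    """Crude Grassberger-Procaccia estimate of D2: embed the series in
    m dimensions, evaluate the correlation sum C(r) at two radii, and
    return the local slope d log C / d log r between them."""
    # time-delay embedding into m-dimensional points
    pts = [tuple(series[i + j * delay] for j in range(m))
           for i in range(len(series) - (m - 1) * delay)]
    n = len(pts)

    def corr_sum(r):
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(pts[i], pts[j]) < r:
                    count += 1
        return 2.0 * count / (n * (n - 1))

    c_small, c_large = corr_sum(r_small), corr_sum(r_large)
    return math.log(c_large / c_small) / math.log(r_large / r_small)

# Sanity check: points filling a line should give D2 close to 1
# for any embedding dimension m >= 1.
line = [i / 400.0 for i in range(400)]
d2_line = correlation_dimension(line, m=2)
```

Repeating the estimate for increasing $m$ gives the $D_2(M)$ curve: it saturates at the attractor dimension for a chaotic signal and keeps growing for a stochastic one.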
Recently, @Win03 have studied and quantified the chaotic flow in magneto-hydrodynamic simulations of the mass accretion processes that are believed to occur in black hole systems. The measured chaos parameters, like the largest Lyapunov exponent, of such simulations can be compared with those obtained from the light curves of black hole systems to validate the simulations and enhance our understanding of these systems. Note that such analysis can practically be applied only after the identification of the minimum phase space dimension, which in turn usually requires the computation of $D_2 (M)$.

GA and KPH acknowledge the hospitality and the facilities in IUCAA. BM thanks the Academy of Finland grant 80750 for support.

Belloni, T., Mendez, M., King, A. R., van der Klis, M., & van Paradijs, J., 1997a, , 479, L145.
Belloni, T., Klein-Wolt, M., Mendez, M., van der Klis, M., & van Paradijs, J., 2000, , 355, 271.
Belloni, T., Mendez, M., & Sanchez-Fernandez, C., 2001, , 372, 551.
Chakrabarti, S. K., & Manickam, S. G., 2000, , 531, L41.
Chen, X., Swank, J. H., & Taam, R. E., 1997, , 477, L41.
Cui, W., 1999, , 524, 59.
Gliozzi, M., et al., 2002, , 391, 875.
Grassberger, P., & Procaccia, I., 1983, Physica D, 9, 189.
Lehto, H. J., Czerny, B., & McHardy, I. M., 1993, , 261, 125.
Maccarone, T. J., & Coppi, P. S., 2002, , 336, 817.
Misra, R., 2000, , 529, L95.
Nobili, L., Belloni, T., Turolla, R., & Zampieri, L., 2001, , 276, 217.
Norris, J. P., & Matilsky, T. A., 1989, , 346, 912.
Nowak, M. A., Vaughan, B. A., Wilms, J., Dove, J. B., & Begelman, M. C., 1999, , 510, 874.
Paul, B., et al., 1997, , 320, L37.
Poutanen, J., & Fabian, A. C., 1999, , 306, L31.
Rodriguez, J., Durouchoux, P., Mirabel, I. F., Ueda, Y., Tagger, M., & Yamaoka, K., 2002, , 386, 271.
Schreiber, T., 1999, , 308, 1.
Thiel, M., et al., 2001, , 276, 187.
Timmer, J., et al., 2000, , 61, 1342.
Tomsick, J. A., & Kaaret, P., 2001, , 548, 401.
Unno, W., et al., 1990, , 42, 269.
Vilhu, O., & Nevalainen, J., 1998, , 508, L85.
Voges, W., Atmanspacher, H., & Scheingraber, H., 1987, , 320, 794.
Winters, W. F., Balbus, S. A., & Hawley, J. F., 2003, , 340, 519.

Table 1: Average count $<S>$, rms variation, expected Poisson noise $<PN> = \sqrt{<S>}$, the ratio $<PN>$/rms, and the inferred behavior (C = chaotic, NS = non-stochastic, S = stochastic) for the twelve temporal classes.

  Obs. ID          Class       $<S>$   rms    $<PN>$   $<PN>$/rms   Behavior
  ---------------  ----------  ------  -----  -------  -----------  ---------
  10408-01-10-00   $\beta$     1917    1016   43.8     0.04         C
  20402-01-37-01   $\lambda$   1493    1015   38.6     0.04         C
  20402-01-33-00   $\kappa$    1311    800    36.2     0.04         C
  10408-01-08-00   $\mu$       3026    999    55       0.06         C
  20402-01-45-02   $\theta$    1740    678    41.7     0.06         NS
  10408-01-40-00   $\nu$       1360    462    36.9     0.08         NS
  20402-01-03-00   $\rho$      1258    440    35.5     0.08         NS
  20187-02-01-00   $\alpha$    582     244    24.1     0.10         NS
  10408-01-17-00   $\delta$    1397    377    37.4     0.10         NS
  20402-01-56-00   $\gamma$    1848    185    43.0     0.23         S
  10408-01-22-00   $\chi$      981     118    31.3     0.27         S
  10408-01-12-00   $\phi$      1073    118    32.7     0.28         S
--- author: - | Yi-Fang Wang\ Stanford University, Department of Physics\ Stanford, CA 94305, USA\ E-mail: yfwang@hep.stanford.edu title: A Water Čerenkov Calorimeter as the Next Generation Neutrino Detector ---

Introduction
============

Neutrino factories and conventional beams have been discussed extensively in the literature[@nuf] as the main facility of neutrino physics for the next decade. The main physics objectives include the measurements of $\sin\theta_{13}$, $\Delta m^2_{13}$, the leptonic CP phase $\delta$ and the sign of $\Delta m^2_{23}$. All of these quantities can be obtained through the disappearance probability $\mathrm P(\nu_{\mu}\rightarrow\nu_{\mu})$ and the appearance probabilities $\mathrm P(\nu_{\mu}(\nu_e)\rightarrow \nu_e(\nu_{\mu}))$ and $\mathrm P(\bar\nu_{\mu}(\bar\nu_e)\rightarrow \bar\nu_e(\bar\nu_{\mu}))$. To measure these quantities, a detector should: 1) be able to identify leptons: e, $\mu$ and, if possible, $\tau$; 2) have good pattern recognition capabilities for background rejection; 3) have good energy resolution for event selection and to determine $\mathrm P_{\alpha\rightarrow\beta}(E)$; 4) be able to measure the charge of $\mu^{\pm}$ in the case of $\nu$ factories; and 5) be able to have a large mass (100-1000 kt) at an affordable price.

  -------------- ------------- ----------- -------------- ------------------
                 Iron          Liquid      Water Ring     Under Water/Ice
                 Calorimeter   Ar TPC      Imaging        Čerenkov counter
  Mass           10-50 kt      1-10 kt     50-1000 kt     100 Mt
  Charge ID      Yes           Yes         ?              No
  E resolution   good          very good   very good      poor
  Examples       Minos,        ICANOE      Super-K, Uno,  Amanda, Icecube,
                 Monolith                  Aqua-rich      Nestor, Antares
  -------------- ------------- ----------- -------------- ------------------

  : Currently proposed detectors for $\nu$ factories and conventional $\nu$ beams.

Currently there are four types of detectors proposed[@nuf; @dick], as listed in Table 1.
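For orientation, the disappearance probability can be evaluated directly in the two-flavor vacuum approximation. This is a generic textbook formula, not something taken from this talk, and the mixing and $\Delta m^2$ values below are illustrative assumptions:

```python
import math

def p_mumu_survival(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavor vacuum survival probability
    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with the baseline L in km, energy E in GeV and dm2 in eV^2.
    The default parameters are illustrative atmospheric-sector values."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Example: survival probability over a 2100 km baseline (the
# JHF-to-Beijing scenario discussed later) at E = 5 GeV.
p = p_mumu_survival(L_km=2100.0, E_GeV=5.0)
```

Measuring the energy dependence of such probabilities is exactly why the detector requirements above emphasize lepton identification and energy resolution.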
These detectors are either too expensive to be very large, or too large to have a magnet for charge identification. In this talk, I propose a new type of detector – a water [Čerenkov ]{}calorimeter – which fulfills all the above requirements.

Water [Čerenkov ]{}Calorimeter
==============================

Water [Čerenkov ]{}ring image detectors have been successfully employed on a large scale, for obvious economic reasons, by the IMB and the Super-Kamiokande experiments. However, a substantial growth in size beyond these detectors appears problematic because of the cost of excavation and photon detection. To overcome these problems, we propose here a water [Čerenkov ]{}calorimeter with a modular structure, as shown in Fig. 1. Each tank has dimensions $\mathrm 1\times 1\times 10~m^3$, holding a total of 10 t of water. The exact segmentation of the water tanks is to be optimized based on the neutrino beam energy, the experimental hall, the cost, etc. For simplicity, we discuss in the following a 1 m thick tank, corresponding to 2.77 X$_0$ and 1.5 $\lambda_0$. The water tank is made of PVC with an Aluminum lining. Čerenkov light is reflected by the Aluminum and transported towards the two ends of the tank, which are covered by wavelength shifter (WLS) plates. Light from the WLS is guided to a 5” photomultiplier tube (PMT), as shown in Fig. 2. The modular structure of such a detector allows it to be placed at a shallow depth in a cavern of any shape (or possibly even at the surface), therefore reducing the excavation cost. The photon collection area is also reduced dramatically, making it possible to build a large detector at a moderate cost. A through-going charged particle emits about 20,000 [Čerenkov ]{}photons per meter. Assuming a light attenuation length in water of 20 m and a reflection coefficient of the Aluminum lining of 90%, we obtain a light collection efficiency of about 20%.
Combined with the quantum efficiency of the PMT (20%), the WLS collection efficiency (25%) and an additional safety factor of 50%, the total light collection efficiency is about 0.5%. This corresponds to 100 photoelectrons per meter, which can be translated to a resolution of $\mathrm 4.5\%/\sqrt{E}$. This is slightly worse than the Super-Kamiokande detector and the liquid Argon TPC, but much better than iron calorimeters[@nuf]. If this detector is built for a $\nu$ factory, a tracking device, such as Resistive Plate Chambers (RPC)[@rpc], will be needed between the water tanks to identify the sign of the charge. RPCs can also be helpful for pattern recognition, to determine precisely muon directions, and to identify cosmic muons for either veto or calibration. The RPC strips will run in both X- and Y-directions with a width of 4 cm. A total of $\sim 10^5 ~m^2$ is needed for a 100 kt detector, which is more than an order of magnitude larger than the current scale[@rpc]. R&D efforts would be needed to reduce costs. The magnet system for such a detector can be segmented in order to minimize dead material between the water tanks. If the desired minimum muon momentum is 5 GeV/c, the magnet must be segmented every 20 m. A detailed magnet design still needs to be worked out; here we just present a preliminary idea to start the discussion. A toroid magnet similar to that of Minos, as shown in Fig. 3, can produce a magnetic field $\mathrm B>1.5~\mathrm T$ for a current $\mathrm I> 10^4$ A. The thickness of the magnet needed is determined by the error from multiple scattering: $\mathrm \Delta P/P = 0.0136\sqrt{X/X_0}/0.3BL$, where L is the thickness of the magnet. For L=50 cm, we obtain an error of 32%. The measurement error is given by $\mathrm \Delta P/P \simeq \delta\alpha/\alpha= \sigma P/0.3rBL$, where r is the track length before or after the magnet and $\sigma$ is the pitch size of the RPC.
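The photoelectron-yield and momentum-error arithmetic above can be checked in a few lines. All numbers are restated from the text except the iron radiation length, a standard value supplied here as an assumption:

```python
import math

# --- Light-yield chain (values restated from the text) ---
photons_per_m = 20000   # Cerenkov photons emitted per meter of track
geometric_eff = 0.20    # collection after attenuation + Al reflection
pmt_qe        = 0.20    # PMT quantum efficiency
wls_eff       = 0.25    # wavelength-shifter collection efficiency
safety        = 0.50    # additional safety factor
total_eff = geometric_eff * pmt_qe * wls_eff * safety  # about 0.5%
pe_per_m  = photons_per_m * total_eff                  # about 100 pe/m

# --- Momentum errors for the toroid option ---
B, L = 1.5, 0.5         # field (T) and magnet thickness (m)
X0_IRON = 0.0176        # iron radiation length in m (standard value)
# Multiple scattering: dP/P = 0.0136*sqrt(X/X0)/(0.3*B*L), about 32%.
ms_error = 0.0136 * math.sqrt(L / X0_IRON) / (0.3 * B * L)
# Measurement: dP/P = sigma*P/(0.3*r*B*L) with sigma = 4 cm RPC pitch,
# P = 5 GeV/c and r = 10 m of lever arm, about 9%.
meas_error = 0.04 * 5.0 / (0.3 * 10.0 * B * L)
```

Because the multiple-scattering term dominates, a finer RPC pitch would not improve the charge measurement much; a thicker magnet would.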
For P=5 GeV/c, $\sigma=4$ cm and r=10 m, the measurement error is 9%, much smaller than that from multiple scattering. It should be noted that $\mathrm P_{\mu}$ is also measured from the range. By requiring that both $\mathrm P_{\mu}$ measurements are consistent, we can eliminate most of the fake wrong-sign muons. The iron needed for such a magnet is about 20% of the total mass of the water. The cost of such a detector is moderate compared to other types of detectors, enabling us to build a detector as large as 100 - 1000 kt. The combination of size, excellent energy resolution and pattern recognition capabilities makes this detector very attractive. An incomplete but rich physics program can be listed as follows: 1) neutrino physics from $\nu$ factories or $\nu$ beams; 2) improved measurements of atmospheric neutrinos; 3) observation of supernovae at distances up to hundreds of kpc; 4) determination of the primary cosmic-ray composition by measuring multiple muons; 5) searches for WIMPs, looking at muons from the core of the earth or the sun, with a sensitivity covering DAMA’s allowed region; 6) searches for monopoles, looking at slow moving particles with high dE/dx; 7) searches for muons from point sources; 8) searches for exotic particles such as fractionally charged particles. Depending on the location of the detector, other topics in cosmic-ray physics can be explored.

Performance of Water Čerenkov Calorimeter
=========================================

To study the performance of such a detector, we consider in the following two possible applications in the near future: the JHF neutrino beam to Beijing with a baseline of 2100 km and the NuMI beam from Fermilab to Minos with a baseline of 735 km. The energy spectra of visible $\nu_{\mu}$ CC events are shown in Fig. 4. We use a full GEANT Monte Carlo simulation program and the Minos neutrino event generator. A CC $\nu$ signal event is identified by its accompanying lepton, reconstructed as a jet. Fig.
5 shows the jet energy normalized by the energy of the lepton. It can be seen from the plot that leptons from CC events can indeed be identified and that the jet reconstruction algorithm works properly. It is also shown in the figure that the energy resolution for neutrino CC events is about 10% in both cases. The neutrino CC events are identified by the following 5 variables: $\mathrm E_{max}/\mathrm E_{jet}$, $\mathrm L_{shower}/\mathrm E_{jet}$, $\mathrm N_{tank}/\mathrm E_{jet}$, $\mathrm R_{xy}/\mathrm E_{tot}$, and $\mathrm R^{max}_{xy}/\mathrm E_{tot}$, where $\mathrm E_{jet}$ is the jet energy, $\mathrm E_{tot}$ the total visible energy, $\mathrm E_{max}$ the maximum energy in a cell, $\mathrm L_{shower}$ the longitudinal length of the jet, $\mathrm N_{tank}$ the number of cells with energy more than 10 MeV, $\mathrm R_{xy}$ the transverse event size and $\mathrm R^{max}_{xy}$ the transverse event size at the shower maximum. Fig. 6 shows $\mathrm R^{max}_{xy}/\mathrm E_{tot}$ for all the different neutrino flavors. It can be seen that $\nu_e$ CC events can be selected with reasonable efficiency and moderate backgrounds. Table 2 shows the final results from this pilot Monte Carlo study. For $\nu_e$ and $\nu_{\mu}$ events, $\nu_{\tau}$ CC events are the dominant backgrounds, while for $\nu_{\tau}$, the main background is $\nu_e$. It is interesting to see that this detector can identify $\nu_{\tau}$ in a statistical way. Similar results are obtained for a detector with 0.5 m water tanks without RPCs. These results are similar to or better than those from water [Čerenkov ]{}image detectors[@other] and iron calorimeters[@wai2].

  ----------------- --------- ------------- -------------- --------- -------------
                    $\nu_e$   $\nu_{\mu}$   $\nu_{\tau}$   $\nu_e$   $\nu_{\mu}$
  CC Eff.           30%       53%           9.3%           15%       53%
  $\nu_{e}$ CC      -         $>$1300:1     3:1            -         $>$1300:1
  $\nu_{e}$ NC      166:1     665:1         60:1           600:1     $>$610:1
  $\nu_{\mu}$ CC    700:1     -             270:1          14000:1   -
  $\nu_{\mu}$ NC    92:1      $>$6000:1     39:1           320:1     2000:1
  $\nu_{\tau}$ CC   20:1      12:1          -              33:1      18:1
  $\nu_{\tau}$ NC   205:1     1100:1        61:1           530:1     3200:1
  ----------------- --------- ------------- -------------- --------- -------------

[Table 2. Results from the Monte Carlo simulation: efficiency vs. background rejection power for the different flavors.]{}

Summary
=======

In summary, the water [Čerenkov ]{}calorimeter is a cheap and effective detector for $\nu$ factories and $\nu$ beams. The performance found in the Monte Carlo simulation is excellent for $\nu_e$ and $\nu_{\tau}$ appearance and $\nu_{\mu}$ disappearance. Such a detector is also very desirable for cosmic-ray physics and astrophysics. There are no major technical difficulties, although R&D and detector optimization are needed.

Acknowledgments {#acknowledgments .unnumbered}
===============

I would like to thank G. Gratta, S. Wojcicki, L. Wai and H.S. Chen for many useful discussions.

[99]{}
See for example, C. Albright [*et al.*]{}, hep-ph/0008064.
K. Dick [*et al.*]{}, hep-ph/0008016.
C. Bacci [*et al.*]{}, Nucl. Phys. Proc. Suppl. 78 (1999) 38.
Y. Itow [*et al.*]{}, “Letter of Intent: A Long Baseline Neutrino Oscillation Experiment using JHF 50 GeV Proton-Synchrotron and the Super-Kamiokande Detector”.
L. Wai, private communication.
--- abstract: 'Domain adaptation is a key feature in Machine Translation. It generally encompasses terminology, domain and style adaptation, especially for human post-editing workflows in Computer Assisted Translation (CAT). With Neural Machine Translation (NMT), we introduce a new notion of domain adaptation that we call “specialization”, which shows promising results both in learning speed and in adaptation accuracy. In this paper, we propose to explore this approach under several perspectives.' author: - | Christophe Servan    [and]{}    Josep Crego   [and]{}   Jean Senellart\ [firstname.lastname@systrangroup.com]{}\ SYSTRAN / 5 rue Feydeau, 75002 Paris, France\ bibliography: - 'eacl2017.bib' title: | Domain specialization: a post-training domain adaptation\ for Neural Machine Translation ---

Introduction
============

Domain adaptation techniques have successfully been used in Statistical Machine Translation. It is well known that a model optimized for a specific genre (literature, speech, IT, patents...) obtains higher accuracy than a “generic” system. The adaptation process can be done before, during or after the training process. We propose to explore a new post-training approach, which incrementally adapts a “generic” model to a specific domain by running additional training epochs over newly available in-domain data. In this way, adaptation proceeds incrementally as new in-domain data becomes available, generated by human translators in a post-editing context, similar to the Computer Assisted Translation (CAT) framework described in [@Cettolo2014].

#### Contributions

The main contribution of this paper is a study of the new “specialization” approach, which aims to adapt a generic NMT model without a full retraining process. It consists in using the generic model as the starting point of a retraining phase that only involves additional in-domain data.
Results show this approach can reach good performance in far less time than a full retraining, which is a key feature for rapidly adapting models in a CAT framework.

Approach
========

Following the framework proposed by [@Cettolo2014], we seek to incrementally adapt a generic model to a specific task or domain. They show that incremental adaptation brings new information, such as terminology or style (possibly specific to the human translator), into a phrase-based statistical machine translation system. Recent advances in Machine Translation focus on Neural Machine Translation approaches, for which we propose a method of incremental adaptation to a specific domain within this framework. The main idea of the approach is to specialize a generic model already trained on generic data. Hence, we retrain the generic model on specific data through several training iterations (see Figure \[fig:selection\]). The retraining process consists in re-estimating the conditional probability $p(y_1, \ldots , y_{m} |x_1, \ldots , x_n )$, where $(x_1, \ldots , x_n )$ is an input sequence of length $n$ and $(y_1, \ldots , y_{m})$ is its corresponding output sequence, whose length $m$ may differ from $n$. This is done without dropping the previously learned states of the Recurrent Neural Network. The resulting model is considered as adapted, or specialized, to a specific domain.

![\[fig:model\]The generic model is trained with generic data; the generic model obtained is then retrained with in-domain data to generate a specialized model.[]{data-label="fig:selection"}](figure_training){width="\linewidth"}

Experiment framework
====================

We create our own data framework, described in the next section, and we evaluate our results using the BLEU score [@papineni02bleu] and the TER [@Snover2006]. The Neural Machine Translation system combines the attention model approach [@Luong2015effective] jointly with the sequence-to-sequence approach [@Sutskever2014].
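The specialization idea can be illustrated with a deliberately tiny stand-in model. This is not the paper's LSTM setup: a one-parameter least-squares model trained by gradient descent replaces the NMT network, and all data and hyperparameters below are invented for the illustration. The point is only the training schedule: full training on generic data, then a few cheap extra epochs on in-domain data starting from the generic parameters.

```python
def train_epochs(w, data, lr=0.1, epochs=1):
    """One-parameter least-squares model y ~ w*x trained by
    full-batch gradient descent; stands in for the NMT model."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "Generic" data follows y = 1.0*x; "in-domain" data follows y = 1.5*x.
generic   = [(x / 10.0, x / 10.0) for x in range(1, 11)]
in_domain = [(x / 10.0, 1.5 * x / 10.0) for x in range(1, 11)]

w_generic = train_epochs(0.0, generic, epochs=200)        # full training
w_special = train_epochs(w_generic, in_domain, epochs=5)  # specialization

# After only a few extra epochs, the specialized parameters fit the
# in-domain data better than the generic ones did.
```

As in the paper's setting, the cost of specialization scales with the small in-domain set and the handful of extra epochs, not with the generic training corpus.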
According to our approach, we compare several configurations whose main difference is the training corpus. On one hand, we consider the generic data plus several amounts of in-domain data for the training process. On the other hand, only the generic data are considered for the training process, and several amounts of in-domain data are then used only for the specialization process in a retraining phase. The main idea behind these experiments is to simulate an incremental adaptation framework, which enables the adaptation process only when data become available (e.g., translation post-edits produced by a human translator). The approach is studied in the light of two experiments and a short linguistic study. The first experiment concerns the impact of the “specialization” approach over several additional epochs; the second focuses on the amount of data needed to observe a significant impact on the translation scores. Finally, we compare some translation examples from several outputs.

Training data {#sec:data}
-------------

Table \[tab:alldata\] presents all the data used in our experiments. We build a generic model from comparable amounts of several corpora, each belonging to a specific domain (IT, literature, news, parliament). All corpora are available from the OPUS repository [@Tiedemann2012parallel]. We specialize the generic model using a corpus extracted from the European Medicines Agency data (*emea*), composed of more than 650 documents, which are medicine manuals. We set apart $2K$ lines as a test corpus; then, to simulate the incremental arrival of data, we created four training corpora containing $500$, $5K$, $50K$ and all the lines of the training corpus. These amounts of data correspond roughly to $10\%$ of a document, one document and ten documents, respectively.
  Type   Domain        \#lines   \#src tokens   \#tgt tokens
  ------ ------------- --------- -------------- --------------
         *generic*     3.4M      73M            86M
         *emea-0.5K*   500       5.6K           6.6K
         *emea-5K*     5K        56.1K          66.4K
         *emea-50K*    50K       568K           670K
         *emea-full*   922K      10.5M          12.3M
  dev.   *generic*     2K        43.7K          51.3K
  test   *emea*        2K        35.6K          42.9K

  : \[tab:alldata\] Details of the corpora used in this paper.

Training Details {#ssec:training}
----------------

The Neural Machine Translation approach we use follows the sequence-to-sequence approach [@Sutskever2014] combined with the attentional architecture [@Luong2015effective]. In addition, all the generic and in-domain data are pre-processed using the *byte pair encoding* compression algorithm [@sennrich2016improving] with 30K operations, to avoid out-of-vocabulary words. We keep the most frequent $32K$ words for both source and target languages, and use $4$ hidden layers with $500$-dimensional embeddings and $800$ bidirectional Long Short-Term Memory (bi-LSTM) cells. During training we use a mini-batch size of $64$ with the dropout probability set to $0.3$. We train our models for $18$ epochs; the learning rate is set to $1$ and starts decaying by $0.5$ after epoch $10$. It takes about $8$ days to train the generic model on our NVidia GeForce GTX 1080. The models were trained with the open-source toolkit `seq2seq-attn`[^1] [@kim2016sequence].

Experiments
-----------

  Models                    BLEU    TER
  ------------------------- ------- -------
  *generic*                 26.23   62.47
  *generic*$+$*emea-0.5K*   26.48   63.09
  *generic*$+$*emea-5K*     28.99   58.98
  *generic*$+$*emea-50K*    33.76   53.87
  *generic*$+$*emea-full*   41.97   47.07

  : \[tab:resultsFull\] BLEU and TER scores of the fully trained systems.

As baselines, we fully trained five systems: one with the generic data only (*generic*) and the others with the generic data plus various amounts of in-domain data: $500$ lines (*emea-0.5K*), $5K$ lines (*emea-5K*), $50K$ lines (*emea-50K*) and the full in-domain corpus (*emea-full*). The evaluation is done on the in-domain test set (*emea-tst*) and presented in Table \[tab:resultsFull\].
Unsurprisingly, the more in-domain data the model is trained with, the better the BLEU and TER scores. These models serve as baselines for the incremental adaptation experiments.

### Performances among training iterations {#sec:stuEpochs}

![\[fig:incEpohcs\]Curve of “specialization” performances among epochs.[]{data-label="fig:selection"}](emea_curves){width="\linewidth"}

The first study aims to evaluate the approach over additional training iterations (also called “epochs”). Figure \[fig:incEpohcs\] presents the performance curve when the specialization approach is applied to the *generic* model using all the in-domain data (*emea-full*). We compare the results with two baselines: at the top of the graphic, the line corresponds to the score obtained by the model trained with both generic and in-domain data (noted *generic*$+$*emea-full*); at the bottom, the line is associated with the generic model, trained with only generic data (noted *generic*). The curve is obtained by specializing the generic model with five additional epochs on all the in-domain data (noted *specialized model with emea*). In the graphic, we can observe a gap of more than $13$ points with the first additional epoch; the BLEU score then improves by around $0.15$ points with each additional epoch and tends to stall after $10$ epochs (not shown). So far, the specialization approach does not replace a full retraining, as the specialization curve does not reach the *generic*$+$*emea-full* model. However, the retraining time of one additional epoch on all the in-domain data is around $1$ hour and $45$ minutes, while a full retraining would take more than $8$ days. In our CAT framework, even $1$ hour and $45$ minutes is too much; the adaptation process needs to be performed faster, with smaller amounts of data such as part of a document ($500$ lines) or a full document ($5K$ lines).
Considering the time constraint, the approach should therefore be performed with one additional epoch.

### Performances among data size

The second experiment concerns the specialization performance when we vary the amount of data. Using the data presented in Table \[tab:alldata\], we apply the specialization process to the generic model using $0.5K$, $5K$, $50K$ and all of the in-domain data (as presented in section \[sec:data\]). Following our previous study (see section \[sec:stuEpochs\]), we focus on the results obtained with only one additional epoch.

  Training corpus           Specialization corpus   BLEU    TER
  ------------------------- ----------------------- ------- -------
  *generic*                 N/A                     26.23   62.47
  *generic*$+$*emea-0.5K*   N/A                     26.48   63.09
  *generic*$+$*emea-5K*     N/A                     28.99   58.98
  *generic*$+$*emea-50K*    N/A                     33.76   53.87
  *generic*$+$*emea-full*   N/A                     41.97   47.07
  *generic*                 *emea-0.5K*             27.33   60.92
  *generic*                 *emea-5K*               28.41   58.84
  *generic*                 *emea-50K*              34.25   53.47
  *generic*                 *emea-full*             39.44   49.24

  : \[tab:results\] BLEU and TER scores of the specialization approach on the in-domain test set.

  Process          Corpus        \#lines   \#src tokens   \#tgt tokens   Process time
  ---------------- ------------- --------- -------------- -------------- ----------------
  Train            *generic*     3.4M      73M            86M            8 days
  Specialization   *emea-0.5K*   500       5.6K           6.6K           $<$1 min
                   *emea-5K*     5K        56.1K          66.4K          $\approx$1 min
                   *emea-50K*    50K       568K           670K           $\approx$6 min
                   *emea-full*   922K      10.5M          12.3M          105 min

  : \[tab:timeresults\] Time spent by the training and the specialization processes, according to the amount of data used.

  Source:                      What benefit has SonoVue shown during the studies ?
  Reference:                   Quel est le bénéfice démontré par SonoVue au cours des études ?
  *generic model*              Quel avantage SSonVue a-t-il montré pendant les études ?
  specialization *emea-0.5K*   Quel bénéfice SSonVue a-t-il montré lors des études ?
  specialization *emea-5K*     Quel bénéfice SSonVue a-il montré pendant les études ?
  specialization *emea-50K*    Quels est le bénéfice démontré par SonoVue au cours des études ?

  : \[tab:ex\] Example of translation outputs.

We can observe that with only 500 lines, the improvement reaches more than $1$ BLEU point and $2$ TER points. Then, with 10 times more additional data, BLEU and TER scores improve over the baseline by $2$ and nearly $4$ points, respectively. With more additional data ($10$ documents), the improvements reach $8$ BLEU points and $9$ TER points. Finally, with all the in-domain data available, the specialization improves the baseline by $13$ points on both BLEU and TER scores. Comparing the approach with a full retraining on the generic data plus the same amount of in-domain data, it appears that our approach reaches nearly the same results. Moreover, with $50K$ lines of in-domain data, the specialization approach performs better by $0.5$ BLEU and TER points. But when much more in-domain data is available, the specialization approach does not outperform the full retraining ($39.44$ against $41.97$ BLEU points).

Discussion
----------

Focussing on the time constraint of the CAT framework, Table \[tab:timeresults\] presents the time taken by our specialization approach. It goes from less than one minute to more than $1$ hour and $45$ minutes. If we compare this table with Table \[tab:results\], we observe that this approach enables gaining $1$ BLEU point in less than $1$ minute, $2$ points in $1$ minute, and more than $6$ BLEU points in $6$ minutes. The ratio of “time spent” to “score gained” is impressive. Table \[tab:ex\] shows an example of the outputs obtained with the specialization approach. We compare the generic model to the models specialized with, respectively, $0.5K$, $5K$ and $50K$ lines of in-domain data.
We can clearly see the improvements obtained in the translation outputs. Even if the last one does not stick strictly to the reference, the translation output can be considered a good translation (syntactically well formed and semantically equivalent). This specialization approach can be seen as an optimization process (as in the classical phrase-based approach), which aims to tune the model [@Och2003].

Related work
============

In recent years, domain adaptation for machine translation has received a lot of attention and study. These approaches can act at three levels: pre-processing, training, and post-processing. In a CAT framework, most approaches focus on the pre-processing or the post-processing to adapt models. Pre-processing approaches like data selection, introduced by [@Lue2007], improved by [@Gao2002improving] and many others [@moore2010dataSelection; @Axelrod2011], are effective and their impact has been studied [@Lambert2011; @Cettolo2014; @Wuebker2014]. But the main drawback of these approaches is that they need a full retraining to be effective. The post-training family concerns methods which aim to update the model, or to optimize it for a specific domain. Our approach belongs to this category. It is inspired by [@Luong2015], who train a generic model and then continue training over a dozen epochs on full in-domain data (the TED corpus). We believe this approach is underestimated, and we propose to study its efficiency in a specific CAT framework with little data. On one hand, we follow this approach by starting from a fully trained generic model. On the other hand, we continue training only on small amounts of specific data over a few additional epochs (from 1 to 5). In this way, our approach is slightly different and can be equated to a tuning process [@Och2003].

Conclusion
==========

In this paper we propose a study of the “specialization” approach.
This domain adaptation approach shows good improvements with little in-domain data in a very short time. For instance, to gain 2 BLEU points, we used 5K lines of in-domain data, which takes 1 minute to process. Moreover, this approach reaches the same results as a full retraining when $10$ documents are available. Within a CAT framework, this approach could be a solution for incremental adaptation of NMT models, and could be performed between two rounds of post-editing. As future work, we propose to evaluate our approach in a real CAT framework.

[^1]: <https://github.com/harvardnlp/seq2seq-attn>
--- abstract: 'It was predicted by Wigner in 1934 that the electron gas will undergo a transition to a crystallized state when its density is very low. Whereas significant progress has been made towards the detection of electronic Wigner states, their clear and direct experimental verification still remains a challenge. Here we address signatures of Wigner molecule formation in the transport properties of InSb nanowire quantum dot systems, where a few electrons may form localized states depending on the size of the dot (i.e. the electron density). By a configuration interaction approach combined with an appropriate transport formalism, we are able to predict the transport properties of these systems, in excellent agreement with experimental data. We identify specific signatures of Wigner state formation, such as the strong suppression of the antiferromagnetic coupling, and are able to detect the onset of Wigner localization, both experimentally and theoretically, by studying different dot sizes.' author: - 'L.H. Kristinsdóttir' - 'J.C. Cremon' - 'H.A. Nilsson' - 'H.Q. Xu' - 'L. Samuelson' - 'H. Linke' - 'A. Wacker' - 'S.M. Reimann' date: 'December 1, 2010' title: Signatures of Wigner Localization in Epitaxially Grown Nanowires --- The transition to a Wigner crystal [@wigner1934] can be viewed as a contest between the electronic Coulomb repulsion and the quantum mechanical kinetic energy. If the Coulomb repulsion dominates, the many-particle ground state and its excitations resemble a distribution of classical particles located in a lattice minimizing the Coulomb energy. In the bulk, the transition to a Wigner crystal is only expected for extremely dilute systems [@ceperley1980; @drummond2004], while in lower dimensions, or for broken translational invariance, it becomes accessible at higher densities [@tanatar1989; @jauregui1993; @rapisarda1996]. 
A lot of work has focused on finite-sized two-dimensional quantum dots [@creffield1999; @egger1999; @yannouleas1999; @filinov2001; @reimann2002], where the crossover from liquid to localized states in the transport properties of the nanostructure has been addressed [@cavaliere2009; @EllenbergerPRL2006]. For one-dimensional systems, localization has been reported in cleaved edge overgrowth structures [@AuslaenderScience2005] and for holes in carbon nanotubes [@deshpande2008]. These highly correlated one-dimensional systems exhibit a variety of fascinating features as reviewed recently [@DeshpandeNature2010]. Here we introduce a third system, based on epitaxially grown semiconductor nanowires, which allows a straightforward application of tunneling spectroscopy compared to the rather involved cleaved edge overgrowth structures and avoids further complications due to the isospin degree of freedom in carbon nanotubes. InSb nanowires [@NilssonNL2009], as used here, allow for the realization of quantum dots, where the electronic confinement along the nanowire is established by Schottky barriers to gold contact stripes, see Fig. \[fig:SEMandDensplot\](a). Varying the distance between the stripes (here: 70 nm and 160 nm) allows for the systematic realization of wires with specific length and thereby controlled electron densities. For our calculations we model the nanowire as a hard-wall cylinder with the experimental radius 35 nm. The Schottky barrier at the semiconductor-metal interface creates a standard quantum well with a width equal to the contact spacing. The Coulomb interaction between the electrons is approximated as that in a cylinder embedded in homogeneous matter, taking into account the different dielectric constants of the wire and the surrounding material [@SlachmuyldersPRB2006; @LiPRB2008]. Exact many-particle states in the wire are evaluated with the configuration interaction method. 
The results can be understood in terms of two limiting cases: a short wire with no electron localization and a long wire with Wigner localization[^1]. ![(a) SEM-image of the InSb nanowire on a SiO$_2$ capped Si substrate, where the quantum dot is defined by Schottky barriers of the gold contacts (‘source’ and ‘drain’). Calculated electron density in nanowires of lengths 70 nm, 160 nm, and 300 nm is displayed in panels (b,c,d), respectively, for the lowest two-electron states (excitation energies are given; ‘S’ stands for singlet and ‘T’ for triplet). For the two-particle ground state the pair-correlated density is shown with the position of one electron marked by a black arrow.[]{data-label="fig:SEMandDensplot"}](Fig1_SEM_density_new.png){width="\figwidth\columnwidth"} The first limiting case, where interaction is dominated by kinetic energy, can be described by the independent-particle shell model. There the two-particle ground state is obtained by populating the lowest single-particle level with a spin-up and a spin-down electron. Thus the spatial electron density follows that of the lowest single-particle level and exhibits a peak in the center of the quantum dot. The lowest excited two-particle state is obtained by moving one electron to the first excited single-particle level at the cost of the level spacing energy $\Delta\varepsilon$. Thus one expects the two-particle excitation energy $\Delta E_2\approx \Delta\varepsilon$. Furthermore the spin degrees allow for four realizations of such an excited two-particle state, which are typically split into a triplet and a singlet due to exchange interaction. In the second limiting case, Wigner localization, the electrons are localized at different positions along the wire, minimizing the Coulomb repulsion. Thus the two-particle ground state density exhibits two peaks and a minimum in the center of the nanowire segment. 
As the electrons can have arbitrary spin on each site, one has four realizations of this configuration, with a minor energy split between a singlet and a triplet. Hence, we expect a very small $\Delta E_2\ll\Delta\varepsilon$, while further excitations are significantly higher in energy and exhibit a different spatial distribution of charge. At the onset of localization, the electron density is expected to resemble two weakly separated peaks in the two-particle ground state. The interaction of the electrons is substantial, without yet dominating the kinetic part. Hence the two-particle excitation energy is considerably lower than the single-particle excitation energy, $\Delta E_2<\Delta\varepsilon$. However, as the two electrons are not yet fully crystallized, $\Delta E_2$ is expected to be well above zero. Tunneling spectroscopy is a convenient way to study ground and excited states in quantum dot systems. Here we can use the gold contacts (Fig. \[fig:SEMandDensplot\](a)) as source and drain by applying a bias $V_\text{sd}$ between both stripes. The nanowire is located on a highly doped Si substrate covered by an insulating SiO$_2$ layer, which allows for application of a back-gate voltage $V_\text{bg}$ providing an approximately homogeneous shift in energy of all levels in the dot. Varying $V_\text{sd}$ and $V_\text{bg}$ provides the characteristic charging diagrams (see e.g. [@reimann2002]) displayed in Figs. \[fig:L070\](c) and \[fig:L160\](c) at a temperature of 300 mK. Here high differential conductance indicates that the electron addition energy (affinity) coincides with the chemical potential in either of the gates. The diamonds of vanishing conductance centered around zero $V_\text{sd}$ are the regions of Coulomb blockade, where the chemical potentials of both reservoirs are above the energy difference between the $(N-1)$- and $N$-electron ground state and below the energy difference between the $N$- and $(N+1)$-electron ground state. 
As no further lines of high conductance are found for lower gate bias, we assume that the lowest diamond corresponds to $N=1$. Half the width of this diamond defines the charging energy $U$. ![Results for an InSb nanowire of length $L=70$ nm. (a) Simulated differential conductance as a function of bias ($V_{\text{sd}}$) and gate energy $E_g$. The number of particles in the dot, $N$, is shown in each diamond. (b) A closer look at the area marked by a dashed box in panel (a). The conduction lines, where tunneling into the $N=1$ ground state and first excited state sets in, are marked by the symbols and , respectively. The corresponding lines for the entering of the second electron, where the dot reaches the $N=2$ ground state and the $N=2$ excited state, are marked by and symbols, respectively. The separation between these lines provides the excitation energies from the $N=1$ and $N=2$ ground states, $\Delta\varepsilon$ and $\Delta E_2$, respectively, which are depicted by arrows. (c) Experimental differential conductance as a function of bias ($V_{\text{sd}}$) and gate voltage ($V_{\text{bg}}$). (d) Experimental differential conductance as a function of magnetic field ($B$) and gate voltage ($V_{\text{bg}}$).[]{data-label="fig:L070"}](Fig2_condCondCutExp_L070.png){width="\figwidth\columnwidth"} ![Results for an InSb nanowire of length $L=160$ nm, panels as in Fig. \[fig:L070\][]{data-label="fig:L160"}](Fig3_condCondCutExp_L160.png){width="\figwidth\columnwidth"} Based on the calculated many-particle states, electron transport is treated within the master equation model [@ChenPRB1994; @KinaretPRB1992; @PfannkuchePRL1995] with tunneling matrix elements calculated as in Ref. [@cavaliere2009]. The results are displayed in Figs. \[fig:L070\](a) and \[fig:L160\](a) for the respective experimental samples displayed in panel (c). 
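As a rough orientation for reading such charging diagrams, the textbook constant-interaction model (a much cruder description than the configuration-interaction treatment used in this work) already shows how the diamond widths encode the charging energy $U$ and the level spacing $\Delta\varepsilon$. The following sketch is purely illustrative; the values and helper names are assumptions loosely inspired by the 70 nm sample, not the paper's method.

```python
# Constant-interaction sketch -- NOT the configuration-interaction method
# used in the paper, just the textbook reading of a charging diagram.
# E(N) = sum of occupied single-particle levels + U*N*(N-1)/2.
U = 6.5                    # charging energy, meV (illustrative)
EPS = [0.0, 16.0, 48.0]    # single-particle levels, meV (illustrative)

def total_energy(n):
    """Ground-state energy of n electrons; each orbital holds two spins."""
    levels = sorted(EPS + EPS)
    return sum(levels[:n]) + U * n * (n - 1) / 2

def mu(n):
    """Electrochemical potential for adding the n-th electron."""
    return total_energy(n) - total_energy(n - 1)

# Coulomb-diamond widths (addition-energy gaps):
addition_gap_1 = mu(2) - mu(1)   # = U: second electron fills the same orbital
addition_gap_2 = mu(3) - mu(2)   # = U + level spacing
```

In this simple picture the $N=1$ diamond closes after a gap of $U$ alone, while the $N=2$ diamond is wider by the level spacing, which is why the diamonds directly expose both energy scales.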
We find that all Coulomb diamonds agree rather well, which indicates that the radial excitations, which are disregarded in our effectively one-dimensional model, only become relevant for higher particle numbers in the dot. Now we focus on the excited states and show that the experimental conductance data along with our theoretical calculations allow for a verification of the Wigner localization scenario described above. For a 70 nm wire, the 2-electron density along the wire is a single peak, see Fig. \[fig:SEMandDensplot\](b). This corresponds to the independent-particle shell model as described above. In (b) we have marked the lines where the first electron enters the one-electron ground state and the one-electron excited state by the symbols and , respectively. This reflects the level spacing $\Delta\varepsilon=12$ meV as shown by the horizontal arrow. Similarly, starting from the one-electron ground state, the second electron enters the dot reaching the two-electron ground state and the two-electron excited state at lines marked by the and symbols. The separation between these two lines represents the excitation energy $\Delta E_2=11$ meV. The four lines, -, can be observed in the experimental data in Fig. \[fig:L070\](c) (this is clearer for negative bias, as the measurement results in the positive bias region most likely suffer from charging of impurity states). From this figure, we read $\Delta E_2^\text{exp}=15~\mathrm{meV}\approx \Delta\varepsilon^\text{exp}=16~\mathrm{meV}$, and hence for the sample of length 70 nm, the experimental data are in good agreement with the independent-particle shell model discussed above. Note that there is some discrepancy between theory and experiment regarding the values of $\Delta\varepsilon$ and $\Delta E_2$. This could be due to bending of energy levels at the interface of the wire and the gold contacts (Schottky barriers), which makes the wire effectively shorter than the spacing of the contacts. 
Indeed, simulations of a 60 nm wire give $\Delta\varepsilon=16$ meV and $\Delta E_2=15$ meV. We can quantify the electron-electron interaction strength by the energy difference between the two-particle ground state and twice the energy of the lowest single-particle level (half-width of the $N=1$ Coulomb diamond). This provides the charging energy $U^\text{exp}=6.5$ meV for the 70 nm sample, as read from Fig. \[fig:L070\](c). That is $U<\Delta\varepsilon$, in accordance with the independent-particle shell model being valid when kinetic energy dominates interaction. For the 160 nm wire, the 2-electron density in Fig. \[fig:SEMandDensplot\](c) resembles two semi-separated peaks, indicating the onset of Wigner localization (as also seen in the pair-correlated density). In , the lines - can be identified both in the simulation and the experiment. The theoretical results give $\Delta E_2=1.0$ meV and $\Delta\varepsilon=2.8$ meV, while in the experiments we observe $\Delta E_2^\text{exp}=1.0~\mathrm{meV}< \Delta\varepsilon^\text{exp}=3.2~\mathrm{meV}$. Again, this is in agreement with the scenario of the onset of Wigner localization discussed above. Note that if we neglected the different dielectric constant outside the wire, the onset of Wigner localization would only appear at double the actual wire length. Hence the screening due to the different dielectric constants of the wire and the surrounding material is an important effect and must be included in the modelling. The energy separation between the singlet and the triplet two-electron state, the antiferromagnetic coupling, can also be manifested by the magnetic field dependence of the differential conductance. The $S_z=1$ part of the triplet is lowered in energy by a magnetic field with respect to the singlet state by $g\mu_B B$, where $\mu_B$ is the Bohr magneton. Fig. \[fig:L160\](d) shows that there is a level crossing at $B_{\rm cross}\approx0.4$ T (marked by an arrow). According to Ref. 
[@NilssonNL2009] the electronic $g$-factors are around 40 for two electrons in the dot. This provides an energy splitting $\Delta E^{\text{mag}}_2=g\mu_B B_{\text{cross}}\approx1$ meV in full agreement with the calculated value for the 160 nm wire. Note that for the 70 nm wire, the level splitting is no longer linear in the high magnetic field, $B_\text{cross}\approx4$ T, at which the crossing appears (marked by an arrow in Fig. \[fig:L070\](d)). Hence we cannot apply the same method to find $\Delta E^{\text{mag}}_2$ for the 70 nm wire, although its result $\Delta E^{\text{mag}}_2\approx10$ meV is of the correct order of magnitude. The strong suppression of this antiferromagnetic coupling between the two electrons (by an order of magnitude, while changing the length by about a factor of two) is one of the hallmarks of the Wigner crystal state [@DeshpandeNature2010]. Finally, our theoretical results indicate complete Wigner localization for a 300 nm long wire. Fig. \[fig:SEMandDensplot\](d) shows that in the two-particle ground state, the electrons are strongly localized, i.e. they form a Wigner molecule. From (b) we observe that the conductance line of the $N=2$ triplet first excited state () has merged into the line of the singlet ground state (), as expected: There is no difference in the energy of these two states, as there should be no difference between the singlet and triplet states of two strongly localized particles. More precisely we find $\Delta E_2=9.3~\mu$eV and $\Delta\varepsilon=0.84$ meV, i.e. $\Delta E_2\ll\Delta\varepsilon$. Furthermore we find $U=5.7$ meV, that is $\Delta\varepsilon \ll U$. This conforms to Wigner localization being present when kinetic energy is strongly dominated by interaction. Even for the $N=3$ ground state the theoretical calculations suggest the onset of Wigner localization in a 300 nm wire, as seen in (c). 
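The numbers quoted above can be cross-checked with back-of-envelope formulas: the Zeeman splitting $g\mu_B B_{\rm cross}$ at the level crossing, and a hard-wall particle-in-a-box estimate of the level spacing. A minimal sketch, assuming $g\approx40$ as quoted from Ref. [@NilssonNL2009] and an InSb effective mass $m^*\approx0.014\,m_e$ (a standard literature value, not taken from this paper):

```python
import math

MU_B = 5.788e-2     # Bohr magneton, meV/T
HBAR2_2ME = 38.1    # hbar^2/(2*m_e), meV*nm^2
M_EFF = 0.014       # InSb effective mass in units of m_e (assumed)

def zeeman_split(g, b_cross):
    """Singlet-triplet splitting g*mu_B*B at the level crossing, meV."""
    return g * MU_B * b_cross

def box_level_spacing(length_nm):
    """Hard-wall box spacing E2 - E1 = 3*pi^2*hbar^2/(2 m* L^2), meV."""
    return 3 * math.pi**2 * HBAR2_2ME / (M_EFF * length_nm**2)

de2_160 = zeeman_split(40, 0.4)    # ~0.9 meV, matching Delta_E2 = 1.0 meV
de_70 = box_level_spacing(70)      # ~16 meV, cf. the measured 16 meV
de_160 = box_level_spacing(160)    # ~3.1 meV, cf. the measured 3.2 meV
```

Both crude estimates land close to the quoted experimental values, which supports the shell-model reading of the 70 nm data and the near-degenerate singlet-triplet pair in the 160 nm wire.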
The small energy difference between the three lowest $N=3$ states results in a broad conduction line, marked by the symbol in (b). Unfortunately, we could not obtain experimental data for this length, since for such a long sample and low charge densities the effect of disorder is too strong, creating an effective double quantum dot. This can be identified in a charge stability diagram as additional kinks in the conductance lines that comprise the $N=1$ Coulomb diamond [@fuhrer2007]. Such kinks are not present in the stability diagram for the 160 nm wire shown in Fig. \[fig:L160\](c), implying that disorder has no significant effect in that case. Also, Coulomb interaction has been shown to decrease the effect of Anderson localization [@filinov2002]. However the theoretical results demonstrate the prospects of our approach, if more efficient gating schemes are developed. ![Simulation of a 300 nm long wire. (a) Charge stability diagram. (b) A closer look at the area in the dashed box in panel (a). Symbols - as in Fig. \[fig:L070\]. The two lowest $N=2$ states have approximately the same energy, and hence the double conduction line of the 160 nm wire ( and in b) has merged into a single line leading to the $N=2$ Coulomb diamond. The broad conduction line consisting of three lines for the three lowest $N=3$ states, is marked by the symbol . (c) Electron density of the four lowest $N=3$ states.[]{data-label="fig:L300"}](Fig4_condCondCutDens_L300.png){width="\figwidth\columnwidth"} We have demonstrated the transition from the independent-particle shell model to Wigner localization with increasing length of a semiconductor nanowire sample. While the excitation spectrum follows the independent-particle shell model for the 70 nm wire ($\Delta E_2\approx\Delta\varepsilon$), the onset of Wigner localization is observed for the 160 nm wire ($\Delta E_2<\Delta\varepsilon$) and finally our simulations show complete Wigner localization in a wire of length 300 nm. 
There the excitation energy of the two-particle state is almost negligible and much lower than the level spacing, $\Delta E_2\ll\Delta\varepsilon$, and the calculated electron density exhibits two peaks. This shows that InSb nanowires form a convenient system to investigate strongly correlated systems by well established transport measurement techniques. This work was supported by the Swedish Research Council (VR) as well as the Swedish Foundation for Strategic Research (SSF). [^1]: As we focus on 2-3 electrons, we cannot speak of a macroscopic effect such as Wigner crystallization. Hence the term *Wigner localization*.
--- abstract: 'The temperature dependences of the enhanced critical current in wide and thin Sn films exposed to the microwave field have been investigated experimentally and analyzed. It was found that the microwave field stabilizes the current state of a wide film with respect to the entry of Abrikosov vortices. The stabilizing effect of irradiation increases with frequency. Using the similarity between the effects of microwave enhancement of superconductivity observed for homogeneous (narrow films) and inhomogeneous (wide films) distributions of the superconducting current over the film width, we have succeeded in a partial extension of the Eliashberg theory to the case of wide films.' author: - 'V.M. Dmitriev' - 'I.V. Zolochevskii' - 'E.V. Bezuglyi' title: Enhancement of critical current by microwave irradiation in wide superconducting films --- Introduction ============ During the last decades, current states in wide superconducting films in the absence of external magnetic and microwave fields have been studied in considerable detail. The main property of wide films, which distinguishes them from narrow channels, is an inhomogeneous distribution of the transport current over the film width. This distribution is characterized by an increase in the current density towards the film edges due to the Meissner screening of the current-induced magnetic field. It should be emphasized that the current state of a wide film is qualitatively different from the Meissner state of a bulk current-carrying superconductor, despite their seeming resemblance. Indeed, whereas the transport current in the bulk superconductor flows only within a thin surface layer and vanishes exponentially at the distance of the London penetration depth $\lambda(T)$ from the surface, the current in the wide film is distributed over its width $w$ more uniformly, according to the approximate power-like law $[x(w-x)]^{-1/2}$ [@Larkin; @Aslamazov], where $x$ is the transversal coordinate. 
Thus the characteristic length $\lambda_\perp(T)=2\lambda^2(T)/d$ ($d$ is the film thickness), which is commonly referred to as the penetration depth of the perpendicular magnetic field, has nothing to do with any spatial scale of the current decay with the distance from the edges. In fact, the quantity $\lambda_\perp(T)$ plays the role of a “cutoff factor” in the above-mentioned law of the current distribution at the distances $x,w-x \sim \lambda_\perp$ from the film edges and thereby determines the magnitude of the edge current density. In films whose width is much larger than $\lambda_\perp(T)$ and the coherence length $\xi(T)$, the edge current density approaches the value $j_0=I/d\sqrt{\pi w\lambda_\perp}$ [@Larkin] if the total current $I$ does not exceed the homogeneous pair-breaking current $I_{\rm c}^{\rm GL}$. In such an inhomogeneous situation, the mechanism of superconductivity breaking by the transport current differs from homogeneous Ginzburg-Landau (GL) pair-breaking in narrow channels. In wide films this mechanism is associated with disappearance of the edge barrier for the vortex entry into the film when the current density at the film edges approaches the value of the order of the GL pair-breaking current density $j_{\rm c}^{\rm GL}$ [@Larkin; @Aslamazov; @Shmidt; @Likharev]. Using a qualitative estimate of $j_0 \approx j_{\rm c}^{\rm GL}$ for the current density which suppresses the edge barrier, one thus derives the expression $I_{\rm c}(T) \approx j_{\rm c}^{\rm GL}(T)d\sqrt{\pi w\lambda_\perp(T)}$ for the critical current of a wide film. This equation imposes a linear temperature dependence of the critical current, $I_{\rm c}(T) \propto (1-T/T_{\rm c})$, near the critical temperature $T_{\rm c}$, and it is widely used in analysis of experimental data (see, e.g., [@Andratskiy]). 
A quantitative theory of the resistive states of wide films by Aslamazov and Lempitskiy (AL) [@Aslamazov] also predicts the linear temperature dependence of $I_{\rm c}$ but gives a critical current larger by a factor of $1.5$ than the above estimate of $I_{\rm c}$. This result was supported by recent experiments on the critical current in wide films [@Dmitriev]. Since the parameters $\xi(T)$ and $\lambda_\perp(T)$ grow without bound as the temperature approaches $T_{\rm c}$, any film reveals the features of a narrow channel in the immediate vicinity of $T_{\rm c}$; in particular, its critical current is due to the uniform pair-breaking (narrow channel regime), thus showing the temperature dependence of the GL pair-breaking current $I_{\rm c}^{\rm GL}(T) \propto (1-T/T_{\rm c})^{3/2}$. As the temperature decreases, the film exhibits a crossover to an essentially inhomogeneous current state, in which vortex nucleation is responsible for the resistive transition; in what follows, this regime will be referred to as a wide film regime. The following quantitative criterion of such a crossover was formulated in [@Dmitriev] on the basis of careful measurements: if the temperature $T$ satisfies the implicit condition $w<4\lambda_\perp(T)$, the superconducting film can be treated as a narrow channel, whereas at $w>4\lambda_\perp(T)$ it behaves as a wide film. The physical interpretation of this criterion is quite simple: existence of the resistive vortex state requires at least two opposite vortices (vortex and anti-vortex), with the diameter $2\lambda_\perp$ each, to be placed across the film of the width $w$. At the same time, it was noted in [@Dmitriev] that after entering the wide film regime, the dependence $I_{\rm c}(T) \propto (1-T/T_{\rm c})^{3/2}$, typical for narrow channels, still holds within a rather wide temperature range, though the absolute value of $I_{\rm c}$ is lower than the pair-breaking current $I_{\rm c}^{\rm GL}(T)$. 
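The crossover criterion $w=4\lambda_\perp(T)$ can be turned into a rough estimate of the crossover temperature. A sketch assuming the two-fluid law $\lambda(T)=\lambda(0)/\sqrt{1-(T/T_{\rm c})^4}$ and $\lambda(0)\approx50$ nm for Sn (both of which are our assumptions, not values given in this paper), with SnW8-like sample dimensions ($w=25\,\mu$m, $d=136$ nm):

```python
import math

LAMBDA0 = 50.0                # nm, assumed London depth of Sn at T = 0
W_NM, D_NM = 25000.0, 136.0   # SnW8-like width and thickness, nm

def lambda_perp(t_red):
    """2*lambda(T)^2/d with the two-fluid law; t_red = T/Tc."""
    lam = LAMBDA0 / math.sqrt(1.0 - t_red**4)
    return 2.0 * lam**2 / D_NM

def crossover_t(width, thickness):
    """T/Tc at which width = 4*lambda_perp(T), i.e. w = 8*lambda(T)^2/d."""
    lam_t = math.sqrt(width * thickness / 8.0)
    return (1.0 - (LAMBDA0 / lam_t) ** 2) ** 0.25

t_star = crossover_t(W_NM, D_NM)
# t_star ~ 0.9985: for a 25-um-wide film the narrow-channel regime
# survives only within a fraction of a percent of Tc.
```

Under these assumptions the narrow-channel window is extremely close to $T_{\rm c}$, consistent with the statement that any film reveals narrow-channel features only in the immediate vicinity of the critical temperature.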
In practice, the linear temperature dependence [@Aslamazov] of the critical current becomes apparent only at low enough temperatures, when $\lambda_\perp(T)$ becomes 10–20 times smaller than the film width. Whereas the equilibrium critical current of wide films has been quite well studied, the behavior of current states in such films under nonequilibrium conditions has been much less investigated. In recent papers [@Agafonov; @Dmitriev1] it was first reported that wide superconducting films, similar to short bridges [@bridges] and narrow channels [@narrow], exhibit an increase in the critical current under microwave irradiation (superconductivity enhancement). In this paper, we present results of systematic investigations of the enhanced critical current in wide superconducting films. We argue that all essential features of the enhancement effect in wide films with an inhomogeneous current distribution are very similar to those observed before in narrow channels [@narrow]. We found that microwave irradiation stabilizes the current state with respect to vortex nucleation and thus considerably extends the temperature region of the narrow channel regime. The relatively moderate current inhomogeneity in wide films enables us to exploit the theory of superconductivity stimulation in spatially homogeneous systems, with minor modifications, for a quantitative treatment of our experimental data. Nonequilibrium critical current of superconducting channels in microwave field ============================================================================== The theory of superconductivity enhancement under microwave irradiation was created by Eliashberg [@Eliashberg1; @Eliashberg2; @Ivlev] for superconducting systems, in which the equilibrium energy gap $\Delta$ and the superconducting current density $j_{\rm s}$ are distributed homogeneously over the sample cross-section. 
The theory applies to rather narrow and thin films \[$w,d\ll\xi(T),\lambda_\perp(T)$\] with the homogeneous spatial distribution of the microwave power and, correspondingly, of the enhanced gap. The length of electron scattering by impurities, $l_{\rm i}$, is assumed to be small compared to the coherence length. According to this theory, the effect of the microwave irradiation on the energy gap $\Delta$ of a superconductor carrying the transport current with the density $j_{\rm s}$ is described by the generalized GL equation, $$\begin{aligned} \label{GLnoneq} \frac{T_{\rm c} - T}{T_{\rm c}}-\frac{7\zeta(3)\Delta^2}{8(\pi k T_{\rm c})^2}-\frac{2kT_{\rm c}\hbar}{\pi e^2 D \Delta^4 N^2(0)}j_{\rm s}^2 +\Phi(\Delta)=0.\end{aligned}$$ Here $N(0)$ is the density of states at the Fermi level, $D=v_{\rm F} l_{\rm i}/3$ is the diffusion coefficient, $v_{\rm F}$ is the Fermi velocity, and $\Phi(\Delta)$ is a nonequilibrium term arising from the nonequilibrium addition to the electron distribution function [@Eliashberg1; @Eliashberg2; @Klapwijk], $$\begin{aligned} \Phi(\Delta)&=-\frac{\pi \alpha}{2kT_{\rm c}}\left[ 1+0.11\frac{(\hbar \omega)^2}{\gamma kT_{\rm c}} -\frac{(\hbar \omega)^2}{2\pi \gamma \Delta}\left(\ln \frac{8 \Delta}{\hbar \omega}-1\right)\right],\nonumber \\ \hbar \omega &< \Delta.\label{Phi}\end{aligned}$$ In this equation, $\alpha=Dp_{\rm s}^2/ \hbar$ is the quantity proportional to the irradiation power $P$, $p_{\rm s}$ is the amplitude of the superfluid momentum excited by the microwave field, $\gamma = \hbar/ \tau_\varepsilon$, and $\tau_\varepsilon$ is the energy relaxation time. In our studies of the superconductivity enhancement, we usually measure the critical current rather than the energy gap. 
Thus, in order to compare our experimental data with the Eliashberg theory, we should express the superconducting current density $j_{\rm s}$ as a function of the energy gap, temperature and irradiation power, using and , $$\begin{aligned} \label{js} j_{\rm s}&= \eta \Delta^2 \left[ \frac{T_{\rm c} - T}{T_{\rm c}}-\frac{7\zeta(3)\Delta^2}{8(\pi k T_{\rm c})^2}+\Phi(\Delta) \right] ^{1/2}, \\ \eta &=eN(0) \sqrt\frac{\pi D}{2 \hbar kT_{\rm c}}.\label{eta}\end{aligned}$$ The extremum condition for the superconducting current, $\partial j_{\rm s}/\partial \Delta=0$, at given temperature and irradiation power results in a transcendental equation for the energy gap $\Delta$, $$\begin{aligned} \frac{T_{\rm c}-T}{T_{\rm c}}&-\frac{21\zeta(3)\Delta^2}{(4\pi k T_{\rm c})^2}-\frac{\pi \alpha}{2kT_{\rm c}} \left[ 1+0.11\frac{(\hbar \omega)^2}{\gamma kT_{\rm c}}\right.\nonumber\\ &- \left.\frac{(\hbar \omega)^2}{4\pi \gamma \Delta}\left(\frac{3}{2} \ln \frac{8 \Delta}{\hbar \omega}-1\right) \right] =0.\label{EqDelta}\end{aligned}$$ The solution of this equation, $\Delta=\Delta_{\rm m}$, being substituted into , determines the maximum value of $j_{\rm s}$, i.e., the critical current [@Dmitriev2], $$\label{IcP} I_{\rm c}^{\rm P}(T) =\eta d w \Delta_{\rm m}^2 \left[ \frac{T_{\rm c}-T}{T_{\rm c}}-\frac{7\zeta(3)\Delta_{\rm m}^2}{8(\pi k T_{\rm c})^2} + \Phi(\Delta_{\rm m}) \right] ^{1/2}.$$ This basic equation for the enhanced critical current will be used throughout this paper. 
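The transcendental equation for $\Delta_{\rm m}$ has no closed-form solution, but it is straightforward to solve numerically. Below is a minimal sketch in dimensionless units (all energies in $kT_{\rm c}$), with purely illustrative pump parameters that are our assumptions, not fitted values; the bisection bracket starts at $\hbar\omega$ because the expression for $\Phi(\Delta)$ is valid only for $\Delta>\hbar\omega$.

```python
import math

ZETA3 = 1.2020569  # Riemann zeta(3)

# All energies in units of k*Tc. Illustrative assumptions:
# W = hbar*omega/kTc (~10 GHz for Tc ~ 3.8 K), G = gamma/kTc
# (tau_eps ~ 0.4 ns), A = alpha/kTc (pump strength), T_RED = 1 - T/Tc.
W, G, A, T_RED = 0.126, 0.005, 1e-3, 0.01

def pump_term(delta):
    """Microwave term of the gap equation for Delta_m (with the 3/2 log)."""
    return (math.pi * A / 2) * (1 + 0.11 * W**2 / G
            - W**2 / (4 * math.pi * G * delta)
            * (1.5 * math.log(8 * delta / W) - 1))

def phi(delta):
    """Nonequilibrium term Phi(Delta) entering the current expression."""
    return -(math.pi * A / 2) * (1 + 0.11 * W**2 / G
             - W**2 / (2 * math.pi * G * delta)
             * (math.log(8 * delta / W) - 1))

def gap_eq(delta):
    return T_RED - 21 * ZETA3 * delta**2 / (16 * math.pi**2) - pump_term(delta)

def solve_gap(lo, hi, tol=1e-10):
    """Bisection for the root Delta_m of the transcendental gap equation."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap_eq(lo) * gap_eq(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def crit_current(delta, noneq):
    """I_c up to the constant prefactor eta*d*w*(kTc)^(5/2)."""
    return delta**2 * math.sqrt(T_RED - 7 * ZETA3 * delta**2
                                / (8 * math.pi**2) + noneq)

d_m = solve_gap(W + 1e-4, 0.35)   # Phi is valid only for Delta > hbar*omega
ic_p = crit_current(d_m, phi(d_m))
d_eq = math.sqrt(2 / 3) * 3.062 * math.sqrt(T_RED)   # equilibrium Delta_m
ic_eq = crit_current(d_eq, 0.0)
# For these parameters both the gap and the critical current come out
# above their equilibrium values, i.e. the field enhances superconductivity.
```

With $\alpha=0$ the same routine reproduces the equilibrium result $\Delta_{\rm m}=\sqrt{2/3}\,\Delta_0$; switching the pump on shifts the root upward and enhances the critical current by a few tens of percent for these (assumed) parameters.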
With no microwave field applied ($\alpha=0$), equation transforms into the following expression for the equilibrium pair-breaking current, $$\begin{aligned} \label{IcGL} I_{\rm c}(T)= I_{\rm c}^{\rm GL}(T) = \eta d w \Delta_{\rm m}^2 \left[ \frac{T_{\rm c}-T}{T_{\rm c}}-\frac{7\zeta(3)\Delta_{\rm m}^2}{8(\pi k T_{\rm c})^2} \right]^{1/2},\end{aligned}$$ where $\Delta_{\rm m}=\sqrt{2/3} \Delta_0$, and $$\label{Deltaeq} \Delta_0= \pi k T_{\rm c} \sqrt{8(T_{\rm c}- T)/7\zeta(3)T_{\rm c}}=3.062 k T_{\rm c} \sqrt{1-T/T_{\rm c}}$$ is the equilibrium value of the gap at zero transport current. When using equation for the quantity $\eta$ with the density of states given by the free electron model, $N(0)=m^2v_{\rm F}/ \pi^2 \hbar^{3}$, in calculation of the equilibrium critical current , we found a considerable discrepancy with the experimental values of $I_{\rm c}^{\rm GL}(T)$. This implies that such an estimate of $N(0)$ for the metal (Sn) used in our experiments is rather rough. The way to overcome such an inconsistency is to express $N(0)$ through the experimentally measured quantity, viz., the film resistance per square, $R^\square=R_{4.2}w/L$, where $R_{4.2}$ is the total film resistance at $T=4.2$ K and $L$ is the film length. Then equation transforms into the relation $$\eta=(e d R^\square)^{-1} \sqrt{3 \pi /2k T_{\rm c} v_{\rm F} l_{\rm i} \hbar}, \nonumber$$ which provides good agreement of equation with both the experimental values of the equilibrium pair-breaking current and the values calculated through the microscopic parameters of the film, $\xi_0$ and $\lambda_\perp(0)$ \[see below\]. Curiously, as far as we know, the temperature dependences of the enhanced critical current following from have never been quantitatively compared with experimental data. However, qualitative comparisons of the experimental dependences $I_{\rm c}^{\rm P}(T)$ with the Eliashberg theory have been attempted. 
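As a numerical sanity check of the relation for $\eta$, one can evaluate the equilibrium pair-breaking current for one of the samples. The sketch below uses the SnW10 parameters from the table; the grouping of the square-root argument as $3\pi/(2kT_{\rm c}v_{\rm F}l_{\rm i}\hbar)$ and the SI evaluation are our assumptions.

```python
import math

E, KB, HBAR = 1.602e-19, 1.381e-23, 1.055e-34   # SI constants
V_F = 6.5e5        # Fermi velocity of Sn, m/s (value quoted in the text)
ZETA3 = 1.2020569

# Sample SnW10 parameters from the table:
D_F, W_F = 181e-9, 7.3e-6              # thickness and width, m
R_SQ, TC, L_I = 0.040, 3.809, 169e-9   # Ohm, K, m

def ic_gl(t_red):
    """Equilibrium pair-breaking current (A) at t_red = 1 - T/Tc."""
    ktc = KB * TC
    eta = math.sqrt(3 * math.pi / (2 * ktc * V_F * L_I * HBAR)) / (E * D_F * R_SQ)
    delta_m = math.sqrt(2 / 3) * 3.062 * ktc * math.sqrt(t_red)
    bracket = t_red - 7 * ZETA3 * delta_m**2 / (8 * (math.pi * ktc) ** 2)
    return eta * D_F * W_F * delta_m**2 * math.sqrt(bracket)

ic = ic_gl(0.01)   # on the order of 1 mA at 1 - T/Tc = 0.01
```

A critical current on the milliampere scale at $1-T/T_{\rm c}=0.01$ is a physically reasonable magnitude for a micrometer-wide Sn film, which suggests the dimensional reading of the $\eta$ relation above is self-consistent.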
For instance, the authors of [@Klapwijk] interpreted their experimental results on the enhancement effect in narrow films in the following way. First, using the relation for the equilibrium gap, they presented the GL pair-breaking current as $$\begin{aligned} \label{IcGL1} I_{\rm c}^{\rm GL}(T)= \frac{c\Phi_0w}{6\sqrt{3}\pi^2\xi(0)\lambda_\perp(0)}(1- T/T_{\rm c})^{3/2}=K_1 \Delta_0^3(T),\end{aligned}$$ where $\Phi_0=hc/2e$ is the magnetic flux quantum. Then, using the empirical fact that the temperature dependence of the enhanced critical current in the narrow film is close to the equilibrium one, $I_{\rm c}^{\rm P}(T) \propto (1-T/T_{\rm c}^{\rm P})^{3/2}$, where $T_{\rm c}^{\rm P}$ is the superconducting transition temperature in a microwave field, this dependence was modelled by an equation similar to , $$\begin{aligned} \label{Klap} I_{\rm c}^{\rm P}(T)=K_2 \Delta_P^{3}(T).\end{aligned}$$ The enhanced energy gap $\Delta_P(T)$ in was calculated by the Eliashberg theory for zero superconducting current \[equation at $j_{\rm s}=0$\]. Assuming $K_1=K_2$ and using the magnitude of the microwave power as a fitting parameter, the authors of [@Klapwijk] eventually achieved reasonably good agreement between the calculated and measured values of $I_{\rm c}^{\rm P}(T)$. Obviously, such a comparison of the experimental data with the Eliashberg theory should be considered a qualitative approximation which cannot be used to obtain quantitative results. First, equations and involve the gap value at zero current ($j_{\rm s}=0$), which is different from that with the current applied. Second, the pair-breaking curve $j_{\rm s}(\Delta)$ in the equilibrium state, which is implicitly assumed in , significantly differs from that in the microwave field [@Dmitriev2]. 
Of course, such model assumptions might nevertheless give a relatively good numerical approximation to the correct formula (this is, in fact, what was used in [@Klapwijk]), yet the basic inconsistency of such an approximation stems from the qualitatively different behaviour of $I_{\rm c}^{\rm P}(T)$ and $\Delta_P(T)$ in the vicinity of the critical temperature. Indeed, as the temperature approaches $T_{\rm c}^{\rm P}$, the enhanced order parameter $\Delta_P(T)$ approaches a finite (though small) value, $\Delta_P(T_{\rm c}^{\rm P}-0)=(1/2) \hbar \omega$, and vanishes through a jump at $T>T_{\rm c}^{\rm P}$ [@Ivlev; @Klapwijk; @Dmitriev2], whereas the critical current vanishes continuously, without any jump. Thus, the temperature dependence of the critical current cannot in principle be adequately described by an equation of the type ; incidentally, this can be seen from a pronounced deviation of the formula from the experimental points in a close vicinity of $T_{\rm c}$. In the present paper, the experimental data will be analyzed by means of the exact formula , in which the numerical solution of equation for the quantity $\Delta_{\rm m}$ is used.

Experimental results
====================

  -------- -------- -------- ------ ------------ -------------- -------------- ---------- ------------
  Sample   $L$,     $w$,     $d$,   $R_{4.2}$,   $R^\square$,   $T_{\rm c}$,   $l_{i}$,   $R_{300}$,
           $\mu$m   $\mu$m   nm     $\Omega$     $\Omega$       K              nm         $\Omega$
  SnW8     84       25       136    0.206        0.061          3.816          148        3.425
  SnW10    88       7.3      181    0.487        0.040          3.809          169        9.156
  SnW13    90       18       332    0.038        0.008          3.836          466        1.880
  -------- -------- -------- ------ ------------ -------------- -------------- ---------- ------------

  : Parameters of the film samples: $L$ is the length, $w$ the width, $d$ the thickness of the sample, and $l_{\rm i}$ is the electron mean free path.[]{data-label="tab"}

We investigate superconducting Sn thin films fabricated by a novel technique [@Dmitriev] which ensures a minimum of defects both at the film edge and in its bulk.
The critical current of such samples approaches the maximum possible theoretical value [@Aslamazov]. This implies that the current density at the film edges approaches a value of the order of $j_{\rm c}^{\rm GL}$, and thereby indicates the absence of edge defects which might produce a local reduction of the edge barrier to the vortex entry and a corresponding decrease in $I_{\rm c}$. While measuring the $I$-$V$ curves (IVC), the samples were placed in a double screen of annealed permalloy. The $I$-$V$ curves were measured by a four-probe method. The external irradiation was applied to the sample, which was placed inside a rectangular wave guide. The electric component of the microwave field in the wave guide was directed parallel to the transport current in the sample. The parameters of some of the measured films are listed in Table \[tab\]; conventional values $v_{\rm F}=6.5 \times 10^{7}$ cm/s and $\tau _{\varepsilon}=4 \times 10^{-10}$ s were used for the Fermi velocity and inelastic relaxation time of Sn. ![Typical $I$-$V$ characteristic of a wide \[$w \gg \xi(T), \lambda_\perp(T)$\] superconducting film (sample SnW13) at the temperature $T=3.798$ K. $I_{\rm m}$ is the maximum current of the existence of the vortex state. $I_{\rm c}$ is the critical current of the wide film.[]{data-label="f"}](fig1.eps){height="3in"} The IVC of one of the samples is shown in figure \[f\]. The film resistivity caused by the motion of Abrikosov vortices occurs within the current region $I_{\rm c}<I<I_{\rm m}$ (the vortex portion of the IVC), where $I_{\rm m}$ is the maximum current of the existence of the vortex state [@Aslamazov; @Dmitriev]. When the current exceeds $I_{\rm m}$, the IVC shows voltage steps indicating the appearance of phase-slip lines.
![ Experimental temperature dependences of the critical current for the sample SnW10 shown by symbols: $I_{\rm c}(P=0)$ – $\protect\blacksquare$, $I_{\rm c}(f=9.2~\rm{GHz})$ – $\protect\blacktriangle$, and $I_{\rm c}(f=12.9~\rm{GHz})$ – $\protect\blacktriangledown$. Theoretical and approximating dependences are shown by curves:\ 1 – theoretical dependence $I^{\rm GL}_{\rm c}(T) = 7.07\times 10^2(1-T/T_{\rm c})^{3/2}$ mA \[see \];\ 2 – calculated dependence $I_{\rm c}(T)$ = 5.9$\times 10^2(1-T/T_{\rm c})^{3/2}$ mA;\ 3 – linear theoretical dependence $I^{AL}_{\rm c}(T)$ = 9.12$\times 10^{1}(1-T/T_{\rm c})$ mA [@Aslamazov];\ 4 – theoretical dependence $I_{\rm c}(f=9.2~\rm{GHz})$ calculated from ; this curve can be approximated by the dependence $I_{\rm c}(T) = 6.5\times 10^2(1- T/3.818)^{3/2}$ mA;\ 5 – theoretical dependence $I_{\rm c}(f=12.9~\rm{GHz})$ calculated from ; this curve can be approximated by the dependence $I_{\rm c}(T) = 6.7\times 10^2(1-T/3.822)^{3/2}$ mA;\ 6 – theoretical dependence $I_{\rm c}(f=9.2~\rm{GHz})$ calculated from and normalized to curve 2; curve 6 can be approximated by the dependence $I_{\rm c}(T)$ = 5.9$\times 10^2(1-T/3.818)^{3/2}$ mA;\ 7 – linear approximating dependence $I_{\rm c}(T)$ = 9.4$\times 10^1(1-T/3.818)$ mA. \[ff\] ](fig2.eps){height="3.2in"} Figure \[ff\] shows experimental temperature dependences of the critical current for the sample SnW10. First we consider the behaviour of $I_{\rm c}(T)$ with no electromagnetic field applied (squares). The film width is rather small ($w=7.3~\mu$m); therefore the sample behaves as a narrow channel within a relatively wide temperature range, $T_{\rm cros1}<T<T_{\rm c}=3.809$ K, in which the critical current is equal to the GL pair-breaking current $I_{\rm c}^{\rm GL}(T) \propto (1-T/T_{\rm c})^{3/2}$. The crossover temperature $T_{\rm cros1}=3.769$ K corresponds to the transition to the wide-film regime: at $T<T_{\rm cros1}$, a vortex portion appears in the IVC.
The temperature dependence $I_{\rm c}(T)$ at $T<T_{\rm cros1}$ initially retains the form $(1- T/T_{\rm c})^{3/2}$; then, below a certain characteristic temperature, $T^{**} \approx 3.76$ K, the value of $I_{\rm c}(T)$ becomes smaller than $I_{\rm c}^{\rm GL}(T)$, which is due to the formation of an inhomogeneous distribution of the current density and its decrease away from the film edges. Finally, at $T<T_{\rm cros2}=3.717$ K, the temperature dependence of the critical current becomes linear, $I_{\rm c}(T)=I^{AL}_{\rm c}(T)$ = 9.12$\times 10^1(1-T/T_{\rm c})$ mA, in accord with the AL theory [@Aslamazov]. In our measurements in a microwave field, the irradiation power was selected to achieve the maximum critical current $I_{\rm c}^{\rm P}(T)$. First we discuss the behaviour of $I_{\rm c}^{\rm P}(T)$ for the sample SnW10 in the microwave field of frequency $f=9.2$ GHz (figure \[ff\], triangles). In the temperature range $T_{\rm cros1}^{\rm P}(9.2~{\rm{GHz}}) =3.744$ K $<T<T_{\rm c}^{\rm P}(9.2~{\rm{GHz}})=3.818$ K, no vortex portion was observed in the IVC, similar to the narrow-channel case. We see that under optimum enhancement, the narrow-channel regime holds down to the temperature $T_{\rm cros1}^{\rm P}(9.2~{\rm{GHz}})$, lower than its equilibrium counterpart $T_{\rm cros1}$. Furthermore, within the region $T^{**} =3.760$ K$<T<T_{\rm c}^{\rm P}$, the experimental values of $I_{\rm c}^{\rm P}$ are in good agreement with those calculated from equation (curve 4 in figure \[ff\]), in which the microwave power (the quantity $\alpha$) was a fitting parameter. Below the temperature $T^{**}$, the experimental values of $I_{\rm c}^{\rm P}$ descend below the theoretical curve 4, similar to the values of the equilibrium critical current discussed above.
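The quoted crossover temperature $T_{\rm cros2}$ is internally consistent with the fitted curves: the intersection of the fitted $(1-T/T_{\rm c})^{3/2}$ dependence (curve 2 in figure \[ff\]) with the linear AL dependence (curve 3) should fall near the measured value of 3.717 K. A short Python cross-check, using only the coefficients from the figure caption:

```python
# Fitted coefficients for sample SnW10 (from the caption of figure [ff]):
#   I_c(T)    = 5.9e2  * (1 - T/T_c)**1.5  mA   (curve 2, GL-like fit)
#   I_c^AL(T) = 9.12e1 * (1 - T/T_c)       mA   (curve 3, AL linear law)
T_c = 3.809  # K, sample SnW10

# Intersection: 91.2*(1-t) = 590*(1-t)**1.5  =>  sqrt(1-t) = 91.2/590
one_minus_t = (91.2 / 590.0) ** 2
T_cross = T_c * (1.0 - one_minus_t)
print(round(T_cross, 3))  # ~3.718 K, close to the measured T_cros2 = 3.717 K
```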
Nevertheless, the experimental points can be well fitted by equation (figure \[ff\], curve 6) normalized with a supplementary numerical factor which brings this equation at zero microwave field into agreement with the measured equilibrium critical current $I_{\rm c}(T)$. We interpret this factor as a form-factor which takes qualitative account of the inhomogeneity of the current distribution across the film width. Eventually, at $T<T_{\rm cros2}^{\rm P}(9.2~\rm{GHz})=3.717$ K, the temperature dependence of the critical current becomes linear (figure \[ff\], straight line 7). The temperature dependence of the enhanced critical current measured at the higher frequency, $f=12.9~{\rm GHz}$ (figure \[ff\], inverted triangles), shows that both the critical current and the superconducting transition temperature, $T_{\rm c}^{\rm P}(12.9~{\rm GHz})=3.822$ K $>T_{\rm c}^{\rm P}(9.2~{\rm GHz})$, increase with frequency, as in narrow channels. Furthermore, at this frequency, the IVC shows no vortex portion in the temperature range studied, down to $3.700$ K and even somewhat below. Thus the temperature $T_{\rm cros1}^{\rm P}$ of the transition to the wide-film regime (not shown in figure \[ff\]) decreases, $T_{\rm cros1}^{\rm P}(12.9~{\rm GHz})< 3.700~{\rm K}<T_{\rm cros1}^{\rm P}(9.2~{\rm GHz})$, which considerably extends the region of the narrow-channel regime. It is interesting to note that the experimental dependence $I_{\rm c}^{\rm P}(T)$ is in good agreement with equation (curve 5) without any additional normalization in the whole temperature range studied. This suggests that in moderately wide films, the temperature $T^{\ast \ast}$ of deviation of the experimental dependence $I_{\rm c}^{\rm P}(T)$ from the Eliashberg theory is likely to decrease at high enough frequency, similar to the crossover temperature $T_{\rm cros1}^{\rm P}$.
![ Experimental temperature dependences of the critical current $I_{\rm c}(P=0)$ – $\blacksquare$ and $I_{\rm c}(f=15.2~{\rm GHz})$ – $\blacktriangle$, for the sample SnW8.\ Theoretical and approximating dependences are shown by curves:\ 1 – calculated dependence $I_{\rm c}(T)$ = 1.0$\times 10^{3}(1-T/T_{\rm c})^{3/2}$ mA;\ 2 – linear theoretical dependence $I^{AL}_{\rm c}(T)$ = 1.47$\times 10^2(1-T/T_{\rm c})$ mA [@Aslamazov];\ 3 – theoretical dependence $I_{\rm c}(f=15.2~{\rm GHz})$ calculated by and normalized to curve 1; curve 3 can be approximated by the dependence $I_{\rm c}^{\rm P}(T)$ = 1.0$\times 10^{3}(1-T/3.835)^{3/2}$ mA;\ 4 – linear calculated dependence $I_{\rm c}^{\rm P}(T)$ = 1.72$\times 10^2(1-T/3.835)$ mA. \[fff\] ](fig3.eps){height="3in"} The temperature dependences of the critical current for the sample SnW8 are shown in figure \[fff\]. We begin with the analysis of the behaviour of $I_{\rm c}(T)$ with no electromagnetic field applied. The film width is relatively large, $w$=25 $\mu$m; therefore this sample can be treated as a narrow channel only very close to $T_{\rm c} = 3.816$ K; at $T<T_{\rm cros1}=3.808$ K it behaves as a wide film, with a vortex portion in the IVC. The temperature dependence of the critical current retains the form $(1-T/T_{\rm c})^{3/2}$ from $T_{\rm c}$ down to $T_{\rm cros2}=3.740$ K, though the value of $I_{\rm c}$ is smaller than $I^{\rm GL}_{\rm c}$ within this temperature range. This means that in this sample substantial current inhomogeneity develops very close to $T_{\rm c}$ as well (the difference between $T_{\rm c}$ and the characteristic temperature $T^{**}$ cannot be reliably resolved). At $T<T_{\rm cros2}$, the temperature dependence of the critical current becomes linear, in accordance with the AL theory [@Aslamazov]: $I_{\rm c}(T)$ = $I^{AL}_{\rm c}(T)$ = 1.47$\times 10^2(1-T/T_{\rm c})$ mA.
In the microwave field of frequency $f=15.2~{\rm GHz}$, the superconducting transition temperature increases up to $T_{\rm c}^{\rm P}=3.835$ K, whereas the temperatures of both the crossover to the linear AL dependence, $T_{\rm cros2}^{\rm P}=3.720~$K, and the transition to the wide-film regime, $T_{\rm cros1}^{\rm P} = 3.738$ K, exhibit a noticeable decrease. At the same time, in order to achieve good agreement between the experimental dependence $I_{\rm c}^{\rm P}(T)$ and equation , we must normalize this equation to the measured equilibrium critical current $I_{\rm c}(T) = 1.0\times 10^{3}(1-T/T_{\rm c})^{3/2}$ mA within the whole temperature range $T>T_{\rm cros2}^{\rm P}$ (figure \[fff\], curve 3), including the close vicinity of the critical temperature (the temperature $T^{**}$ is still indistinguishable from $T_{\rm c}^{\rm P}$). Thus, in rather wide films, even the frequency $f=15.2$ GHz is not high enough to bring the absolute magnitude of $I_{\rm c}^{\rm P}(T)$ to the bare dependence calculated for a narrow channel, unlike the case of the relatively narrow sample SnW10 at $f=12.9$ GHz discussed above.

Discussion of the results
=========================

We begin this section with a discussion of the superconductivity enhancement effect, which manifests itself as an increase in the superconducting transition temperature $T_{\rm c}^{\rm P}$ and in the magnitude of the critical current $I_{\rm c}^{\rm P}$ compared with their equilibrium values.
The qualitative similarity of the results with those obtained for narrow channels [@Dmitriev1], and the possibility to quantitatively describe the temperature dependence of $I_{\rm c}^{\rm P}$ by the equations of the Eliashberg theory, convince us that the mechanism of the enhancement effect is the same for the wide films and the narrow channels, i.e., it consists in the enhancement of the energy gap caused by a redistribution of the microwave-excited nonequilibrium quasiparticles to higher energies [@Eliashberg1]. This conclusion is not entirely evident for wide films with an inhomogeneous current distribution across the sample, because one might expect the current inhomogeneity to completely suppress the superconductivity stimulation, as it does in bulk superconductors. Indeed, in the latter case, the concentration of the transport current and the microwave field within the Meissner layer near the metal surface gives rise to an extra mechanism of quasiparticle relaxation, namely, the spatial diffusion of the microwave-excited nonequilibrium quasiparticles from the surface to the equilibrium bulk. The efficiency of this mechanism is determined by the time of quasiparticle escape from the Meissner layer, $\tau_{\rm D}(T)=\lambda^2(T)/D$, which is three to four orders of magnitude smaller than the typical inelastic relaxation time. Such high efficiency of the diffusion relaxation mechanism is likely to result in the suppression of the enhancement effect in bulk superconductors.
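The claimed separation between the escape time $\tau_{\rm D}$ and the inelastic relaxation time $\tau_\varepsilon$ can be illustrated with an order-of-magnitude estimate in Python. Here the penetration depth $\lambda \sim 10^{-5}$ cm ($\sim$100 nm near $T_{\rm c}$) is an assumed representative value, not a measured parameter of our films, and the diffusion coefficient is estimated as $D = v_{\rm F} l_{\rm i}/3$ with the SnW8 mean free path:

```python
import math

# Order-of-magnitude estimate of tau_D = lambda^2 / D versus tau_eps (CGS units)
v_F     = 6.5e7    # cm/s, Fermi velocity of Sn (conventional value from the text)
l_i     = 148e-7   # cm, electron mean free path of sample SnW8 (Table)
tau_eps = 4e-10    # s, inelastic relaxation time of Sn (conventional value)
lam     = 1e-5     # cm, ASSUMED penetration depth ~100 nm near T_c

D = v_F * l_i / 3.0    # diffusion coefficient, cm^2/s
tau_D = lam**2 / D     # quasiparticle escape time from the Meissner layer

orders = math.log10(tau_eps / tau_D)
print(round(orders, 1))  # ~3 orders of magnitude, consistent with "three to four"
```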
However, relying upon the qualitative difference between the current states in bulk and thin-film superconductors outlined in the Introduction, one can argue that a moderate non-uniformity of the current distribution in wide films (with no concentration of an exciting factor at short distances) has no fatal consequences for the enhancement effect, and that the diffusion of nonequilibrium quasiparticles excited within the whole bulk of the film introduces only insignificant quantitative deviations from the Eliashberg theory. In our consideration, we used a model approach to take these deviations into account by introducing a numerical form-factor of the current distribution into the formula for the enhanced critical current. We evaluate this form-factor by fitting the limit form of equation at zero microwave power, i.e., equation , to the measured values of the equilibrium critical current. We then apply the obtained values of the form-factor, 0.83 for SnW10 and 0.57 for SnW8, to equation at $P\neq 0$, which results in a reasonably good fit to the experimental data, as demonstrated in the previous section. A question worth discussing is how the Eliashberg mechanism works in the wide-film regime, i.e., when the superconductivity breaking is due to vortex nucleation rather than to the transport current exceeding the maximum value allowed by the pair-breaking curve. We believe that in this case the enhancement of the energy gap results in a corresponding growth of the edge barrier for the vortex entry, and this is what enhances the critical current in the wide-film regime. It is interesting to note that no essential features appear in the curves $I_{\rm c}(T)$ when the films enter the vortex resistivity regime. From this we conclude that the transition between the regimes of uniform pair-breaking and vortex resistivity affects neither the magnitude nor the temperature dependence of the critical current.
To conclude the discussion of the superconductivity enhancement effect, we draw attention to the empirical fact that all the fitting curves for $I_{\rm c}^{\rm P}(T)$ obtained from the equations of the Eliashberg theory are excellently approximated by the power law $(1-T/T_{\rm c}^{\rm P})^{3/2}$. This law is quite similar to the temperature dependence of the GL pair-breaking current, in which the critical temperature $T_{\rm c}$ is replaced by its enhanced value $T_{\rm c}^{\rm P}$. Explicit expressions for such approximating dependences, with numerical coefficients, are presented in the captions of figures \[ff\] and \[fff\]. The next important result of our studies is the essential extension of the temperature range of the narrow-channel regime of wide films upon enhancement of superconductivity: in the microwave field, the temperature of the crossover to the wide-film regime, $T_{\rm cros1}^{\rm P}$, considerably decreases as compared with its equilibrium value, $T_{\rm cros1}$. At first glance, this result somewhat contradicts the criterion of the transition between the different regimes mentioned in the Introduction, $w = 4\lambda_\perp(T_{\rm cros1})$, because an increasing energy gap under irradiation implies a decreasing magnitude of $\lambda_\perp$ and, correspondingly, a decreasing characteristic size of the vortices. This obviously facilitates the conditions of the vortex entry into the film; thus the crossover temperature should [*increase*]{} in the microwave field. We believe, however, that there exists a more powerful stabilizing effect of the irradiation on the vortices. The role of this effect is to delay the vortex nucleation and/or motion and to maintain the existence of the narrow-channel regime down to low enough temperatures at which the current inhomogeneity eventually becomes well developed.
Indeed, for the sample SnW10 at zero microwave power, the transition to the wide-film regime occurs at a temperature $T_{\rm cros1}$ [*higher*]{} than the temperature $T^{**}$ at which the deviation of $I_{\rm c}(T)$ from the GL uniform pair-breaking current begins to be observed. This means that in the equilibrium case the vortex nucleation begins at a relatively small inhomogeneity which still weakly affects the magnitude of the critical current. In contrast to this, under microwave irradiation of frequency $f=9.2$ GHz, the vortex resistivity occurs at a temperature $T_{\rm cros1}^{\rm P}$ [*lower*]{} than $T^{**}$, i.e., the inhomogeneous current state shows enhanced stability with respect to the vortex nucleation in the presence of a microwave power. A similar conclusion can be drawn regarding the behaviour of the critical current in the wider film SnW8. We suggest the following qualitative explanation of the stabilization effect. Since the dimensions of the samples are small compared to the electromagnetic wavelength (the sample length is $\sim 10^{-4}$ m and the minimum wavelength is $\sim 10^{-2}$ m), we deal, in fact, with an alternating high-frequency current, $I_{\rm f} \propto \sqrt{P}$, flowing through the sample. The relative power $P/P_{\rm c}\sim 0.1 \div 0.2$, corresponding to the maximum enhanced current $I_{\rm c}^{\rm P}(T)$, is rather high; therefore the amplitude of $I_{\rm f}$ may be comparable with the magnitude of the critical transport current. This results in a considerable modulation of the net current flow through the sample, which presumably enhances the stability of the current flow with respect to the nucleation and motion of the vortices. A possible analogy to this phenomenon is given by the stabilization of the current state in narrow channels at supercritical currents caused by the self-radiation of phase-slip centers [@PSC].
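The quasistatic estimate above is easy to verify: at the working frequencies, the free-space wavelength indeed exceeds the sample length by roughly two orders of magnitude. A quick check in Python (the sample length is taken as the $\sim 10^{-4}$ m figure quoted above):

```python
c = 3.0e8        # m/s, speed of light in vacuum
L_sample = 1e-4  # m, typical sample length (~100 um, cf. Table)

# Free-space wavelengths at the working frequencies of the experiment
wavelengths = {f_GHz: c / (f_GHz * 1e9) for f_GHz in (9.2, 12.9, 15.2)}
for f_GHz, lam in wavelengths.items():
    print(f_GHz, "GHz:", round(lam * 100, 1), "cm")
# Minimum wavelength ~2 cm at 15.2 GHz, i.e. ~10^-2 m >> 10^-4 m sample length
```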
As noted in the previous section, the stabilizing effect becomes more pronounced with increasing irradiation frequency $f$: the region of the narrow-channel regime considerably extends as the frequency grows. In the framework of our assumption about the stabilizing role of the high-frequency modulation of the current flow, this effect can be explained as follows. The relative microwave power of optimum enhancement, $P/P_{\rm c}$, was found to increase with $f$ [@Dmitriev1], while the critical power $P_{\rm c}$ changes with $f$ only at small enough frequencies, $f< f_\Delta$ [@Pals; @Bezuglyi], where $f_{\Delta} \approx (1-T/T_{\rm c})^{1/2} / 2.4 \tau_{\varepsilon}$ is the inverse of the gap relaxation time and does not exceed $0.1$ GHz for our samples and temperatures. At the larger frequencies used in our experiments, $P_{\rm c}$ holds a constant value, which enables us to attribute the variations in $P/P_{\rm c}$ to variations in the absolute magnitude of the irradiation power $P$, i.e., in the amplitude of the modulating high-frequency current $I_{\rm f}$. Thus an increase in the irradiation frequency causes the microwave power of optimum enhancement of superconductivity to increase and, correspondingly, the stabilizing effect of the electromagnetic field on the vortices to strengthen.

Conclusions
===========

The results of our investigation enable us to conclude that the mechanism of superconductivity enhancement by a microwave field is the same for both narrow channels and wide films – this is the Eliashberg mechanism of enhancement of the superconducting energy gap, caused by the excitation of nonequilibrium quasiparticles to a high energy region within the whole bulk of a superconductor. In the vicinity of $T_{\rm c}$, where any film can be treated as a narrow channel, the Eliashberg mechanism enhances the critical current in the usual way, through modification of the pair-breaking curve for the superconducting current.
At lower temperatures, when the wide film enters the regime of vortex resistivity, the enhancement of the critical current is likely associated with the growth of the edge barrier to the vortex entry, nevertheless giving a magnitude and temperature dependence of the critical current quite similar to those in the previous case. Such similarity extends down to low enough temperatures, at which the linear temperature dependence predicted for extremely wide films is observed. Another important effect of the microwave irradiation is to stabilize the current state of the film with respect to the entry of vortices, presumably by a deep modulation of the transport current by the induced high-frequency current. This results in the extension of the temperature range in which the film behaves as a narrow channel, showing no vortex resistivity in the current-voltage characteristic. The stabilizing effect grows with frequency, which is explained by the simultaneous increase in the pumping power and, correspondingly, in the amplitude of the high-frequency current. We achieve good agreement of the experimental data with the Eliashberg theory by introducing a numerical form-factor into equations initially derived for a homogeneous current distribution. Despite the simplicity and reasonably high quality of this approximation, the problem calls for a more consistent approach, involving the solution of the diffusion equations of nonequilibrium superconductivity in a spatially inhomogeneous case.

Acknowledgments
===============

The authors are grateful to T. V. Salenkova for preparing the films and to E. V. Khristenko for helpful discussions. [99]{} Larkin A I and Ovchinnikov Yu N 1972 [*Sov. Phys. JETP*]{} [**34**]{} 651 Aslamazov L G and Lempitskiy S V 1983 [*Sov. Phys. JETP*]{} [**57**]{} 1291 Shmidt V V 1969 [*Sov. Phys. JETP*]{} [**30**]{} 1137 Likharev K K 1971 [*Izv. Vyssh. Uchebn. Zaved. Radiofizika*]{} [**14**]{} 909 Andratskiy V P, Grundel’ L M, Gubankov V N, and Pavlov N B 1974 [*Sov. Phys.
JETP*]{} [**38**]{} 794 Dmitriev V M and Zolochevskii I V 2006 [*Supercond. Sci. Technol.*]{} [**19**]{} 342 Agafonov A B, Dmitriev V M, Zolochevskii I V, and Khristenko E V 2001 [*Low Temp. Phys.*]{} [**27**]{} 686 Dmitriev V M, Zolochevskii I V, Salenkova T V, and Khristenko E V 2005 [*Low Temp. Phys.*]{} [**31**]{} 957 Wyatt A F G, Dmitriev V M, Moore W S, and Sheard F W 1966 [*Phys. Rev. Lett.*]{} [**16**]{} 1166 Dmitriev V M and Khristenko E V 1978 [*Sov. J. Low Temp. Phys.*]{} [**4**]{} 387 Eliashberg G M 1970 [*Sov. Phys. - JETP Lett.*]{} [**11**]{} 114 Eliashberg G M 1972 [*Sov. Phys. JETP*]{} [**34**]{} 668 Ivlev B I, Lisitsyn S G, and Eliashberg G M 1973 [*J. Low Temp. Phys.*]{} [**10**]{} 449 Klapwijk T M, van den Bergh J N, and Mooij J E 1977 [*J. Low Temp. Phys.*]{} [**26**]{} 385 Dmitriev V M and Khristenko E V 1979 [*Le Journal de Physique - Lettres*]{} [**40**]{} L85 Dmitriev V M, Zolochevskii I V, and Khristenko E V 1999 [*J. Low Temp. Phys.*]{} [**115**]{} 173 Pals J A and Ramekers J J 1982 [*Phys. Lett.*]{} [**A87**]{} 186 Bezuglyi E V, Dmitriev V M, Svetlov V N, and Churilov G E 1987 [*Low Temp. Phys.*]{} [**13**]{} 517
--- abstract: 'We present a simple algorithm to solve the quantum linear system problem (QLSP), i.e. preparing a quantum state $\ket{x}$ that is proportional to $A^{-1}\ket{b}$. The algorithm decomposes a general QLSP into an initial state preparation problem solved by the time-optimal adiabatic quantum computation, and an eigenstate filtering problem solved by quantum signal processing. The algorithm does not rely on phase estimation, variable-time amplitude amplification, or in fact any amplitude amplification procedure. The probability of success is $\Omega(1)$. The query complexity of the algorithm is ${\mathcal{O}}(\kappa(\log(\kappa)/\log\log(\kappa)+\log(1/\epsilon)))$, which is near-optimal with respect to the condition number $\kappa$ up to a logarithmic factor, and is optimal with respect to the target accuracy $\epsilon$.' author: - Lin Lin - Yu Tong bibliography: - 'qlsp.bib' title: 'Solving quantum linear system problem with near-optimal complexity' --- The quantum linear systems problem (QLSP) aims at preparing a state $\ket{x}=A^{-1}\ket{b}/{\lVertA^{-1}\ket{b}\rVert}_2$ on a quantum computer, where $A\in{\mathbb{C}}^{N\times N}$, and $\ket{b}\in{\mathbb{C}}^N$, $N=2^n$. Due to the wide range of applications of linear systems in scientific and engineering computation, the efficient solution of QLSP has received significant attention in recent years [@HarrowHassidimLloyd2009; @ChildsKothariSomma2017; @GilyenSuLowEtAl2019; @SubasiSommaOrsucci2019; @AnLin2019; @ChakrabortyGilyenJeffery2018; @WossnigZhaoPrakash2018; @CaoPapageorgiouPetrasEtAl2013; @XuSunEndoEtAl2019; @Bravo-PrietoLaRoseCerezoEtAl2019]. All QLSP solvers share the desirable property that the complexity with respect to the matrix dimension can be only $\text{poly}(n)$, which is exponentially better than every known classical solver.
However, the pre-constant of the original Harrow, Hassidim, and Lloyd (HHL) algorithm scales as ${\mathcal{O}}(\kappa^2/\epsilon)$, where $\kappa$ is the condition number of $A$, and $\epsilon$ is the target accuracy. This is significantly weaker than classical methods such as the steepest descent (SD) or conjugate gradient (CG) methods with respect to both $\kappa$ and $\epsilon$. For instance, for positive definite matrices, the complexity of SD and CG is only ${\mathcal{O}}( {\kappa}\log(1/\epsilon))$ and ${\mathcal{O}}( \sqrt{\kappa}\log(1/\epsilon))$, respectively [@Saad2003]. In the past few years, there has been significant progress towards reducing the pre-constants for quantum linear solvers. In particular, the linear combination of unitaries (LCU) [@ChildsKothariSomma2017] and quantum signal processing (QSP) [@LowChuang2017; @GilyenSuLowEtAl2019] techniques can reduce the query complexity to ${\mathcal{O}}(\kappa^2 \log(\kappa/\epsilon))$. Therefore the algorithm is optimal with respect to $\epsilon$, but is still suboptimal with respect to $\kappa$. The scaling with respect to $\kappa$ can be reduced by the variable-time amplitude amplification (VTAA) [@Ambainis2012] technique, and the resulting complexity for solving QLSP is ${\mathcal{O}}(\kappa{~}\text{poly}(\log(\kappa/\epsilon)))$ [@ChildsKothariSomma2017]. However, VTAA requires considerable modification of the LCU or QSP algorithm, and has significant overhead itself. To the extent of our knowledge, the performance of VTAA for solving QLSP has not been quantitatively reported in the literature. The recently developed randomization method (RM) [@SubasiSommaOrsucci2019] is the first algorithm that yields near-optimal scaling with respect to $\kappa$ without using techniques such as VTAA. RM was inspired by adiabatic quantum computation (AQC) [@FarhiGoldstoneGutmannEtAl2000; @AlbashLidar2018; @JansenRuskaiSeiler2007].
The time complexity of the vanilla AQC is at best ${\mathcal{O}}(\kappa^2/\epsilon)$ [@ElgartHagedorn2012], and RM improves the time complexity to ${\mathcal{O}}(\kappa\log (\kappa)/\epsilon)$. On the other hand, if we optimize the scheduling function of AQC, then the resulting time-optimal AQC can achieve an even lower time complexity of ${\mathcal{O}}(\kappa/\epsilon)$ [@AnLin2019]. This is similar to the time-optimal AQC for performing Grover’s search [@RolandCerf2002]. Furthermore, numerical observations indicate that the time complexity of the quantum approximate optimization algorithm (QAOA) [@FarhiGoldstoneGutmann2014] can be only ${\mathcal{O}}(\kappa \text{poly}(\log(1/\epsilon)))$ [@AnLin2019]. The reason for such a significant accuracy improvement from QAOA remains an open question. In this Letter, we present an algorithm to solve QLSP with query complexity ${\mathcal{O}}(\kappa(\log(\kappa)/\log\log(\kappa)+\log(1/\epsilon)))$, with $\Omega(1)$ probability of success and without using any amplitude amplification procedure. For any $\delta>0$, a quantum algorithm that is able to solve a generic QLSP with cost ${\mathcal{O}}(\kappa^{1-\delta})$ would imply **BQP**=**PSPACE**  [@HarrowHassidimLloyd2009; @FarhiGoldstoneGutmannEtAl2000]. Therefore our algorithm is near-optimal with respect to $\kappa$ up to a logarithmic factor, and is optimal with respect to $\epsilon$. Similar to AQC, our algorithm treats QLSP as an eigenvalue problem. The first term of the complexity comes from an initial state preparation problem. We use the time-optimal AQC to prepare an initial state $\ket{x_0}$ with a nontrivial overlap with the true solution, ${\lvert\braket{x_0|x}\rvert}\sim {\mathcal{O}}(1)$. The second term comes from an eigenstate filtering problem. We filter out the unwanted components in $\ket{x_0}$, and the filtered state is then $\epsilon$-close to $\ket{x}$ upon measurement.
Our filtering polynomial yields the *optimal* compression ratio among all polynomials, and can be efficiently implemented using QSP [@GilyenSuLowEtAl2019; @LowChuang2017]. We confirm the complexity analysis using numerical examples for solving QLSP, and find that the degree of the filtering polynomial needed is rather modest. The solution can be highly accurate (with fidelity as high as $1-10^{-10}$), which is difficult to achieve using AQC based methods, either due to the $\epsilon^{-1}$ scaling for AQC and RM, or due to the difficulty of numerical optimization for QAOA. *Block-encoding and quantum signal processing:* For simplicity we assume $N=2^n$, and the normalization conditions ${\lVertA\rVert}_2=1,\braket{b|b}=1$. We also assume $A$ is Hermitian, as general matrices can be treated using the standard matrix dilation method (included in the supplementary materials). An $(m+n)$-qubit unitary operator $U$ is called an $(\alpha, m, \epsilon)$-block-encoding of an $n$-qubit operator $A$, if $${\lVertA-\alpha(\bra{0^m}\otimes I) U (\ket{0^m}\otimes I)\rVert}_2\leq \epsilon. \label{eqn:block_encoding}$$ Many matrices used in practice can be efficiently block-encoded. For instance, if all entries of $A$ satisfy ${\lvertA_{ij}\rvert}\le 1$, and $A$ is Hermitian and $d$-sparse (i.e. each row / column of $A$ has no more than $d$ nonzero entries), then $A$ has a $(d,n+2,0)$-encoding $U$ [@ChildsKothariSomma2017; @BerryChildsKothari2015]. With a block-encoding available, QSP allows us to construct a block-encoding for an arbitrary polynomial eigenvalue transformation of $A$. \[thm:qsp\] **(Quantum signal processing [@GilyenSuLowEtAl2019]):** Let $U$ be an $(\alpha,m,\epsilon)$-block-encoding of a Hermitian matrix $A$. Let $P\in{\mathbb{R}}[x]$ be a degree-$\ell$ real polynomial with ${\lvertP(x)\rvert}\le 1/2$ for any $x\in[-1,1]$.
Then there exists a $(1, m+2,4 \ell \sqrt{\epsilon} / \alpha)$-block-encoding ${\widetilde{U}}$ of $P(A/\alpha)$ using $\ell$ queries of $U,U^{\dagger}$. Compared to methods such as LCU, one distinct advantage of QSP is that the number of extra ancilla qubits needed is only $1$. Hence QSP may be carried out efficiently even for near-term to intermediate-term devices. Furthermore, a polynomial can be expanded into different basis functions as $P(x)=\sum_{k=0}^{\ell} c_k f_k(x)$, where $f_k$ can be the monomial $x^k$, the Chebyshev polynomial $T_k(x)$, or any other polynomial. The performance of LCU crucially depends on the 1-norm ${\lVert c\rVert}_1:=\sum_{k=0}^\ell |c_k|$, which can be very different depending on the expansion [@ChildsKothariSomma2017]. The block-encoding ${\widetilde{U}}$ in QSP is independent of such a choice, and therefore provides a more intrinsic representation of the matrix function. We may readily apply QSP to solve QLSP. We assume $A$ is given by its $(\alpha,m,\epsilon)$-block-encoding, and its eigenvalues are contained in $\mathcal{D}_{1/\kappa}:=[-1,-1/\kappa]\cup[1/\kappa, 1]$. We first find a polynomial $P(x)$ satisfying $|P(x)|\le \frac12$ for any $x\in[-1,1]$, and $P(x)\approx 1/(2 c x)$ on ${\mathcal{D}}_{1/\alpha\kappa}$ for some $c>\alpha\kappa$. Then we have $$\begin{aligned} {\widetilde{U}}\ket{0^{m+2}}\ket{b}=&\ket{0^{m+2}}(P(A/\alpha)\ket{b})+\ket{\phi}\\ \approx& \ket{0^{m+2}}\left(\frac{\alpha}{2c}A^{-1}\ket{b}\right)+\ket{\phi}, \end{aligned}$$ where $\ket{\phi}$ is orthogonal to all states of the form $\ket{0^{m+2}}\ket{\psi}$. Measuring the ancilla qubits, we obtain the solution $\ket{x}$ with probability $\Theta\left(\left(\frac{\alpha}{2c}{\lVert A^{-1}\ket{b}\rVert}_2\right)^2\right)$. Assuming ${\lVert A^{-1}\ket{b}\rVert}_2\sim {\mathcal{O}}(1)$, the probability of success is ${\mathcal{O}}(1/\kappa^2)$. 
Using amplitude amplification [@BrassardHoyerMoscaEtAl2002], the number of repetitions needed for success can be improved to ${\mathcal{O}}(\kappa)$. Furthermore, each application of ${\widetilde{U}}$ has query complexity ${\mathcal{O}}(\alpha\kappa\log(\kappa/\epsilon))$. Therefore the overall query complexity of QSP to solve QLSP is ${\mathcal{O}}(\alpha\kappa^2\log(\kappa/\epsilon))$. We observe that the quadratic scaling with respect to $\kappa$ is inherent to the procedure above: one factor of $\kappa$ arises because each application of QSP costs ${\mathcal{O}}(\kappa)$ queries of $U,U^{\dagger}$, and the other because QSP needs to be performed ${\mathcal{O}}(\kappa)$ times. The same argument applies to LCU. To reduce the $\kappa$ complexity, one must modify the method significantly, as in the modified LCU based on VTAA [@ChildsKothariSomma2017]. *Adiabatic computation:* Instead of tackling the linear system problem directly, AQC converts a QLSP into an eigenvalue problem. For simplicity we further restrict $A$ to be Hermitian positive definite; the treatment for Hermitian indefinite matrices and general matrices can be found in the supplementary materials. Let $Q_{b}=I-\ket{b}\bra{b}$. We introduce $$H_0=\begin{pmatrix} 0 & Q_b\\ Q_b & 0 \end{pmatrix}, \quad H_1=\begin{pmatrix} 0 & AQ_b\\ Q_bA & 0 \end{pmatrix}. \label{eq:ham_qlsp}$$ If $\ket{x}$ satisfies $A\ket{x}\propto\ket{b}$, we have $Q_bA\ket{x}=Q_b\ket{b}=0$. Define $\ket{0}\ket{b}=(b,0)^{\top},\ket{0}\ket{x}=(x,0)^{\top}$. Then $\ket{0}\ket{b}$ and $\ket{0}\ket{x}$ are in the null spaces of $H_0,H_1$, respectively, and therefore we need to find the eigenvector corresponding to the eigenvalue 0. The form of the matrices in Eq. (\[eq:ham\_qlsp\]) is important for achieving ${\mathcal{O}}(\kappa)$ complexity. 
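The null-space structure of $H_0$ and $H_1$ is easy to verify numerically on a small dense instance. The sketch below (a classical NumPy check with a randomly chosen positive definite $A$, not part of the quantum algorithm) confirms that $\ket{0}\ket{b}$ annihilates $H_0$, and that $\ket{0}\ket{x}$ and $\ket{1}\ket{b}$ span the null space of $H_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# Random Hermitian positive definite A, normalized so that ||A||_2 = 1.
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)
A /= np.linalg.norm(A, 2)
b = rng.standard_normal(N)
b /= np.linalg.norm(b)

Qb = np.eye(N) - np.outer(b, b)          # Q_b = I - |b><b|
Z = np.zeros((N, N))
H0 = np.block([[Z, Qb], [Qb, Z]])
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])

x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)                   # |x> proportional to A^{-1}|b>
v0b = np.concatenate([b, np.zeros(N)])   # |0>|b> = (b, 0)^T
v0x = np.concatenate([x, np.zeros(N)])   # |0>|x> = (x, 0)^T
v1b = np.concatenate([np.zeros(N), b])   # |1>|b> = (0, b)^T

print(np.linalg.norm(H0 @ v0b) < 1e-12)  # True: |0>|b> is a null vector of H0
print(np.linalg.norm(H1 @ v0x) < 1e-12)  # True: |0>|x> is a null vector of H1
print(np.linalg.norm(H1 @ v1b) < 1e-12)  # True: |1>|b> is also a null vector of H1
```

The check that $H_1\ket{0}\ket{x}=0$ uses exactly the identity $Q_bA\ket{x}\propto Q_b\ket{b}=0$ from the text.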
Furthermore, the other eigenvector in the null space of $H_1$ is $\ket{1}\ket{b}=(0,b)^{\top}$, which is orthogonal to the solution vector $\ket{0}\ket{x}$, and is not accessible via the adiabatic evolution $$\frac{1}{T}{\imath}\partial_s \left|\psi_T(s)\right> = H(f(s))\left|\psi_T(s)\right>, \quad \ket{\psi_T(0)}=\ket{0}\ket{b}.$$ Here $H(f(s)) = (1-f(s))H_0 + f(s)H_1, 0\le s\le 1$. The function $f:[0,1]\rightarrow [0,1]$ is called a scheduling function, and is a strictly increasing mapping with $f(0) = 0, f(1) = 1$. The simplest choice is $f(s)=s$, which gives the “vanilla AQC”. Besides $\ket{0}\ket{x}$, all other eigenstates of $H_1$ that can be connected to $\ket{0}\ket{b}$ through an adiabatic evolution are separated from $\ket{0}\ket{x}$ by an energy gap of at least $1/\kappa$ [@AnLin2019]. The time complexity of vanilla AQC is at least $T\sim {\mathcal{O}}(\kappa^2/\epsilon)$ [@JansenRuskaiSeiler2007; @AnLin2019; @AlbashLidar2018; @ElgartHagedorn2012]. By properly choosing a scheduling function $f(s)$, the time-optimal AQC can find an $\epsilon$-approximation to $\ket{0}\ket{x}$, with time complexity ${\mathcal{O}}(\kappa/\epsilon)$. Using truncated Dyson series for simulating the time-dependent Hamiltonian [@LowWiebe2018], the query complexity is ${\widetilde{{\mathcal{O}}}}(\kappa/\epsilon)$. This choice of the scheduling function and the Hamiltonian simulation are discussed in supplemental materials. As mentioned before, the fact that the $0$-eigenspace of $H(f(s))$ is two dimensional is not a problem because $\ket{\psi_T(t)}$ is orthogonal to $\ket{1}\ket{b}$ for all $t$. *Eigenstate filtering:* The main difficulty of the time-optimal AQC is that it can be costly to reach high accuracy. Nonetheless, if we choose the simulation time to be ${\mathcal{O}}(\kappa/\epsilon_0)$ for some constant target accuracy (e.g. 
$\epsilon_0=0.5$), the query complexity to prepare a state $\ket{{\widetilde{x}}_0}$ $$\ket{{\widetilde{x}}_0}=\gamma \ket{0}\ket{x}+ \sqrt{1-|\gamma|^2}\ket{{\widetilde{y}}} $$ is only ${\mathcal{O}}(\kappa\log(\kappa))$, where $\gamma= \Omega(1)$ and $(\bra{0}\bra{x})\ket{{\widetilde{y}}}=0$. Our goal is therefore to construct a filtering polynomial $P(H_1)$ so that $$P(H_1)\ket{0}\ket{x}\approx \ket{0}\ket{x}, \quad P(H_1)\ket{0}\ket{y}\approx 0,$$ *i.e.* it filters out the unwanted component $\ket{0}\ket{y}$ while leaving $\ket{0}\ket{x}$ intact. Since $\ket{0}\ket{x}$ is an eigenstate of $H_1$, this filtering problem can be implemented by QSP. We use the following $2\ell$-degree polynomial $$R_{\ell}(x;\Delta)=\frac{T_{\ell}\left(-1+2\frac{x^{2}-\Delta^{2}}{1-\Delta^{2}}\right)}{T_{\ell}\left(-1+2\frac{-\Delta^{2}}{1-\Delta^{2}}\right)}.$$ A plot of the polynomial is given in Fig. \[minimax\_poly\]. $R_{\ell}(x;\Delta)$ has several nice properties: \[lem:minimax\_poly\] (i) $R_{\ell}(x;\Delta)$ solves the minimax problem $$\underset{p(x)\in\mathbb{P}_{2\ell}[x],p(0)=1}{\mathrm{minimize}} \max_{x\in\mathcal{D}_{\Delta}}|p(x)|.$$ \(ii) $|R_{\ell}(x;\Delta)|\leq2e^{-\sqrt{2}\ell\Delta}$ for all $x\in\mathcal{D}_{\Delta}$ and $0<\Delta\leq1/\sqrt{12}$. Also $R_{\ell}(0;\Delta)=1$. \(iii) $|R_{\ell}(x;\Delta)|\leq1$ for all $|x|\leq1$. ![The polynomial $R_{\ell}(x,\Delta)$ for $\ell=16$ and $30$, $\Delta=0.1$.[]{data-label="minimax_poly"}](minimax_poly){width="0.7\linewidth"} Now consider a Hermitian matrix $H$, with a known eigenvalue $\lambda$ that is separated from other eigenvalues by a gap $\Delta$. $H$ is assumed to have an $(\alpha,m,0)$-block-encoding denoted by $U_H$. In order to preserve the $\lambda$-eigenstate while filtering out all other eigenstates, Lemma \[lem:minimax\_poly\] (i) states that $R_{\ell}$ achieves the best compression ratio of the unwanted components, among all polynomials of degrees up to $2\ell$. 
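The properties in Lemma \[lem:minimax\_poly\] can be checked numerically. The sketch below evaluates $R_{\ell}(x;\Delta)$ through NumPy's Chebyshev class (a plain classical evaluation, not the QSP implementation) and verifies $R_{\ell}(0;\Delta)=1$, the decay bound (ii) on $\mathcal{D}_\Delta$, and the uniform bound (iii) on $[-1,1]$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def R(x, ell, Delta):
    """Filtering polynomial R_ell(x; Delta), a ratio of two values of T_ell."""
    T = Chebyshev.basis(ell)                      # the Chebyshev polynomial T_ell
    arg = -1 + 2 * (np.asarray(x) ** 2 - Delta**2) / (1 - Delta**2)
    return T(arg) / T(-1 + 2 * (-Delta**2) / (1 - Delta**2))

ell, Delta = 16, 0.1
xs = np.linspace(-1.0, 1.0, 4001)
band = xs[np.abs(xs) >= Delta]                    # sample points in D_Delta

print(np.isclose(R(0.0, ell, Delta), 1.0))                    # True: R(0) = 1
print(np.max(np.abs(R(xs, ell, Delta))) <= 1 + 1e-9)          # True: property (iii)
print(np.max(np.abs(R(band, ell, Delta)))
      <= 2 * np.exp(-np.sqrt(2) * ell * Delta))               # True: property (ii)
```

For these parameters the observed maximum on $\mathcal{D}_\Delta$ is in fact well below the bound $2e^{-\sqrt{2}\ell\Delta}$.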
To prepare a quantum circuit, we define ${\widetilde{H}}=(H-\lambda I)/(\alpha+|\lambda|)$ and ${\widetilde{\Delta}}=\Delta/2\alpha$; then we can also construct a $(1,m+1,0)$-block-encoding for ${\widetilde{H}}$ (see supplemental materials). The gap separating 0 from the other eigenvalues of ${\widetilde{H}}$ is lower bounded by ${\widetilde{\Delta}}$. Because of (ii) of Lemma \[lem:minimax\_poly\], we have $$\|R_\ell({\widetilde{H}},{\widetilde{\Delta}})-P_\lambda\|_2\leq 2e^{-\sqrt{2}\ell{\widetilde{\Delta}}},$$ where $P_{\lambda}$ is the projection operator onto the eigenspace corresponding to $\lambda$. Also because of (iii), $(1/2)R_{\ell}(x;{\widetilde{\Delta}})$ satisfies the requirements in Theorem \[thm:qsp\], which enables us to implement $(1/2)R_{\ell}({\widetilde{H}};{\widetilde{\Delta}})$ using QSP. This gives the following theorem: \[thm:eigenstate\_filter\] **(Eigenstate filtering):** Let $H$ be a Hermitian matrix and let $U_H$ be an $(\alpha,m,0)$-block-encoding of $H$. If $\lambda$ is an eigenvalue of $H$ that is separated from the rest of the spectrum by a gap $\Delta$, then we can construct a $(2,m+3,\epsilon)$-block-encoding of $P_\lambda$, using $\mathcal{O}((\alpha/\Delta)\log(1/\epsilon))$ applications of (controlled-) $U_H$ and $U^{\dagger}_H$. Suppose we can prepare a state $\ket{\psi} = \gamma \ket{\psi_\lambda} + \sqrt{1-|\gamma|^2}\ket{\psi^\perp_\lambda}$ using an oracle $O_\psi$, where $\ket{\psi_\lambda}$ is a $\lambda$-eigenvector and $\braket{\psi_\lambda|\psi^\perp_\lambda}=0$, for some $0<\gamma\leq 1$. Theorem \[thm:eigenstate\_filter\] states that we can obtain an $\epsilon$-approximation to $\ket{\psi_{\lambda}}$ with $\mathcal{O}((\alpha/\Delta)\log(1/\epsilon))$ queries to $U_H$, through a successful application of the block-encoding of $P_\lambda$, denoted by $U_{P_\lambda}$. The probability of applying this block-encoding successfully, [*i.e.*]{} getting all 0’s when measuring the ancilla qubits, is at least $\gamma^2/4$. 
Therefore when $\gamma = \Omega(1)$, and $\ket{\psi}$ can be repeatedly prepared, we only need to run $U_{P_\lambda}$ on average $\mathcal{O}(1)$ times to obtain $\ket{\psi_{\lambda}}$ successfully. We remark that the eigenstate filtering procedure can also be implemented by alternative methods such as LCU. The polynomial $R_{\ell}(\cdot,{\widetilde{\Delta}})$ can be expanded exactly into a linear combination of the first $2\ell+1$ Chebyshev polynomials. The 1-norm of the expansion coefficients is upper bounded by $2 \ell+2$ because $|R_{\ell}(x,{\widetilde{\Delta}})|\leq 1$. However, this comes at the expense of additional ${\mathcal{O}}(\log \ell)$ qubits needed for the LCU expansion [@ChildsKothariSomma2017]. *Applications to QLSP:* For the QLSP given by a $d$-sparse Hermitian matrix $A\in{\mathbb{C}}^{2^n\times 2^n}$, whose eigenvalues are contained in $\mathcal{D}_{1/\kappa}$, and a state $\ket{b}$, we assume the entries of $A$ can be queried by the oracles $$O_{A,1}\vert j,l\rangle = \vert j,\nu(j,l)\rangle,\quad O_{A,2}\vert j,k,z\rangle = \vert j,k,A_{jk}\oplus z\rangle, \label{eq:oracles_A}$$ where $j,k,l,z\in [N]$, and $\nu(j,l)$ is the row index of the $l$-th nonzero element in the $j$-th column. $\ket{b}$ can be prepared by an oracle $O_{B}\vert0\rangle = \vert b\rangle$. This is the same setting as in [@ChildsKothariSomma2017; @SubasiSommaOrsucci2019]. The oracles can be used to construct a $(d,n+2,0)$-block-encoding of $A$. Furthermore, as discussed in the supplemental materials, we can construct a $(d,n+4,0)$-block-encoding of $H_1$ in (\[eq:ham\_qlsp\]), denoted by $U_{H_1}$, by applying $O_B, O_{A,1}, O_{A,2}$ twice. As noted before, the null space of $H_1$ is spanned by $\ket{0}\ket{x}$ and $\ket{1}\ket{b}$, and the eigenvalue 0 is separated from the rest of the spectrum by a gap of $1/\kappa$ [@AnLin2019]. 
Thus if we are given an initial state $$\ket{{\widetilde{x}}_0} = \gamma_0 \ket{0}\ket{x} + \gamma_1 \ket{1}\ket{b} + \gamma_2 \ket{{\widetilde{y}}} \label{eqn:x0expand}$$ with $|\gamma_0|=\Omega(1)$ and $\ket{{\widetilde{y}}}$ orthogonal to the 0-eigenspace, then we can run the eigenstate filtering algorithm described above to precision $\epsilon$ to obtain $R_{\ell}(H_1/d;1/(d\kappa))\ket{{\widetilde{x}}_0}$. The $\ket{{\widetilde{y}}}$ component will be filtered out, while the $\ket{0}\ket{x}$ and $\ket{1}\ket{b}$ components remain. To further remove the $\ket{1}\ket{b}$ component, we measure the first qubit. Upon getting the outcome 0, the resulting state will be $\ket{0}\ket{x}+\mathcal{O}(\epsilon)$. The success probability of applying the eigenstate filtering is lower bounded by $|\gamma_0|^2+|\gamma_1|^2$, and the success probability of obtaining 0 in the measurement is $|\gamma_0|^2/(|\gamma_0|^2+|\gamma_1|^2)+\mathcal{O}(\epsilon)$. Thus the total success probability is $\Omega(1)$. Each single application of eigenstate filtering applies $U_{H_1}$, and therefore $O_{A,1}$, $O_{A,2}$, and $O_{B}$, ${\mathcal{O}}(d\kappa\log(1/\epsilon))$ times. It only needs to be repeated $\mathcal{O}(1)$ times, so the total query complexity of eigenstate filtering is still ${\mathcal{O}}(d\kappa\log(1/\epsilon))$. The initial state $\ket{{\widetilde{x}}_0}$ can be prepared using the time-optimal AQC procedure. Again we first assume $A$ is Hermitian positive definite. To make $\gamma_0=\Omega(1)$ we only need to run AQC to constant precision. Thus the time complexity of AQC is $\mathcal{O}(\kappa)$. However we still need to implement AQC on a quantum circuit. To do this we use the time-dependent Hamiltonian simulation introduced in [@LowWiebe2018], which gives an ${\mathcal{O}}(d\kappa\log(d\kappa)/\log\log(d\kappa))$ query complexity to achieve ${\mathcal{O}}(1)$ precision. This procedure also needs to be repeated $\mathcal{O}(1)$ times. 
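On a small instance, the filtering-plus-measurement step can be mimicked classically by applying $R_{\ell}(H_1;\Delta)$ as a dense matrix function. The sketch below replaces the QSP circuit with exact diagonalization, uses a noisy copy of $\ket{0}\ket{x}$ as a stand-in for a constant-precision AQC output, and checks that the top block after "measuring the first qubit" has high fidelity with the true solution. The instance (spectrum in $[1/\kappa,1]$, noise level, and $\ell$) is chosen for illustration only:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def R(x, ell, Delta):
    T = Chebyshev.basis(ell)
    return T(-1 + 2 * (np.asarray(x) ** 2 - Delta**2) / (1 - Delta**2)) / \
           T(-1 + 2 * (-Delta**2) / (1 - Delta**2))

rng = np.random.default_rng(1)
N, kappa = 8, 10.0
# Hermitian positive definite A with spectrum in [1/kappa, 1].
V, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = V @ np.diag(np.linspace(1 / kappa, 1.0, N)) @ V.T
b = rng.standard_normal(N); b /= np.linalg.norm(b)
x = np.linalg.solve(A, b); x /= np.linalg.norm(x)

Qb = np.eye(N) - np.outer(b, b)
Z = np.zeros((N, N))
H1 = np.block([[Z, A @ Qb], [Qb @ A, Z]])

# Crude initial state: |0>|x> contaminated by noise.
v0x = np.concatenate([x, np.zeros(N)])
noise = rng.standard_normal(2 * N); noise /= np.linalg.norm(noise)
psi0 = v0x + 0.5 * noise
psi0 /= np.linalg.norm(psi0)

# Apply R_ell(H1; Delta) exactly via diagonalization (in place of QSP).
ell, Delta = 100, 0.5 / kappa
w, U = np.linalg.eigh(H1)
psi = U @ (R(w, ell, Delta) * (U.T @ psi0))
psi /= np.linalg.norm(psi)

# Measuring the first qubit and obtaining 0 keeps the top block, which
# removes the surviving |1>|b> null-space component.
top = psi[:N] / np.linalg.norm(psi[:N])
print(abs(top @ x) > 0.999)   # True: near-unit fidelity with the true solution
```

Note that $\Delta$ is taken somewhat below the guaranteed gap $1/\kappa$, which only makes the filter more conservative.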
It should be noted that $\gamma_1$ in Eq. (\[eqn:x0expand\]) comes entirely from the error of the Hamiltonian simulation, since AQC should ensure that the state is orthogonal to $\ket{1}\ket{b}$ for all $t$. The procedure above can be generalized to Hermitian indefinite matrices, and to general matrices (see supplemental materials). Therefore our QLSP solver can be summarized as follows: \[thm:qlsp\] Suppose $A$ is a $d$-sparse matrix whose eigenvalues are in $\mathcal{D}_{1/\kappa}$ and can be queried through the oracles $O_{A,1}$ and $O_{A,2}$ in (\[eq:oracles\_A\]), and $\ket{b}$ is given by an oracle $O_B$. Then $\ket{x}\propto A^{-1}\ket{b}$ can be obtained with fidelity $1-\epsilon$ using $\mathcal{O}(d\kappa(\log(d\kappa)/\log\log(d\kappa)+\log(1/\epsilon)))$ queries to $O_{A,1}$, $O_{A,2}$, and $O_B$. The number of qubits needed in the eigenstate filtering procedure using QSP is $\mathcal{O}(n)$. In the Hamiltonian simulation $\mathcal{O}(n+\log(d\kappa))$ qubits are needed (see supplemental materials). Therefore the total number of qubits needed is $\mathcal{O}(n+\log(d\kappa))$. We present numerical results in Fig. \[fig:qls\_results\] to validate the complexity estimate. In the numerical test, $A$ is formed by adding a randomly generated symmetric positive definite tridiagonal matrix $B$, whose smallest eigenvalue is very close to 0, to a scalar multiple of the identity matrix. After proper rescaling, the eigenvalues of $A$ lie in $[-1,1]$. This construction enables us to estimate the condition number with reasonable accuracy without computing eigenvalues. The off-diagonal elements of $B$ are drawn uniformly from $[-1,0]$ and the diagonal elements are the negatives of the sums of the two adjacent off-diagonal elements on the same row. The $(0,0)$ and $(N,N)$ elements of $B$ are made slightly larger so that $B$ is positive definite. ![Left: the fidelity $\eta^2$ converges to 1 exponentially as $\ell$ in the eigenstate filtering algorithm increases, for different $\kappa$. 
Right: the smallest $\ell$ needed to achieve a fixed fidelity $\eta^2$ grows linearly with respect to the condition number $\kappa$. The initial state in eigenstate filtering is prepared by running AQC for $T=0.2\kappa$, which achieves an initial fidelity of about 0.6.[]{data-label="fig:qls_results"}](plot_qls){width="50.00000%"} *Discussion:* In this paper, we have introduced a simple algorithm to solve QLSP with near-optimal complexity with respect to both $\kappa$ and $\epsilon$. In the case when the precise value of $\kappa$ is not known *a priori*, knowledge of an upper bound of $\kappa$ suffices. The problem with directly targeting the solution $A^{-1}\ket{b}$ is that a $(\beta,m,\epsilon)$ block-encoding of $A^{-1}$ requires at least $\beta\ge\kappa$ to make sure that ${\lVert A^{-1}/\beta\rVert}_2\le 1$. Therefore the probability of success is already ${\mathcal{O}}(\kappa^{-2})$, and the number of repetitions needed is already ${\mathcal{O}}(\kappa)$ even with the help of amplitude amplification. Motivated by the success of AQC, our algorithm views QLSP as an eigenvalue problem, which can be implemented via $P \ket{{\widetilde{x}}_0}$, where $P$ is an approximate projection operator, and $P \ket{{\widetilde{x}}_0}$ encodes the solution $\ket{x}$. The advantage of such a filtering procedure is that $P$ is bounded independently of $\kappa$, and its $(\beta,m,\epsilon)$ block-encoding only requires $\beta\sim{\mathcal{O}}(1)$. Therefore, assuming an $\Omega(1)$ overlap between $\ket{{\widetilde{x}}_0}$ and the solution vector, which can be satisfied by running the time-optimal AQC to constant precision, the probability of success of the filtering procedure is already $\Omega(1)$ without any amplitude amplification procedure. We remark that the eigenstate filtering procedure can be implemented via other choices of eigensolvers. 
Other approaches to prepare eigenstates include (i) phase estimation, (ii) the filtering method developed by Poulin and Wocjan [@PoulinWocjan2009], (iii) the ground state preparation method based on LCU developed by Ge [*et al.*]{} [@GeTuraCirac2019], and (iv) a variant of the third approach based on Chebyshev expansion and LCU (Appendix D of [@GeTuraCirac2019]). Some of these methods are focused on ground states but can be adapted to compute interior eigenstates as well. We now compare our method with each of the methods mentioned above. For phase estimation, as discussed in Appendix B of [@GeTuraCirac2019], directly using phase estimation has a $1/\epsilon$ dependence on the allowed error. For the second and third approaches, the original paper by Poulin and Wocjan was intended for a different task, but a modified version (Appendix C of [@GeTuraCirac2019]) and the third approach by Ge [*et al.*]{} [@GeTuraCirac2019] can both attain ${\widetilde{\mathcal{O}}}((\alpha/\Delta)\log(1/\epsilon))$ query complexity, which is similar to our method modulo logarithmic factors. However our filtering method, which is based on QSP, uses significantly fewer qubits, a number that depends on neither $\epsilon$ nor $\Delta$. The fourth approach filters eigenstates using matrix polynomials. Our method is optimal in this sense, because it solves the minimax problem recorded in Lemma \[lem:minimax\_poly\] (i). We also remark that although we did not consider the dependence on the initial overlap $\gamma$, since it is assumed to be $\Omega(1)$, it can easily be seen that with amplitude amplification we can obtain a $1/\gamma$ dependence in the query complexity, with some additional logarithmic factors. In order to implement the QSP-based eigenstate filtering procedure, one still needs to find the phase factors associated with the block-encoding ${\widetilde{U}}$. 
For a given polynomial $R_\ell(\cdot,\Delta)$, the phase factors can be obtained on a classical computer in time that is polynomial in the degree and the logarithm of the precision [@GilyenSuLowWiebe2018Long Theorems 3-5]. However, this procedure requires finding all roots of a high-degree polynomial, which can be numerically unstable for the polynomial degrees $\ell\sim 100$ considered here. The stability of this procedure has recently been improved by Haah [@Haah2019], though the number of bits of precision needed still scales as ${\mathcal{O}}(\ell \log(\ell/\epsilon))$. In this sense, there is no algorithm yet to evaluate the QSP phase factors that is numerically stable in the usual sense, i.e. with the number of bits of precision needed scaling as ${\mathcal{O}}(\text{poly}\log(\ell/\epsilon))$. We note that the phase factors in the eigenstate filtering procedure only depend on ${\widetilde{\Delta}}$ and $\ell$, and therefore can be reused for different matrices once they are obtained on a classical computer. *Acknowledgements:* This work was partially supported by the Department of Energy under Grant No. DE-SC0017867, the Quantum Algorithm Teams Program under Grant No. DE-AC02-05CH11231, the Google Quantum Research Award (L.L.), and by the Air Force Office of Scientific Research under award number FA9550-18-1-0095 (L.L. and Y.T.). We thank Dong An, Yulong Dong, and Nathan Wiebe for helpful discussions. **Supplemental Materials:\ Solving quantum linear system problem with near-optimal complexity** Block encoding {#sec:apdx_block_encoding} ============== When discussing the number of queries to an oracle $O$, we do not distinguish between $O$ and its controlled version. The asymptotic notations $\mathcal{O}$, $\Omega$ are used for the limits $\kappa\rightarrow\infty$ and $\epsilon\rightarrow 0$. We use ${\widetilde{\mathcal{O}}}$ to mean $\mathcal{O}$ multiplied by a poly-logarithmic factor. Sometimes we do not distinguish between the different ways of measuring error, e.g. 
in terms of fidelity or 2-norm distance of density matrices, since the query complexity is logarithmic in the error defined in both ways. Floating-point arithmetic is assumed to be exact for conciseness. If floating-point error is taken into account this will only lead to a logarithmic multiplicative overhead in the number of primitive gates, and a logarithmic additive overhead in the number of qubits needed. The technique of block-encoding has been recently discussed extensively [@GilyenSuLowEtAl2019; @LowChuang2019]. Here we discuss how to construct block-encoding for $H-\lambda I$ which is used in eigenstate filtering, and $Q_b$, $H_0$, and $H_1$ which are used in QLSP and in particular the Hamiltonian simulation of AQC. We first introduce a simple technique we need to use repeatedly. Given $U_A$, an $(\alpha,a,0)$-block-encoding of $A$ where $\alpha>0$, we want to construct a block encoding of $A+cI$ for some $c\in{\mathbb{C}}$. This is in fact a special case of the linear combination of unitaries (LCU) technique introduced in [@ChildsKothariSomma2017]. Let $$Q=\frac{1}{\sqrt{\alpha+|c|}}\left( \begin{array}{cc} \sqrt{|c|} & -\sqrt{\alpha} \\ \sqrt{\alpha} & \sqrt{|c|} \end{array} \right)$$ and $\ket{q}=Q\ket{0}$. Since $(\bra{0^m}\otimes I) U_A (\ket{0^m}\otimes I) = A/\alpha$, we have $$(\bra{q}\bra{0^m}\otimes I) (\ket{0}\bra{0}\otimes e^{i\theta}I + \ket{1}\bra{1}\otimes U_A) (\ket{q}\ket{0^m}\otimes I) = \frac{1}{\alpha+|c|}(A+cI),$$ where $\theta = \mathrm{arg} (c)$. Therefore Fig. \[fig:circuit\_shift\] gives an $(\alpha+|c|,a+1,0)$-block-encoding of $e^{-i\theta}(A+cI)$. ![Quantum circuit for block-encoding of $e^{-i\theta}(A+cI)$, where $c=e^{i\theta}|c|$. $R_{-\theta}$ is a phase shift gate. 
The three registers are the ancilla qubit for $Q$ and $\ket{q}$, the ancilla register of $U_A$, and the main register, respectively.[]{data-label="fig:circuit_shift"}](plot_shift_block_encoding){width="40.00000%"} Therefore we may construct an $(\alpha+|\lambda|,m+1,0)$-block-encoding of $H-\lambda I$. We remark that here we do not need the phase shift gate since $\lambda \in {\mathbb{R}}$. This is at the same time a $(1,m+1,0)$-block-encoding of ${\widetilde{H}}=(H-\lambda I)/(\alpha+|\lambda|)$. Now we construct a block-encoding of $Q_b=I-\ket{b}\bra{b}$ with $\ket{b}=O_B\ket{0}$. Let $S_0=I-2\ket{0}\bra{0}$ be the reflection operator about the hyperplane orthogonal to $\ket{0}$. Then $S_b := O_B S_0 O_B^\dagger = I-2\ket{b}\bra{b}$ is the reflection about the hyperplane orthogonal to $\ket{b}$. Note that $Q_b = (S_b+I)/2$. Therefore we can use the technique illustrated in Fig. \[fig:circuit\_shift\] to construct a $(1,1,0)$-block-encoding of $Q_b$. Here $\ket{q}=\ket{+}=\frac{1}{\sqrt{2}} (\ket{0}+\ket{1})$. Since $H_0 = \sigma_x \otimes Q_b$, we naturally obtain a $(1,1,0)$-block-encoding of $H_0$. We denote this block-encoding as $U_{H_0}$. For the block-encoding of $H_1$, first note that $$H_1 = \left( \begin{array}{cc} I & 0 \\ 0 & Q_b \end{array} \right) \left( \begin{array}{cc} 0 & A \\ A & 0 \end{array} \right) \left( \begin{array}{cc} I & 0 \\ 0 & Q_b \end{array} \right).$$ From the block-encoding of $Q_b$, we can construct the block-encoding of controlled-$Q_b$ by replacing all gates with their controlled counterparts. The block matrix in the middle is $\sigma_x \otimes A$. For a $d$-sparse matrix $A$, we have a $(d,n+2,0)$-block-encoding of $A$, and therefore we obtain a block-encoding of $\sigma_x \otimes A$. Then we can use the result for the product of block-encoded matrices [@GilyenSuLowEtAl2019 Lemma 30] to obtain a $(d,n+4,0)$-block-encoding of $H_1$, denoted as $U_{H_1}$. 
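The identity behind Fig. \[fig:circuit\_shift\] can be verified with a small dense example. Below, $U_A$ is taken to be the standard unitary dilation of $A/\alpha$ (one valid $(\alpha,1,0)$-block-encoding), $c$ is chosen real and positive so the phase gate is trivial, and the projected circuit is checked against $(A+cI)/(\alpha+|c|)$. This is a classical matrix-level sketch, not a circuit implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
M = rng.standard_normal((N, N))
A = (M + M.T) / 2                         # Hermitian A
alpha = np.linalg.norm(A, 2)

# Unitary dilation U_A = [[A/a, S], [S, -A/a]] with S = sqrt(I - (A/a)^2):
# an (alpha, 1, 0)-block-encoding of A.
w, V = np.linalg.eigh(A / alpha)
S = V @ np.diag(np.sqrt(np.clip(1 - w**2, 0, None))) @ V.T
UA = np.block([[A / alpha, S], [S, -A / alpha]])

c = 0.7                                   # real c > 0, so theta = 0
q = np.array([np.sqrt(c), np.sqrt(alpha)]) / np.sqrt(alpha + c)   # |q> = Q|0>

# W = |0><0| (x) I + |1><1| (x) U_A  (controlled-U_A; phase gate omitted).
W = np.kron(np.diag([1.0, 0.0]), np.eye(2 * N)) + np.kron(np.diag([0.0, 1.0]), UA)

# Project onto <q| on the LCU ancilla and <0| on the dilation ancilla.
lvec = np.kron(q, np.array([1.0, 0.0]))
L = np.kron(lvec.reshape(4, 1), np.eye(N))
print(np.allclose(UA @ UA.T, np.eye(2 * N)))                        # True: U_A unitary
print(np.allclose(L.T @ W @ L, (A + c * np.eye(N)) / (alpha + c)))  # True: encodes A + cI
```

The projected matrix reproduces $q_0^2 I + q_1^2 (A/\alpha) = (cI + A)/(\alpha+c)$, exactly the linear combination prepared by $\ket{q}$.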
Gate-based implementation of time-optimal adiabatic quantum computing {#sec:apdx_aqc} ===================================================================== Consider the adiabatic evolution $$\frac{1}{T}{\imath}\partial_s \left|\psi_T(s)\right> = H(f(s))\left|\psi_T(s)\right>, \quad \ket{\psi_T(0)}=\ket{0}\ket{b},$$ where $H(f)=(1-f)H_0+fH_1$ for $H_0$ and $H_1$ defined in (\[eq:ham\_qlsp\]). It is proved in [@AnLin2019] that the gap between $0$ and the rest of the eigenvalues of $H(f)$ is lower bounded by $1-f+f/\kappa$. With this bound it is proved that, in order to get an $\epsilon$-approximate solution of the QLSP for a positive definite $A$, we need to run for time $\mathcal{O}(\kappa/\epsilon)$ using the optimal scheduling [@AnLin2019 Theorem 1]. In order to carry out AQC efficiently using a gate-based implementation, we use the recently developed time-dependent Hamiltonian simulation method based on the truncated Dyson series introduced in [@LowWiebe2018]. In Hamiltonian simulation, several types of input models for the Hamiltonian are in use. Hamiltonians can be input as a linear combination of unitaries [@BerryChildsCleveEtAl2015], using its sparsity structure [@AharonovTaShma2003; @LowChuang2017], or using its block-encoding [@LowChuang2019; @LowWiebe2018]. For a time-dependent Hamiltonian, Low and Wiebe designed an input model based on block-encoding, named HAM-T [@LowWiebe2018 Definition 2], as a block-encoding of $\sum_s\ket{s}\bra{s}\otimes H(s)$ where $s$ is a time step and $H(s)$ is the Hamiltonian at this time step. In the gate-based implementation of the time-optimal AQC, we construct HAM-T as in Fig. \[fig:circuit\_ham\_t\]. We need to use the block-encodings $U_{H_0}$ and $U_{H_1}$ introduced in the previous section. We denote by $n_0$ and $n_1$ the numbers of ancilla qubits used in the two block-encodings. We know that $n_0=1$ and $n_1=n+4$. 
Our construction of HAM-T satisfies $$(\bra{s} \bra{0^{l+1+n_0}} \otimes I \otimes \bra{0^{n_1+1}}) \text{HAM-T} (\ket{s} \ket{0^{l+1+n_0}} \otimes I \otimes \ket{0^{n_1+1}}) = H(f(s))/d,$$ for any $s\in \mathcal{S}=\{j/2^l:j=0,1,\ldots,2^l-1\}$. ![Quantum circuit for HAM-T. The registers from top to bottom are: (1) input register for $s$ (2) register for storing $f(s)$ (3) register for a control qubit (4) ancilla register for $U_{H_0}$ (5) main register for input state $\ket{\phi}$ (6) ancilla register for $U_{H_1}$ (7) register for changing the normalizing factor from $\alpha(s)$ to $d$. []{data-label="fig:circuit_ham_t"}](ham_t_circuit){width="60.00000%"} Inside HAM-T we also need the unitary $$U_{f}\ket{s}\ket{z} = \ket{s}\ket{z\oplus f(s)}$$ to compute the scheduling function needed in the time-optimal AQC, and the unitaries $$\begin{aligned} V_1 &= \sum_{s\in\mathcal{S}} \ket{s}\bra{s}\otimes\frac{1}{\sqrt{1-s+ds}}\left( \begin{array}{cc} \sqrt{1-s} & -\sqrt{ds} \\ \sqrt{ds} & \sqrt{1-s} \end{array} \right) \\ V_2 &= \sum_{s\in\mathcal{S}} \ket{s}\bra{s}\otimes\left( \begin{array}{cc} \frac{\alpha(s)}{d} & -\sqrt{1-\left(\frac{\alpha(s)}{d}\right)^2} \\ \sqrt{1-\left(\frac{\alpha(s)}{d}\right)^2} & \frac{\alpha(s)}{d} \end{array} \right), \\ \end{aligned}$$ where $\alpha(s) = 1-s+ds$. Here $V_1$ is used for preparing the linear combination $(1-f(s))U_{H_0}+f(s)U_{H_1}$. Without $V_2$ the circuit would be a $(\alpha(s),l+n_0+n_1+2,0)$-block-encoding of $\sum_s \ket{s}\bra{s}\otimes H(s)$, but with $V_2$ it becomes a $(d,l+n_0+n_1+2,0)$-block-encoding, so that the normalizing factor is time-independent, as is required by the input model in [@LowWiebe2018]. For the AQC with positive definite $A$ we have $n_0=1$ and $n_1=n+4$. For the indefinite case we have $n_0=2$ and $n_1=n+4$. Following Corollary 4 of [@LowWiebe2018], we may analyze the different components of the cost of the Hamiltonian simulation of AQC. 
For the time evolution from $s=0$ to $s=1$, HAM-T is a $(dT,l+n_0+n_1+2,0)$-block-encoding of $\sum_s \ket{s}\bra{s}\otimes TH(s)$. With the scheduling function given in [@AnLin2019] we have $\|TH(s)\|_2=\mathcal{O}(Td)$ and $\|\frac{\mathrm{d}(TH(s))}{\mathrm{d}s}\|_2=\mathcal{O}(dT\kappa^{p-1})$. We choose $p=1.5$ and by Theorem 1 of [@AnLin2019] we have $T=\mathcal{O}(\kappa)$. We only need to simulate up to constant precision, and therefore we can set $l=\mathcal{O}(\log(d\kappa))$. The costs are then 1. Queries to HAM-T: $\mathcal{O}\left(d\kappa\frac{\log(d\kappa)}{\log\log(d\kappa)}\right)$, 2. Qubits: $\mathcal{O}(n+\log(d\kappa))$, 3. Primitive gates: $\mathcal{O}\left(d\kappa(n+\log(d\kappa))\frac{\log(d\kappa)}{\log\log(d\kappa)}\right)$. The matrix dilation method ========================== In order to extend the time-optimal AQC method to Hermitian indefinite matrices, we follow [@AnLin2019 Theorem 2], where $H_0$ and $H_1$ are given by $$\begin{aligned} H_0 &= \sigma_{+}\otimes [(\sigma_z \otimes I_N)Q_{+,b}] + \sigma_{-}\otimes [Q_{+,b}(\sigma_z \otimes I_N)], \\ H_1 &= \sigma_{+}\otimes [(\sigma_z \otimes A)Q_{+,b}] + \sigma_{-}\otimes [Q_{+,b}(\sigma_z \otimes A)]. \end{aligned} \label{eqn:Hdilation}$$ Here $\sigma_{\pm}=(\sigma_x \pm i\sigma_{y})/2$ and $Q_{+,b}=I_{2N} - \ket{+}\ket{b}\bra{+}\bra{b}$. The dimension of the dilated matrices $H_0,H_1$ is $4N$. The lower bound for the gap of $H(f)$ then becomes $\sqrt{(1-f)^2+f^2/\kappa^2}$ [@SubasiSommaOrsucci2019]. The initial state is $\ket{0}\ket{-}\ket{b}$ and the goal is to obtain $\ket{0}\ket{+}\ket{x}$. After running the AQC we can remove the second qubit by measuring it in the $\{\ket{+},\ket{-}\}$ basis and accepting the result corresponding to $\ket{+}$. The resulting query complexity remains unchanged. We remark that the matrix dilation here is only needed for AQC. The eigenstate filtering procedure can still be applied to the original matrix of dimension $2N$. 
For a general matrix, we may first consider an extended linear system. Define the adjoint QLSP solution as $\ket{y}=(A^{\dagger})^{-1}\ket{b}/{\lVert (A^{\dagger})^{-1}\ket{b}\rVert}_2$, and consider an extended QLSP $\mathfrak{A}\ket{\mathfrak{x}} = \ket{\mathfrak{b}}$ of dimension $2N$, where $$\mathfrak{A} = \sigma_+ \otimes A + \sigma_- \otimes A^{\dagger} =\left(\begin{array}{cc} 0 & A \\ A^\dagger & 0 \end{array}\right), \quad \ket{\mathfrak{b}} = \ket{+,b}.$$ Here $\mathfrak{A}$ is a Hermitian matrix of dimension $2N$, with condition number $\kappa$ and $\|\mathfrak{A}\|_2 = 1$, and $\ket{\mathfrak{x}} := \frac{1}{\sqrt{2}}(\ket{1,x}+\ket{0,y})$ solves the extended QLSP. Therefore the time-optimal AQC can be applied to the Hermitian matrix $\mathfrak{A}$ to prepare an $\epsilon$-approximation of $x$ and $y$ simultaneously. The dimension of the corresponding $H_0,H_1$ matrices is $8N$. Again the matrix dilation method used in Eq. (\[eqn:Hdilation\]) is not needed for the eigenstate filtering step. Optimal Chebyshev filtering polynomial {#sec:apdx_optimal_poly} ====================================== In this section we prove Lemma \[lem:minimax\_poly\]. We define $$Q_\ell(x;\Delta) = T_{\ell}\left(-1+2\frac{x^2-\Delta^2}{1-\Delta^2}\right),$$ so that $R_{\ell}(x;\Delta)=Q_{\ell}(x;\Delta)/Q_{\ell}(0;\Delta)$. We need the following lemma: \[lem:minimax\_poly\_2\] For any $p(x)\in\mathbb{P}_{2\ell}[x]$ satisfying $|p(x)|\leq1$ for all $x\in\mathcal{D}_{\Delta}$, we have $|Q_{\ell}(x;\Delta)|\geq|p(x)|$ for all $x\notin\mathcal{D}_{\Delta}$. We prove this by contradiction. Suppose there exists $q(x)\in\mathbb{P}_{2\ell}[x]$ such that $|q(x)|\leq1$ for all $x\in\mathcal{D}_\Delta$ and there exists $y\notin\mathcal{D}_\Delta$ such that $|q(y)|>|Q_{\ell}(y;\Delta)|$. Letting $h(x)=Q_{\ell}(x;\Delta)-q(x)\frac{Q_{\ell}(y;\Delta)}{q(y)}$, we show that $h(x)$ has at least $2\ell+1$ distinct zeros. 
First note that there exist $-1=y_{1}<y_{2}<\cdots<y_{\ell+1}=1$ such that $|T_{\ell}(y_{j})|=1$, and $T_{\ell}(y_{j})T_{\ell}(y_{j+1})=-1$. Therefore there exist $\Delta=x_{1}<x_{2}<\cdots<x_{\ell+1}=1$ such that $|Q_{\ell}(\pm x_{j};\Delta)|=1$, and $Q_{\ell}(x_{j};\Delta)Q_{\ell}(x_{j+1};\Delta)=-1$. In other words, $Q_{\ell}(\cdot;\Delta)$ maps each $(x_{j},x_{j+1})$ and $(-x_{j+1},-x_{j})$ to $(-1,1)$, and the mapping is bijective for each interval. Because $|\frac{Q_{\ell}(y;\Delta)}{q(y)}|<1$, there exist $z_{j},w_{j}\in(x_{j},x_{j+1})$ for each $j$ such that $h(z_{j})=h(-w_{j})=0$. Therefore $\{z_{j}\}$ and $\{-w_{j}\}$ give us $2\ell$ distinct zeros. Another zero can be found at $y$, since $h(y)=Q_{\ell}(y;\Delta)-Q_{\ell}(y;\Delta)=0$. Therefore there are $2\ell+1$ distinct zeros. However, $h(x)$ is of degree at most $2\ell$, so $h(x)\equiv0$. This is clearly impossible since $h(1)=Q_{\ell}(1;\Delta)-q(1)\frac{Q_{\ell}(y;\Delta)}{q(y)}=1-q(1)\frac{Q_{\ell}(y;\Delta)}{q(y)}>0$. Therefore, for any $y\notin \mathcal{D}_{\Delta}$, $R_\ell(\cdot;\Delta)$ solves the minimax problem $$\underset{\substack{p(x)\in\mathbb{P}_{2\ell}[x] \\ p(y)=R_\ell(y;\Delta)}}{\mathrm{minimize}} \max_{x\in\mathcal{D}_{\Delta}}|p(x)|.$$ This implies (i) of Lemma \[lem:minimax\_poly\]. To prove (ii), we need to use the following lemma: Let $T_{\ell}(x)$ be the $\ell$-th Chebyshev polynomial, then $$T_{\ell}(1+\delta)\geq\frac{1}{2}e^{\ell\sqrt{\delta}}$$ for $0\leq\delta\leq3-2\sqrt{2}$. The Chebyshev polynomial can be rewritten as $T_{\ell}(x)=\frac{1}{2}(z^{\ell}+\frac{1}{z^{\ell}})$ for $x=\frac{1}{2}(z+\frac{1}{z})$. Let $x=1+\delta$, then $z=1+\delta\pm\sqrt{2\delta+\delta^{2}}$. The choice of $\pm$ does not change the value of $x$, so we choose $z=1+\delta+\sqrt{2\delta+\delta^{2}}\geq1+\sqrt{2\delta}$. Since $\log(1+\sqrt{2\delta})\geq\sqrt{2\delta}-\delta\geq\sqrt{\delta}$ for $0\leq\delta\leq 3-2\sqrt{2}$, we have $z^{\ell}\geq e^{\ell\sqrt{\delta}}$.
Thus $T_{\ell}(x)\geq\frac{1}{2}e^{\ell\sqrt{\delta}}$. We use this lemma to prove (ii). We have $|T_\ell(-1+2\frac{-\Delta^2}{1-\Delta^2})|\geq T_\ell(1+2\Delta^2)$, and when $\Delta^2\leq 1/12$ we have $2\Delta^2\leq 1/6< 3-2\sqrt{2}$. Thus by the above lemma, applied with $\delta=2\Delta^2$, we have $|T_\ell(-1+2\frac{-\Delta^2}{1-\Delta^2})|\geq \frac{1}{2}e^{\sqrt{2}\ell\Delta}$. Since $|T_\ell(-1+2\frac{x^2-\Delta^2}{1-\Delta^2})|\leq 1$ for $x\in\mathcal{D}_\Delta$, we obtain the inequality in (ii). (iii) follows straightforwardly from the monotonicity of Chebyshev polynomials outside of $[-1,1]$.
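The bounds just proved are easy to check numerically. The sketch below evaluates $R_\ell(x;\Delta)$ via NumPy's Chebyshev basis, assuming $\mathcal{D}_\Delta=[-1,-\Delta]\cup[\Delta,1]$ (the usual choice in this construction); since $R_\ell$ is even in $x$, sampling $[\Delta,1]$ suffices:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def R(x, ell, Delta):
    """Filtering polynomial R_ell(x; Delta) = Q_ell(x; Delta) / Q_ell(0; Delta),
    with Q_ell(x; Delta) = T_ell(-1 + 2(x^2 - Delta^2)/(1 - Delta^2))."""
    T = Chebyshev.basis(ell)
    Q = lambda t: T(-1 + 2 * (t**2 - Delta**2) / (1 - Delta**2))
    return Q(x) / Q(0)

ell, Delta = 30, 0.1
xs = np.linspace(Delta, 1, 2000)  # half of D_Delta; R is even in x

# Bound (ii): |R_ell| <= 2 exp(-sqrt(2) ell Delta) on D_Delta
bound = 2 * np.exp(-np.sqrt(2) * ell * Delta)
assert np.max(np.abs(R(xs, ell, Delta))) <= bound

# Normalization: R_ell(0; Delta) = 1 by construction
assert abs(R(0.0, ell, Delta) - 1) < 1e-12
```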
--- abstract: 'Supermassive black holes are commonly found in the centers of galaxies and evolve with their hosts. Supermassive binary black holes (SMBBHs) are thus expected to exist in close galaxy pairs; however, none has been unequivocally detected. The square kilometre array (SKA) is a multi-purpose radio telescope with a collecting area approaching 1 million square metres, with great potential for detecting nanohertz gravitational waves (GWs). In this paper, we quantify the GW detectability by SKA for a realistic SMBBH population using the pulsar timing array (PTA) technique and, for the first time, quantify its impact on revealing SMBBH evolution with redshift. With only $\sim20$ pulsars, a much smaller requirement than in previous work, the SKA PTA is expected to obtain a detection within about 5 years of operation and to achieve a detection rate of more than 100 SMBBHs/yr after about 10 years. Although beyond the scope of this paper, we must acknowledge that the presence of persistent red noise will reduce the number of detections expected here. It is thus imperative to understand and mitigate red noise in PTA data. The GW signatures from a few well-known SMBBH candidates, such as OJ 287, 3C 66B, NGC 5548 and Ark 120, will be detected given the currently best-known parameters of each system. Within 30 years of operation, about 60 individual SMBBH detections with $z<0.05$ and more than $10^4$ with $z<1$ are expected. The detection rate drops precipitously beyond $z=1$. The substantial number of expected detections and their discernible evolution with redshift by SKA PTA will make SKA a significant tool for studying SMBBHs.'
author:
- Yi Feng
- Di Li
- Zheng Zheng
- 'Chao-Wei Tsai'
bibliography:
- 'haha.bib'
title: |
    Supermassive Binary Black Hole Evolution\
    can be traced by a small SKA Pulsar Timing Array
---

Introduction
============

GW observatories such as advanced LIGO (aLIGO) [@2015CQGra..32g4001L] and Virgo [@2015CQGra..32b4001A] have reached remarkable sensitivities in the high frequency band ($\sim 10-1000\,\rm Hz$). The detection of GWs from compact binary mergers has become a regular occurrence. In the nanohertz frequency band, pulsar timing arrays (PTAs), in which a collection of millisecond pulsars is monitored, can be used to detect and study GWs [@Detweiler1979; @Foster_Backer90]. The primary single sources of GWs in the nanohertz band are believed to be inspiralling SMBBHs, formed in the aftermath of galaxy mergers [@1980Natur.287..307B]. The detection of SMBBH systems can yield direct information about the masses and spins of the black holes [@2012PhRvL.109h1104M]. These single GW sources can also be studied by coordinated electromagnetic observations, thus enabling a multi-messenger view of the black hole systems [@2013CQGra..30v4013B; @2019BAAS...51c.490K]. PTA-based GW astronomy is expected to progress significantly with new, high-sensitivity radio telescopes such as FAST [@2011IJMPD..20..989N] and SKA. The pulsar timing effort by SKA will significantly enhance the sensitivity of the current PTA networks by providing a larger number of newly discovered millisecond pulsars (MSPs) and better timing precision on the existing and new MSPs. Upon its completion, SKA will be the most sensitive telescope of the next generation for detecting nanohertz GWs. Thus, it is important to estimate the GW detection abilities of SKA.
Wang & Mohanty [@2017PhRvL.118o1104W] carried out a pioneering quantitative assessment of the GW detectability for individual SMBBH searches with a simulated SKAPTA containing $10^3$ pulsars and found the SKAPTA to significantly increase the maximum distance of detectable GWs emitted by SMBBHs. They also considered two realistic candidates, namely, PG 1302-102 and PSO J334+01. However, no previous work has provided actual predictions of the number of SMBBHs to be detected by SKAPTA and their evolution through redshift. Here we provide one of the first quantitative estimates of the GW detectability for a realistic individual SMBBH population. We tackle the problem in three steps. First, a SKAPTA detection curve, namely the minimum detectable GW strain amplitude as a function of GW frequency, is calculated. A SKAPTA containing 20 randomly placed pulsars with timing root mean square (RMS) of 20ns is used to compute the detection curves, using the $\mathcal{F}_{e}$ statistic, the logarithm of the likelihood ratio maximized over the signal parameters, developed by [@1998PhRvD..58f3001J; @2012ApJ...756..175E]. Here we assume circular binary orbits for the SMBBHs. Second, an expected SMBBH population in the PTA frequency band is constructed based on the probability of a galaxy hosting a SMBBH in the PTA band and the population of host galaxies. The SMBBH population is estimated following the descriptions of [@2017NatAs...1..886M; @chiara_mingarelli_2017_838712] (hereafter M17). M17 used data from local galaxies in the 2 Micron All Sky Survey (2MASS) [@2006AJ....131.1163S] Extended Source Catalog [@2000AJ....119.2498J] and galaxy merger rates from Illustris [@2015MNRAS.449...49R; @2014MNRAS.445..175G]. Due to the expected great improvement brought about by SKAPTA, we greatly extend the redshift range in M17 using the galaxy stellar mass functions from [@2003ApJS..149..289B; @2013ApJ...777...18M] (GSMF).
Third, we extract detectable GW sources within each redshift bin according to the SKAPTA detection curve calculated above and the expected SMBBH population estimated in step two. Our recipe allows, for the first time, a quantitative prediction of the properties of the SMBBH population to be detected by SKAPTA, such as the number of detections per redshift bin and detection rates per year.

Simulated SKAPTA
================

With broad frequency bands and massive collecting areas, the radiometer noise for some of the brightest pulsars can be reduced from the current 100ns level down to below 10ns by large radio telescopes like FAST and SKA. Jitter noise, which is assumed to be caused by the fluctuation in the shape and arrival time of individual pulses, will limit the timing precision achievable over data spans of a few years for these large facilities [@2019RAA....19...20H]. To estimate the timing RMS, we assume that the timing RMS is white and consists solely of radiometer noise and jitter noise. We defer the influence of red noise (e.g. GWB, dispersion measure variation noise, intrinsic timing noise [@2019RAA....19...20H]) to later discussions. Table 4 in [@2018PhRvD..98j2002P] lists the white noise for 10 Parkes Pulsar Timing Array (PPTA) [@2013PASA...30...17M] pulsars, with a harmonic mean of 20ns for an integration time of 30 minutes with SKA Phase 1 [@2015aska.confE..37J]. Within the SKA sky, the International Pulsar Timing Array (IPTA) [@2013CQGra..30v4010M] source list contains more millisecond pulsars than the PPTA does. For example, IPTA PSR J0023+0923, PSR J0030+0451, and PSR J0931$-$1902 are not in the PPTA line-up. Considering that the full SKA will be more sensitive than SKA Phase 1, which will further improve the timing RMS, we conservatively assume 20 pulsars with a harmonic mean of 20ns. We thus construct a mock SKAPTA data set containing 20 millisecond pulsars randomly distributed in the sky.
Noise realizations are drawn from an independent and identically distributed (i.i.d.) $\mathcal{N}(0,\sigma^2)$ (zero mean white Gaussian noise) process, with $\sigma = 20$ ns for all pulsars. We choose the cadence to be 20 $\rm{yr}^{-1}$ in order to match the typical cadence used in current PTAs.

SMBBH population in the PTA band
================================

The SMBBH population emitting in the PTA frequency band depends on two quantities:

1. [*The probability of a galaxy hosting a SMBBH in the PTA band*]{}. We exploit the approach put forward by M17 (see M17 for details). The probability is the product of the probability that a SMBBH is in the PTA band and the probability that a galaxy hosts a SMBBH. The probability that a SMBBH is in the PTA band is $t_\mathrm{obs}/T_\mathrm{life}$, where $t_\mathrm{obs} = \frac{5}{256} c^5 (\pi f)^{-8/3} {[G M_c(1+z)]}^{-5/3}$ is the time to coalescence of the binary in the observed frame [@1964PhRv..136.1224P]. Here $f=1\,\mathrm{nHz}$, and the chirp mass is $M_c = \left[ q/(1+q)^2 \right]^{3/5} M_\bullet$ with black hole mass ratio $q$ drawn from a log-uniform distribution in $[0.25,1]$. The SMBBH total mass $M_\bullet$ is estimated using the $M_\bullet-M_{\mathrm{bulge}}$ empirical scaling relation from [@2013ApJ...764..184M]. As discussed in M17, only massive early-type galaxies are considered in this simulation; we therefore take the galaxy stellar mass $M_*$ as $M_{\mathrm{bulge}}$ for $M_\bullet$ estimates. $T_\mathrm{life}$ is the effective lifetime of the binary, which is the sum of the dynamical friction ($t_\mathrm{df}$) [@BinneyTremaine] and stellar hardening ($t_\mathrm{sh}$) [@2015MNRAS.454L..66S] timescales.
The probability that a galaxy hosts a SMBBH is computed using the Illustris [@2015MNRAS.449...49R; @2014MNRAS.445..175G] cumulative galaxy-galaxy merger rate, $\mathrm{d}N/\mathrm{d}t(M_*, z', \mu_*)$, where $\mu_*$ is the stellar mass ratio of the galaxies, taken at the beginning of the binary evolution at redshift $z'$, which is calculated at a lookback time of $T_\mathrm{life}+T_\mathrm{lookback}$ with Planck cosmological parameters; here $T_\mathrm{lookback}$ is the lookback time at $z$. To summarize, the probability of a galaxy hosting a SMBBH in the PTA band is: $$\label{eq:prob} p= \frac{t_\mathrm{obs}}{T_\mathrm{life}}\int_{0.25}^{1} \mathrm{d}\mu_* \frac{\mathrm{d}N}{\mathrm{d}t}(M_*, z', \mu_*) T_\mathrm{life}\, ,$$

2. [*The population of host galaxies*]{}. As discussed in M17, we only consider massive early-type galaxies with galaxy stellar mass greater than $10^{11}\,\rm M_{\odot}$. In addition, we impose a cut on the galaxy population at galaxy stellar mass $M_*<10^{12}\,\rm M_{\odot}$ because such massive galaxies are rare. For a galaxy population with known redshift $z$ and stellar mass $M_*$, we can calculate the probability of a selected galaxy hosting a SMBBH in the PTA band using Eq. \[eq:prob\], and thus determine the population of SMBBHs emitting in the PTA frequency band. For galaxies at $z<0.05$, we use the galaxy catalog in M17, which selected galaxies at $z<0.05$ from the 2 Micron All Sky Survey (2MASS) [@2006AJ....131.1163S] Extended Source Catalog [@2000AJ....119.2498J]. To approximate a mass selection for more distant galaxies, we use the GSMF given in Table 4 of [@2003ApJS..149..289B] and Table 1 of [@2013ApJ...777...18M] for redshift intervals between $z$ = 0.05, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, and 4.0. For each $z$ interval at $z>0.05$, we randomly choose $10^6$ host galaxies with $z$ drawn from a uniform distribution and stellar mass drawn from the corresponding GSMF.
A sample size of $10^6$ galaxies is used to ensure stable results in the Monte Carlo process. The estimated number of galaxies in each redshift bin is then scaled from the simulated sample of $10^6$ galaxies to the expected galaxy population. Using Eq. \[eq:prob\], we calculate the probability of a selected galaxy hosting a SMBBH in the PTA band. We then generate a random number from $U$\[0, 1\]; if the random number is smaller than the probability, the galaxy is considered to host a true SMBBH. Finally, the inclination- and polarization-averaged strain and GW frequency of the SMBBH are calculated as described in M17 using $$\label{eq:h} h=\sqrt{\frac{32}{5}}\frac{(G{M}_c)^{5/3}}{c^4 D_c} \left[\pi f(1+z)\right]^{2/3}\, ,$$ $$\label{eq:freq} f=\pi^{-1}\left[\frac{G{M}_c(1+z)}{c^3}\right]^{-5/8}\left[ \frac{256}{5}(t_\mathrm{obs}-t)\right]^{-3/8} \, ,$$ where $D_c$ is the comoving distance of the binary and $(t_\mathrm{obs}-t)$ is drawn from a uniform distribution in $[100~\rm{yr}, 26~\rm{Myr}]$. ![Detection curves for PPTA\_DR1 (blue line), SKA\_5yr (orange line), SKA\_10yr (green line), SKA\_30yr (red line) respectively. The seven black dots represent the SMBBH candidates discussed in [@2019arXiv190703460F]. The yellow (best case) and red (pessimistic case) dots ('+' for $z<2$ samples and diamond symbol for $z>2$) represent CRTS samples. The diamond symbols in the black box represent 4 unreliable candidates (two overlapped, see the text for details). The blue dots represent 87 single GW sources (one realization of sky) from M17.[]{data-label="pop1"}](pop1.png){width="50.00000%"}

Results
=======

We use the $\mathcal{F}_{e}$ statistic with a False Alarm Probability of $10^{-3}$ to calculate detection curves for SKAPTA with total time spans of 5, 10, and 30yr, respectively. The results are shown in Figure \[pop1\].
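For orientation, the per-source quantities of Eqs. \[eq:h\] and the time to coalescence can be evaluated directly. A minimal sketch in SI units (with the factors of $G$ and $c$ written out explicitly; the source parameters below are illustrative only, not drawn from the simulation):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8            # SI units
MSUN, MPC, YR = 1.989e30, 3.086e22, 3.156e7

def t_obs(f, Mc, z):
    """Observed-frame time to coalescence (s) at observed GW frequency f (Hz)."""
    return (5 / 256) * c**5 * (np.pi * f) ** (-8 / 3) * (G * Mc * (1 + z)) ** (-5 / 3)

def strain(f, Mc, Dc, z):
    """Inclination- and polarization-averaged strain, Eq. (eq:h)."""
    return np.sqrt(32 / 5) * (G * Mc) ** (5 / 3) / (c**4 * Dc) * (np.pi * f * (1 + z)) ** (2 / 3)

# Illustrative (hypothetical) source: Mc = 1e9 Msun at Dc = 100 Mpc, f = 10 nHz
Mc, Dc, z, f = 1e9 * MSUN, 100 * MPC, 0.023, 1e-8
print(f"h = {strain(f, Mc, Dc, z):.2e}")
print(f"t_coal = {t_obs(f, Mc, z) / YR:.2e} yr")
```

For these parameters the strain comes out at the few $\times10^{-15}$ level, comfortably within the region probed by the detection curves of Figure \[pop1\].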
The SMBBH candidates such as 3C 66B [@Iguchi2010], OJ 287 [@Valtonen2016], NGC 5548 [@2016ApJ...822....4L] and Ark 120 [@2019ApJS..241...33L] (black dots in Figure 1) discussed in [@2019arXiv190703460F] can be detected if they are true SMBBHs. The other SMBBH candidates, Mrk 231 [@2015ApJ...809..117Y], PG 1302-102 [@2015Natur.518...74G] and NGC 4151 [@2012ApJ...759..118B], will be hard to detect even in the SKA era because their small chirp masses imply weaker expected GW signatures. [@2015MNRAS.453.1562G] proposed 111 SMBBH candidates by inspecting the light curves of $\sim$250k quasars identified in the Catalina Real-time Transient Survey (CRTS, [@2009ApJ...696..870D]). We plot the 98 candidates (hereafter CRTS samples) with reported black hole mass estimates for the optimistic case assuming mass ratio $q = 1$ (yellow dots in Figure 1) and the pessimistic case assuming $q = 0.1$ (red dots in Figure 1). At least 10 single sources in the CRTS samples can be detected, assuming these are all true sources, even in the pessimistic case. Unfortunately, the CRTS samples are likely contaminated by several false positives [@2018ApJ...856...42S], which we discuss later. The blue dots represent 87 single GW sources (one realization of sky) from M17. Of the M17 sources, 0, 2, 12, and 64 can be detected with PPTA\_DR1, SKA\_5yr, SKA\_10yr, and SKA\_30yr, respectively.\ ![Detection curves for SKA\_5yr (magenta line), SKA\_10yr (brown line), SKA\_20yr (cyan line), SKA\_30yr (purple line) respectively. Black, orange, gray, blue, red dots represent SMBBH population hosted by $10^6$ galaxies from $0.0<z<0.2$, $0.2<z<0.5$, $0.5<z<1.0$, $1.0<z<1.5$, $1.5<z<2.0$ respectively (for $0.0<z<0.2$, it is 87 SMBBHs from M17 combined with SMBBH population hosted by $10^6$ galaxies from $0.05<z<0.2$). Dashdot, dotted, solid, dashed number density contours represent 50% of the peak value for $0.0<z<0.2$, $0.2<z<0.5$, $0.5<z<1.0$, $1.0<z<1.5$ respectively.
For $z>2.0$, there are no detectable SMBBHs in the PTA band, so the SMBBH population at $z>2.0$ is not shown in the figure. The red curve crossing the contour centers shows the evolution trend of the SMBBH population from low to high redshift.[]{data-label="pop2"}](pop2_trend.png){width="50.00000%"} Figure \[pop2\] shows the SMBBH population in the PTA band. In Figure \[pop2\], we also plot the 5, 10, 20, 30yr detection curves to show the potential detectability of these single GW sources. At $z\lesssim0.5$, the strain of the population shifts to lower values with increasing redshift because of the larger distances. At $z\gtrsim0.5$, the population shifts to lower frequencies because these SMBBHs do not have enough time to evolve to closer separations, with higher-redshift SMBBHs having less time to evolve. Given the detection curves and the SMBBH population in the PTA band, we determine the detection number of single GW sources as a function of SKAPTA time span. A source is considered detected if it lies above the detection curve. The total number of host galaxies is the product of the comoving volume and the integral of the corresponding GSMF from $10^{11}\,\rm M_{\odot}$ to $10^{12}\,\rm M_{\odot}$, and is listed in the last row of Table \[tab:number\]. The number of SMBBHs in the PTA band is listed in the second-to-last row. Figure \[fig:number\] shows the detection number for different redshift ranges as a function of SKAPTA time span. We list the detection numbers in different redshift ranges for SKAPTA time spans of 5, 10, 15, 20, and 30yr in Table \[tab:number\]. More than $10^4$ sources can be detected by SKAPTA after 30 years of operation. The primary detectable sources come from galaxies at $z<1$. Detectable sources at $z>1.5$ are rare as their larger distances dampen the apparent GW signal. Some of them also have not had enough time to evolve to close separations, and thus have observed orbital periods that fall outside of the PTA frequency band.
The dramatic drop of GW detections at $z>1.5$ is consistent with [@2013MNRAS.433L...1S; @2015MNRAS.447.2772R]. Moreover, no hosts at $z>2.0$ are expected to have detectable SMBBHs in our simulation. If this result is true, it implies that CRTS samples at $z>2.0$ may not be real SMBBHs. For example, Table 1 in [@2018ApJ...856...42S] lists the top 10 candidates in the CRTS sample providing the largest contribution to the expected GWB, which are likely to be false positives. In this sample, four of them (i.e., HS 0926+3608, SDSS J140704.43+273556.6, SDSS J131706.19+271416.7, SDSS J134855.27-032141.4) have redshifts $z>2.0$ and are shown using diamond symbols inside the black box in Figure \[pop1\]. Combining our results and [@2018ApJ...856...42S], these 4 candidates are likely unreliable.\

  Time span (yr)   $0.0<z<0.05$   $0.05<z<0.2$      $0.2<z<0.5$       $0.5<z<1.0$       $1.0<z<1.5$       $1.5<z<2.0$
  ---------------- -------------- ----------------- ----------------- ----------------- ----------------- -----------------
  5                2              0                 0                 0                 0                 0
  10               12             81                26                0                 0                 0
  15               35             593               221               102               25                13
  20               57             3017              2067              884               250               13
  30               64             6726              17329             11356             3925              39
  total SMBBHs     87             $1.0\times10^5$   $1.0\times10^6$   $3.4\times10^6$   $8.4\times10^5$   $2.6\times10^2$
  total hosts      5119           $1.6\times10^6$   $1.3\times10^7$   $3.4\times10^7$   $2.5\times10^7$   $1.3\times10^7$

\[tab:number\] ![Detection number of single GW sources versus time span of SKAPTA. The red, blue, orange, magenta, yellow, green, black colors represent number of sources at $0.0<z<0.05$, $0.05<z<0.2$, $0.2<z<0.5$, $0.5<z<1.0$, $1.0<z<1.5$, $1.5<z<2.0$ and total number respectively.[]{data-label="fig:number"}](Dnum.png){width="50.00000%"} We calculate the detection rate as a function of SKAPTA time span by comparing the detection numbers of two consecutive years. The results are shown in Figure \[fig:rate\]. Unlike GW sources in the LIGO frequency band, the detection rate of single GW sources with SKAPTA is not uniform in time.
The detection rate increases slowly at early times, but accelerates to more than 100 detections/yr after about 10yr. This is a dividend of the PTA-based search, which accumulates SNR with time, highlighting the importance of a long time span for a PTA campaign. ![Detection rate of single GW sources versus time span of SKAPTA. The red, blue, orange, magenta, yellow, green, black colors represent number of sources at $0.0<z<0.05$, $0.05<z<0.2$, $0.2<z<0.5$, $0.5<z<1.0$, $1.0<z<1.5$, $1.5<z<2.0$ and total rate respectively.[]{data-label="fig:rate"}](Drate.png){width="50.00000%"}

Discussion
==========

Red noise was ignored under the assumption that it can be mitigated to a low level or that special techniques can remove its influence on single GW detection. If the timing residuals have a strong red noise component emulating an unresolved GWB with amplitude of $4\times 10^{-16}$, the detection number decreases drastically, as shown in Figure \[fig:red\]. This is because PTAs are less sensitive to high-frequency ($> 1\,\rm{yr}^{-1}$) SMBBHs, and strong red noise in the timing residuals greatly diminishes the chance of detecting lower-frequency SMBBHs. The SKAPTA GW detections depend on how well the GWB can be subtracted, which should be further studied. The methodology of GWB subtraction is also important to mitigate various other types of red noise which could limit the detectability of SMBBHs with SKAPTA. ![Same as Fig. \[pop2\], but timing residuals have a strong red noise component of $4\times 10^{-16}$, emulating the amplitude of an unresolved GWB.[]{data-label="fig:red"}](pop2_red.png){width="50.00000%"}

Conclusion
==========

The unprecedented sensitivity of SKA facilitates the realization of a significant PTA with a small number of pulsars. The SKAPTA in this work consists of only 20 pulsars versus $\sim1000$ in previous works.
Such a simple SKAPTA can still detect a large number of SMBBHs and enable studies of their evolution through redshift up to $z = 1-2$ assuming red noise in the PTA data can be mitigated. The presence of red noise will reduce the number of detectable individual sources. Nevertheless, SKAPTA will be a revolutionary instrument for studying SMBBH evolution. We thank the anonymous referee for very useful comments on the manuscript. This work is supported by National Natural Science Foundation of China (NSFC) programs, No. 11988101, 11725313, 11690024, 11703036, by the CAS International Partnership Program No.114-A11KYSB20160008, by the CAS Strategic Priority Research Program No. XDB23000000 and the National Key R&D Program of China (No. 2017YFA0402600).
--- author: - 'P. von Paris' - 'P. Gratier' - 'P. Bordé' - 'F. Selsis' bibliography: - 'literatur\_idex.bib' title: 'Inferring heat recirculation and albedo for exoplanetary atmospheres: Comparing optical phase curves and secondary eclipse data' --- Introduction ============ ![image](phase_orbit){width="250pt"} ![image](phase_illu){width="250pt"} ![image](planet_map){width="250pt"}\ In recent years, exoplanetary science has developed from a detection-centered astronomical science towards a characterization-centered planetary science. For basic properties of exoplanetary atmospheres such as albedo and heat redistribution from dayside to nightside, observational constraints are now available for a growing number of planets. Radiative transfer and atmospheric modeling by, e.g., @sudarsky2000 predicts very low optical albedos for cloud-free hot Jupiters because of strong absorption by alkali metals. When silicate clouds form high enough in the atmosphere, however, optical albedos could be significantly higher [@sudarsky2000]. The low optical albedos measured for a few hot Jupiters seem to confirm this (e.g., [@rowe2006; @rowe2008]), whereas high albedos inferred for other planets seem to indicate the potential presence of clouds (e.g., [@quintana2013], [@demory2013inhomogen], [@esteves2013]). Theoretically, the most basic circulation patterns of hot Jupiters are relatively well understood (e.g., [@showman2002], [@madhusudhan2014], [@heng2015]). Tidally locked planets in close-in orbits always present the same hemisphere to the host star. In atmospheric models, the resulting strong irradiation contrasts and slow planetary rotations lead to inefficient recirculation throughout the IR photosphere of the planet (around pressures of 0.1-10mbar). The incident stellar energy is re-radiated before being circulated to the nightside. 
Consequently, strong temperature contrasts between dayside and nightside develop (e.g., [@showman2002], [@parmentier2013]). When orbital distance or rotation period increases, atmospheric modeling predicts a transition to a circulation regime with much stronger recirculation and less pronounced day-night temperature differences (e.g., [@showman2015]). Furthermore, in most 3D atmosphere models of hot Jupiters, a strong equatorial zonal jet appears, which then results in a displacement of the hottest point away from the substellar point (e.g., [@showman2011], [@perez2013], [@showman2015]). Observed thermal phase curves in the (near-)infrared (IR) of a few hot Jupiters seem to confirm these predictions (e.g., [@knutson2007daynight_189733], [@knutson2009daynight_189733], [@stevenson2014]). Assembling many different observations for a large number of hot Jupiters, @cowan2011hot and @schwartz2015 point out a possibly emerging trend, in line with theoretical predictions (e.g., [@perez2013]). Strongly irradiated planets show very inefficient recirculation, whereas for less irradiated planets, the whole range of recirculation is possible. @cowan2011hot and @schwartz2015 use published secondary eclipse data (i.e., an estimate of the planetary dayside flux) and thermal phase curves to perform their homogeneous analysis. For a few planets, optical phase curves are available, obtained with the CoRoT and Kepler satellites. These are not taken into account in @schwartz2015. However, for hot planets, even optical phase curves offer some constraints on thermal radiation, and therefore albedo and heat recirculation. So far, published studies of optical phase curves could not be used for such an analysis. This is, in each case, because of one of the following three reasons. Firstly, some models do not take thermal emission into account (e.g., [@mazeh2010], [@barclay2012], [@esteves2013; @esteves2015]).
In such cases, attributing an observed phase curve asymmetry to an offset of the thermal hotspot is physically inconsistent (e.g., [@esteves2015]). Secondly, a few models (e.g., [@snellen2009]) only treat thermal emission and do not include scattered light. Therefore, inferring constraints on the scattering properties of the planet (as done in [@snellen2009]) is equally inconsistent. Thirdly, models that take both thermal and scattering components into account (e.g., [@mislis2012], [@faigler2013], [@placek2014], [@faigler2015]) do not use appropriate phase functions for these components. They assume, for instance, that thermal and scattered light have identical phase functions, which is inconsistent when explicitly assuming Lambertian scattering and blackbody thermal radiation (see below, Appendix \[verification\_model\] and eqs. \[cellflux\] and \[thermalflux\]). Therefore, we present a new model with a physically consistent treatment of both components. We apply our model to three hot Jupiters with well-characterized IR and optical measurements, namely CoRoT-1b, TrES-2b and HAT-P-7b, to compare to results from secondary eclipse analysis. In Sect. \[formodel\] and Sect. \[invmodel\], we describe the physical forward model, the inverse model and the Markov Chain Monte Carlo (MCMC) approach used in this work. Section \[setup\] describes the model setup and planetary scenarios, Sect. \[results\] presents the results, Sect. \[discuss\] a discussion, and we conclude with Sect. \[summary\]. The Appendices contain the fitting results, the MCMC output, a short model verification, as well as a discussion about Rayleigh scattering, non-Lambertian phase functions and stellar parameter uncertainties.

Forward model {#formodel}
=============

Geometry
--------

The basic geometry set-up of the star-planet system is shown in Fig. \[illu\]. We adopt a coordinate system where the substellar point is fixed throughout the calculations at 0$^{\circ}$ latitude and 180$^{\circ}$ longitude.
Therefore, by definition, local latitude $\vartheta$ is 0$^{\circ}$ at the equator (range -90$^{\circ}<\vartheta<90^{\circ}$) and local longitude $\varphi$ is 0$^{\circ}$ at local midnight (range $0^{\circ}<\varphi<360^{\circ}$). For planets with zero obliquity and a planetary rotation synchronized with the orbital period, our choice of coordinate system would establish a stationary map of the planet. For close-in giant planets, this is a reasonable assumption (but see, e.g., [@arras2010] or [@rauscher2014] for a discussion). The planetary “surface” is divided into cells of 2.5$^{\circ}$x2.5$^{\circ}$ size (72 cells in latitude, 144 in longitude). We define here “surface” as the optical photosphere, i.e., where the planetary atmosphere becomes optically thick to visible radiation. It is generally identified with the surface of the sphere with radius $R_P$. The number of cells is reasonable for computational purposes and still allows for smooth light curves without noticeable effects of discretization. For each cell, the local surface normal **n**, stellar and observer directions (**s** and **o**, respectively) are calculated. A cell contributes to the reflected light if both **n**$\cdot$**s**$\geqslant0$ (i.e., dayside) and **n**$\cdot$**o**$\geqslant0$ (i.e., visible to the observer) are satisfied. The nightside is defined with **n**$\cdot$**s**$\leqslant0$, and correspondingly, the part of the nightside visible by the observer with **n**$\cdot$**s**$\leqslant0$ and **n**$\cdot$**o**$\geqslant0$ , simultaneously. The subobserver latitude $\theta$ is given by the inclination $i$ of the orbital plane with respect to the observer: $$\label{obslon} \theta=90^{\circ}-i,$$ i.e., an edge-on orbit has $i$=$90^{\circ}$, and a face-on orbit has $i$=0$^{\circ}$. The subobserver longitude $\phi$ as a function of time $t$ is given by the orbital phase $\alpha$ ($\alpha=0$ and $t=0$ at primary transit, or equivalently, inferior conjunction, see Fig. 
\[illu\]): $$\label{obslat} \phi(t)=360^{\circ} - \alpha(t),$$ with $$\label{orbphase} \alpha(t)=T(t)+\omega_P,$$ where $T$ is the true anomaly and $\omega_P$ the argument of periastron. The true anomaly is calculated from the eccentric anomaly $E$: $$\label{true} \tan\left(\frac{T}{2}\right)=\sqrt{\frac{1+e}{1-e}}\tan\left(\frac{E}{2}\right),$$ with $e$ eccentricity. $E$ is determined by numerically solving Kepler’s equation: $$\label{keplereq} E-e\sin(E)=M,$$ with $M$ mean anomaly. The solution is obtained with a publicly available Fortran procedure[^1], based on a method described in @meeus1991. The mean anomaly $M$ is given by $$\label{meananomaly} M=2\pi\cdot \left(x-\lfloor x \rfloor \right),$$ where $x=\frac{t-t_{\rm{peri}}}{P_{\rm{orb}}}$, $P_{\rm{orb}}$ is the planetary orbital period, $t_{\rm{peri}}$ is the time of periastron passage and $\lfloor x \rfloor$ represents the floor function, i.e., the greatest integer less than or equal to $x$. [^2] Reflected light --------------- Stellar light incident on the planetary dayside is partly reflected back into space and towards the observer. The amount of reflected light reaching the observer depends, for instance, on the scattering properties of the planet (e.g., Rayleigh scattering produces a different phase function than Mie scattering, see, e.g., [@madhusudhan2012] for a review). In this work, in the absence of any reliable information on the scattering properties, the Lambert approximation of diffuse scattering is used. In Appendix \[rayappend\] we discuss the possible influence of Rayleigh scattering and other phase functions on the phase curve. 
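The orbital-phase chain $M \to E \to T$ of Eqs. \[meananomaly\], \[keplereq\] and \[true\] above can be implemented with a few Newton iterations. A minimal sketch (the authors use a publicly available Fortran routine based on @meeus1991; the implementation below is an illustration, not their code):

```python
import numpy as np

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation E - e sin E = M (Eq. keplereq) by Newton iteration."""
    E = M if e < 0.8 else np.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def true_anomaly(E, e):
    """Invert tan(T/2) = sqrt((1+e)/(1-e)) tan(E/2) (Eq. true), quadrant-safe."""
    return 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))

M, e = 2.0, 0.3
E = eccentric_anomaly(M, e)
T = true_anomaly(E, e)
```

Using `arctan2` avoids the branch ambiguity of inverting the tangent directly; for $e=0$ the chain reduces to $T=E=M$, as expected for a circular orbit.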
In the Lambert approximation, for each cell contributing to the observed flux ($\mathbf{n} \cdot \mathbf{o} \geqslant 0$), the flux (in Wm$^{-2}$) received by the observer from this cell, $F_{\rm{r,o,cell}}$, is given by $$\label{cellflux} F_{\rm{r,o,cell}}=\cos z_s \cdot F_{\rm{\ast,p}} \cdot \frac{A_S}{\pi} \cdot \cos z_o \cdot \frac{\Delta S}{d^2} ,$$ with $z_s$ the local stellar zenith angle, $z_o$ the local observer zenith angle, $F_{\rm{\ast,p}}$ the stellar flux at the planet’s orbit (in W m$^{-2}$), $\Delta S$ the surface element of the cell on the planet (in sr m$^2$), $d$ the observer-planet distance and $A_S$ the (potentially wavelength-dependent) planetary scattering albedo. $A_S$ is assumed to be constant with time, hence neglecting any time-dependent processes such as cloud formation etc. Note that, since we use the Lambertian approximation in eq. \[cellflux\], the scattering albedo is related to the geometric albedo $A_G$ in a simple manner: $$\label{ageo} A_G=\frac{2}{3}A_S.$$ The surface element $\Delta S$ of the cell, as seen from the planet’s center, is calculated as $$\label{surfelement} \Delta S=\Delta \Omega \cdot R_p^2,$$ with $R_P$ the planetary radius and $\Delta \Omega$ the solid angle (in sr) of the cell (angular extent 2.5$^{\circ}\times$2.5$^{\circ}$, see above). The stellar flux at the planet’s orbit, $F_{\rm{\ast,p}}$, is calculated as follows: $$\label{stellarflux} F_{\rm{\ast,p}}=\pi \left(\frac{R_{\ast}}{r}\right)^2 \int_{\lambda_{\rm{low}}}^{\lambda_{\rm{high}}}I_{\rm{\ast,s}}q_I(\lambda)d\lambda,$$ where $I_{\ast,s}$ is a stellar model intensity (in Wm$^{-2}$sr$^{-1}$$\mu$m$^{-1}$) at the stellar surface, $q_I$ is the instrumental filter function, $\lambda_{\rm{low}}$ and $\lambda_{\rm{high}}$ define the wavelength interval of the bandpass, $r$ the star-planet distance and $R_{\ast}$ is the stellar radius[^3]. The numerical integration for eq.
\[stellarflux\] is done with a standard trapezoidal integration scheme using a spectral resolution of roughly 1nm. Stellar intensities $I_{\ast,s}$ are obtained from the ATLAS stellar atmosphere grid [@castelli2004][^4]. Instrumental filter functions $q_I$ are taken, for CoRoT-1, from @snellen2009 and, for TrES-2 and HAT-P-7, from the Kepler Handbook[^5]. The time-dependent planet-star distance $r$ is calculated with $$\label{distance} r(t)=a\frac{1-e^2}{1+e\cos(T(t))},$$ with $a$ the semi-major axis. The total reflected flux $F_{\rm{r,o}}$ received by the observer is the sum over all contributing cells: $$\label{refflux} F_{\rm{r,o}}=\sum_{ (\mathbf{n} \cdot \mathbf{s} \geqslant 0 )\wedge (\mathbf{n} \cdot \mathbf{o} \geqslant 0 )} F_{\rm{r,o,cell}}.$$ Emitted light ------------- In addition to reflected starlight, the planet also emits thermal radiation that can contribute to the overall phase curve. In optical phase curves, this contribution would be negligible for long-period (and consequently colder) planets. However, for close-in hot Jupiters, the thermal component can become comparable to (or even dominate) the reflected component of the phase curve. In this work, we make the specific assumption of two hemispheres (nightside and dayside) which radiate as blackbodies with uniform temperatures $T_{\rm{night}}$ and $T_{\rm{day}}$, respectively. A widely used approach (for example, [@snellen2009], [@alonso2009], [@mislis2012], [@schwartz2015]) is to relate these temperatures to the Bond albedo $A_B$ and the efficiency of heat re-distribution $\epsilon$.
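Returning to the reflected-light component, the cell sum of eqs. \[cellflux\] and \[refflux\] can be sketched with NumPy as follows (the substellar point sits at longitude 180$^{\circ}$ in the convention above; the function and grid details are ours, not the authors' implementation):

```python
import numpy as np

def reflected_flux(A_S, F_star_p, R_p, d, theta_obs, phi_obs):
    """Lambert-reflected flux summed over a 2.5 x 2.5 degree cell grid.

    theta_obs, phi_obs: subobserver latitude/longitude in radians;
    the substellar point is fixed at latitude 0, longitude pi (local noon).
    """
    dlat = dlon = np.radians(2.5)
    lat = np.radians(np.arange(-88.75, 90.0, 2.5))    # 72 cell centres
    lon = np.radians(np.arange(1.25, 360.0, 2.5))     # 144 cell centres
    lat2, lon2 = np.meshgrid(lat, lon, indexing="ij")

    def unit(th, ph):                                 # unit vector on the sphere
        return np.array([np.cos(th) * np.cos(ph),
                         np.cos(th) * np.sin(ph),
                         np.sin(th)])

    n = unit(lat2, lon2)                              # local surface normals
    s = unit(0.0, np.pi)[:, None, None]               # direction to the star
    o = unit(theta_obs, phi_obs)[:, None, None]       # direction to the observer

    cos_zs = np.clip((n * s).sum(axis=0), 0.0, None)  # dayside: n.s >= 0
    cos_zo = np.clip((n * o).sum(axis=0), 0.0, None)  # visible: n.o >= 0
    dS = np.cos(lat2) * dlat * dlon * R_p**2          # cell surface element
    return np.sum(cos_zs * F_star_p * (A_S / np.pi) * cos_zo * dS / d**2)
```

At full phase (observer at the substellar point) the sum converges to $\frac{2}{3}A_S\,F_{\ast,p}\,(R_p/d)^2 = A_G\,F_{\ast,p}\,(R_p/d)^2$, consistent with eq. \[ageo\].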
Temperatures are then calculated, based on @cowan2011hot: $$\label{daytemp} T_{\rm{day}}^4=T_{\ast}^4\frac{R_{\ast}^2}{a^2(1-e^2)^{0.5}}\cdot (1-A_B)(\frac{2}{3}-\frac{5}{12}\epsilon)=T_0^4\cdot (\frac{2}{3}-\frac{5}{12}\epsilon),$$ and $$\label{nighttemp} T_{\rm{night}}^4=T_{\ast}^4\frac{R_{\ast}^2}{a^2(1-e^2)^{0.5}}\cdot (1-A_B)\frac{\epsilon}{4}=T_0^4\cdot \frac{\epsilon}{4},$$ where $T_{\ast}$ is the stellar effective temperature and the additional factor $(1-e^2)^{0.5}$ accounts for the mean flux received over an orbit [@williams2002]. The parameter $\epsilon$ is a re-parameterization of the geometrical redistribution factor $f$. @spiegel2010 define $f$ via the apparent dayside flux $F_d$ (i.e., close to secondary eclipse) and the total planetary luminosity $L_P$: $$\label{f_def} L_P=\frac{1}{f}\cdot \pi R_P^2 \sigma T_{\rm{day}}^4=\frac{1}{f}F_d.$$ For perfect heat recirculation, the entire planet is at a uniform temperature, and thus $f=\frac{1}{4}$, because both dayside and nightside hemispheres contribute to the planetary luminosity. At zero heat recirculation, the nightside emission is zero, and each point on the dayside hemisphere emits with its local radiative equilibrium temperature. Hence, as shown by @spiegel2010 and @cowan2011hot, $f=\frac{2}{3}$. Assuming radiative equilibrium, meaning that the total stellar flux $F_S$ intercepted by the planet equals its luminosity $L_P$, and approximating the star as a blackbody, one can re-arrange eq. \[f\_def\] to yield $$\label{f_re} f=\frac{F_d}{L_P}=\frac{F_d}{F_S}=\frac{T_{\rm{day}}^4}{T_0^4}.$$ For $\epsilon$ to vary between 0 and 1 (corresponding to no recirculation and perfect redistribution, respectively) then simply requires a linear transformation of $f$, which leads directly to eq. \[daytemp\]. Physically, $\epsilon$ is determined by the atmospheric circulation and the strength of winds and jets that transport heat away from the illuminated hemisphere. 
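A direct transcription of eqs. \[daytemp\] and \[nighttemp\] (a minimal sketch; the function name and unit choices are ours):

```python
def hemisphere_temperatures(T_star, R_star, a, e, A_B, eps):
    """Uniform day/night temperatures from Bond albedo A_B and heat
    redistribution efficiency eps (Cowan & Agol parameterization).
    R_star and a in the same length unit; temperatures in K.
    """
    T0_4 = T_star**4 * (R_star / a)**2 / (1.0 - e**2)**0.5 * (1.0 - A_B)
    T_day = (T0_4 * (2.0 / 3.0 - 5.0 / 12.0 * eps))**0.25
    T_night = (T0_4 * eps / 4.0)**0.25
    return T_day, T_night
```

For $\epsilon=1$ both hemispheres reach the same temperature ($\frac{2}{3}-\frac{5}{12}=\frac{1}{4}=\frac{\epsilon}{4}$); for $\epsilon=0$ the nightside temperature vanishes.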
For strongly irradiated planets, the radiative timescale is expected to be much shorter than the advective (dynamical) timescale; therefore, large day-night temperature contrasts and small values of $\epsilon$ are expected. For less irradiated planets, a range of circulation regimes is possible (e.g., [@showman2002], [@showman2011], [@perez2013], [@heng2015], [@showman2015]). Another option in the model is to retain $T_{\rm{night}}$ and $T_{\rm{day}}$ as free parameters for the inverse modeling, an approach used by, e.g., @placek2014. Since blackbody radiation is isotropic, the thermal flux $F_{\rm{t,o,cell}}$ received by the observer from each cell is given by $$\label{thermalflux} F_{\rm{t,o,cell}}(T_c)=\int_{\lambda_{\rm{low}}}^{\lambda_{\rm{high}}}B(T_c,\lambda)q_I(\lambda)d\lambda \cdot \cos z_o \cdot \frac{\Delta S}{d^2},$$ with $B(T_c,\lambda)$ the blackbody intensity at the cell’s temperature $T_c$ and $T_c=T_{\rm{night}}$ or $T_c=T_{\rm{day}}$, depending on the location of the cell. Again, the total emitted flux $F_{\rm{t,o}}$ is the sum over all cells which are visible to the observer (i.e., **n**$\cdot$**o**$\geqslant0$): $$\label{emmflux} F_{\rm{t,o}}=\sum_{ (\mathbf{n} \cdot \mathbf{s} \geqslant 0 )\wedge (\mathbf{n} \cdot \mathbf{o} \geqslant 0 )} F_{\rm{t,o,cell}}(T_{\rm{day}})+\sum_{ (\mathbf{n} \cdot \mathbf{s} \leqslant 0 )\wedge (\mathbf{n} \cdot \mathbf{o} \geqslant 0 )} F_{\rm{t,o,cell}}(T_{\rm{night}}).$$ Note that in our approach, thermal emission produces a different phase curve behavior compared to Lambert scattering of stellar radiation because of the additional factor $\cos z_s$ in eq. \[cellflux\] compared to eq. \[thermalflux\]. Therefore it is potentially possible to disentangle reflected from emitted light. This effect is illustrated in Fig. \[planckeffect\]. ![Effect of thermal emission on the phase curve.
See text for discussion.[]{data-label="planckeffect"}](show "fig:"){width="250pt"}\ Figure \[planckeffect\] shows the reflective and the thermal contribution for a hot Jupiter planet in a 2-day orbit around a Sun-like star (500-1,000nm bandpass, $A_B$=$A_S$=0.15, $\epsilon$=0). In this example, the planetary mass is set to zero for illustration purposes. The reflected component is shown as a dashed line, the thermal component of the phase curve as a dotted line. Also shown are the combined phase curve (solid line) and a purely reflective phase curve (dot-dashed line) that has the same maximum amplitude as the combined phase curve. Due to the additional angle dependence of the reflected light ($\cos z_s$ in eq. \[cellflux\]), the reflected phase curve is steeper than the thermal one and shows a more pronounced peak towards phase $\alpha=\pi$. Based on this slope difference, if the orbital period is short enough and the photometric precision good enough, it is possible to determine dayside and nightside temperatures, and hence Bond albedo and heat redistribution, contrary to what is generally assumed in previous studies (e.g., [@esteves2013; @esteves2015], [@schwartz2015]) that did not incorporate thermal radiation consistently. Note, of course, that in reality a non-uniform distribution of temperatures (due to, e.g., chemical composition changes, [@agundez2012]) will complicate the interpretation of thermal phase curves. However, because of the currently limited data, assuming uniform hemispheres is justifiable. Ellipsoidal variations and Doppler boosting ------------------------------------------- In addition to the planetary contributions to the phase curve, our model also considers two modulations of the stellar light induced by the planet. These are the ellipsoidal variations (e.g., [@pfahl2008]) and the Doppler boosting (e.g., [@loeb2003]). In short, ellipsoidal variations are due to the tidal deformation of the star by the orbiting planet.
As a consequence, the star presents a varying cross section to the observer; hence, the luminosity changes periodically, with a period half the orbital period of the companion. Doppler boosting is a consequence of the Doppler shift of the stellar spectrum induced by the stellar reflex motion. As the star orbits the barycentre of the star-planet system, the emitted stellar light shifts in wavelength, and thus periodically more or less light is emitted in the instrument bandpass. We use the formalism developed in @quintana2013 to calculate the respective contrasts: $$\label{ellips} C_{\rm{ell}}=-A_{\rm{ell}}\cdot \cos(2 \alpha),$$ $$\label{doppler} C_{\rm{dopp}}=A_{\rm{dopp}}\cdot \sin(\alpha),$$ where $A_{\rm{ell}}$, $A_{\rm{dopp}}$ are the amplitudes of the ellipsoidal and Doppler variations. Note that we do not fit for a potential phase lag between the tidal bulge on the star and the planet (as in, e.g., [@barclay2012]). Furthermore, we do not incorporate other harmonics of the orbital period into our ellipsoidal variation term (contrary to what was done, e.g., in [@esteves2013]). $A_{\rm{ell}}$ is given by (see also eq. 2 in [@quintana2013]): $$\label{ellipsamp} A_{\rm{ell}}=\alpha_{\rm{ell}}\frac{M_p}{M_{\ast}} \left(\frac{R_{\ast}}{a}\right)^3\sin^2(i),$$ with $M_{\ast}$, $M_p$ the stellar and planetary mass, respectively. $\alpha_{\rm{ell}}$ is a parameter determined by the stellar limb ($u$) and gravity ($g$) darkening: $$\label{darkening} \alpha_{\rm{ell}}=0.15\cdot \frac{(15+u)(1+g)}{3-u}.$$ The coefficients $u$ and $g$ depend on stellar characteristics such as effective temperature $T_{\ast}$, metallicity $\rm{[Fe}/\rm{H}]$ and surface gravity $\log g_s$ (determined from $M_{\ast}$ and $R_{\ast}$). As in @barclay2012, @quintana2013 or @esteves2013, we use pre-calculated tables for $u$ and $g$ to interpolate linearly in $T_{\ast}$, $\rm{[Fe}/\rm{H}]$ and $\log g_s$.
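Eqs. \[ellipsamp\] and \[darkening\] combine into a one-line amplitude once $u$ and $g$ have been interpolated (a sketch; the function name is ours):

```python
import math

def ellipsoidal_amplitude(M_p, M_star, R_star, a, incl, u, g):
    """Ellipsoidal-variation amplitude A_ell; incl in radians, masses in the
    same unit, R_star and a in the same length unit. u, g are the
    pre-interpolated limb- and gravity-darkening coefficients.
    """
    alpha_ell = 0.15 * (15.0 + u) * (1.0 + g) / (3.0 - u)
    return alpha_ell * (M_p / M_star) * (R_star / a)**3 * math.sin(incl)**2
```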
These tables are taken from model calculations presented in @claret2011 for the Kepler and CoRoT bandpasses. We adopt their coefficients obtained with a microturbulence velocity of 2kms$^{-1}$ with ATLAS stellar models and, in the case of $u$, fitted with a linear least squares approach. $A_{\rm{dopp}}$ is given by (see also eq. 5 in [@quintana2013]): $$\label{doppleramp} A_{\rm{dopp}}=(3-\alpha_{\rm{dopp}})\frac{K}{c},$$ where $c$ is the speed of light, $K$ the radial velocity semi-amplitude and $\alpha_{\rm{dopp}}$ a parameter which depends on the wavelength of observation $\lambda_{\rm{obs}}$ and the stellar effective temperature. As in @quintana2013, we use the approximate equations from @loeb2003 for $\alpha_{\rm{dopp}}$: $$\label{alphaloeb} \alpha_{\rm{dopp}}=\frac{e^x(3-x)-3}{e^x-1},$$ where $x=\frac{h\frac{c}{\lambda_{\rm{obs}}}}{k_BT_{\ast}}$ ($h$ Planck’s constant, $k_B$ Boltzmann’s constant). This approach of calculating $(3-\alpha_{\rm{dopp}})$ is somewhat different from the approach used in, e.g., @esteves2013 [@esteves2015]. However, comparing results for TrES-2b (3.71 in their work, 3.87 with our approach) and HAT-P-7b (3.41 to 3.59) indicates that both approaches yield values that agree to within 5%. Hence mass determinations are expected to be comparable. $K$ is determined as (with $G$ the gravitational constant) $$\label{velocityamp} K=\left(\frac{2\pi G}{P_{\rm{orb}}} \right)^{1/3} \frac{M_p}{M_{\ast}^{2/3}} \sin(i)\frac{1+e\cos \omega_P}{(1-e^2)^{0.5}}.$$ Asymmetries ----------- As demonstrated by, e.g., @demory2013inhomogen and @esteves2015, many exoplanets show asymmetries in their phase curves with respect to secondary eclipse. This means that the maximum amplitude is reached before (after) secondary eclipse, i.e., the maximum brightness is shifted eastwards (westwards) of the substellar point. In the case of a westwards shift, it has been interpreted as the presence of clouds that enhance the reflectivity of the “morning”[^6] side.
When the shift is eastwards, i.e., the “evening” side is brighter, previous studies attributed this to a shift in the hottest region of the atmosphere. Such a shift is associated with atmospheric circulation which transports heat away from the substellar point before it is re-radiated. Most 3D atmospheric models of hot Jupiters produce such an offset in thermal emission (e.g., [@showman2002], [@showman2015]), and IR phase curves seem to confirm the theoretical predictions (e.g., [@knutson2007daynight_189733]). This change in brightness can be accounted for in the model in two ways. First, for the reflected-light component of the phase curve, we implement a simple dark-bright model, similar to the approach chosen in @demory2013inhomogen. Part of the planet has a scattering albedo $A_S$ (see eq. \[cellflux\]), and part of the planet has a different scattering albedo $d_S\cdot A_S$, with $d_S$ being a free parameter such that $d_S\cdot A_S\leq1$. The extent in longitude of the latter part of the planet is controlled by two further parameters, namely $l_{\rm{start}}$ and $l_{\rm{end}}$. We do not consider any latitudinal variation of the scattering properties. ![Illustration of the asymmetric models. Left: Reflective dayside (red) with bright “morning” (green). Right: Thermal offset modeled as a shifted dayside (red). See text for discussion.[]{data-label="asym"}](asym_geometry_scatt "fig:"){width="120pt"} ![Illustration of the asymmetric models. Left: Reflective dayside (red) with bright “morning” (green). Right: Thermal offset modeled as a shifted dayside (red). See text for discussion.[]{data-label="asym"}](asym_geometry_therm "fig:"){width="120pt"}\ Second, for the thermal component of the phase curve, we consider a simple offset $\Theta_{\rm{d}}$ of the dayside such that the dayside has an extent in longitude between $\Theta_{\rm{d}}$ and $180^{\circ}+\Theta_{\rm{d}}$. Figure \[asym\] illustrates the asymmetric models. 
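Both asymmetry prescriptions reduce to simple longitude masks over the cell grid (a minimal sketch; the vectorized helpers and their names are ours):

```python
import numpy as np

def cell_albedo(lon_deg, A_S, d_S, l_start, l_end):
    """Dark-bright reflectance model: cells with longitude in
    [l_start, l_end] (degrees) get albedo d_S * A_S, all others A_S.
    The constraint d_S * A_S <= 1 is left to the priors.
    """
    lon = np.asarray(lon_deg, dtype=float)
    return np.where((lon >= l_start) & (lon <= l_end), d_S * A_S, A_S)

def is_dayside(lon_deg, theta_d):
    """Shifted thermal dayside: longitudes in [theta_d, 180 + theta_d]
    (degrees, wrapping at 360) count as dayside.
    """
    lon = np.mod(np.asarray(lon_deg, dtype=float) - theta_d, 360.0)
    return lon <= 180.0
```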
Phase curve ----------- The final time-dependent contrast $C$ between star and planet is then $$\label{contrastref} C(t)=\frac{F_R(t)+F_E(t)}{F_{\rm{\ast,o}}}+C_{\rm{ell}}+C_{\rm{dopp}}.$$ The stellar flux at the observer, $F_{\rm{\ast,o}}$, is calculated analogously to $F_{\rm{\ast,p}}$ (see eq. \[stellarflux\]): $$\label{staratobserver} F_{\rm{\ast,o}}=\pi \left(\frac{R_{\ast}}{d}\right)^2 \int_{\lambda_{\rm{low}}}^{\lambda_{\rm{high}}}I_{\rm{\ast,s}}q_I(\lambda)d\lambda.$$ Model parameter summary ----------------------- In total, our full physical model as described above contains up to 19 parameters, listed in Table \[parasummary\]. Group Parameters ----------------- ------------------------------------------------------------ Stellar (4) $T_{\ast}$, $R_{\ast}$, $M_{\ast}$, $\rm{[Fe}/\rm{H}]$ Orbital (4) $P_{\rm{orb}}$, $e$, $\omega_p$, $i$ Planetary (2) $R_p$, $M_p$ Atmospheric (5) $A_S$, $A_B$, $\epsilon$, $T_d$, $T_n$ Asymmetries (4) $d_S$, $l_{\rm{start}}$, $l_{\rm{end}}$, $\Theta_{\rm{d}}$ : Parameters of the forward model[]{data-label="parasummary"} These 19 parameters, however, are not independent. For instance, if $A_B$ and $\epsilon$ are fixed, $T_d$ and $T_n$ can be determined using eqs. \[daytemp\] and \[nighttemp\].
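Putting the pieces together, eq. \[contrastref\] with the modulations of eqs. \[ellips\] and \[doppler\] becomes (a sketch; argument names are ours):

```python
import math

def total_contrast(F_refl, F_therm, F_star_o, A_ell, A_dopp, alpha):
    """Total star-planet contrast at orbital phase alpha (radians), from the
    reflected and thermal planetary fluxes, the stellar flux at the observer,
    and the ellipsoidal / Doppler amplitudes.
    """
    C_ell = -A_ell * math.cos(2.0 * alpha)
    C_dopp = A_dopp * math.sin(alpha)
    return (F_refl + F_therm) / F_star_o + C_ell + C_dopp
```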
Inverse model {#invmodel} ============= scenario parameters priors planets comments/constraints ----------------- ------------------------------------------------------------------------------------- --------------------------------------------------------------------- --------- ------------------------------------------------------ standard $\epsilon$, $A_S$, $M_P$ $\epsilon$, $A_S$ uniform in \[0,1\] C, T, H $A_B$=$A_S$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $M_P$ fixed for C standard + asy $\epsilon$, $A_S$, $M_P$, $d_S$, $l_{\rm{start}}$, $l_{\rm{end}}$ $\epsilon$, $A_S$ uniform in \[0,1\] T, H $A_B$=$A_S$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $d_S$ uniform in \[0,10\] $d_S\cdot A_S\leq 1$ $l_{\rm{start}}$, $l_{\rm{end}}$ uniform in \[$90$,$270$\]$^{\circ}$ $l_{\rm{start}} \leq l_{\rm{end}}$ standard + off $\epsilon$, $A_S$, $M_P$, $\Theta_{\rm{d}}$ $\epsilon$, $A_S$ uniform in \[0,1\] T, H $A_B$=$A_S$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $\Theta_{\rm{d}}$ uniform in \[0,$360$\]$^{\circ}$ standard + both $\epsilon$, $A_S$, $M_P$, $\Theta_{\rm{d}}$, $d_S$, $l_{\rm{start}}$, $l_{\rm{end}}$ $\epsilon$, $A_S$ uniform in \[0,1\] T, H $A_B$=$A_S$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $\Theta_{\rm{d}}$ uniform in \[0,$360$\]$^{\circ}$ $d_S$ uniform in \[0,10\] $d_S\cdot A_S\leq 1$ $l_{\rm{start}}$, $l_{\rm{end}}$ uniform in \[$90$,$270$\]$^{\circ}$ $l_{\rm{start}} \leq l_{\rm{end}}$ free A $\epsilon$, $A_B$, $A_S$, $M_P$ $A_S$, $A_B$, $\epsilon$ uniform in \[0,1\] C, T, H $a_V \cdot A_S < A_B < a_V\cdot A_S+(1-a_V)$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $M_P$ fixed for C free T $T_d$, $T_n$, $A_S$, $M_P$ $A_S$ uniform in \[0,1\] C, T, H $T_d$, $T_n$ uniform in \[500, 3000\] K $T_n \leq T_d$ $M_P$ uniform in \[0.1,10\] $M_{\rm{jup}}$ $M_P$ fixed for C no scattering $\epsilon$, $A_B$ $A_B$, $\epsilon$ uniform in \[0,1\] C $A_S$=0, @snellen2009 We use the Bayesian formalism to calculate posterior probability values $p(V_P | D)$ for the parameter vector
$V_P$ in the model, given a set $D$ of observations. $$\label{bayes} p(V_P | D)\propto p(D | V_P) \cdot p(V_P).$$ The likelihood $p(D | V_P)$ is calculated assuming independent measurements and Gaussian errors for the individual data points. The priors $p(V_P) $ are taken to be uninformative over the entire parameter range allowed (for example, uniform over \[0,1\] for albedo and heat redistribution). MCMC algorithm -------------- To sample the full parameter space, we adopt a Markov Chain Monte Carlo (MCMC) approach. In this work, we use the emcee python package developed by @foreman2013, implementing an algorithm described in @goodman2010. emcee uses multiple chains (in this work, 500-1,000) to sample the parameter space. The algorithm proposes, for each chain, new positions based on the position of the entire ensemble of chains. Compared to more traditional MCMC approaches, emcee converges quicker and is less likely to be dependent on initial conditions. Also, when using a high number of chains, the algorithm is less likely to become stuck in local minima since it is possible to eliminate chains from the ensemble (e.g., [@hou2012]). To ensure good convergence and avoid any contamination by initial conditions, the chains were run for 500-2,000 steps ($>$10-20 auto-correlation lengths for each parameter). The first few auto-correlation lengths were considered as burn-in and discarded for the calculation of parameter uncertainties. Convergence was checked by inspecting visually the evolution of the mean of the entire ensemble and calculating the Gelman-Rubin test. Initial positions were obtained with a random sample within the assumed prior to allow the sampler to start by exploring the entire parameter space. ![Trace plots of model parameters for the HAT-P-7b “standard + asy” model (see Table \[corot1mcmcsummary\]). Blue line traces the ensemble median, red lines correspond to the \[0.16,0.84\] percentiles (the dark red region in Fig. 
\[uncertain\_illu\] below).[]{data-label="asym_conv"}](asym_convergence "fig:"){width="250pt"}\ Uncertainty ranges are calculated by marginalizing over the posterior distribution, thinned by 60 steps, covering roughly 1-3 auto-correlation lengths of the particular parameter in question. We then determine 68% and 95% credibility regions as the \[0.16,0.84\] and \[0.03,0.97\] median-centered percentiles, respectively, of the cumulative probability distributions (CDF). If the parameter distribution were to be Gaussian, these credibility regions would correspond to the 1 and 2$\sigma$ uncertainties, respectively. Figure \[uncertain\_illu\] illustrates our method to determine credibility regions and best-fit parameters. Best-fit parameters are determined as the maximum a-posteriori (MAP) set of parameters, i.e., the walker with the highest a-posteriori probability at the last step of the algorithm. Note that the MAP does not correspond to the median of the CDF in most cases. Furthermore, depending on the nature of the likelihood surface sampled by the chains (degeneracies, slope, etc.), the MAP, as determined from our sample, can occasionally even lie outside the \[0.03,0.97\] percentile (see Tables \[corot1mcmcresults\]-\[hat7mcmcresults\], Figs. \[corot1triangle\_2\]-\[hat7triangle\_3\]). ![Example of determination of uncertainty ranges via the cumulative probability distribution: Scattering albedo of CoRoT-1b in the “standard” scenario (see text for further details). 68% credibility region in dark red, 95% credibility region in light red. Dashed lines indicate maximum a-posteriori (MAP) value.[]{data-label="uncertain_illu"}](cumulative_alb "fig:"){width="250pt"}\ Goodness-of-fit criteria and model comparison --------------------------------------------- We use three standard quantities (e.g., [@feigelson2012]) to evaluate the best-fitting models obtained from the MCMC model: $\chi^2$, the reduced $\chi^2_{\rm{red}}$ and the Bayesian Information Criterion (BIC). 
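The percentile-based credibility regions described above take only a few lines (a sketch; the function name and dictionary layout are ours):

```python
import numpy as np

def credibility_regions(samples):
    """Median-centered 68% and 95% credibility regions of a marginal
    posterior sample, from the [0.16, 0.84] and [0.03, 0.97] percentiles.
    """
    p3, p16, p50, p84, p97 = np.percentile(samples, [3, 16, 50, 84, 97])
    return {"median": p50, "68%": (p16, p84), "95%": (p3, p97)}
```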
These are evaluated for the MAP parameter set (not the median of the CDF). $\chi^2$, the sum of the weighted squared residuals, is defined as $$\label{chi2def} \chi^2=\sum_{\rm{j=1}}^{N_D}\left(\frac{M_{V_P,j}-D_j}{\sigma_j}\right)^2,$$ where $N_D$ is the number of data points, $\sigma_j$ the corresponding uncertainties and $M_{V_P}$ is the model prediction for the parameter vector $V_P$. The reduced $\chi^2_{\rm{red}}$ is generally calculated as $$\label{chi2reddef} \chi^2_{\rm{red}}=\frac{\chi^2}{N_D-N_P},$$ where $N_P$ is the number of parameters. In order to be considered a good fit, $\chi^2_{\rm{red}}$ should be of the order of unity. Note that by adding more parameters to the fitting model, the $\chi^2$ will decrease. Therefore, we use another criterion, the BIC (valid when $N_D\gg N_P$, which is the case for our calculations), defined as $$\label{bicdef} \rm{BIC}=\chi^2 + N_P \ln (N_D).$$ The model that minimizes the BIC is taken to be the preferred model. The BIC penalizes overly complex models when their complexity is not warranted by the data, and also allows direct model comparison. For instance, the model probability ratio $p_M$ (i.e., the probability that the model is preferred over another one) can be expressed as $$\label{bicdiff} p_M=e^{-\frac{\Delta \rm{BIC}}{2}},$$ where $\Delta \rm{BIC}=\rm{BIC}_{M1}-\rm{BIC}_{M0}$ is the difference between the model $M_1$ under consideration and the model $M_0$ that minimizes the BIC. The BIC is an approximation to the evidence (or marginal likelihood) $E_{\rm{model}}$ of a given model: $$\label{evidencedef} E_{\rm{model}}=\int p(D | V_P) \cdot p(V_P) dV_P.$$ The calculation of the integral in eq. \[evidencedef\] is in most cases computationally very expensive. In emcee, this integral is estimated using so-called thermodynamic integration, based on an algorithm proposed by @goggans2004.
However, the calculation is very time-consuming (up to $\sim$50 times longer than the MCMC sampling) and does not, in our cases, provide any additional qualitative information compared to the BIC. Therefore, we only report the BIC and base our discussions on eq. \[bicdiff\]. Model set-up {#setup} ============ Planets ------- ### CoRoT-1b CoRoT-1b is the first planet discovered from space [@barge2008] and the first planet with a detected optical phase curve [@snellen2009]. Secondary eclipses of CoRoT-1b have been detected in the optical [@alonso2009] and in the IR, both from the ground (e.g., [@rogers2009] and [@gillon2009]) and from space (e.g., [@deming2011]). These studies suggested that CoRoT-1b is a very dark planet, with a low albedo ($A_G \lesssim$0.1) and inefficient heat redistribution ($\epsilon \lesssim$0.2), leading to high dayside temperatures of the order of 2,300K. We use the binned CoRoT red-channel phase curve data presented by @snellen2009 (their Figure 1). All model parameters (stellar parameters, orbital period and inclination, planetary mass and radius) are taken from @barge2008, similarly to what was done in @snellen2009. The orbit is assumed to be circular. This is consistent with the analysis of secondary-eclipse timing (e.g., [@rogers2009], [@alonso2009], [@deming2011]), which constrains the eccentricity to values of $e \lesssim$0.03. Given the data quality of the optical phase curve, the results are not sensitive to eccentricity. When allowing $e$ and $\omega_P$ to be fitted (simulations not shown here), constraints on either $A_B$ or $\epsilon$ did not change appreciably compared to the circular case. ### TrES-2b TrES-2b is a transiting hot Jupiter [@odonovan2006] discovered by the TrES survey. It was the first known transiting exoplanet in the Kepler field, discovered prior to Kepler’s 2009 launch. Ground-based and Spitzer secondary eclipse photometry suggests moderately high temperatures around 1,500K (e.g., [@odonovan2010], [@croll2010]).
The optical phase curve has been detected in the Kepler data (e.g., [@kipping2011], [@barclay2012], [@esteves2013]). Results indicate a very low geometric albedo ($A_G \leq 0.02$) and a rather efficient day-night redistribution of energy ($\epsilon>0.5$), since dayside and nightside temperatures are quite similar (around 1,300-1,500K, [@esteves2013]). We use the binned Kepler phase curve data ([@barclay2012], their Figure 5). All model parameters (stellar parameters, orbital period and inclination, planetary radius) are taken from @barclay2012. The orbit is assumed to be circular, since secondary-eclipse timing provided constraints consistent with zero eccentricity (e.g., [@odonovan2010], [@croll2010]) and the optical phase curve analysis by @esteves2015 did not find any significant eccentricity. ### HAT-P-7b Like TrES-2b, HAT-P-7b is a very hot Jupiter discovered in the Kepler field [@pal2008] prior to the launch of the satellite. It was the first Kepler planet with a measured optical phase curve [@borucki2009]. Further analysis of the phase curve demonstrated a detection of ellipsoidal variations [@welsh2010]. @christiansen2010 used Spitzer secondary eclipse data to infer maximum brightness temperatures of more than 3,000K. @esteves2013 presented a new optical phase curve using the full 3-year Kepler photometry. Measured phase curve amplitudes and secondary eclipse depths vary considerably between @borucki2009, @welsh2010 and @esteves2013, leading to differences in inferred dayside and nightside temperatures of the order of 500K, which are most likely a consequence of the extended data set in @esteves2013. Similar brightness temperature differences of 300-500K for the Spitzer secondary eclipse data have been found by @cowan2011hot, compared to @christiansen2010, which they attribute to the use of different stellar models.
Based on calculated brightness temperatures and optical phase curves, @christiansen2010 inferred a very inefficient heat redistribution ($\epsilon$ close to zero) between dayside and nightside and modest geometric albedos ($A_G<0.1$). @esteves2013 found a geometric albedo of $A_G=0.18$ and a relatively homogeneous temperature distribution, in contrast to @christiansen2010. @schwartz2015 found moderate albedos, slightly lower than @esteves2013, and moderate recirculation values, also in contrast to previous analysis [@christiansen2010]. We use the binned Kepler phase curve data ([@esteves2013], their Figure 3). However, model parameters are not taken from @esteves2013 or @esteves2015. Instead, stellar parameters are taken from @eylen2012, and planetary parameters (inclination, radius) are taken from @eylen2013. We refer to Appendix \[phase\_transit\] for a discussion of our choice of parameters. Again, we assume a circular orbit, since detailed radial-velocity data are consistent with $e$=0 (e.g., [@pal2008], [@winn2009]). Also, previous phase curve analysis did not find a hint of significant eccentricity [@esteves2015]. Photometric fits ---------------- @snellen2009 analyze the phase curve of CoRoT-1b in terms of the normalized flux $F_{\rm{norm}}$, normalized to the primary, instead of secondary, eclipse (i.e., without transit, the flux at zero phase would be unity). Therefore, we calculate the forward model as: $$\label{corot1phase} F_{\rm{norm}}(\alpha)=\frac{1+C(\alpha)}{1+C(0)}.$$ Both @barclay2012 and @esteves2013 fit the phase curves in terms of variations in the photometric light curve, as in eq. \[contrastref\]. They however introduce another, non-physical parameter, a zero-point flux offset $f_0$. This parameter is related to the data reduction and is not a priori linked to any physical characteristics of the star-planet system.
The light curve $L_C$ is then described with the following equation: $$\label{f0phase} L_C(\alpha)=C(\alpha)+f_0.$$ We will use this equation for our analysis of TrES-2b and HAT-P-7b. Hence, $f_0$ is added to the tally of free parameters of the physical model (Table \[corot1mcmcsummary\]). Planetary scenarios ------------------- As summarized in Table \[corot1mcmcsummary\], we explore several different scenarios for the three planets. The main difference between these models lies in the treatment of the thermal component of the phase curve. The first scenario (“standard”) assumes scattering albedo $A_S$ and heat redistribution as fitting parameters and uses eqs. \[daytemp\] and \[nighttemp\] to calculate hemispheric temperatures. The Bond albedo $A_B$ is fixed to the scattering albedo ($A_S$=$A_B$). This allows our results to be compared directly to previous inferences of albedo and heat recirculation (e.g., [@snellen2009], [@schwartz2015]) and is consistent with previous studies (e.g., [@alonso2009]). The equality between Bond albedo and scattering albedo is motivated by the fact that a significant fraction $a_V$ of stellar light is emitted in the CoRoT red channel or the Kepler bandpass ($a_V\approx0.3$ for CoRoT-1). The second scenario (“free albedo”) relaxes this tight coupling between Bond albedo and scattering albedo. We allow both $A_B$ and $A_S$ to vary freely, but still calculate dayside and nightside temperatures from the Bond albedo, as in the first scenario. Based on energy conservation, however, we put some constraints on the Bond albedo: $$\label{abondconstraint} a_V \cdot A_S\leq A_B\leq a_V\cdot A_S+(1-a_V).$$ This equation takes into account the contribution of the scattering albedo to the overall radiative budget. The lower limit of the Bond albedo ($A_{B,\rm{low}}=a_V \cdot A_S$) is a hard lower limit since it implies zero albedo outside the bandpass. 
Similarly, when assuming an albedo of unity outside the bandpass, we obtain the strict upper limit of $A_{B,\rm{high}}=a_V \cdot A_S+(1-a_V)$. In a third approach (“free T”), we use $T_d$ and $T_n$ as fitting parameters (instead of Bond albedo and heat recirculation) and only impose $T_n \leq T_d$. To compare directly to the phase-curve analysis by @snellen2009, we then use a fourth model approach for CoRoT-1b. We set $A_S$=0, i.e., the phase curve is produced by thermal emission only (“no scattering”). This was done because the physical model used in @snellen2009 only accounts for thermal radiation (their eq. 1). All scenarios for CoRoT-1b assume symmetric phase curves and do not fit for planetary mass because of the relatively low signal-to-noise ratio and reduced phase resolution of the binned phase curve. For TrES-2b and HAT-P-7b, however, the data quality is good enough that ellipsoidal variations are clearly seen. Therefore, we take the planetary mass to be a fit parameter. We also fitted the data with asymmetric standard models (see Table \[corot1mcmcsummary\]), either with asymmetric scattering, a dayside offset or a combination of both. Energy balance -------------- The energy balance can also be expressed in terms of received and emitted flux. For this, we divide the planet into two uniform hemispheres (see Sect. \[formodel\]). The Bond albedo determines the overall received flux, which must then be re-emitted from the day and night hemispheres. $$\label{fluxbalance} (1-A_B) \cdot F_{\rm{\ast,p}}=2\left (\sum_j^{N_{\rm{day}}}F_{j,\rm{day}}+\sum_j^{N_{\rm{night}}}F_{j,\rm{night}}\right ),$$ where the sums contain the measured fluxes in the various wavelength bands (${N_{\rm{day}}}$ bands of the dayside spectrum, ${N_{\rm{night}}}$ of the nightside spectrum). Hence, under the assumption that no flux is emitted outside the measured bands, we obtain an upper limit for the Bond albedo.
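The hard limits of eq. \[abondconstraint\] can be encoded directly (a sketch; the function name is ours, and $a_V\approx0.3$ for CoRoT-1 as quoted above):

```python
def bond_albedo_bounds(A_S, a_V):
    """Hard lower/upper limits on the Bond albedo given the in-band
    scattering albedo A_S and the fraction a_V of stellar flux emitted
    inside the bandpass (zero / unit albedo outside the band).
    """
    return a_V * A_S, a_V * A_S + (1.0 - a_V)
```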
This constraint is physically sound and does not rely on specific assumptions except radiative equilibrium and hemispheric uniformity. Results ======= Convergence results ------------------- The results of the convergence tests and the trace plots for all parameters are shown in Appendix \[convergence\_res\]. All simulations seem to have reached a stationary distribution and thus converged. Most parameters also pass the Gelman-Rubin (GR) test (generally, for most MCMC tools, a value of less than 1.1-1.2 is considered acceptable). Note, however, that the GR test is a test for good mixing of the ensemble. Thus, even when a stationary distribution is reached thanks to the large number of chains used in the calculation, individual parameters might still fail the GR test (values larger than 1.2). This usually means that the inter-chain variance is large compared to the variance of individual chains (in our case, sometimes by orders of magnitude). For strongly non-linearly correlated parameters (e.g., in the “standard + off” and “standard + both” scenarios of HAT-P-7b, see below), it will take a long time for a single chain to explore the entire permitted range. Thus, the GR statistic will be large, without necessarily impacting the convergence of the ensemble. This is evidenced by the fact that the MCMC algorithm does find these strong correlations. Particularly good examples of this are the “standard + asy” and “standard + both” scenarios for TrES-2b (see Figs. \[tres2\_conv\_2\] and \[tres2\_conv\_3\], Table \[tres2gelman\]). Since the parameters describing the albedo asymmetry are tightly anti-correlated, the MCMC algorithm finds basically two separate solutions (see also Fig. \[tres2triangle\_3\]), resulting in “bad” GR values. When restricting the calculations to one of the solutions (not shown), the GR test is passed by all parameters (values less than 1.17 in the “both” scenario, less than 1.09 in the “asymmetric” scenario). 
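The GR diagnostic compares within-chain and between-chain variance; a minimal sketch of the textbook statistic (not necessarily the exact implementation behind Table \[tres2gelman\]) illustrates why trapped chains inflate the value:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin statistic R-hat for one parameter.
    chains: array of shape (m_chains, n_samples)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
well_mixed = rng.normal(0.0, 1.0, size=(8, 2000))
stuck = well_mixed + np.arange(8)[:, None]  # chains trapped at offsets
print(gelman_rubin(well_mixed))  # close to 1
print(gelman_rubin(stuck))       # well above 1.2: large inter-chain variance
```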
CoRoT-1b -------- Figure \[corot1b\_phase\] shows the data and the best-fit model from @snellen2009 as well as our best-fit models from the various planetary scenarios (Table \[corot1mcmcsummary\]). Since the “free T” and the “free albedo” best-fit models are virtually identical, only the latter is shown, for clarity. Clearly, the standard and free-albedo best-fit models do not differ by much. It is also apparent that both the models presented in this work and the model of @snellen2009 provide reasonable fits to the data. ![CoRoT-1b red-channel phase curve: Comparison of best-fit models with data (red) and fit by @snellen2009 (yellow). Primary transit and secondary eclipse not shown.[]{data-label="corot1b_phase"}](corot1_compare "fig:"){width="250pt"}\ Best-fit parameters as well as goodness-of-fit criteria and MCMC posterior parameter distributions are reported in Appendix \[mcmcresults\] (Table \[corot1mcmcresults\], Figs. \[corot1triangle\_2\] and \[corot1triangle\_3\]). Based on the BIC value, the data seem to slightly favor the standard scenario; however, all models in Table \[corot1mcmcresults\] remain acceptable. Furthermore, fit results suggest that the phase curve is dominated by scattering rather than by thermal emission. This can be seen in Table \[corot1mcmcresults\] from the fact that the inferred value of the scattering albedo $A_S$ is largely unaffected by the choice of the thermal model ($0.11<A_S<0.3$ at 95% confidence in the “free A” model). The combined arithmetic mean of the scattering albedo in the three models (“standard”, “free A”, “free T”) is $A_S$=0.22. The independence of $A_S$ from the thermal model is illustrated in Fig. \[corot1b\_scatt\]. ![Marginalized posterior distributions for CoRoT-1b scattering albedo in different models. 
$A_S$ is approximately independent of the thermal model ($0.11<A_S<0.3$ at 95% confidence in the “free A” model).[]{data-label="corot1b_scatt"}](corot1_scattering_albedo "fig:"){width="250pt"}\ Constraints on the Bond albedo are weak, and heat recirculation is essentially unconstrained. In addition, constraints on $\epsilon$ show some dependence on the choice of the thermal model employed (see Fig. \[corot1b\_bond\]). This suggests that the optical phase curve indeed does not contain much information on the thermal component, hence is dominated by reflected starlight rather than thermal emission. Independent circumstantial evidence for this can be drawn from the observed, flat transmission spectrum of CoRoT-1b [@schlawin2014], which can be interpreted as a hint of the presence of clouds (or, at least, a reflecting layer in the upper atmosphere). ![Marginalized posterior distributions for CoRoT-1b $\epsilon$ (left) and $A_B$ (right) in different models. Constraints on $A_B$ are weak. $\epsilon$ is unconstrained and depends on the thermal model.[]{data-label="corot1b_bond"}](corot1_epsilon "fig:"){width="120pt"} ![Marginalized posterior distributions for CoRoT-1b $\epsilon$ (left) and $A_B$ (right) in different models. Constraints on $A_B$ are weak. $\epsilon$ is unconstrained and depends on the thermal model.[]{data-label="corot1b_bond"}](corot1_bond_albedo "fig:"){width="120pt"}\ Figure \[corot1b\_temp\] shows the constraints on dayside and nightside temperatures in the “free T” model. Inferred dayside temperatures are much lower than the IR brightness temperatures derived from secondary-eclipse measurements. Essentially, results are consistent with zero phase-curve contribution from thermal emission, in accordance with the “standard” and “free A” models. Also, in accordance with @snellen2009, we find that the nightside emission is consistent with zero. 
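For reference, a quoted brightness temperature relates a band flux to a temperature by inverting the Planck function at an effective wavelength. A minimal sketch, with an illustrative wavelength and temperature (not values from our fits, which integrate over the full bandpass):

```python
import numpy as np

# Invert the Planck function for a monochromatic brightness temperature.
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI units

def planck_lambda(T, lam):
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1)

def brightness_temperature(I_lam, lam):
    """Temperature whose Planck radiance at wavelength lam equals I_lam."""
    return h * c / (lam * kB) / np.log(1 + 2 * h * c**2 / (lam**5 * I_lam))

T = 2000.0    # a hot-Jupiter-like dayside temperature (illustrative)
lam = 4.5e-6  # an IRAC-like effective wavelength (illustrative)
I = planck_lambda(T, lam)
print(brightness_temperature(I, lam))  # recovers ~2000 K
```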
![Marginalized posterior distributions for CoRoT-1b dayside and nightside temperatures in the “free T” model.[]{data-label="corot1b_temp"}](corot1_temp "fig:"){width="250pt"}\ Our results are somewhat contrary to the results from the IR secondary-eclipse measurements ([@schwartz2015]) and conclusions of the analysis of the optical phase curve by @snellen2009. Both studies used the same formalism as our “standard” scenario (i.e., using eqs. \[daytemp\] and \[nighttemp\]), and they conclude that CoRoT-1b is probably a low-albedo planet with inefficient heat recirculation. Note, however, that the phase curve model used by @snellen2009 only takes thermal radiation into account, hence the scattering albedo is, by default, zero. Our results suggest significant scattering and, depending on the thermal model, at least some recirculation. These contrasting results are illustrated in Fig. \[corot1b\_circulation\]. For this, we calculated the joint credibility regions in the two-dimensional $A_G$-$\epsilon$ space such that points simultaneously lie in the 95% credibility regions of each parameter. The joint credibility region thus corresponds to an approximately 90% probability. Figure \[corot1b\_circulation\] shows these credibility regions for both the “standard” and the “no scattering” scenario. It is clear that the 1$\sigma$ uncertainty region of @schwartz2015 and our joint credibility region barely overlap in the “standard” case. By contrast, the “no scattering” case yields approximately the same constraints as the analysis by @snellen2009 and @schwartz2015. However, the no-scattering case is equivalent to imposing a strong prior on the scattering albedo ($A_S$=0) that seems rather ad hoc. Therefore, on physical grounds, we prefer our standard model over the no-scattering case, even though goodness-of-fit criteria cannot be used to decide formally between the two, given that the $\Delta$BIC is small (see Table \[corot1mcmcresults\]). 
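The joint-credibility construction described above is simple to sketch: keep the posterior draws that lie simultaneously inside the central 95% interval of each parameter. The draws below are synthetic stand-ins for the real $A_G$ and $\epsilon$ chains:

```python
import numpy as np

def joint_credibility_mask(samples, level=0.95):
    """samples: (n, d) array of posterior draws. Returns a boolean mask of
    draws lying inside the central `level` interval of every parameter."""
    lo = np.percentile(samples, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(samples, 100 * (1 + level) / 2, axis=0)
    return np.all((samples >= lo) & (samples <= hi), axis=1)

rng = np.random.default_rng(1)
draws = rng.normal(size=(100000, 2))  # stand-in for an (A_G, eps) chain
mask = joint_credibility_mask(draws)
print(mask.mean())  # ~0.90 for two weakly correlated parameters
```

For independent parameters the retained fraction is close to $0.95^2 \approx 0.90$, which is the "approximately 90% probability" quoted above.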
![Joint credibility regions of recirculation and geometric albedo (see eq. \[ageo\]) for CoRoT-1b in the “standard” (red dots) and “no scattering” (blue dots) scenarios. Orange contour: 1$\sigma$ uncertainty region in @schwartz2015. Our “standard” model strongly disagrees with previous work.[]{data-label="corot1b_circulation"}](Cowan_compare_Corot "fig:"){width="250pt"}\ Despite the lack of information on the thermal emission of CoRoT-1b from its optical phase curve, we can still place some constraints on the Bond albedo using the estimated value of $A_S$ and eq. \[abondconstraint\]. Fit results for the free-T scenario (see Table \[corot1mcmcresults\]) translate, at 95% confidence, to $0.03<A_B<0.82$, or, when using the best-fit values, $0.06<A_B<0.8$, with $a_V \approx 0.26$, which is consistent with results from the “free A” scenario. When using the reported IR brightness temperatures of CoRoT-1b ([@rogers2009], [@deming2011]) and our optical brightness temperatures (${N_{\rm{day}}}$=4, ${N_{\rm{night}}}$=1 in eq. \[fluxbalance\]), we obtain $A_B<0.85$, consistent with results from eq. \[abondconstraint\]. This is not a strong constraint, since the spectral coverage is not large. However, it is a mostly model-independent result. In particular, nightside emission measurements are missing, except for our optical phase curve analysis (resulting in a non-detection, since the nightside emission is consistent with zero at the level of the measurement errors). TrES-2b ------- Figure \[tres2b\_phase\] shows the various best-fit models of the different MCMC scenarios. It is apparent that the main difference between symmetric and asymmetric models is in the second peak, where asymmetric models provide a better fit to the data. For clarity, the “free A” and “free T” scenarios are not shown. In Table \[tres2mcmcresults\], we state best-fit parameters as well as 95% credibility regions for the parameters. Parameter posterior distributions are shown in Figs. 
\[tres2triangle\_1\]-\[tres2triangle\_3\] in the Appendix. ![TrES-2b phase curve: Comparison of best-fit models with data (red) and fit by @barclay2012 (orange). Primary transit and secondary eclipse not shown.[]{data-label="tres2b_phase"}](tres2b_compare "fig:"){width="250pt"}\ Results presented in Fig. \[tres2b\_alb\] confirm the very dark nature of TrES-2b, with scattering albedos $A_S<0.03$ at 95% credibility, as already inferred by previous authors (e.g., [@kipping2011], [@barclay2012], [@esteves2013; @esteves2015]). ![Marginalized posterior distributions for TrES-2b $A_S$ in different models. TrES-2b is a dark planet in all models ($A_S<0.03$ at 95% confidence).[]{data-label="tres2b_alb"}](tres2_scattering_albedo "fig:"){width="250pt"}\ Furthermore, as shown in Fig. \[tres2b\_eps\], the value of $\epsilon$ depends strongly on the choice of the planetary scenario (asymmetric vs. symmetric), as is the case for CoRoT-1b. For the asymmetric models, this is immediately obvious. With $\epsilon$ close to unity, there will be no strong contrast between dayside and nightside, hence no asymmetry can be produced in the “standard + off” scenario. In order to produce a noticeable effect on the phase curve, inefficient heat recirculation is required ($\epsilon$ close to zero). As can be seen in Fig. \[tres2b\_eps\], the distributions for $\epsilon$ in the “standard + off” and “standard + both” scenarios are close to each other, which suggests that the preferred mechanism to produce an asymmetric phase curve is probably thermal radiation, not reflected light (i.e., $\epsilon$ is probably small). ![Marginalized posterior distributions for TrES-2b $\epsilon$ in different models. $\epsilon$ depends strongly on the chosen model.[]{data-label="tres2b_eps"}](tres2_epsilon "fig:"){width="250pt"}\ Figure \[tres2b\_mass\] shows the marginalized posterior distributions for the inferred planetary mass. 
Also shown are results of previous phase curve modeling by @barclay2012 and @esteves2013 as well as RV mass determinations. It is obvious that the precision of the photometric mass is worse than that of the RV data. However, our mass values are consistent with previous studies. ![Marginalized posterior distributions for TrES-2b $M_P$ in different models. Mass from previous studies (including RV measurements) in gray.[]{data-label="tres2b_mass"}](tres2_mass "fig:"){width="250pt"}\ It is clear that our model provides a relatively good fit to the observed phase curve (Fig. \[tres2b\_phase\], $\chi^2_{\rm{red}}\approx$2-2.2, see Table \[tres2mcmcresults\]). Compared to the best-fit model by @barclay2012, our symmetric models consistently predict a noticeably higher post-eclipse photometric contrast. However, as already noted by @esteves2015, this is simply because the model of @barclay2012 allows for a separate, independent fitting of the beaming and ellipsoidal amplitudes, which are adjusted to compensate for the apparent decrease in the phase curve (see also discussion in [@faigler2015]). This is also the main reason why beaming and ellipsoidal masses do not agree with each other in @barclay2012 or @esteves2013. When allowing for asymmetric phase curves, the fit becomes slightly better. In terms of the respective BIC values (see Table \[tres2mcmcresults\]), the “standard + off” scenario is slightly favored, although a $\Delta \rm{BIC} \approx 3.5$ is not enough to firmly detect an asymmetry. Our tentative detection is therefore not in contradiction with the conclusions of @esteves2015, who state that symmetric models are favored for TrES-2b. Figure \[tres2triangle\_2\] (right panel) shows some interesting correlations between $\epsilon$, $A_S$ and $\Theta_D$. For an increasing albedo, the offset of the dayside also increases. 
This is because, with increasing contribution of scattered light to the phase curve, the offset must become more pronounced to affect the phase curve and produce a visible asymmetry. Furthermore, for low albedos (e.g., high temperatures and low scattering contribution), inefficient heat recirculation is required to produce a phase curve at all. As the scattering albedo increases, higher values of $\epsilon$ are allowed, but only up to a maximum. Beyond this maximum, scattered light will dominate the phase curve, and again, $\epsilon$ must decrease to produce a significant thermal asymmetry (i.e., large day-night temperature differences). Figure \[tres2triangle\_3\] (left panel) illustrates a degeneracy in the “standard + asy” scenario, between $A_S$, $d_S$, $l_{\rm{start}}$ and $l_{\rm{end}}$. Since the asymmetric phase curve requires a lower post-eclipse amplitude to fit the data, the “evening” side must be brighter than the “morning” side. This can be achieved in two ways: either high $A_S$ and correspondingly $d_S<1$, with $l_{\rm{start}}$ and $l_{\rm{end}}$ delimiting part of the “morning” side, or low $A_S$ and correspondingly $d_S>1$, with $l_{\rm{start}}$ and $l_{\rm{end}}$ delimiting part of the “evening” side. However, note that from a physical standpoint, it is unclear how the albedo could be higher on the evening side. Mostly, it is assumed that clouds are responsible for the scattering. These are supposed to dissipate over time while circulating over the dayside hemisphere; therefore, post-eclipse maxima are not generally attributed to clouds (e.g., [@demory2013inhomogen], [@esteves2015]). Another possibility for post-eclipse maxima being due to albedo changes would be the photodissociation of absorbers such as TiO or VO. In atomic form, the absorption would be much less efficient, hence the planetary albedo would increase. Investigating this possibility is, however, beyond the scope of this work. 
![Joint credibility regions of recirculation and geometric albedo for TrES-2b in the “standard” (black dots) and “standard + off” (blue dots) scenarios. Green contour: 1$\sigma$ uncertainty region in @schwartz2015. Both this and previous work are consistent with each other.[]{data-label="tres2b_circulation"}](Cowan_compare_Tres "fig:"){width="250pt"}\ Figure \[tres2b\_circulation\] shows the constraints on recirculation and geometric albedo as inferred from our “standard” and “standard + off” scenarios, compared to results by @schwartz2015. As for CoRoT-1b, we use joint credibility regions to illustrate our inferred range in the $A_G$-$\epsilon$ plane. Both our results and the results of @schwartz2015 are consistent with each other and certainly agree better than for CoRoT-1b (see Fig. \[corot1b\_circulation\]). However, note that the 1$\sigma$ region of @schwartz2015 contains roughly 30% of the points of the “standard” model, but only about 8% of the points of the “standard + off” model. Similarly to CoRoT-1b, we use the calculated $A_S$ values to put constraints on the overall Bond albedo of TrES-2b (eqs. \[abondconstraint\] and \[fluxbalance\]). Using $a_V \approx 0.4$, we obtain, based on the optical phase curve, $A_B<0.6$. Putting together the dayside brightness temperature measurements from IRAC and Ks bands ([@odonovan2010], [@croll2010]), we then derive $A_B<0.68$. These constraints are somewhat tighter than the ones derived for CoRoT-1b because the spectral coverage is larger and TrES-2b is a cooler planet. HAT-P-7b -------- Figure \[hat7b\_phase\] shows our best-fit models of the different MCMC scenarios. In contrast to TrES-2b, the phase curve of HAT-P-7b is dominated by reflected light, rather than by the ellipsoidal variations (even though these are still clearly visible). Again, for clarity, the “free A” and “free T” scenarios are not shown. 
In Table \[hat7mcmcresults\], we state best-fit parameters as well as 95% credibility regions for the parameters. Parameter posterior distributions are shown in Figs. \[hat7triangle\_1\]-\[hat7triangle\_3\] in the Appendix. ![HAT-P-7b phase curve: Comparison of best-fit models with data (red) and fit by @esteves2013 (orange). Primary transit and secondary eclipse not shown.[]{data-label="hat7b_phase"}](hat7b_compare "fig:"){width="250pt"}\ Figure \[hat7\_mass\] shows the marginalized posterior distributions for the inferred planetary mass. Also shown are results of previous phase curve modeling by @esteves2013 and @esteves2015 as well as RV mass determinations. Again, as for TrES-2b, our estimated mass values are consistent with previous studies, and the determined planetary mass is not greatly affected by the choice of the phase curve model. Note that the formal uncertainties on planetary mass are somewhat smaller in our work than the RV uncertainties. This is mainly due to the excellent photometric quality of the phase curve and the fact that we fix stellar parameters, i.e., the stellar mass does not contribute to the final uncertainty on mass estimates. Note also the strong disagreement between mass estimates from @esteves2013 and @esteves2015 (plain and dashed gray lines in Fig. \[hat7\_mass\], respectively). This is because the former uses separate beaming and ellipsoidal amplitudes as fitting parameters, while the latter uses planetary mass as a fitting parameter, as we do here. The beaming amplitude is adjusted to account for the asymmetry of the phase curve, thus planetary mass estimates from beaming and ellipsoidal amplitudes do not agree (4.2 compared to 1.6 $M_J$, see Table 5 in [@esteves2013]). Hence, it is clearly demonstrated that a separate fitting of both amplitudes can potentially lead to incorrect mass estimates. ![Marginalized posterior distributions for HAT-P-7b $M_P$ in different models. 
Mass from previous studies (including RV measurements) in gray.[]{data-label="hat7_mass"}](hat7_mass "fig:"){width="250pt"}\ Figure \[hat7\_alb\] shows the inferred scattering albedo for the different scenarios. The values are broadly consistent with the values from @esteves2015, who find a geometric albedo of $A_G \approx$ 0.2, close to our values (recall $A_G$=$\frac{2}{3} A_S$, eq. \[ageo\]). The fact that $A_S$ is mostly independent of the specific planetary scenario suggests that the estimated value of $A_S$ ($0.26<A_S<0.34$ at 95% confidence) is robust. The arithmetic mean for the combined scenarios is $A_S$=0.28. ![Marginalized posterior distributions for HAT-P-7b $A_S$ in different models. $A_S$ is mostly independent of the adopted model. $0.26<A_S<0.34$ at 95% confidence.[]{data-label="hat7_alb"}](hat7_scattering_albedo "fig:"){width="250pt"}\ As before, $\epsilon$ depends on the choice of the thermal model, as illustrated in Fig. \[hat7\_eps\]. Similar to what has been found for TrES-2b, the “standard + off” scenario requires a very different distribution for $\epsilon$ in order to produce a thermal offset in the phase curve. The fact that the distribution of the “standard + both” scenario is closer to the “standard + asy” distribution suggests that, for HAT-P-7b, the asymmetry in the phase curve is better explained by scattered light than by thermal emission. This is supported by the BIC values (see Table \[hat7mcmcresults\]). Both the “standard + asy” and the “standard + both” scenarios are strongly favored, by a factor of about 10$^5$, compared to the “standard + off” scenario ($\Delta$BIC$\approx$20). This result indicates that the preferred model explanation for the asymmetry is reflected light, rather than a thermal offset (but see above for a discussion of the physical problems of this solution). 
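As a rough guide to these model comparisons, the BIC and the standard conversion of a $\Delta$BIC into approximate model odds can be sketched as follows. This is the usual Bayes-factor approximation, not the exact procedure used for the tables:

```python
import math

def bic(chi2, k, n):
    """BIC for a Gaussian likelihood: chi^2 + k ln n,
    with k free parameters and n data points."""
    return chi2 + k * math.log(n)

def approx_odds(delta_bic):
    """Approximate posterior odds favoring the lower-BIC model,
    via the Bayes-factor approximation B ~ exp(Delta BIC / 2)."""
    return math.exp(delta_bic / 2.0)

print(approx_odds(3.5))   # ~5.8: suggestive, not decisive
print(approx_odds(20.0))  # > 1e4: a decisive preference
```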
Our results quite clearly suggest asymmetric models rather than symmetric ones ($\Delta$BIC$>$400), again confirming previous phase curve analyses [@esteves2015]. The $\chi^2_{\rm{red}}$ values in Table \[hat7mcmcresults\] are somewhat high (3.7 for the preferred model). Usually, such a high value might indicate either that the model is not correctly capturing the physical behavior of the system, or that the errors are under-estimated. However, the $\chi^2_{\rm{red}}$ is dominated by a few points (7 outliers contribute 50% of the total $\chi^2_{\rm{red}}$), and one particular point contributes around 15%. Since the same outliers appear in all fit scenarios, this suggests that the error bars of these points are indeed under-estimated. When removing these apparently systematic outliers, the $\chi^2_{\rm{red}}$ is reduced to less than 2. This in turn suggests a relatively good fit. ![Marginalized posterior distributions for HAT-P-7b $\epsilon$ in different models. $\epsilon$ is strongly dependent on the adopted planetary scenario.[]{data-label="hat7_eps"}](hat7_epsilon "fig:"){width="250pt"}\ Figure \[hat7triangle\_2\] (right panel) shows the correlations for the “standard + off” scenario, as discussed above for TrES-2b. These correlations are much stronger and cleaner in this case, since the signal-to-noise ratio of the phase curve is much better for HAT-P-7b. Figure \[hat7\_circulation\] shows the constraints on recirculation and geometric albedo as inferred from our scenarios, compared to @schwartz2015. As above, we use joint credibility regions to illustrate our inferred range in the $A_G$-$\epsilon$ plane. Note that the “standard + asy” model results and the low-albedo part of the “standard + both” model overlap (see also Fig. \[hat7triangle\_3\]). It is clearly seen that our results and the results of @schwartz2015 strongly disagree, as was the case for CoRoT-1b. 
However, because of the good data quality of the HAT-P-7b phase curve, the disagreement is stronger than for CoRoT-1b. There is no overlap between our 90% joint credibility regions and the 1$\sigma$ uncertainty region of @schwartz2015. This is because of the very well-constrained scattering albedo, which is nearly independent of the thermal and asymmetry model that was chosen (see also Fig. \[hat7\_alb\]). ![Joint credibility regions of recirculation and geometric albedo for HAT-P-7b in different scenarios. Green contour: 1$\sigma$ uncertainty region in @schwartz2015. Both our and previous work strongly disagree on the inferred $\epsilon$ and $A_G$ values.[]{data-label="hat7_circulation"}](Cowan_compare_Hat "fig:"){width="250pt"}\ We use the calculated $A_S$ values and measured IR dayside emission spectrum to constrain $A_B$ for HAT-P-7b (eqs. \[abondconstraint\] and \[fluxbalance\]). As for TrES-2b, we have $a_V \approx 0.4$. Therefore, based on the optical phase curve, we find $0.11<A_B<0.72$. The observed Spitzer spectrum translates into $A_B<0.87$. These constraints are very loose because HAT-P-7b is a rather hot planet, compared to TrES-2b. Near-IR measurements covering the 1-3$\mu$m range (close to the Wien peak of the thermal radiation) would be highly desirable to further constrain the energy balance and add more constraints on the Bond albedo. Discussion {#discuss} ========== For Solar System objects, phase curves have been incredibly useful to determine the scattering properties of atmospheric particles (size distribution, vertical location and extent, composition). Ground-based data as well as spacecraft observations (e.g., Venus Express, Voyager 1 and 2, Pioneer 10 and 11) have been used to investigate, e.g., Venus (e.g., [@arking1968], [@garcia2014], [@petrova2015]), Titan (e.g., [@rages1983]), Jupiter (e.g., [@tomasko1978], [@smith1984]), Saturn (e.g., [@tomasko1984]) or Uranus ([@rages1991], [@pryor1997]). 
Large differences are found in the broadband phase curves of, e.g., Jupiter and Saturn (e.g., [@dyudina2005]) or Mars, Mercury and Venus (e.g., [@mallama2009]). These are of course attributable to differences in cloud structure and composition, the absence or presence of an atmosphere, topographic surface features or the amount of dust-covered or bare regolith, to name but a few factors influencing the phase curves. Sophisticated radiative transfer models in combination with cloud and aerosol models are needed to interpret these observations correctly and retrieve scattering properties. In comparison, exoplanet studies suffer from the incredibly crude data available at present (in terms of signal-to-noise ratio, spectral resolution or spectral coverage). Even though recent progress has been astonishing, we do not expect exoplanet data to approach Solar-System quality in the near future. Hence, interpretation of exoplanet observations does not require models of comparable complexity yet, although more complex models, which take into account, for example, cloud formation or temperature gradients, have recently been published (e.g., [@webber2015], [@hu2015]) and applied to, e.g., the well-characterized phase curve of Kepler-7b. However, most studies, including this work, rely on simpler models and make strong assumptions to infer planetary and atmospheric properties. For example, the model used in this work relies on the following two assumptions, in line with previous studies (e.g., [@snellen2009], [@cowan2011hot], [@schwartz2015]): - Day and night hemispheres are assumed to be respectively described by a single, uniform temperature, without any longitudinal or latitudinal gradients. In reality, this is unlikely to be true. Secondary-eclipse mapping of the hot Jupiter HD189733b has already demonstrated that the brightness distribution is far from uniform (e.g., [@dewit2012]). This can be interpreted as a non-uniform temperature distribution. 
IR phase curves also clearly show temperature gradients (e.g., [@knutson2007daynight_189733; @knutson2009daynight_189733], [@crossfield2010]). In the hypothetical no-recirculation limit ($\epsilon=0$), non-uniformity effects might produce a thermal beaming dominated by the sub-stellar point that could potentially enhance planetary emitted radiation (e.g., [@selsis2011], [@schwartz2015]). However, given the relatively low signal-to-noise ratios, the number of effects contributing to the optical phase curve and the large bandpass of Kepler and CoRoT, the optical phase curve is not expected to be very sensitive to the temperature distribution. - The observed brightness temperature in a given spectral bandpass equals the bolometric equilibrium temperature and hence constrains the energy budget of the atmosphere and can be related to the Bond albedo. This is a fundamental assumption that is unlikely to hold once better spectral resolution becomes available. As suggested by, e.g., @barclay2012, the photospheres for optical and IR observations (as well as for day- and nightside emission) are probably located at different pressures. Hence, the observations would probe different temperatures and dynamical regimes. Depending on pressure, circulation and temperature regimes can be quite different (e.g., [@parmentier2013], [@agundez2014], [@showman2015]). Hence, observed brightness temperatures in either spectral domain would not necessarily be related to the bolometric equilibrium temperature. It is possible that, for many exoplanets, these assumptions are not strongly violated and more or less hold. Our results, however, imply that optical and IR data lead to different conclusions for the same objects (in two out of three cases) when applying these assumptions. Therefore, it seems that they are too strong and overly simplified. 
It is a subject of future research to reconcile this finding with the current data quality, which does not necessarily warrant complex models or a level of sophistication much higher than the models presented here or in previous work. We point out, however, that in the case of CoRoT-1b, both our model results and the results by @schwartz2015 are marginally compatible, since their 1$\sigma$ uncertainty regions and our 90% credibility regions slightly overlap. Therefore, a re-analysis of the CoRoT-1b phase curve with the newly released, improved data pipeline might reduce the photometric uncertainties and provide a more decisive answer to resolve the apparent contradiction between phase curve and secondary eclipse analyses. Conclusions {#summary} =========== We have presented a simple, yet physically consistent, model of optical phase curves for exoplanets. It includes Lambertian scattering, thermal emission (under the assumption of uniform hemispheric temperatures), ellipsoidal variations and Doppler boosting. It can account for asymmetric phase curves by longitudinally asymmetric scattering albedos and an offset of thermal radiation compared to the sub-stellar point. This model has been used to re-analyze published phase-curve data of CoRoT-1b, TrES-2b and HAT-P-7b. Results are then compared to an analysis of secondary-eclipse data of these planets by @schwartz2015. We have shown that for CoRoT-1b and HAT-P-7b, inferred albedo and heat recirculation values from optical phase curves differ from previously published results. For TrES-2b, both methods yield similar results. We find that CoRoT-1b has a higher scattering albedo than previously reported: $0.11<A_S<0.3$ at 95% confidence, in slight contrast with previous analyses, which found $A_S<0.15$ ([@snellen2009], [@schwartz2015]). 
Also, full phase curve analysis favors a strong redistribution of stellar incident energy to the nightside, contrary to previous studies, which suggested a very inefficient recirculation ([@snellen2009], [@schwartz2015]). These contradictions arise mainly because the previous optical phase curve analysis of CoRoT-1b by @snellen2009 considered only thermal emission. In line with previous studies on the optical phase curve of HAT-P-7b (e.g., [@esteves2013; @esteves2015]), we find an appreciable albedo ($A_S$$\approx$0.3), slightly higher than inferred from secondary eclipse data. In contrast to previous studies based on secondary eclipse data [@schwartz2015], the analysis of the optical phase curve favors moderate to efficient heat recirculation. Asymmetric models are found to best fit the observed phase curve. These differences between secondary eclipse and optical phase curve analyses most likely occur because optical and IR observations probe different atmospheric layers. Furthermore, our results suggest that some of the assumptions made (specifically, that observed brightness temperatures constrain the energy budget) are probably too strong and should be relaxed. Future work will aim, among other things, at re-analyzing further planets with published optical phase curves and reconciling the different observations in the optical and the IR for CoRoT-1b. This study has received financial support from the French State in the frame of the “Investments for the future” Programme IdEx Bordeaux, reference ANR-10-IDEX-03-02. P. G. acknowledges support from the ERC Starting Grant (3DICE, grant agreement 336474). We thank the anonymous referee for very constructive and positive feedback. Computer time for this study was provided in part by the computing facilities MCIA (Mésocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de l’Adour. 
Model verification {#verification_model} ================== We verify that the implementation of model equations leads to the correct limits for reflected and emitted light. Equations \[lambert\_formula\] and \[z\_formula\] show the phase function for a standard Lambertian sphere. These equations have been used in most studies of optical phase curves so far. $$\label{lambert_formula} \Phi (z)=\frac{1}{\pi}\left(\sin (z)+(\pi-z)\cos(z)\right).$$ $$\label{z_formula} \cos (z) = -\sin(i) \cos (\alpha_0).$$ Note that $\alpha_0$ is 0 at opposition and $\pi$ at primary transit, contrary to $\alpha$ (see eq. \[obslat\]). For thermal radiation, the phase function is described by the illuminated fraction $L$ of the planetary disk, i.e., the dayside: $$\label{illu_formula} \Phi (z) = \frac{1}{2}(1- \cos (z)).$$ ![Model test: Reflected flux compared to exact Lambertian sphere.[]{data-label="fluxverif"}](veri_GEO "fig:"){width="250pt"}\ In Fig. \[fluxverif\], we show the difference between the exact formulation (eqs. \[lambert\_formula\] and \[z\_formula\]) and our model for the reflected component. The considered case is a hot Jupiter in a 10-day orbit around a Sun-like star, at varying orbital inclinations. The amplitude of the signal is of the order of a few ppm (10$^{-6}$). The difference is about three orders of magnitude less, which clearly indicates that the model correctly incorporates Lambertian scattering. The high-frequency structure in the residuals is due to the spatial discretization of the numerical model and has no effect on physical results. ![Model test: Emitted flux compared to exact solution.[]{data-label="planckverif"}](veri_THERMAL "fig:"){width="250pt"}\ Figure \[planckverif\] shows the comparison of our model to the exact solution for thermal radiation. In this case, we show a hot Jupiter in a 2-day orbit around a Sun-like star ($A_B$=0.15, $\epsilon$=0), in order to get an appreciable signal. 
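As a cross-check, the two phase functions above can be evaluated directly. The following minimal Python sketch (not part of the original model code; function names are ours) encodes eqs. \[lambert\_formula\] and \[z\_formula\] and prints the limiting values of the Lambert phase function itself:

```python
import math

def lambert_phase(z):
    """Lambert-sphere phase function, eq. (lambert_formula):
    Phi(z) = (sin z + (pi - z) cos z) / pi."""
    return (math.sin(z) + (math.pi - z) * math.cos(z)) / math.pi

def cos_z(alpha0, inclination):
    """Eq. (z_formula): cos z = -sin(i) cos(alpha0)."""
    return -math.sin(inclination) * math.cos(alpha0)

# Pure-math limits of the phase function:
print(lambert_phase(0.0))          # -> 1.0 (maximum)
print(lambert_phase(math.pi))      # -> 0.0 (minimum)
print(lambert_phase(math.pi / 2))  # -> 1/pi at quadrature
```

These are exactly the limits the model residuals in Fig. \[fluxverif\] are checked against.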
The difference at peak amplitude is about 0.01ppm for a total amplitude of $\approx$1.6ppm. This amounts to an error of less than 1%, which we deem acceptable. Again, the high-frequency structure in the residuals is due to the spatial discretization of the numerical model and the time resolution. Phase curves of transiting planets {#phase_transit} ================================== If the data is of high enough quality, in terms of signal-to-noise ratio or time resolution, more and more parameters can be added to the fit. However, when analyzing phase curves of transiting planets, some parameters can be related to one another self-consistently, by the shape of the primary transit. For instance, the transit depth of the primary transit directly yields the radius ratio $k_r$ between planet and star: $$\label{radius_ratio} k_r=\frac{R_p}{R_{\ast}}.$$ Furthermore, for circular orbits, transit duration and transit shape can be related to the orbital inclination $i$ and the projected star-planet separation $k_p$ in units of stellar radii: $$\label{semi_ratio} k_p=\frac{a}{R_{\ast}}.$$ Assuming $M_P<<M_{\ast}$, we can write Kepler’s 3rd law as follows: $$\label{kepler3} \frac{P_{\rm{orb}}^2}{a^3}=\frac{4\pi^2}{GM_{\ast}}.$$ Since the orbital period $P_{\rm{orb}}$ of transiting planets is usually known to within a few minutes or better, and $k_p$ and $k_r$ are mostly determined to an accuracy of better than 1%, it is possible to calculate the stellar mass and the planetary radius, given a stellar radius. Equation \[kepler3\] yields, in this case, an analytic relation between stellar radius and stellar mass. Such a relation is shown in Fig. \[hat7b\_cons\] (blue line) for HAT-P-7, using a period of $P_{\rm{orb}}$=2.204days [@pal2008] and $k_p$=4.1512 [@esteves2013]. Also shown are stellar parameters taken from @pal2008 who used high-resolution spectroscopy and @eylen2012 who used asteroseismology (see Tables \[hatstellartable\] and \[hattransittable\] for a compilation). 
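The mass-radius relation implied by eq. \[kepler3\] is straightforward to evaluate numerically. The following sketch (illustrative only; constants rounded) reproduces one point of the blue line of Fig. \[hat7b\_cons\], using the quoted $P_{\rm{orb}}$=2.204days and $k_p$=4.1512:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
R_SUN = 6.957e8      # m
M_SUN = 1.989e30     # kg

def stellar_mass(r_star, k_p, p_orb):
    """Stellar mass implied by Kepler's 3rd law (eq. kepler3),
    with a = k_p * r_star: M = 4 pi^2 a^3 / (G P^2).
    r_star in m, p_orb in s."""
    a = k_p * r_star
    return 4.0 * math.pi ** 2 * a ** 3 / (G * p_orb ** 2)

# HAT-P-7: P_orb = 2.204 d, k_p = 4.1512 (values quoted in the text)
p_orb = 2.204 * 86400.0
m = stellar_mass(1.90 * R_SUN, 4.1512, p_orb) / M_SUN
print(f"{m:.2f} M_sun")  # close to the asteroseismic value of ~1.36
```

With $R_{\ast}$=1.90$R_{S}$ this returns roughly 1.36$M_{S}$, i.e., the @eylen2012 parameters are mutually consistent, whereas the @pal2008 pair (1.84$R_{S}$, 1.47$M_{S}$) is not.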
  study        $M_{\ast}$\[$M_{S}$\]   $R_{\ast}$\[$R_{S}$\]
  ------------ ----------------------- -----------------------
  @pal2008     1.47$\pm$0.08           1.84$\pm$0.23
  @eylen2012   1.361$\pm$0.021         1.904$\pm$0.01

  : Stellar parameters for HAT-P-7. In case of asymmetric uncertainties in the original publication, the larger one is stated.[]{data-label="hatstellartable"}

  study          $k_r$                   $k_p$
  -------------- ----------------------- -------------------
  @pal2008       0.0763$\pm$0.001        4.35$\pm$0.38
  @esteves2013   0.07749$\pm$0.000013    4.1512$\pm$0.0026
  @eylen2013     0.077462$\pm$0.000034   4.1547$\pm$0.0042

  : Planetary parameters for HAT-P-7b. In case of asymmetric uncertainties in the original publication, the larger one is stated.[]{data-label="hattransittable"}

![Consistency between reported radius and mass determinations for the star HAT-P-7, using the determined orbital period $P_{\rm{orb}}$=2.204days and $a/R_{\ast}$ values from Table \[hattransittable\]; the blue line is eq. \[kepler3\].[]{data-label="hat7b_cons"}](hat7_consistency "fig:"){width="250pt"}\

It is clearly seen that fixing the stellar parameters at $R_{\ast}$=1.84$R_{S}$ and $M_{\ast}$=1.47$M_{S}$, as done by @esteves2013, results in inconsistent system parameters. These then introduce a significant error in estimating the planetary mass from the ellipsoidal variations. To illustrate the effects on planetary mass estimates, we performed inverse modeling of the HAT-P-7b phase curve, adopting the “standard” scenario from Table \[corot1mcmcsummary\], i.e., fitting for mass, albedo ($A_B$=$A_S$) and heat recirculation. Consistent models use the stellar parameters from @eylen2012, i.e., $R_{\ast}$=1.90$R_{S}$ and $M_{\ast}$=1.36$M_{S}$, whereas the inconsistent models use $R_{\ast}$=1.84$R_{S}$ and $M_{\ast}$=1.47$M_{S}$, as done in @esteves2013. Figure \[hat7b\_inconsistent\] shows the residuals $\Delta C=C_{\rm{con}}-C_{\rm{incon}}$ of the best-fit models.
As is clearly seen, both scenarios result in virtually identical fits, which differ by only about 0.2ppm (compared to the roughly 75ppm amplitude of the phase curve). Furthermore, both best-fit models result in similar $\chi^2_{\rm{red}}$ values of 10.8 and 10.97, respectively. ![Residuals between consistent (stellar mass and radius from [@eylen2012]) and inconsistent (stellar mass and radius from [@pal2008]) best-fit models of the HAT-P-7b optical phase curve. []{data-label="hat7b_inconsistent"}](hat7b_inconsistent "fig:"){width="250pt"}\ All parameters except the planetary mass are unaffected by the choice of stellar parameters. Figure \[tricons\] shows the marginalized posterior distributions of the planetary mass for both sets of stellar parameters. The estimated planetary mass varies by as much as 30%. With inconsistent stellar parameters, the planetary mass is severely over-estimated compared to RV results. Therefore, we chose the stellar parameters stated in @eylen2012, since these are consistent with the parameters deduced from the primary-transit analysis. ![Marginalized posterior distributions for HAT-P-7b $M_P$, for different adopted stellar parameters in the standard model. []{data-label="tricons"}](mass_inconsistent){width="250pt"}

Optional phase function choices {#rayappend}
===============================

Empirical Solar System phase functions
--------------------------------------

A few previous studies (e.g., [@collier2002], [@kane2010longperiod]) used an empirically derived phase function instead of the Lambert phase function in eq. \[lambert\_formula\]. This phase function was obtained from a fit to optical observations of Venus and Jupiter.
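Numerically, this empirical fit, written out in the two equations that follow, is a cubic polynomial in the phase angle $\alpha_0$ converted from magnitudes to flux. A minimal sketch (function names are ours; coefficients copied from the fit) shows it is noticeably dimmer than the Lambert value of $1/\pi\approx0.32$ at quadrature:

```python
def delta_m(alpha0_deg):
    """Empirical magnitude correction from the Venus/Jupiter fit,
    as a function of phase angle alpha0 in degrees."""
    x = alpha0_deg / 100.0
    return 0.09 * x + 2.39 * x ** 2 - 0.65 * x ** 3

def phi_empirical(alpha0_deg):
    """Empirical phase function: Phi = 10^(-0.4 * delta_m)."""
    return 10.0 ** (-0.4 * delta_m(alpha0_deg))

print(phi_empirical(0.0))   # 1.0 at full phase, by construction
print(phi_empirical(90.0))  # ~0.24 at quadrature, below 1/pi
```

This steeper fall-off is what drives the larger mass estimate discussed below when the empirical function replaces the Lambert function.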
$$\label{delta_m_emp} \Delta m (\alpha_0)=0.09\frac{\alpha_0}{100^{\circ}}+2.39\left (\frac{\alpha_0}{100^{\circ}}\right)^2-0.65\left(\frac{\alpha_0}{100^{\circ}}\right)^3.$$ $$\label{emp_phase} \Phi (\alpha_0)=10^{-0.4\cdot \Delta m (\alpha_0)}.$$ When fitting the phase curve of HAT-P-7b with the empirical phase function, we obtain a geometric albedo of about $A_G\approx$0.2, consistent with previous estimates using the Lambertian approximation. However, as shown in Fig. \[hat7b\_emp\], the estimated planetary mass is far larger. Even when changing from a uniform prior to a Gaussian prior based on RV measurements (1.78$\pm$0.08, [@esteves2015]), the fitted mass is greatly over-estimated. Hence, our results suggest that eq. \[emp\_phase\] as a particular choice of phase function is probably not correct for HAT-P-7b. Possible reasons to explain this include, e.g., the higher temperature (much higher than both Venus and Jupiter, for which this particular phase function was derived) and a consequently much different atmospheric chemistry. Also, cloud properties could play a role, since HAT-P-7b most likely has some form of silicate or iron clouds (see, e.g., comparison of Kepler-7b and Jupiter in [@webber2015]). Even for Jupiter and Saturn, cloud properties are thought to be responsible for the difference in observed phase functions (e.g., [@dyudina2005]). Such an impact of the choice of the phase function on mass estimates has also been discussed by @mislis2012. ![Constraints on geometric albedo and planetary mass, as derived from the standard model using the empirical phase function of eq. \[emp\_phase\].[]{data-label="hat7b_emp"}](hat7_empirical_sym "fig:"){width="250pt"}\ Rayleigh scattering ------------------- To investigate the influence of Rayleigh scattering, we incorporated H$_2$ and He Rayleigh scattering in the model. These two species are thought to form the major constituents of gas-giant atmospheres. 
The Rayleigh scattering cross sections of H$_2$ and He are calculated as $$\label{rayleigh} \sigma_{\rm{ray,i}}(\lambda)= \left (\frac{\lambda_{\rm{0,i}}}{\lambda}\right)^4 \cdot \sigma_{\rm{0,i}},$$ where $\sigma_{\rm{ray,i}}$ of species $i$ is given in cm$^2$ per molecule, $\lambda$ in $\mu$m and $\lambda_{\rm{0,i}}$ is a reference wavelength where $\sigma_{\rm{0,i}}$ has been measured. This approach is used in many approximative treatments of Rayleigh scattering (see, e.g., [@lecav2008]). For the values of $\lambda_0$ and $\sigma_{\rm{0,i}}$ in eq. \[rayleigh\], measurements from @shardanand1977 were used in this work, as tabulated in Table \[rayleigh\_shard\].

  Molecule   $\lambda_{\rm{0,i}}$ \[$\mu$m\]   $\sigma_{\rm{0,i}}$ \[cm$^2$\]
  ---------- --------------------------------- --------------------------------
  H$_2$      0.5145                            1.17 $\times$ 10$^{-27}$
  He         0.5145                            8.6 $\times$ 10$^{-29}$

  : Rayleigh scattering parameters for use in eq. \[rayleigh\][]{data-label="rayleigh_shard"}

The optical depth in an atmospheric layer $j$ due to Rayleigh scattering, $\tau_{\rm{ray,j}}$, is obtained with the following equation: $$\label{raylayer} \tau_{\rm{ray,j}}=\sum_k\sigma_k C_{k,j},$$ where $\sigma_k$ and $C_{k,j}$ are the Rayleigh cross section and the column density of species $k$, respectively. We calculate the column density as $$\label{columndens} C_{k,j}=c_{k,j} \cdot \frac{P_j-P_{j+1}}{\mu_{\rm{atm}} g_P},$$ where $c_{k,j}$ is the volume mixing ratio of species $k$ in layer $j$, $g_P$ the planetary gravity, $\mu_{\rm{atm}}$ the mean molecular weight of the atmosphere ($\approx$2 for H$_2$-dominated atmospheres) and $P_j$ the pressure at the bottom of layer $j$. The atmospheric layers are approximately spaced evenly in $\log P$ from the “surface” pressure $P_S$ to 10$^{-4}$bar. The total optical depth $\tau_{\rm{ray}}$ for use in eq.
\[transmission\] is obtained by summing the optical depths of each layer from the surface to the model lid: $$\label{raytautotal} \tau_{\rm{ray}}=\sum_j \tau_{\rm{ray,j}}.$$ In Fig. \[h2\_relative\], the used Rayleigh scattering cross sections of H$_2$ are compared to measurements reported in the literature ([@shardanand1977]) as well as different approximations used in various models (Table II of [@penndorf1957], [@lecav2008]). For H$_2$, the agreement with measurements is very good, again to within the stated error bars of @shardanand1977. Also, the agreement with the approximation of @lecav2008 is very good. The comparison with the parametrization of H$_2$ Rayleigh scattering using @penndorf1957 data is less good. ![Comparison of H$_2$ Rayleigh scattering cross sections. Relative deviations $\frac{\sigma_{\rm{model}}-\sigma_{\rm{data}}}{\sigma_{\rm{model}}}$ in % between model and data sources (as indicated). Vertical lines show measurement uncertainties. Grey horizontal line indicates 0% deviation.[]{data-label="h2_relative"}](rayleigh_cross_relative_H2 "fig:"){width="250pt"}\ At the “bottom” of the atmosphere, at a prescribed “surface” pressure $P_S$, we impose a Lambertian surface with scattering albedo $A_S$. Hence, eq. 
\[cellflux\] is modified, $$\begin{aligned} \label{raycell} F_R &=& T_{\rm{ray}}\cdot F_l+ \\ \nonumber& & (1-T_{\rm{ray}}) \cdot \Phi_{\rm{ray}} \cdot \frac{\omega}{\cos z_s+\cos z_o} \\ \nonumber && \cdot \cos z_s \cdot \frac{S}{r(t)^2}\cdot \cos z_o \cdot \Delta \Omega \left(\frac{R_p}{d}\right)^2 ,\end{aligned}$$ where $\omega$ is the single-scattering albedo (set to unity in the all-scattering, zero-absorption approximation used here), $T_{\rm{ray}}$ is the transmission along the optical path calculated as $$\label{transmission} T_{\rm{ray}}=e^{-\tau_{\rm{ray}}(\frac{1}{\cos z_s}+\frac{1}{\cos z_o})},$$ with $\tau_{\rm{ray}}$ the (zenith) optical depth due to Rayleigh scattering (calculated at the mid-point of the spectral interval considered). The value of $\tau_{\rm{ray}}$ depends critically on the choice of $P_S$ (see above, eqs. \[raylayer\] and \[raytautotal\]). $\Phi_{\rm{ray}}$ is the phase function of Rayleigh scattering $$\label{rayphase} \Phi_{\rm{ray}}=\frac{3}{16\pi}\left(1+(\cos\phi_o)^2\right),$$ with $\phi_o$ the angle between the observer and the incoming stellar light, i.e., $\cos\phi_o=$**s**$\cdot$**o**. Figure \[rayeffect\] shows the effect of Rayleigh scattering on the phase curve. Together with the standard Lambertian scattering approximation of eq. \[cellflux\], we show phase curves with varying values of $P_S$. As expected, with increasing $P_S$, and hence an increasing contribution of Rayleigh scattering to the reflected light, the phase curve changes. For $P_S$=1bar, the atmosphere starts to become visible, and at $P_S$=10bar it dominates over the “surface” contribution. ![Effect of Rayleigh scattering on the phase curve, for different values of $P_S$. The 1ppm error bar is shown to the left. See text for discussion.[]{data-label="rayeffect"}](max_amp_RAY "fig:"){width="250pt"}\ Atmospheric modeling of hot Jupiters predicts the formation of clouds around the 10$^{-2}$bar layer or even at lower pressures (e.g., [@parmentier2013], [@webber2015]).
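The Rayleigh optical depth and slant transmission defined above can be sketched numerically. The snippet below is a single-column shortcut for the layer sum, with illustrative H$_2$/He mixing ratios and a hot-Jupiter-like gravity (these values are assumptions for illustration, not the model's actual layer grid):

```python
import math

M_U = 1.6605e-27  # kg, atomic mass unit

# sigma(lambda) = (lambda0/lambda)^4 * sigma0  (power-law scaling),
# reference values from the Shardanand & Mikawa table above: (um, cm^2)
RAY = {"H2": (0.5145, 1.17e-27), "He": (0.5145, 8.6e-29)}

def sigma_ray(species, lam_um):
    lam0, sig0 = RAY[species]
    return (lam0 / lam_um) ** 4 * sig0

def tau_ray(p_surf, g, lam_um, mix={"H2": 0.9, "He": 0.1}, mu=2.0):
    """Total zenith Rayleigh optical depth down to pressure p_surf [Pa]
    for surface gravity g [m/s^2], collapsing the layer sum into a
    single column of column density p / (mu * m_u * g)."""
    col = p_surf / (mu * M_U * g) * 1e-4  # molecules per cm^2
    return sum(c * col * sigma_ray(s, lam_um) for s, c in mix.items())

def transmission(tau, cos_zs, cos_zo):
    """Two-way slant transmission along incoming and outgoing paths."""
    return math.exp(-tau * (1.0 / cos_zs + 1.0 / cos_zo))

tau = tau_ray(1e5, 20.0, 0.6)  # P_S = 1 bar, g = 20 m/s^2, 0.6 um
print(tau, transmission(tau, 1.0, 1.0))
```

With these illustrative numbers the zenith optical depth at 1 bar comes out of order 0.1, broadly consistent with the atmosphere only starting to become visible at $P_S$=1bar in Fig. \[rayeffect\].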
This implies that the reflecting “surface” is at $P_S$<10$^{-2}$bar. Furthermore, optical absorption by, e.g., alkali metals or TiO/VO greatly increases with pressure. Based on cross sections and solar abundances presented by @desert2008, Fig. \[tio\] shows the transmission due to TiO and VO absorption assuming $P_S$=10$^{-4}$bar. Figure \[tio\] suggests that not much radiation is expected to penetrate to levels where Rayleigh scattering becomes important. ![Transmission due to TiO and VO, as a function of wavelength. See text for details.[]{data-label="tio"}](tio.png "fig:"){width="250pt"}\ Hence, we would expect that Rayleigh scattering does not play a large role, and we neglect it in our phase-curve studies (equivalent to setting $P_S$=0). Therefore, phase curves are calculated with the Lambert approximation.

MCMC results {#mcmcresults}
============

![image](corot1_standard_triangle){width="250pt"} ![image](corot1_snellen_triangle){width="250pt"}\
![image](corot1_free_alb_triangle){width="250pt"} ![image](corot1_free_temp_triangle){width="250pt"}\

  scenario        95% credibility regions   best-fit $V_P$   $\chi^2_{\rm{min}}$   $\chi^2_{\rm{red,min}}$   BIC     $\Delta$ BIC   $p_M$
  --------------- ------------------------- ---------------- --------------------- ------------------------- ------- -------------- -------
  standard        0.06< $A_S$< 0.28   $A_S$= 0.22   19.36   1.38   24.91   0   1
                  0.18< $\epsilon$< 0.98   $\epsilon$= 0.86
  free T          0.13< $A_S$< 0.31   $A_S$= 0.25   17.90   1.37   26.22   1.31   0.51
                  806< $T_d$<2169K   $T_d$= 874K
                  523< $T_n$<1634K   $T_n$= 727K
  free albedo     0.11< $A_S$< 0.30   $A_S$= 0.25   17.93   1.37   26.25   1.34   0.51
                  0.11< $A_B$< 0.76   $A_B$= 0.77
                  0.04< $\epsilon$< 0.96   $\epsilon$= 0.77
  no scattering   0< $A_B$< 0.18   $A_B$= 0.09   24.40   1.74   29.95   5.04   0.08
                  0< $\epsilon$< 0.31   $\epsilon$= 0.01
  @snellen2009    -   -   22.24   1.58   27.79   2.88   0.23

![image](tres2_standard_triangle){width="250pt"} ![image](tres2_free_alb_triangle){width="250pt"}\
![image](tres2b_free_temp_triangle){width="250pt"} ![image](tres2_standard_off_triangle){width="250pt"}\
![image](tres2_standard_asym_triangle){width="250pt"} ![image](tres2_standard_both_triangle){width="250pt"}\

  scenario          95% credibility regions   best-fit $V_P$   $\chi^2_{\rm{min}}$   $\chi^2_{\rm{red,min}}$   BIC      $\Delta$ BIC   $p_M$
  ----------------- ------------------------- ---------------- --------------------- ------------------------- -------- -------------- --------------------
  standard + off    0 < $A_S$< 0.035   $A_S$= 0.028   128.48   2.06   149.42   0   1
                    0.01< $\epsilon$< 0.82   $\epsilon$= 0.07
                    0.89< $M_P$ < 1.39 $M_{\rm{jup}}$   $M_P$ = 1.09$M_{\rm{jup}}$
                    116< $\theta_{\rm{day}}$< 259$^{\circ}$   $\theta_{\rm{day}}$= 217$^{\circ}$
  standard          0 < $A_S$< 0.026   $A_S$= 0.022   136.19   2.19   152.95   3.53   0.17
                    0.11< $\epsilon$< 0.98   $\epsilon$= 0.94
                    1.08< $M_P$ < 1.48 $M_{\rm{jup}}$   $M_P$ = 1.31$M_{\rm{jup}}$
  standard + asy    0 < $A_S$< 0.06   $A_S$= 0.046   126.65   2.14   155.97   6.55   0.04
                    0.10< $\epsilon$< 0.98   $\epsilon$= 0.94
                    0.93< $M_P$ < 1.42 $M_{\rm{jup}}$   $M_P$ = 1.06$M_{\rm{jup}}$
                    0.02< $d_S$< 8.87   $d_S$= 0.14
                    91< $l_{\rm{start}}$< 230$^{\circ}$   $l_{\rm{start}}$= 98$^{\circ}$
                    126< $l_{\rm{end}}$< 267$^{\circ}$   $l_{\rm{end}}$= 192$^{\circ}$
  free A            0 < $A_S$< 0.027   $A_S$= 0.02   135.97   2.22   156.92   7.5   0.02
                    0.03< $A_B$< 0.68   $A_B$= 0.68
                    0.04< $\epsilon$< 0.97   $\epsilon$= 0.86
                    1.09< $M_P$ < 1.49 $M_{\rm{jup}}$   $M_P$ = 1.31$M_{\rm{jup}}$
  free T            0 < $A_S$< 0.028   $A_S$= 0.020   135.98   2.22   156.93   7.51   0.02
                    741< $T_d$<1854K   $T_d$= 1100K
                    519< $T_n$<1681K   $T_n$= 890K
                    1.09< $M_P$ < 1.48 $M_{\rm{jup}}$   $M_P$ = 1.29$M_{\rm{jup}}$
  standard + both   0 < $A_S$< 0.085   $A_S$= 0.018   125.52   2.16   159.03   9.61   8$\cdot$ 10$^{-3}$
                    0.01< $\epsilon$< 0.95   $\epsilon$= 0.05
                    0.84< $M_P$ < 1.39 $M_{\rm{jup}}$   $M_P$ = 1.21$M_{\rm{jup}}$
                    0.03<
$d_S$< 6.4   $d_S$= 3.7
                    92< $l_{\rm{start}}$< 234$^{\circ}$   $l_{\rm{start}}$= 185$^{\circ}$
                    127< $l_{\rm{end}}$< 266$^{\circ}$   $l_{\rm{end}}$= 220$^{\circ}$
                    81< $\theta_{\rm{day}}$< 318$^{\circ}$   $\theta_{\rm{day}}$= 253$^{\circ}$
  @barclay2012      -   -   125.87   2.06   146.82   -2.6   3.66

![image](hat7_standard_consistent_triangle){width="250pt"} ![image](hat7_free_alb_consistent_triangle){width="250pt"}\
![image](hat7_free_temp_triangle){width="250pt"} ![image](hat7_standard_off_consistent_triangle){width="250pt"}\
![image](hat7_standard_asym_consistent_triangle){width="250pt"} ![image](hat7_standard_both_triangle){width="250pt"}\

  scenario          95% credibility regions   best-fit $V_P$   $\chi^2_{\rm{min}}$   $\chi^2_{\rm{red,min}}$   BIC      $\Delta$ BIC   $p_M$
  ----------------- ------------------------- ---------------- --------------------- ------------------------- -------- -------------- ----------------------
  standard + asy    0.26< $A_S$< 0.31   $A_S$= 0.30   217.23   3.68   246.56   0   1
                    0.37< $\epsilon$< 0.99   $\epsilon$= 0.93
                    1.69< $M_P$ < 1.81 $M_{\rm{jup}}$   $M_P$ = 1.79$M_{\rm{jup}}$
                    0.003 < $d_S$< 0.27   $d_S$= 0.002
                    90< $l_{\rm{start}}$< 102$^{\circ}$   $l_{\rm{start}}$= 93$^{\circ}$
                    126< $l_{\rm{end}}$< 137$^{\circ}$   $l_{\rm{end}}$= 126$^{\circ}$
  standard + both   0.26< $A_S$< 0.35   $A_S$= 0.34   214.74   3.70   248.26   1.7   0.42
                    0.19< $\epsilon$< 0.93   $\epsilon$= 0.34
                    1.64< $M_P$ < 1.84 $M_{\rm{jup}}$   $M_P$ = 1.81$M_{\rm{jup}}$
                    0.01 < $d_S$< 0.66   $d_S$= 0.005
                    90< $l_{\rm{start}}$< 122$^{\circ}$   $l_{\rm{start}}$= 91$^{\circ}$
                    122< $l_{\rm{end}}$< 148$^{\circ}$   $l_{\rm{end}}$= 121$^{\circ}$
                    104< $\theta_{\rm{day}}$< 263$^{\circ}$   $\theta_{\rm{day}}$= 259$^{\circ}$
  standard + off    0.21< $A_S$< 0.34   $A_S$= 0.34   247.34   4.05   268.29   21.73   1.9$\cdot$10$^{-5}$
                    0.02< $\epsilon$< 0.44   $\epsilon$= 0.007
                    1.53< $M_P$ < 1.76 $M_{\rm{jup}}$   $M_P$ = 1.75$M_{\rm{jup}}$
118< $\theta_{\rm{day}}$< 233$^{\circ}$   $\theta_{\rm{day}}$= 234$^{\circ}$
  standard          0.27< $A_S$< 0.29   $A_S$= 0.29   669.74   10.80   686.50   439.94   0
                    0.78< $\epsilon$< 0.99   $\epsilon$= 0.99
                    1.69< $M_P$ < 1.79 $M_{\rm{jup}}$   $M_P$ = 1.75$M_{\rm{jup}}$
  free A            0.27< $A_S$< 0.29   $A_S$= 0.29   669.64   10.97   690.59   444.03   0
                    0.18< $A_B$< 0.78   $A_B$= 0.75
                    0.14< $\epsilon$< 0.99   $\epsilon$= 0.96
                    1.69< $M_P$ < 1.79 $M_{\rm{jup}}$   $M_P$ = 1.74$M_{\rm{jup}}$
  free T            0.28< $A_S$< 0.29   $A_S$= 0.29   669.68   10.97   690.62   444.06   0
                    739< $T_d$<2058K   $T_d$= 1204K
                    521< $T_n$< 1834K   $T_n$= 512K
                    1.70< $M_P$ < 1.79 $M_{\rm{jup}}$   $M_P$ = 1.75$M_{\rm{jup}}$
  @esteves2013      -   -   276.41   4.53   297.36   50.8   9.3$\cdot$10$^{-12}$

MCMC convergence diagnostics {#convergence_res}
============================

![image](c1_standard_convergence){width="250pt"} ![image](c1_snellen_convergence){width="250pt"}\
![image](c1_free_alb_convergence){width="250pt"} ![image](c1_free_temp_convergence){width="250pt"}\
![image](t2_standard_convergence){width="250pt"} ![image](t2_free_alb_convergence){width="250pt"}\
![image](t2_asym_convergence){width="250pt"} ![image](t2_free_temp_convergence){width="250pt"}\
![image](t2_off_convergence){width="250pt"} ![image](t2_both_convergence){width="250pt"}\
![image](standard_convergence){width="250pt"} ![image](free_alb_convergence){width="250pt"}\
![image](asym_convergence){width="250pt"} ![image](free_temp_convergence){width="250pt"}\
![image](off_convergence){width="250pt"} ![image](both_convergence){width="250pt"}\

  scenario        $A_S$   $A_B$   $\epsilon$   $T_d$   $T_n$
  --------------- ------- ------- ------------ ------- -------
  standard        1.026   -       1.036        -       -
  free T          1.037   -       -            1.061   1.055
  free albedo     1.029   1.054   1.059        -       -
  no scattering   -       1.024   1.025        -       -

  scenario      $A_S$   $A_B$   $\epsilon$   $T_d$   $T_n$   $M_P$   $\theta_{\rm{day}}$   $l_{\rm{start}}$   $l_{\rm{end}}$   $d_S$
  ------------- ------- ------- ------------ ------- ------- -------
--------------------- ------------------ ---------------- ------- standard 1.002 - 1.017 - - 1.002 - - - - free T 1.031 - - 1.059 1.09 1.002 - - - - free albedo 1.002 1.025 1.025 - - 1.002 - - - - asymmetric 1.056 - 1.0952 - - 1.001 - 1.534 1.251 1.2 offset 1.002 - 1.022 - - 1.001 1.022 - - - both 1.041 - 1.164 - - 1.001 1.148 1.319 1.193 1.218 scenario $A_S$ $A_B$ $\epsilon$ $T_d$ $T_n$ $M_P$ $\theta_{\rm{day}}$ $l_{\rm{start}}$ $l_{\rm{end}}$ $d_S$ ------------- ------- ------- ------------ ------- ------- ------- --------------------- ------------------ ---------------- ------- standard 1.006 - 1.013 - - 1.002 - - - - free T 1.093 - - 1.137 1.124 1.002 - - - - free albedo 1.01 1.061 1.037 - - 1.001 - - - - asymmetric 1.025 - 1.112 - - 1.002 - 1.009 1.011 1.021 offset 1.099 - 1.063 - - 1.002 1.167 - - - both 1.089 - 1.256 - - 1.002 1.332 1.167 1.257 1.186 [^1]: www.davidgsimpson.com/software/keplersoln\_f90.txt [^2]: In practice, we first calculate $M$, $T$ and $\alpha$ with $t_{\rm{peri}}=0$ and then interpolate such that $t=0$ occurs at $\alpha=0$. [^3]: Note that we calculated the star’s solid angle, as seen from the planet, in eq. \[stellarflux\] as $\pi \left(\frac{R_{\ast}}{r}\right)^2$. This assumes that $R_{\ast}\ll r$, a condition that is on the verge of breaking down for close-in planets such as the ones considered in this work. However, detailed modeling (not shown here), which took the large angular extent of the star (up to tens of degrees) into account, showed little to no influence on resulting phase curves. Therefore, we retain our simplifying assumption in the following. [^4]: http://www.user.oats.inaf.it/castelli/grids.html [^5]: retrieved from http://keplergo.arc.nasa.gov/Instrumentation.shtml [^6]: Note that for tidally-locked planets, such as assumed here, the star does not move across the celestial sphere for an observer on the planet. Therefore, “morning” (i.e., sunrise) and “evening” (i.e., sunset) as such don’t exist. 
Rather, for eastward circulation, “morning” is defined as the terminator over which an air parcel would enter the dayside from the nightside. For illustration purposes, we will retain “morning” and “evening” throughout the text.
---
abstract: 'A single photon that is initially uncorrelated with an atom will become entangled with the atom in their continuous kinetic variables in the process of resonant scattering. We find the relations between the entanglement and the physical control parameters, which indicate that high entanglement can be reached by broadening the scale of the atomic wave or squeezing the linewidth of the incident single–photon pulse.'
author:
- Rui Guo
- Hong Guo
title: Entanglement of scattered single photon with atom
---

[^1]

Introduction
============

Quantum entanglement is of fundamental importance in the theory of quantum nonlocality[@1; @nonlocality] as well as in quantum information[@2; @QIT]. Recently, photon–atom entanglement has frequently been discussed in finite Hilbert spaces[@3; @etagl; @fini], such as the polarizations of the photon or the internal states of the atom. With the progress of micro–cavity quantum electrodynamics[@4; @CQED] and strongly coupled artificial atoms[@5; @artificial; @atom], a single photon becomes able to considerably affect not only the atom’s internal state but also its external motion. This gives rise to some basic questions related to photon–atom entanglement in their infinite kinetic degrees of freedom.\
In recent studies[@6; @Singlephoton][@7; @scattering], entanglement in the continuous kinetic variables between a single photon and an atom has mostly been discussed in the process of single–photon emission with atomic recoil, where the atom is initially pumped to its excited level and the single photon is prepared “intrinsically” by the atomic spontaneous emission.
In our work, however, the resonant single photon is initially injected from a tuneable single–photon generator[@8; @singlephoton; @generator], whereas an artificial atom is placed freely in vacuum in its steady state (“artificial” indicates that the atomic coupling to the single photon is stronger than usual, which ensures that the interaction is observable[@statement]). We find that, after the interaction, the scattered single photon will be entangled with the atom to a higher degree than in the case of spontaneous emission alone. We explain this phenomenon as the coherent pumping of the incident photon and evaluate it with a defined “entanglement pumping coefficient”.\
To describe the degree of entanglement, firstly, we use the ratio ($R$) between the conditional and unconditional variance in momentum to evaluate the two particles’ correlation in the probability amplitude of their wave function, which is experimentally accessible and can be seen as the “amplitude entanglement” in momentum space[@9; @photoionization][@10; @phase; @ent]; secondly, we use the standard Schmidt decomposition[@11; @Schmidt; @dec] and treat the Schmidt number $K$[@12; @Schmidt; @num] as a criterion for the full entanglement contained both in amplitude and phase. For both criteria $R$ and $K$, we reveal their dependence on the physical control parameters $\tau$ and $\eta$, and compare them in some regions of interest, which shows that higher entanglement can be achieved by either broadening the scale of the atomic wave or squeezing the linewidth of the incident single photon.
The transmitted photon is also considered; unlike the scattered photon, it exhibits little entanglement with the atom, due to its interference with the transparent wave (the initially incident photon wave profile).\

![(a) Single–photon interacts resonantly with free two–level atom.\
(b) The incident photon is scattered by the atom; the angle $\theta$ is fixed to determine the direction of the detection.\
(c) Schematic diagram for the absorption–emission process. The process of emission with atomic recoil will generate entanglement between the recoiled atom and the scattered photon due to the momentum conservation.](fig1.eps){height="6cm"}

theoretical analysis
====================

As shown in Fig. 1 (a), the two–level atom with transition frequency $\omega_{a}$ and mass $m$ is placed freely in vacuum; its ground and excited states are denoted by $|1\rangle$ and $|2\rangle$, respectively. The incident single photon from the generator is resonant with the atom and, owing to its linewidth, is in a superposition of different Fock states. For realistic experimental considerations[@13; @exp1], we fix the photon detector and the atom detector in opposite directions and place them both in the $x$–$z$ plane for simplicity, as in Fig.
1 (b); the angle $\theta$ can be chosen to observe the scattering in the desired directions.\
Under the rotating wave approximation (RWA) the Hamiltonian can be written in the Schrödinger picture as: $$\begin{aligned} \nonumber\hat{H}&=&\frac{(\hbar \hat{p})^{2}}{2m}+\sum_{\vec{k}}\hbar \omega_{\vec{k}}\hat{a}^{\dag}_{\vec{k}}\hat{a}_{\vec{k}}+\hbar \omega_{a}\hat{\sigma}_{22}\\&+&\hbar\sum_{\vec{k}}\left[g(\vec{k})\hat{\sigma}_{12}\hat{a}^{\dag}_{\vec{k}}e^{-i\vec{k}\cdot\vec{r}} +{\rm H.c.} \right],\end{aligned}$$ where $\hbar \hat{p}$ and $\vec{r}$ denote the atomic center–of–mass momentum and position operators, $\hat{\sigma}_{ij}$ denotes the atomic operator $|i\rangle \langle j|$ ($i,j=1,2$), and $\hat{a}_{\vec{k}}$ and $\hat{a}^{\dag}_{\vec{k}}$ are the annihilation and creation operators for the light mode with photonic wave vector $\vec{k}$ and frequency $\omega_{\vec{k}}=ck$, respectively. Note that the summation is performed over all coupled modes in the continuous Hilbert space. We also suppress the polarization index in the summation as well as in the photon state, since we can always choose a particular polarization to detect the photon. $g(\vec{k})$ is the dipole coupling coefficient.\
As there is only one photon in the interaction, the basis of the Hilbert space can be denoted as $|\vec{q},1_{\vec{k}},i\rangle\ \ (i=1,2)$, where the arguments in the kets denote, respectively, the wave vector of the atom, that of the photon, and the atomic internal state. At time $t$ the state vector can therefore be expanded as: $$\begin{aligned} |\psi\rangle=\sum_{\vec{q},\vec{k}}C_{1}(\vec{q},\vec{k},t)|\vec{q},1_{\vec{k}},1\rangle+\sum_{\vec{q}}C_{2}(\vec{q},t) |\vec{q},0,2\rangle .\end{aligned}$$ Substituting Eqs.
(1) and (2) into the Schrödinger equation yields: $$\begin{aligned} &&i \dot{A}(\vec{q},\vec{k},t)= g(\vec{k})B(\vec{q}+\vec{k},t)e^{i[ck-\omega_{a}-\frac{\hbar}{2m}(2\vec{q}+\vec{k})\cdot \vec{k}]t},\\ \nonumber &&i \dot{B}(\vec{q},t)=\sum_{\vec{k}}g^{*}(\vec{k})A(\vec{q}-\vec{k},\vec{k},t)e^{i[\omega_{a}-ck+\frac{\hbar }{2m}(2\vec{q}-\vec{k})\cdot\vec{k}]t},\end{aligned}$$ where $A$ and $B$ are the slowly varying parts of $C_{1}$ and $C_{2}$, i.e.: $$\begin{aligned} A(\vec{q},\vec{k},t)&=&C_{1}(\vec{q},\vec{k},t)e^{i(\frac{\hbar q^{2}}{2m}+ck)t},\\ B(\vec{q},t)&=&C_{2}(\vec{q},t)e^{i(\frac{\hbar q^{2}}{2m}+\omega_{a})t}.\end{aligned}$$ Supposing the atom is initially in the ground state and has zero average velocity, the initial conditions can be set as: $$\begin{aligned} &&A(\vec{q},\vec{k},t=0)=\chi_{0}G(\vec{q})P(\vec{k}-\vec{k_{0}}),\\ &&B(\vec{q},t=0)=0,\end{aligned}$$ where $G(\vec{q})=G_{x}(q_{x})G_{y}(q_{y})G_{z}(q_{z})$ and $P(\vec{k}-\vec{k}_{0})=P_{x}(k_{x})P_{y}(k_{y})P_{z}(k_{z}-k_{0})$. Here, the functions $G_{i}(q_{i})$ and $P_{i}(k_{i})$ $(i=x,y,z)$ have zero center value and bandwidths $\delta q_{i}$ and $\delta k_{i}$, respectively. The coordinates are chosen as in Fig. 1 (b), where we take the incident direction as the $z$–axis.
$\chi_{0}$ is the normalization factor and $\vec{k}_{0}=(0,0,\frac{\omega_{a}}{c})$ is the resonant wave vector.\ We proceed to solve the equations with a Laplace transformation and the single pole approximation[@14; @single; @pole], obtaining: $$\begin{aligned} &&B(\vec{q},t)=-i\chi_{0}\sum_{\vec{k}}g^{*}(\vec{k})\times\\ \nonumber &&\frac{G(\vec{q}-\vec{k})P(\vec{k}-\vec{k}_{0})\left \{ e^{i[\omega_{a}-ck+\frac{\hbar}{2m}(2\vec{q}-\vec{k})\cdot \vec{k}]t}-e^{-iLt-\Gamma t} \right\}}{iL+\Gamma+i[ \omega_{a}-ck+\frac{\hbar}{2m}(2\vec{q}-\vec{k})\cdot\vec{k}]},\end{aligned}$$ where the frequency shift $L$ and the atomic linewidth $\Gamma$ are given as: $$\begin{aligned} L&=&\sum_{\vec{k}}\frac{|g(\vec{k})|^{2}}{\omega_{a}-ck+\frac{\hbar}{2m}(2\vec{q}-\vec{k})\cdot \vec{k}}\ , \\ \Gamma&=&\pi \sum_{\vec{k}}|g(\vec{k})|^{2}\delta(\omega_{a}-ck).\end{aligned}$$ We can simplify Eq. (9) by replacing the term $G(\vec{q}-\vec{k})$ with $G(\vec{q}-\vec{k}_{0})$, since the momentum bandwidth $\delta q_{i}$ due to the recoil is normally much larger than the photon linewidth $\delta k_{i}$; also, we can replace $\vec{k}$ with $\vec{k}_{0}$ in the term $\frac{\hbar }{2m}(2\vec{q}-\vec{k})\cdot\vec{k}$. With these approximations, the first term in the curly bracket can be seen as the inverse Fourier transform of the product of the photonic shape and a Lorentzian shape, and will cause a decay on a time scale ${\rm max}\{\frac{1}{\Gamma},\frac{1}{c\delta k_i} \}$; the second decay term $e^{-iLt-\Gamma t}$ is due to spontaneous emission. One then finds directly that $B(\vec{q},t\rightarrow \infty)\rightarrow 0$. In the subsequent calculations, we ignore the frequency shift, since it can be treated as a modification of the atomic transition frequency, and regard the slowly varying function $g(\vec{k})$ as a constant.\ With the approximations mentioned above, from Eqs.
(3) and (9), we obtain the steady solution of $A(\vec{q},\vec{k},t\rightarrow \infty)$: $$\begin{aligned} \nonumber A(\vec{q},\vec{k},t\rightarrow \infty)=&&\chi_{0}G(\vec{q})P(\vec{k}-\vec{k}_{0}) +\frac{\chi_{0}|g|^{2} G(\vec{q}+\vec{k}-\vec{k}_{0})}{\Gamma-i\left[ ck-\omega_{a}-[\hbar(\vec{q}+\vec{k})]^{2}/2m\hbar+(\hbar \vec{q})^{2}/2m\hbar \right]}\times \\ &&\sum_{\vec{k}_{1}}\frac{P(\vec{k}_{1}-\vec{k}_{0})}{ i\left[ ck-ck_{1}-\frac{\hbar}{2m}(2\vec{q}+\vec{k})\cdot\vec{k}+\frac{\hbar}{m}(\vec{q}+\vec{k})\cdot\vec{k}_{0} -\frac{\hbar}{2m}k^{2}_{0} \right]}\ \ .\end{aligned}$$ From Eq. (10), one sees that the final state is a superposition of the transparent wave (the initially incident photon wave profile, described by the first term on the r.h.s.) and the scattering wave (second term on the r.h.s.). In the scattering part, the atom and the photon are entangled due to the process of photon absorption and emission with atomic recoil, which is sketched in Fig. 1 (c). One may find that the Lorentzian–Gaussian factor in the scattering part is very similar to that in the case of spontaneous emission with recoil[@6; @Singlephoton], where the Gaussian term is a reflection of momentum conservation and the Lorentzian term indicates energy conservation.\ The general formula (10) can be used to analyze the photon scattered in different directions. Without loss of physical generality, we choose the initial conditions for the atom as $G_{i}(q_{i})=e^{-(q_{i}/\delta q_{i})^{2}}$, and for the photon $P_{i}(k_{i})=1/\left(k_{i}/\delta k_{i}+1\right)$, which is exactly the case if the incident single photon is generated by spontaneous emission. As a remark, we point out that all the conclusions below remain valid when the incident photon has a different shape, such as a Gaussian.
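The entangled Lorentzian–Gaussian structure noted here can also be checked numerically. The sketch below is purely illustrative (the grids and the values $\eta_{x}=10$, $\tau_{z}=1$ are arbitrary choices, not quantities computed in the text): it discretizes the dimensionless amplitude derived in Eq. (11) of the next section, a Gaussian envelope in the atomic variable times two Lorentzians in the diagonal variable, and estimates the number of Schmidt modes from a singular-value decomposition.

```python
import numpy as np

# Scattered amplitude in the dimensionless form of Eq. (11): a Gaussian
# envelope in dq times two Lorentzian factors in the diagonal dq + dk.
def amplitude(dq, dk, eta, tau):
    d = dq + dk
    return np.exp(-(dq / eta) ** 2) / ((d + 1j) * (d / tau + 1j))

eta, tau = 10.0, 1.0                      # illustrative control parameters
q = np.linspace(-40.0, 40.0, 801)         # grid for dq (atom)
k = np.linspace(-40.0, 40.0, 801)         # grid for dk (photon)
A = amplitude(q[:, None], k[None, :], eta, tau)
A /= np.sqrt(np.sum(np.abs(A) ** 2))      # discrete normalization

# Squared singular values of the discretized amplitude approximate the
# Schmidt eigenvalues; K = 1/sum(lambda^2) counts the effective modes.
lam = np.linalg.svd(A, compute_uv=False) ** 2
K = 1.0 / np.sum(lam ** 2)
```

A value of $K$ well above unity signals that the discretized amplitude does not factorize into a product of atomic and photonic functions.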
Amplitude Entanglement in the Scattered Photon ========================================== ![(a) and (b) are contour and density plots of $|A_{\frac{\pi}{2}}|^{2}$ with the condition $\tau_{z}=1$, $\eta_{x}=10$; (c) and (d) are contour and density plots of $|A_{0}|^{2}$ with the condition $\tau_{z}=1$, $\eta_{z}=10$. ](fig2.eps){height="8cm"} To make the physical results more evident and avoid unnecessary mathematical complexity, we focus our attention on the photon scattered perpendicular to the incident direction, i.e., $\theta=\frac{\pi}{2}$. We then project Eq. (10) onto the subspace $|(q_{x},0,0)\rangle \otimes |1_{(k_{x},0,0)}\rangle$, with the same approximations used in Eq. (9), obtaining: $$\begin{aligned} \nonumber A_{\frac{\pi}{2}}&=&\frac{N\cdot {\rm exp}\left[-(\Delta q_{x}-\frac{\hbar k_{0}}{mc}\Delta k_{x})^{2}/\eta_{x}^{2}\right]}{(\Delta k_{x}+\Delta q_{x}+\frac{\hbar k_{0}^{2}}{2m\Gamma}+i)\left[(\Delta k_{x}+\Delta q_{x})/\tau_{z}+i\right]},\\ &\approx&\frac{N\cdot {\rm exp}\left[-(\Delta q_{x}/\eta_{x})^{2}\right]}{(\Delta k_{x}+\Delta q_{x}+i)\left[(\Delta k_{x}+\Delta q_{x})/\tau_{z}+i \right]},\end{aligned}$$ where $\Delta k_{i}\equiv \frac{k_{i}-k_{0}}{\Gamma/c}$, $\Delta q_{i}\equiv \frac{\hbar k_{0}}{m\Gamma}(q_{i}-k_{0})$, $\eta_{i}\equiv \frac{\delta q_{i}\hbar k_{0}}{m\Gamma}$, and $\tau_{i}\equiv \frac{\delta k_{i}}{\Gamma/c}$ $(i=x,y,z)$ are dimensionless parameters. Note that $\eta_{x}$ and $\tau_{z}$ contain all the physical parameters that determine the nature of the atom–photon system, and can thus be treated as physical control parameters for the atom and the photon, respectively. We have neglected small terms in Eq. (11), since $\hbar k^{2}_{0}\ll m\Gamma$ and $\hbar k_{0}\ll mc$ in realistic conditions. $N$ is the normalization factor, with $N^{2}=\sqrt{2}(1+\tau_{z})/\pi^{\frac{3}{2}}\tau_{z}\eta_{x}$.\ From Eq. (11) and Fig.
2, one sees that the variables $\Delta q_{x}$ and $\Delta k_{x}$ play symmetric roles in the two Lorentzian functions. This localizes the probability density $|A_{\frac{\pi}{2}}|^{2}$ along the diagonal of momentum space, which implies that the photon–atom wave function does not factorize, and therefore that the two particles are entangled. In fact, we can treat the ratio ($R$) of the unconditional to the conditional variance of $\Delta q_{x}$ or $\Delta k_{x}$ as a measure of entanglement[@9; @photoionization]. Compared to the Schmidt number $K$, this ratio exhibits a more explicit analytic dependence of the entanglement on its control parameters $\eta_{x}$ and $\tau_{z}$, and is also directly accessible experimentally[@15; @exp3].\ ![(a) Relation between $R$ and the two control parameters $(\tau_{z},\eta_{x})$. (b) Sectional views of (a), with $\eta_{x}=1,\ 5,\ 10,\ 20 $ from bottom to top. The ratio $R$ is calculated from the variable $\Delta q_{x}$ with $\Delta k_{x}$ fixed at the origin.](fig3a.eps "fig:"){height="3.5cm"}![(a) Relation between $R$ and the two control parameters $(\tau_{z},\eta_{x})$. (b) Sectional views of (a), with $\eta_{x}=1,\ 5,\ 10,\ 20 $ from bottom to top.
The ratio $R$ is calculated from the variable $\Delta q_{x}$ with $\Delta k_{x}$ fixed at the origin.](fig3b.eps "fig:"){height="3cm"} We proceed to calculate the ratio for the variable $\Delta q_{x}$, i.e., $R\equiv \delta \Delta q_{x}^{{\rm single}}/\delta \Delta q_{x}^{{\rm coinc}}$, where the unconditional variance is obtained from the single–particle observation as: $$\begin{aligned} \delta^{2} \Delta q_{x}^{{\rm single}}&=&\langle \Delta q_{x}^{2}\rangle-\langle \Delta q_{x} \rangle^{2}\\ \nonumber &=&\int {\rm d}\Delta k_{x}{\rm d}\Delta q_{x} \Delta q_{x}^{2}|A_{\frac{\pi}{2}}|^{2}\\ \nonumber &-&\left(\int {\rm d}\Delta k_{x}{\rm d}\Delta q_{x}\Delta q_{x}|A_{\frac{\pi}{2}}|^{2} \right)^{2},\end{aligned}$$ and the coincidence measurement gives the conditional variance at some specified $\Delta k_{x}$: $$\begin{aligned} && \delta^{2} \Delta q_{x}^{{\rm coinc}}=\langle \Delta q_{x}^{2}\rangle_{\Delta k_{x}}-\langle \Delta q_{x} \rangle^{2}_{\Delta k_{x}}\\ \nonumber &&=\frac{\int {\rm d}\Delta q_{x} \Delta q_{x}^{2}|A_{\frac{\pi}{2}}|^{2}}{\int{\rm d}\Delta q_{x}|A_{\frac{\pi}{2}}|^{2}}-\left(\frac{\int {\rm d}\Delta q_{x}\Delta q_{x}|A_{\frac{\pi}{2}}|^{2}}{\int{\rm d}\Delta q_{x}|A_{\frac{\pi}{2}}|^{2}} \right)^{2}.\end{aligned}$$ Substituting Eqs. (11)–(13) into the definition of $R$, we obtain $R(\eta_{x},\tau_{z})$ as a function of the parameters $\eta_{x}$ and $\tau_{z}$; the result is illustrated in Fig. 3 with $\Delta k_{x}$ fixed at the origin. One can see that the entanglement increases monotonically when $\eta_{x}$ increases or $\tau_{z}$ decreases, which indicates that higher entanglement can be achieved by squeezing the linewidth of the incident photon or broadening the wave packet of the atom.
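This monotonic behavior can be reproduced by evaluating the defining variances numerically. In the sketch below (grid sizes and parameter values are illustrative choices only, not the ones used for Fig. 3), $R$ is computed from $|A_{\frac{\pi}{2}}|^{2}$ of Eq. (11), conditioning at $\Delta k_{x}=0$:

```python
import numpy as np

def density(dq, dk, eta, tau):
    # |A_{pi/2}|^2 from Eq. (11), up to an overall normalization
    d = dq + dk
    return np.exp(-2.0 * (dq / eta) ** 2) / ((d ** 2 + 1.0) * (d ** 2 / tau ** 2 + 1.0))

def ratio_R(eta, tau, half_width=60.0, n=1201):
    q = np.linspace(-half_width, half_width, n)
    k = np.linspace(-half_width, half_width, n)
    P = density(q[:, None], k[None, :], eta, tau)
    # unconditional (single-particle) variance of dq
    w = P.sum(axis=1); w /= w.sum()
    var_single = np.sum(w * q ** 2) - np.sum(w * q) ** 2
    # conditional (coincidence) variance with dk fixed at the origin
    c = P[:, n // 2]; c /= c.sum()
    var_coinc = np.sum(c * q ** 2) - np.sum(c * q) ** 2
    return np.sqrt(var_single / var_coinc)

R_broad = ratio_R(eta=10.0, tau=1.0)    # unsqueezed incident photon
R_narrow = ratio_R(eta=10.0, tau=0.25)  # narrower incident photon
```

For fixed $\eta_{x}$, narrowing the incident photon (smaller $\tau_{z}$) increases $R$, in line with the trend of Fig. 3.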
In particular, when $\eta_{x}>1$, we have: $$\begin{aligned} R\approx \frac{\eta_{x}+\sqrt{\frac{2}{\pi}}(1+\tau_{z})}{2\sqrt{\tau_{z}}},\end{aligned}$$ from which it is found that the entanglement increases linearly with $\eta_{x}$ and is abruptly enhanced when $\tau_{z}$ tends to zero. As a remark, we emphasize that all the conclusions above hold qualitatively whether $\Delta k_{x}$ is fixed at a different value or the ratio $R$ is calculated from the other variable, $\Delta k_{x}$.\ The ratio $R$, which can be obtained experimentally by comparing momentum variances, appropriately quantifies the entanglement contained in the probability-amplitude correlation (and can thus be seen as a measure of the “amplitude entanglement”). As shown above, it also captures the correct trend of the entanglement with its control parameters. However, the definition of $R$ depends on the representation: different choices of basis for the Hilbert space lead to distinct values of $R$.
This is because we only use the amplitude of the wavefunction to construct $R$, so all the entanglement encoded in the phase[@10; @phase; @ent] is lost.\ To obtain the “total entanglement”, we calculate the Schmidt number[@12; @Schmidt; @num] and compare it with the entanglement ratio $R$ in the following section.\ Full entanglement in the scattered photon ====================================== Mathematically, for a bipartite system in a pure state, the entanglement of an unfactorable wavefunction can be completely characterized by the Schmidt number, denoted by $K\equiv(\sum_{n=0}^{\infty}\lambda_{n}^{2})^{-1}$, where the $\lambda_{n}$ are the eigenvalues of the integral equation [@11; @Schmidt; @dec]: $$\begin{aligned} \int {\rm d}\Delta k_{x}' \rho^{{\rm P}}(\Delta k_{x},\Delta k_{x}')\phi_{n}(\Delta k_{x}')=\lambda_{n}\phi_{n}(\Delta k_{x}),\end{aligned}$$ and the density matrix for the photon is defined as: $$\begin{aligned} \nonumber \rho^{{\rm P}}(\Delta k_{x},\Delta k_{x}')\equiv \int {\rm d}\Delta q_{x} A_{\frac{\pi}{2}}(\Delta q_{x},\Delta k_{x})A^{*}_{\frac{\pi}{2}}(\Delta q_{x},\Delta k'_{x}), \\ \end{aligned}$$ where we note that the time–dependent phase has been removed from the density matrix, since it does not contribute to the entanglement. Although we work with the photon, the Schmidt number can equally be obtained through the atomic density matrix, and the eigenfunctions of the atom $\left[\psi_{n}(\Delta q_{x})\right]$ can be related to those of the photon through: $$\begin{aligned} \psi_{n}(\Delta q_{x})=\frac{1}{\sqrt{\lambda_{n}}}\int {\rm d}\Delta k_{x}A_{\frac{\pi}{2}}(\Delta q_{x},\Delta k_{x})\phi^{*}_{n}(\Delta k_{x}),\end{aligned}$$ where $\phi_{n}(\Delta k_{x})$ and $\psi_{n}(\Delta q_{x})$ $(n=1,2,\ldots)$ form complete orthonormal sets for the photon and the atom, respectively.
With these discrete modes, the unfactorable wavefunction can be expanded uniquely into a sum of factored products: $$\begin{aligned} A_{\frac{\pi}{2}}(\Delta q_{x},\Delta k_{x})=\sum_{n}\sqrt{\lambda_{n}}\psi_{n}(\Delta q_{x})\phi_{n}(\Delta k_{x}).\end{aligned}$$ Then, the Schmidt number $K$, which estimates the number of modes that are “important” in making up the expansion of Eq. (18), serves as a quantitative measure of entanglement[@7; @scattering][@12; @Schmidt; @num]. Note that $K$ is independent of the representation, since all the $\lambda$'s remain the same in different representations; it can thus be seen as capturing the full entanglement information (both amplitude and phase entanglement) kept in the joint wavefunction.\ ![Schmidt number $K$ and the amplitude entanglement degree $R$ in dependence on $\tau_{z}$ with $\eta_{x}=10$. Spots are numerical results for $K$, whereas the solid line is plotted for $R$. The inset shows them as functions of $\eta_{x}$ with $\tau_{z}$ fixed; lines from bottom to top are: $R(\tau_{z}=10)$, $K(\tau_{z}=10)$, $R(\tau_{z}=1)$, $K(\tau_{z}=1)$, $R(\tau_{z}=0.1)$, $K(\tau_{z}=0.1)$, respectively. ](fig4.eps){height="5cm"} Since Eq. (15) is not analytically solvable, we use a discrete eigenvalue equation to approximate the integral equation. To reliable precision, we use $1000\times1000$ matrices to carry out the diagonalization, and collect some of the results in Fig.
4, where we also compare the Schmidt number $K$ with the amplitude entanglement ratio $R$.\ From the numerical results, we find that, similar to the ratio $R$, $K$ rises linearly with the parameter $\eta_{x}$ and increases rapidly when the linewidth of the incident photon is squeezed below the atomic linewidth $\Gamma$, i.e., $\tau_{z}<1$. Secondly, when $\tau_{z}$ is fixed, the slope of $K(\eta_{x})$ is always larger than that of $R(\eta_{x})$, which means that more entanglement information transfers to the phase when $\eta_{x}$ becomes larger; this phenomenon becomes more evident when $\tau_{z}$ is reduced. For example, when $\tau_{z}=0.1$, $R\approx 1.58\eta_{x}+1.39$ whereas $K\approx 3.44\eta_{x}+0.08$, which indicates that more than half of the entanglement information will be unavailable to momentum dispersion observation when $\eta_{x}$ grows large under this condition.\ Another notable phenomenon occurs when $\tau_{z}=1$, i.e., when the linewidth of the incident photon is not squeezed and the photon can be prepared directly by spontaneous emission from the same atom: we find $K\approx 0.75\eta_{x}+0.16$ $(\eta_{x} \gg 1)$, whereas in the case of spontaneous emission[@6; @Singlephoton] $K\approx 0.28\eta +0.72$ $(\eta \gg 1)$. This difference indicates that, although in both cases entanglement is generated from momentum conservation in the process of photon emission with atomic recoil, the absorption of the incident photon adds entanglement through its coherent pumping effect. As $K$ is linear in $\eta$ (or $\eta_{x}$), we define the “entanglement pumping coefficient” as: $${\rm EPC}\equiv \frac{{\rm slope\ of\ } K(\eta_{x}) {\rm\ in\ scattering}}{{\rm slope\ of\ } K(\eta ){\rm\ in\ spontaneous\ emission}}\ ,$$ since the constant term in $K(\eta)$ plays a minor role when the entanglement is large. The coefficient ${\rm EPC}$ gives the factor by which the entanglement is increased by the coherent pumping of an incident photon.
As it is independent of the atomic parameter, it reflects the entangling capability of the photon separately. We collect some numerical results in Fig. 5 and fit them with ${\rm EPC}\approx 1.1/\tau_{z}+1.5$ for $\tau_{z}\in(0,1)$, from which one sees that ${\rm EPC}$ increases rapidly when $\tau_{z}$ diminishes. This also implies that, if the incident photon is prepared close to the monochromatic limit, i.e., $\tau_{z}\rightarrow 0$, the scattered photon will be highly entangled with the recoiling atom.\ We plot the amplitude of the first three Schmidt modes for the photon with $\eta_{x}=10$ and $\tau_{z}=1$ in Fig. 6. We find that the number of peaks in momentum space is proportional to the Schmidt mode index, but the separations between different peaks are more distinct than in the case of spontaneous emission[@7; @scattering].\ transmitted photon ================== ![Entanglement pumping coefficient ${\rm EPC}$ as a function of $\tau_{z}$. The solid line is its fitted function $1.1/\tau_{z}+1.5$.](fig5.eps){height="4cm"} To consider the transmitted photon, we set the observation angle $\theta=0$ and obtain the joint wavefunction from Eq. (10): $$\begin{aligned} A_{0}&=&-\chi_{0}G_{z}(q_{z})P_{z}(k_{z})\\ \nonumber &+&\chi_{0}\frac{\pi}{4}\left(\frac {\Gamma}{ck_{0}}\right)^{2} \frac{\tau_{x}\tau_{y}G_{z}(q_{z})P_{z}(k_{z})}{1-i(\Delta k_{z}+\Delta q_{z}+\frac{\hbar k_{0}^{2}}{2m\Gamma})}.\end{aligned}$$ One can see that, in Eq. (19), the first term describes the two particles being free of interaction and keeping their initial factorable wave form, while the second term reflects the entanglement. Usually, the second term is much smaller than the first one, since $(\frac {\Gamma}{ck_{0}})^{2}\ll1$, but one can enlarge it by choosing a special physical system, such as an artificial atom with a low-lying excited level and strong coupling to its resonant modes.
However, this enhancement adds little entanglement between the transmitted photon and the recoiling atom, because interference between the two terms in Eq. (19) greatly weakens the correlation between the two particles. To make this clear, we show the contour and density plots for the probability density of $A_{0}$ in Fig. 2 under the artificial condition $\frac{\pi}{4}(\frac{\Gamma}{ck_{0}})^{2}\tau_{x}\tau_{y}= 1$, and obtain $ R \approx K<2$ in this situation.\ The eigenfunctions of the transmitted photon for the first three modes with $\eta_{z}=10$ and $\tau_{z}=1$ are collected in Fig. 6, from which one can see that, due to the interference, the corresponding modes of the transmitted photon exhibit one peak fewer than those of the scattered photon.\ ![First three Schmidt modes for the scattered and transmitted photon. Left column is for the scattered photon with $\tau_{z}=1$ and $\eta_{x}=10$; right column is for the transmitted photon with $\tau_{z}=1$, $\eta_{z}=10$, and $\frac{\pi}{4}(\frac{\Gamma}{ck_{0}})^{2}\tau_{x}\tau_{y}= 1$ for illustration.](fig6.eps){height="7cm"} conclusion ========== We analyze the physically fundamental interaction between a single photon and a free artificial atom in vacuum. With a few physical approximations, the general solution of the photon–atom wave function is obtained, from which it is found that the initially uncorrelated particles evolve into an entangled state due to momentum conservation in the scattering.
To evaluate the entanglement in the scattering, we first use an experimentally accessible parameter $R$, which denotes the ratio between the momentum variances in single–particle and coincidence observations, and obtain its simple dependence on the two physical control parameters $\eta_{x}\equiv \frac{\delta q_{x}\hbar k_{0}}{m\Gamma} $ and $ \tau_{z}\equiv \frac{\delta k_{z}}{\Gamma/c}$. Secondly, we use the standard Schmidt decomposition to reveal the full entanglement information and find that its trend is similar to that of $R$, which indicates that high entanglement can be achieved by either squeezing the linewidth of the incident photon or broadening the scale of the atomic wave packet. Furthermore, compared with spontaneous emission, we define a parameter ${\rm EPC}$ to evaluate the entanglement enhancement due to the coherent pumping effect of the resonant incident photon. Finally, we find that, for the transmitted photon, one can expect little entanglement, due to the interference between the transparent and scattered waves.\ ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== One of the authors (HG) acknowledges J. H. Eberly for helpful discussions during the drafting of this manuscript. This work is supported by the National Natural Science Foundation of China (Grant No. 10474004), and the DAAD exchange program: D/05/06972 Projektbezogener Personenaustausch mit China (Germany/China Joint Research Program).\ A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. **47**, 777 (1935); J. S. Bell, Physics (Long Island City, N. Y.) **1**, 195 (1964). C. H. Bennett and D. P. DiVincenzo, Nature (London) **404**, 247 (2000). M. Brune and S. Haroche, Rev. Mod. Phys. **73**, 565 (2001); D. L. Moehring *et al.*, Phys. Rev. Lett. **93**, 090410 (2004); B. B. Blinov *et al.*, Nature (London) **428**, 153 (2004); D. N. Matsukevich *et al.*, Phys. Rev. Lett. **95**, 040405 (2005). H. Mabuchi and A. C. Doherty, Science **298**, 1372 (2002); J. M. Raimond, M. Brune, S.
Haroche, Rev. Mod. Phys. **73**, 565 (2001). A. Wallraff *et al.*, Nature (London) **431**, 162 (2004). K. W. Chan, C. K. Law, and J. H. Eberly, Phys. Rev. Lett. **88**, 100402 (2002). K. W. Chan *et al.*, Phys. Rev. A **68**, 022110 (2003). M. Keller *et al.*, Nature (London) **431**, 1075 (2004); J. McKeever *et al.*, Science **303**, 1992 (2004); S. Brattke *et al.*, Phys. Rev. Lett. **86**, 3534 (2001). See, e.g., Ref. \[5\], where an artificial atom is implemented that can provide a high coupling coefficient and so can potentially be applied in our proposed experimental scheme. M. V. Fedorov *et al.*, Phys. Rev. A **69**, 052117 (2004). K. W. Chan and J. H. Eberly, arXiv:quant-ph/0404093v2 (2004). A. Ekert and P. L. Knight, Am. J. Phys. **63**, 415 (1995); S. Parker *et al.*, Phys. Rev. A **61**, 032305 (2000). R. Grobe *et al.*, J. Phys. B **27**, L503 (1994). M. S. Chapman *et al.*, Phys. Rev. Lett. **75**, 3783 (1995); C. Kurtsiefer *et al.*, Phys. Rev. A **55**, R2539 (1997). W. H. Louisell, *Quantum Statistical Properties of Radiation* (John Wiley & Sons, New York, 1973). M. D. Reid and P. D. Drummond, Phys. Rev. Lett. **60**, 2731 (1988). [^1]: Author to whom correspondence should be addressed. E-mail: hongguo@pku.edu.cn; phone: +86-10-6275-7035; fax: +86-10-6275-3208.
--- author: - 'Michel Brion and Lex E. Renner' title: 'Algebraic Semigroups are Strongly $\pi$-regular' --- Introduction {#sec:intro} ============ A fundamental result of Putcha (see [@Put Thm. 3.18]) states that any [*linear*]{} algebraic semigroup $S$ over an algebraically closed field $k$ is strongly $\pi$-regular. The proof follows from the corresponding result for $M_n(k)$ (essentially the Fitting decomposition), combined with the fact that $S$ is isomorphic to a closed subsemigroup of $M_n(k)$, for some $n>0$. At the other extreme, it is easy to see that any [*complete*]{} algebraic semigroup is strongly $\pi$-regular. It is therefore natural to ask whether [*any*]{} algebraic semigroup $S$ is strongly $\pi$-regular. The purpose of this note is to provide an affirmative answer to this question, over an arbitrary field $F$; then the set $S(F)$ of points of $S$ over $F$ is an abstract semigroup (we shall freely use the terminology and results of [@Spr Chap. 11] for algebraic varieties defined over a field). The Main Results {#sec:result} ================ \[thm:main\] Let $S$ be an algebraic semigroup defined over a subfield $F$ of $k$. Then $S(F)$ is strongly $\pi$-regular, that is, for any $x \in S(F)$, there exist a positive integer $n$ and an idempotent $e \in S(F)$ such that $x^n$ belongs to the unit group of $eS(F)e$. We may replace $S$ with any closed subsemigroup defined over $F$ and containing some power of $x$. Denote by $\langle x \rangle$ the smallest closed subsemigroup of $S$ containing $x$, that is, the closure of the subset $ \{ x^m, m >0 \}$; then $\langle x \rangle$ is defined over $F$ by [@Spr Lem. 11.2.4]. The subsemigroups $\langle x^n \rangle$, $n > 0$, form a family of closed subsets of $S$, and satisfy $\langle x^{mn} \rangle \subseteq \langle x^m \rangle \cap \langle x^n \rangle$. Thus, since $S$ is a noetherian topological space, there exists a smallest such semigroup, say $\langle x^{n_0} \rangle$.
Replacing $x$ with $x^{n_0}$, we may assume that $S = \langle x \rangle = \langle x^n \rangle$ for all $n > 0$. \[lem:semi\] With the above notation and assumptions, $x S$ is dense in $S$. Moreover, $S$ is irreducible. Since $S = \langle x^2 \rangle$, the subset $ \{ x^n, n \geq 2 \}$ is dense in $S$. Hence $x S$ is dense in $S$ by an easy observation (Lemma \[lem:dense\]) that we will use repeatedly. Let $S_1, \ldots,S_r$ be the irreducible components of $S$. Then each $x S_i$ is contained in some component $S_j$. Since $xS$ is dense in $S$, we see that $x S_i$ is dense in $S_j$. In particular, $j$ is unique and the map $\sigma : i \mapsto j$ is a permutation. By induction, $x^n S_i$ is dense in $S_{\sigma^n(i)}$ for all $n$ and $i$; thus $x^n S_i$ is dense in $S_i$ for some $n$ and all $i$. Choose $i$ such that $x^n \in S_i$. Then it follows that $x^{mn} \in S_i$ for all $m$. Thus, $\langle x^n \rangle \subseteq S_i$, and $S = S_i$ is irreducible. \[lem:mono\] Let $S$ be an algebraic semigroup and let $x \in S$. Assume that $S = \langle x \rangle$ (in particular, $S$ is commutative), $xS$ is dense in $S$, and $S$ is irreducible. Then $S$ is a monoid and $x$ is invertible. For $y \in S$, consider the decreasing sequence $$\cdots \subseteq \overline{y^{n+1} S} \subseteq \overline{y^n S} \subseteq \cdots \subseteq \overline{yS }\subseteq S$$ of closed, irreducible ideals of $S$. We claim that $$\overline{y^d S} = \overline{y^{d+1} S}= \cdots,$$ where $d := \dim(S) + 1$. Indeed, there exists $n \leq d$ such that $\overline{y^{n+1} S} = \overline{y^n S}$, that is, $y^{n+1} S$ is dense in $\overline{y^n S}$. Multiplying by $y^{m-n}$ and using Lemma \[lem:dense\], it follows that $y^{m+1} S$ is dense in $\overline{y^m S}$ for all $m \geq n$ and hence for $m \geq d$. This proves the claim. 
We may thus set $$I_y := \overline{y^d S} = \overline{y^{d+1} S} = \cdots$$ Then we have for all $y,z\in S$, $$\overline{y^d I_z} = I_{yz} \subseteq I_z,$$ since $y^d(z^d S) = (yz)^d S \subseteq z^d S$. Also, note that $I_x = S$, and $I_e = e S$ for any idempotent $e$ of $S$. By [@Br Sec. 2.3], $S$ has a smallest idempotent $e_S$, and $e_S S$ is the smallest ideal of $S$. In particular, $e_S S \subseteq I_y$ for all $y$. Define $$\mathscr{I} = \{ I \subseteq S\;|\; I = I_y\;\text{for some}\; y \in S\}.$$ This is a set of closed, irreducible ideals, partially ordered by inclusion, with smallest element $e_SS$ and largest element $S$. If $S = e_S S$, then $S$ is a group and we are done. Otherwise, we may choose $I \in \mathscr{I}$ which covers $e_S S$ (since $\mathscr{I} \setminus \{ e_S S \}$ has minimal elements under inclusion). Consider $$T = \{ y\in S\:|\; yI \;\text{is dense in}\; I \}.$$ If $y,z \in T$ then $\overline{yzI} = \overline{y\overline{zI}} = I$ and hence $T$ is a subsemigroup of $S$. Also, note that $T \cap e_S S = \emptyset$, since $e_S z I \subseteq e_S S$ is not dense in $I$ for any $z \in S$. Furthermore $x \in T$. (Indeed, $x S$ is dense in $S$ and hence $x y^d S$ is dense in $\overline{y^dS}$ for all $y \in S$. Thus, $x \; \overline{y^d S}$ is dense in $\overline{y^d S}$; in particular, $x I$ is dense in $I$). We now claim that $$T = \{ y \in S\;|\; y^d I \not \subseteq e_S S \}.$$ Indeed, if $y \in T$ then $y^d I$ is dense in $I$ and hence not contained in $e_S S$. Conversely, assume that $y^d I \not \subseteq e_S S$ and let $z \in S$ such that $I = I_z$. Since $\overline{y^d I} = \overline{y^d I_z} = I_{yz} \in \mathscr{I}$ and $\overline{y^d I} \subseteq I$, it follows that $\overline{y^d I} = I$ as $I$ covers $e_S S$. By that claim, we have $$S\setminus T = \{ y \in S \;|\; y^d I \subseteq e_S S\} = \{ y \in S \;|\; e_S y^d z = y^d z \; \text{for all} \; z \in I \}.$$ Hence $S \setminus T$ is closed in $S$. 
Thus, $T$ is an open subsemigroup of $S$; in particular, $T$ is irreducible. Moreover, since $x \in T$ and $xS$ is dense in $S$, it follows that $x T$ is dense in $T$; also note that $\{ x^n, n >0 \}$ is dense in $T$. Let $e_T \in T$ be the minimal idempotent, then $e_T \notin e_S S$ and hence the closed ideal $e_T S$ contains strictly $e_S S$. Since both are irreducible, we have $\dim(e_T T) = \dim(e_T S) > \dim(e_S S)$. Now the proof is completed by induction on $\kappa(S) := \dim(S) - \dim(e_S S)$. Indeed, if $\kappa(S) = 0$, then $S = e_S S$ is a group. In the general case, we have $\kappa(T) < \kappa(S)$. By the induction assumption, $T$ is a monoid and $x$ is invertible in $T$. As $T$ is dense in $S$, the neutral element of $T$ is also neutral for $S$, and hence $x$ is invertible in $S$. By Lemmas \[lem:semi\] and \[lem:mono\], there exists $n$ such that $\langle x^n \rangle$ is a monoid defined over $F$, and $x^n$ is invertible in that monoid. To complete the proof of Theorem \[thm:main\], it suffices to show that the neutral element $e$ of $\langle x^n \rangle$ is defined over $F$. For this, consider the morphism $$\phi: S \times S \longrightarrow S, \quad (y,z) \longmapsto x^n y z.$$ Then $\phi$ is the composition of the multiplication $$\mu : S \times S \longrightarrow S, \quad (y,z) \longmapsto yz$$ and of the left multiplication by $x^n$; the latter is an automorphism of $S$, defined over $F$. So $\phi$ is defined over $F$ as well, and the fiber $Z := \phi^{-1}(x^n)$ is isomorphic to $\mu^{-1}(e)$, hence to the unit group of $S$. In particular, $Z$ is smooth. Moreover, $Z$ contains $(e,e)$, and the tangent map $$d\phi_{(e,e)} : T_{(e,e)} (S \times S) \longrightarrow T_{x^n} S$$ is surjective, since $$d\mu_{(e,e)}: T_{(e,e)} (S \times S) = T_e S \times T_eS \longrightarrow T_e S$$ is just the addition. So $Z$ is defined over $F$ by [@Spr Cor. 11.2.14]. But $Z$ is sent to the point $e$ by $\mu$. Since that morphism is defined over $F$, so is $e$. 
\[lem:dense\] Let $X$ be a topological space, and $f : X \to X$ a continuous map. If $Y \subseteq X$ is a dense subset then $f(Y) \subseteq \overline{f(X)}$ is a dense subset. Let $U \subseteq \overline{f(X)}$ be a nonempty open subset. Then $f^{-1}(U) \subseteq X$ is open, and nonempty since $f(X)$ is dense in $\overline{f(X)}$. Hence $Y \cap f^{-1}(U) \neq \emptyset$. If $y \in Y\cap f^{-1}(U)$ then $f(y) \in f(Y)\cap U$. Hence $f(Y)\cap U \neq \emptyset$. \[rem:uni\] Given $x \in S$, there exists a [*unique*]{} idempotent $e = e(x) \in S$ such that $x^n$ belongs to the unit group of $e S e$ for some $n >0$. Indeed, we then have $x^n S x^n \subseteq e S e$; moreover, since there exists $y \in eSe$ such that $x^n y = y x^n = e$, we also have $e S e = x^n y S y x^n e \subseteq x^n S x^n$. Thus, $x^n S x^n = e S e$. It follows that $x^{mn} S x^{mn}$ is a monoid with neutral element $e$ for any $m > 0$, which yields the desired uniqueness. In particular, if $x \in S(F)$ then the above idempotent $e(x)$ is an $F$-point of the closed subsemigroup $\langle x \rangle$. We now give some details on the structure of the latter semigroup. For $x,e,n$ as above, we have $x^n = ex^n = (ex)^n$, and $y(ex)^n = e$ for some $y \in H_e$ (the unit group of $e \, \langle x \rangle$). But then $e x \in H_e$, since $(y(ex)^{n-1})(ex) = e$. Thus, $ex^m = (ex)^m \in H_e$ for all $m > 0$. But if $m \geq n$ then $x^m = ex^m$. Thus, if $x \notin H_e$ then there exists a unique $r > 0$ such that $x^r \notin H_e$ and $x^m \in H_e$ for any $m > r$. In particular, $x^m \in e \, \langle x \rangle$ for all $m > r$. Thus [*we can write $$\langle x \rangle = e \, \langle x \rangle \sqcup \{x,x^2,...,x^s \}$$ for some $s \leq r$*]{}. Notice also that [*these $x^i$’s, with $i \leq s$, are all distinct*]{} (if $x^i = x^j$ with $1 \leq i < j \leq s$, then $x^{i + s + 1 - j} = x^{s + 1} \in e \, \langle x \rangle$, a contradiction).
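The index–period structure $\langle x \rangle = e \, \langle x \rangle \sqcup \{x, x^2, \ldots, x^s\}$ described in this remark can be observed concretely in any finite semigroup, where strong $\pi$-regularity is automatic. The Python sketch below is purely illustrative (the map $f$ is an arbitrary choice, not an object from the text): it lists the powers of a self-map of a finite set under composition, locates the start of the cyclic part, and finds the unique idempotent power.

```python
# Powers of a self-map f of a finite set under composition form a monogenic
# finite semigroup <f>; some power of f is the unique idempotent e = e(f).
def compose(f, g):
    return tuple(f[g[i]] for i in range(len(g)))

f = (1, 2, 3, 1, 0)                  # an arbitrary map {0,...,4} -> itself
powers, p = [], f
while p not in powers:               # list f, f^2, ... until they repeat
    powers.append(p)
    p = compose(p, f)
index = powers.index(p) + 1          # f^(index) is where the cycle starts
period = len(powers) - powers.index(p)
# The idempotent e = f^m lies in the cyclic part, with exponent m the
# smallest multiple of the period that is >= index.
m = period * ((index + period - 1) // period)
e = powers[m - 1]
assert compose(e, e) == e            # e is idempotent, and f^m = e lies in
                                     # the unit group of e<f>e
```

For this choice, $\langle f \rangle = e\langle f \rangle \sqcup \{f\}$ with $e = f^3$; the group part $\{f^2, f^3, f^4\}$ is cyclic of order $3$ with identity $e$.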
Moreover, a similar decomposition holds for the semigroup of $F$-rational points. The set $\{ ex^m, m > 0 \}$ is dense in $e \,\langle x \rangle$ by Lemma \[lem:dense\]. But $ex^m = (ex)^m$, and $ex \in H_e$. So $e \, \langle x \rangle$ [*is a unit-dense algebraic monoid*]{}. Furthermore, if $\langle x^{m_0} \rangle$ is the smallest subsemigroup of $\langle x \rangle$ of the form $\langle x^m \rangle$, for some $m>0$, then $\langle x^{m_0} \rangle$ [*is the neutral component of*]{} $e \, \langle x \rangle$ (the unique irreducible component containing $e$). Indeed, $\langle x^{m_0} \rangle$ is irreducible by Lemma \[lem:semi\], and $y^{m_0} \in \langle x^{m_0} \rangle$ for any $y \in \langle x \rangle$ in view of Lemma \[lem:dense\]. Thus, the unit group of $\langle x^{m_0} \rangle$ has finite index in the unit group of $\langle x \rangle$, and hence in that of $e \, \langle x \rangle$. Finally, we show that Theorem \[thm:main\] is self-improving by obtaining the following stronger statement: \[cor:gen\] Let $S$ be an algebraic semigroup. Then there exists $n > 0$ (depending only on $S$) such that $x^n \in H_{e(x)}$ for all $x \in S$, where $e : x \mapsto e(x)$ denotes the above map. Moreover, there exists a decomposition of $S$ into finitely many disjoint locally closed subsets $U_j$ such that the restriction of $e$ to each $U_j$ is a morphism. We first show that for any irreducible subvariety $X$ of $S$, there exists a dense open subset $U$ of $X$ and a positive integer $n = n(U)$ such that $x^n \in H_{e(x)}$ for all $x \in U$, and $e\vert_U$ is a morphism. We will consider the semigroup $S(k(X))$ of points of $S$ over the function field $k(X)$, and view any such point as a rational map from $X$ to $S$; the semigroup law on $S(k(X))$ is then given by pointwise multiplication of rational maps. In particular, the inclusion of $X$ in $S$ yields a point $\xi \in S(k(X))$ (the image of the generic point of $X$). 
By Theorem \[thm:main\], there exist a positive integer $n$ and points $e, y \in S(k(X))$ such that $e^2 = e$, $\xi^n e = e \xi^n = \xi^n$, $y e = e y = y$ and $\xi^n y = y \xi^n = e$. Let $U$ be an open subset of $X$ on which both rational maps $e,y$ are defined. Then the above relations are equalities of morphisms $U \to S$, where $\xi$ is the inclusion. This yields the desired statements. Next, start with an irreducible component $X_0$ of $S$ and let $U_0$ be an open subset of $X_0$ such that $e\vert_{U_0}$ is a morphism. Now let $X_1$ be an irreducible component of $X_0 \setminus U_0$ and iterate this construction. This yields disjoint locally closed subsets $U_0,U_1,\ldots, U_j,\ldots$ such that $e\vert_{U_j}$ is a morphism for all $j$, and $X \setminus (U_0 \cup \cdots \cup U_j)$ is closed for all $j$. Hence $U_0 \cup \cdots \cup U_j = X$ for $j \gg 0$. [99]{} Brion, M.: On Algebraic Semigroups and Monoids, Available on the arXiv: <http://arxiv.org/pdf/1208.0675v4.pdf> Putcha, M.S.: Linear Algebraic Monoids. Cambridge University Press, Cambridge (1988) Springer, T.A.: Linear Algebraic Groups. Second edition. Birkhäuser, Boston (1998)
--- abstract: 'By incorporating the zero-point energy contribution we derive simple and accurate extensions of the usual Thomas-Fermi (TF) expressions for the ground-state properties of trapped Bose-Einstein condensates that remain valid for an arbitrary number of atoms in the mean-field regime. Specifically, we obtain approximate analytical expressions for the ground-state properties of spherical, cigar-shaped, and disk-shaped condensates that reduce to the correct analytical formulas in both the TF and the perturbative regimes, and remain valid and accurate in between these two limiting cases. Mean-field quasi-1D and -2D condensates appear as simple particular cases of our formulation. The validity of our results is corroborated by an independent numerical computation based on the 3D Gross-Pitaevskii equation.' author: - 'A. Muñoz Mateo' - 'V. Delgado' title: 'Extension of the Thomas-Fermi approximation for trapped Bose-Einstein condensates with an arbitrary number of atoms' --- The experimental realization of Bose-Einstein condensates (BECs) of dilute atomic gases confined in optical and magnetic traps [@BEC1; @BEC2; @BEC3] has stimulated great activity in the characterization of these quantum systems. Of particular interest are the ground-state properties of trapped BECs with repulsive interatomic interactions [@Baym1]. These properties derive from the condensate wave function $\psi(\mathbf{r})$ which, in the zero-temperature limit, satisfies the stationary Gross-Pitaevskii equation (GPE) [@RevStrin] $$\left( -\frac{\hbar^{2}}{2m}\nabla^{2}+V(\mathbf{r})+gN\left\vert \psi\right\vert ^{2}\right) \psi=\mu\psi, \label{TF0}$$ where $N$ is the number of atoms, $g=4\pi\hbar^{2}a/m$ is the interaction strength, $a$ is the s-wave scattering length, $V(\mathbf{r})=\frac{1}{2}m(\omega_{\bot}^{2}r_{\bot}^{2}+\omega_{z}^{2}z^{2})$ is the harmonic potential of the confining trap, and $\mu$ is the chemical potential. Only in two limiting cases can Eq. 
(\[TF0\]) be solved analytically: in the Thomas-Fermi (TF) and perturbative regimes. When $N$ is sufficiently large that $\mu\gg\hbar\omega_{\bot},\,\hbar\omega_{z},$ one enters the TF regime. In this case the kinetic energy can be neglected in comparison with the interaction energy and the GPE reduces to a simple algebraic equation. Useful analytical expressions can then be obtained for the condensate ground-state properties [@Baym1]. In the simple case of a spherical trap characterized by an oscillator length $a_{r}=\sqrt{\hbar/m\omega}$, Eq. (\[TF0\]) leads in the TF limit to $$\frac{1}{2}m\omega^{2}r^{2}+gN\left\vert \psi(r)\right\vert ^{2}=\mu ,\hspace{0.6cm}0\leq r\leq R \label{TF1}$$ where the condensate radius $R=\sqrt{2\mu/\hbar\omega}\,a_{r}$ is determined from the condition $\left\vert \psi(r)\right\vert ^{2}\geq0$, and the chemical potential $\mu=\frac{1}{2}\left( 15Na/a_{r}\right) ^{2/5}\hbar\omega$ follows from the normalization of $\psi(r)$. In the opposite limit, when $N$ is small enough that the interaction energy can be treated as a weak perturbation, one enters the (ideal gas) perturbative regime. In this case, to the lowest order, $\psi(r)$ is given by the harmonic oscillator ground state, $\psi(r)=(\pi a_{r}^{2})^{-3/4}\exp(-r^{2}/2a_{r}^{2})$, and the chemical potential satisfies$$(3/2)\hbar\omega+g\bar{n}=\mu\label{TF2}$$ where $\bar{n}=N/(\sqrt{2\pi}a_{r})^{3}$ is the mean atom density. Away from these two limiting cases, in principle, one has to solve the GPE numerically. Very few theoretical works have addressed the question of looking for approximate analytical solutions valid in between the two analytically solvable regimes. The most relevant proposals are based on a variational trial wave function [@Fet1], or on the semiclassical limit of the Wigner phase-space distribution function of the condensate [@Vinas1]. 
However, the practical usefulness of these approaches turns out to be somewhat limited in comparison with the simple TF approximation. In this work we address the above question from a different point of view. We start from the usual TF approximation and modify it conveniently to account, in a simple manner, for the zero-point energy contribution. This enables us to derive simple and accurate extensions of the TF expressions that remain valid for an arbitrary number of atoms in the mean-field regime. Specifically, we obtain general analytical expressions for the ground-state properties of spherical, cigar-shaped, and disk-shaped condensates that reduce to the correct analytical formulas in both the TF and the perturbative regimes, and remain valid and accurate in between these two limiting cases. We begin by considering a BEC in a spherical trap. In principle, we start from the TF relation of Eq. (\[TF1\]). However, since we intend to apply this equation to arbitrarily small condensates, we introduce a lower cutoff radius $r_{0}$, defined through $\frac{1}{2}m\omega^{2}r_{0}^{2}=\frac{3}{2}\hbar\omega$, in order to be consistent with the fact that the contribution from the harmonic oscillator energy cannot be smaller than the zero-point energy. As for the small volume $V_{0}\sim a_{r}^{3}$ corresponding to $r\leq r_{0}$, we do not aspire to get a precise knowledge of the wave function therein. Instead, we content ourselves with an effective condensate density $\bar{n}_{0}$ in that region. As we shall see, this is all that is needed to obtain very accurate approximate formulas for most of the condensate ground-state properties. Thus we start from the ansatz \[eq1ab\]$$\begin{aligned} \frac{1}{2}m\omega^{2}r^{2}+gN\left\vert \psi(r)\right\vert ^{2} & =\mu,\hspace{0.5cm}r_{0}<r\leq R\label{eq1a}\\ \frac{3}{2}\hbar\omega+g\sqrt{6/\pi}\,\bar{n}_{0} & =\mu,\hspace{0.5cm}0\leq r\leq r_{0} \label{eq1b}$$ with $\psi(r)=0$ for $r>R$. 
A renormalization constant $\kappa^{-1}\equiv \sqrt{6/\pi}$ has been introduced in Eq. (\[eq1b\]) to guarantee the correct perturbative limit. In this limit $\mu\rightarrow\frac{3}{2}\hbar\omega$ and $R\rightarrow r_{0}=\sqrt{3}\,a_{r}$. Under these circumstances, only Eq. (\[eq1b\]) contributes significantly to the chemical potential, and in this case $\bar{n}_{0}=N/V_{0}$. This corresponds to a uniform spherical condensate, defined in the finite volume $V_{0}$. In order for this uniform density to produce the same chemical potential as the ground state of the harmonic oscillator over the volume of the entire space it is only necessary to renormalize the corresponding interaction strength by multiplying by $\sqrt{6/\pi}$. Equations (\[eq1ab\]) also yield the correct result in the TF regime. This is mainly a consequence of the direct relation existing between the number of particles and the size of a trapped BEC. For large condensates, such that $\mu\gg\hbar\omega$, one has $R\gg r_{0}$ and, as a result, the relative contribution from Eq. (\[eq1b\]) to the normalization integral that determines $\mu$ becomes negligible. Since we have renounced an explicit expression for $\psi(r)$ in $V_{0}$, in this respect, our approach cannot provide more information than the TF approach. Only when $R\gg r_{0}$ can we have a sufficiently precise knowledge of the wave function and, in this case, it coincides with the TF wave function. The chemical potential follows from the normalization of $\psi(r)$. After a straightforward calculation one obtains $$\frac{1}{15}\overline{R}^{5}+\frac{\sqrt{3}}{2}\left( \kappa-1\right) \overline{R}^{2}-\frac{3\sqrt{3}}{2}\left( \kappa-\frac{3}{5}\right) =N\frac{a}{a_{r}},\label{eq2}$$ where $\overline{R}\equiv R/a_{r}$, and $\overline{\mu}=\frac{1}{2}\overline{R}^{2}$ is the chemical potential in units of $\hbar\omega$. As Eq. (\[eq2\]) shows, the ground-state properties depend on the sole parameter $\chi_{0}\equiv Na/a_{r}$. 
When $\chi_{0}\gg1$ (TF limit) the above equation leads to $\overline{\mu}=\frac{1}{2}\left( 15\chi_{0}\right) ^{2/5}$, as expected. The $\chi_{0}\ll1$ limit corresponds to the perturbative regime and, in this case, one obtains $\overline{\mu}=3/2+\sqrt{2/\pi}\,\chi_{0}$, which is nothing but the perturbative result (\[TF2\]). For arbitrary $\chi_{0}$, in principle one has to solve numerically the above quintic polynomial equation (which has only one physically meaningful real solution). This is a simple task that can be carried out with standard mathematical software packages. We have found, however, a rather accurate approximate solution. It can be shown that the expression$$\overline{R}^{2}=3+\left( \frac{1}{\left( 15\chi_{0}\right) ^{\frac{2}{5}}+\frac{5}{2}}+\frac{1}{\frac{7}{2}\chi_{0}^{11/15}+10}+\frac{\sqrt{\pi/2}}{2\chi_{0}}\right) ^{-1}\label{eq2b}$$ satisfies Eq. (\[eq2\]) with a residual error [@Error] smaller than $0.7\%$ for any $\chi_{0}\in\lbrack0,\infty)$. Figures \[Fig1\](a) and \[Fig1\](b) show, respectively, the predicted chemical potential, $\overline{\mu}=\frac{1}{2}\overline{R}^{2}$, and condensate radius, obtained from Eq. (\[eq2b\]) (solid lines), along with the exact results obtained from the numerical solution of the 3D GPE (open circles). For the numerical calculation we have defined the radius through the condition $\left\vert \psi(R)\right\vert ^{2}=0.05\left\vert \psi(0)\right\vert ^{2}$. With this definition, Eq. (\[eq2b\]) reproduces the numerical $R$ with a relative error smaller than $3\%$ for any $\chi_{0}$. Most of the error, however, comes from the region where $\chi_{0}\gg1$ (TF limit) because in that region $R\rightarrow R_{\mathrm{TF}}$ and it rather satisfies $\left\vert \psi(R)\right\vert ^{2}=0$. The accuracy with respect to the numerical $\overline{\mu}$ is better than $0.5\%$. 
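The quoted error bound is straightforward to verify numerically. The following Python sketch (the sample values of $\chi_{0}$ are our own choice) evaluates the left-hand side of Eq. (\[eq2\]) at the approximate radius given by Eq. (\[eq2b\]) and prints the residual error in the sense of [@Error]:

```python
import math

KAPPA = math.sqrt(math.pi / 6)   # kappa, with kappa^(-1) = sqrt(6/pi)

def quintic_lhs(R):
    # left-hand side of Eq. (eq2) as a function of the reduced radius R/a_r
    return (R ** 5 / 15
            + (math.sqrt(3) / 2) * (KAPPA - 1) * R ** 2
            - (3 * math.sqrt(3) / 2) * (KAPPA - 3 / 5))

def R_approx(chi0):
    # approximate reduced radius of Eq. (eq2b)
    inv = (1 / ((15 * chi0) ** (2 / 5) + 5 / 2)
           + 1 / (3.5 * chi0 ** (11 / 15) + 10)
           + math.sqrt(math.pi / 2) / (2 * chi0))
    return math.sqrt(3 + 1 / inv)

for chi0 in (0.01, 0.1, 1.0, 10.0, 100.0):
    res = (quintic_lhs(R_approx(chi0)) - chi0) / chi0
    print(f"chi0 = {chi0:>6}   residual = {res:+.3%}")
```

Over this sampled range the residuals stay below the quoted $0.7\%$.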
\[ptb\] [Fig1.eps]{} A straightforward calculation yields the mean-field interaction energy per particle, $\overline{\epsilon}_{\mathrm{int}}\equiv\epsilon_{\mathrm{int}}/\hbar\omega\equiv E_{\mathrm{int}}/N\hbar\omega$, $$\begin{aligned} \overline{\epsilon}_{\mathrm{int}} & =\frac{1}{8\chi_{0}}\left[ \frac {8}{105}\overline{R}^{7}+\sqrt{3}\left( \kappa-1\right) \overline{R}^{4}\right. \nonumber\\ & \left. -6\sqrt{3}\left( \kappa-\frac{3}{5}\right) \overline{R}^{2}+9\sqrt{3}\left( \kappa-\frac{3}{7}\right) \right] .\label{eq3a}$$ For $\chi_{0}\gg1$, one recovers the TF result, $\epsilon_{\mathrm{int}}=(2/7)\mu$. In the $\chi_{0}\ll1$ limit, using that $\overline{R}^{2}=3+2\sqrt{2/\pi}\,\chi_{0}-(1/\pi)(1/9+\sqrt{2/3\pi})\chi_{0}^{2}+O(\chi_{0}^{3})$ is a perturbative solution of Eq. (\[eq2\]), one obtains $\epsilon_{\mathrm{int}}=\chi_{0}\hbar\omega/\sqrt{2\pi}=g\bar{n}/2$, which again is the correct result. Finally, the kinetic and potential energies can be readily obtained in terms of the previous results by using the exact relations [@RevStrin] \[eq4ab\]$$\begin{aligned} \epsilon_{\mathrm{kin}} & \equiv E_{\mathrm{kin}}/N=\mu /2-(7/4)E_{\mathrm{int}}/N,\label{eq4a}\\ \epsilon_{\mathrm{pot}} & \equiv E_{\mathrm{pot}}/N=\mu /2-(1/4)E_{\mathrm{int}}/N.\label{eq4b}$$ In Fig. \[Fig1\](a) we show the theoretical prediction for $\overline {\epsilon}_{\mathrm{int}}$, $\overline{\epsilon}_{\mathrm{kin}}$, and $\overline{\epsilon}_{\mathrm{pot}}$, obtained from Eqs. (\[eq2b\])–(\[eq4ab\]) (solid lines), along with the exact numerical results (open circles). Next we consider a BEC confined in a cigar-shaped magnetic trap with oscillator lengths $a_{\bot}=\sqrt{\hbar/m\omega_{\bot}}$ and $a_{z}=\sqrt{\hbar/m\omega_{z}}$ and an aspect ratio $\lambda=\omega_{z}/\omega_{\bot}\ll2$. We shall restrict ourselves to the mean-field regime, which requires $N\lambda a_{\bot}^{2}/a^{2}\gg1$ [@Petrov1; @Dunj1; @Strin1]. 
As before, we start from the usual TF expression, which we assume to be valid up to a minimum radial distance $r_{\bot}^{0}=\sqrt{2}\,a_{\bot}$, determined from the condition that the contribution from the radial harmonic oscillator energy should not be smaller than $\hbar\omega_{\bot}$. This defines an outer region $V_{+}\equiv\left\{ (r_{\bot},z)\colon\,r_{\bot}^{2}/R^{2}+z^{2}/Z_{\mathrm{TF}}^{2}\leq1\;\wedge\;r_{\bot}>r_{\bot}^{0}\right\} $, which is nothing but the usual TF ellipsoidal density cloud, truncated at $r_{\bot}=r_{\bot}^{0}$. Note that unlike what happens with the condensate radius $R=\sqrt{2\mu/\hbar\omega_{\bot}}\,a_{\bot}$, which remains the same, now the axial condensate half-length $Z=\sqrt{2(\mu/\hbar\omega_{\bot}-1)}\,a_{z}/\sqrt{\lambda}$ coincides with the TF value $Z_{\mathrm{TF}}$ only in the limit $\mu/\hbar\omega_{\bot}\gg1$. For large condensates, when $\mu\gg\hbar\omega_{\bot}$ (TF regime), this is the only region that contributes significantly. On the contrary, in the perturbative regime, as $\mu\rightarrow\hbar\omega_{\bot}$ most of the contribution comes from the inner cylinder $V_{-}\equiv\left\{ (r_{\bot},z)\colon\,r_{\bot}\leq r_{\bot }^{0}\;\wedge\;|z|\leq Z\right\} $. In this case, if $a\ll a_{\bot}$, the transverse dynamics becomes frozen in the radial ground state of the harmonic trap and the condensate wave function can be factorized as $\psi(r_{\bot },z)=\varphi(r_{\bot})\phi(z)$, with $\varphi(r_{\bot})=(\pi a_{\bot}^{2})^{-1/2}\exp(-r_{\bot}^{2}/2a_{\bot}^{2})$. This corresponds to a mean-field quasi-1D condensate. Substituting then in Eq. (\[TF0\]) and integrating out the radial dynamics, one finds $$\hbar\omega_{\bot}+\frac{1}{2}m\omega_{z}^{2}z^{2}+g_{\mathrm{1D}}N|\phi(z)|^{2}=\mu,\label{eq5}$$ where $g_{\mathrm{1D}}=g/2\pi a_{\bot}^{2}$ [@Olsha2], and we have used that $\mu\sim\hbar\omega_{\bot}\gg\frac{1}{2}\hbar\omega_{z}$ to neglect the axial kinetic energy. 
Note that $g_{\mathrm{1D}}$ can be conveniently rewritten as $g\bar{n}_{2}$ with $\bar{n}_{2}=1/\pi(r_{\bot}^{0})^{2}$, indicating that one can account for the contribution from the radial ground state by using a uniform mean density per unit area normalized to unity in $V_{-}$. Guided by these simple ideas, we then propose the following ansatz: $$\begin{aligned} \frac{1}{2}m\omega_{\bot}^{2}r_{\bot}^{2}+\frac{1}{2}m\omega_{z}^{2}z^{2}+gN\left\vert \psi(r_{\bot},z)\right\vert ^{2} & =\mu,\hspace {0.5cm}\mathbf{r}\in V_{+}\\ \hbar\omega_{\bot}+\frac{1}{2}m\omega_{z}^{2}z^{2}+gN\bar{n}_{2}|\phi(z)|^{2} & =\mu,\hspace{0.5cm}\mathbf{r}\in V_{-}$$ with $\psi=0$ elsewhere. The normalization of $\psi$ leads to $$\frac{1}{15}(\sqrt{\lambda}\,\overline{Z})^{5}+\frac{1}{3}(\sqrt{\lambda }\,\overline{Z})^{3}=N\lambda\frac{a}{a_{\bot}}, \label{eq7}$$ where $\overline{Z}\equiv Z/a_{z}$ and $\overline{R}\equiv R/a_{\bot}$. The chemical potential $\overline{\mu}\equiv\mu/\hbar\omega_{\bot}$ is given by $\overline{\mu}=1+\frac{1}{2}(\sqrt{\lambda}\,\overline{Z})^{2}$. Now the relevant parameter determining the ground-state properties is $\chi_{1}\equiv N\lambda a/a_{\bot}$. When $\chi_{1}\gg1$ (TF regime), Eq. (\[eq7\]) leads to $\overline{\mu}=\frac{1}{2}(15\chi_{1})^{2/5}$ and $\overline{Z}=\lambda^{-1/2}(15\chi_{1})^{1/5}$. When $\chi_{1}\ll1$ (mean-field quasi-1D regime), one obtains $\overline{\mu}=1+\frac{1}{2}(3\chi_{1})^{2/3}$ and $\overline{Z}=\lambda^{-1/2}(3\chi_{1})^{1/3}$, in agreement with previous results [@Dunj1; @Strin1]. In general, for arbitrary $\chi_{1}$, an approximate solution satisfying Eq. 
(\[eq7\]) with a residual error less than $0.75\%$ for any $\chi_{1}\in\lbrack0,\infty)$ is given  by $$\sqrt{\lambda}\,\overline{Z}=\left( \frac{1}{\left( 15\chi_{1}\right) ^{\frac{4}{5}}+\frac{1}{3}}+\frac{1}{57\chi_{1}+345}+\frac{1}{(3\chi _{1})^{\frac{4}{3}}}\right) ^{-\frac{1}{4}}\label{eq8}$$ The mean-field interaction energy $\overline{\epsilon}_{\mathrm{int}}\equiv\epsilon_{\mathrm{int}}/\hbar\omega_{\bot}$ is $$\overline{\epsilon}_{\mathrm{int}}=\frac{1}{15\chi_{1}}\left( (\sqrt{\lambda }\,\overline{Z})^{5}+\frac{1}{7}(\sqrt{\lambda}\,\overline{Z})^{7}\right) .\label{eq9}$$ For $\chi_{1}\gg1$, Eq. (\[eq9\]) reduces to $\epsilon_{\mathrm{int}}=(2/7)\mu$, while for $\chi_{1}\ll1$, it leads to $\epsilon_{\mathrm{int}}=(2/5)(\mu-\hbar\omega_{\bot})$, which again are the correct analytical limits. As for the condensate density per unit length, $n_{1}(z)\equiv N\int2\pi r_{\bot}dr_{\bot}\left\vert \psi(r_{\bot},z)\right\vert ^{2}$, after a straightforward calculation one finds $$n_{1}(z)=\frac{(\sqrt{\lambda}\;\overline{Z})^{2}}{4a}\left( 1-\frac{z^{2}}{Z^{2}}\right) +\frac{(\sqrt{\lambda}\;\overline{Z})^{4}}{16a}\left( 1-\frac{z^{2}}{Z^{2}}\right) ^{2}\label{eq10}$$ The first term is the contribution from $V_{-}$ and thus it is the only one that contributes significantly in the $\chi_{1}\ll1$ limit. On the contrary, the second term, which is the contribution from $V_{+}$, gives the dominant contribution in the $\chi_{1}\gg1$ limit, in good agreement with previous results [@Strin1]. \[ptb\] [Fig2.eps]{} Figure \[Fig2\] shows the theoretical predictions for the ground-state properties of arbitrary cigar-shaped condensates with $\lambda\ll2$, obtained from Eqs. (\[eq4ab\]) and (\[eq8\])–(\[eq10\]) (solid lines), along with exact numerical results (open circles). Finally, we consider a BEC in a disk-shaped trap with $\lambda\gg2$ and $a_{z}\gg a$. 
In this case, in the mean-field perturbative regime, which occurs when $\chi_{2}\equiv Na/\lambda^{2}a_{z}\ll1$, the system reduces to a quasi-2D condensate satisfying $$\frac{1}{2}\hbar\omega_{z}+\frac{1}{2}m\omega_{\bot}^{2}r_{\bot}^{2}+g_{\mathrm{2D}}N|\varphi(r_{\bot})|^{2}=\mu,\label{eq11}$$ with $g_{\mathrm{2D}}=g/\sqrt{2\pi}\,a_{z}$ [@Petrov2]. We then rewrite $g_{\mathrm{2D}}$ as $g\kappa_{2}^{-1}\bar{n}_{1}$, where $\bar{n}_{1}=1/2a_{z}$ is a uniform mean density per unit length and $\kappa_{2}^{-1}\equiv\sqrt{2/\pi}$ is the appropriate renormalization factor, and propose the following ansatz: $$\begin{aligned} \frac{1}{2}m\omega_{z}^{2}z^{2}+\frac{1}{2}m\omega_{\bot}^{2}r_{\bot}^{2}+gN\left\vert \psi(r_{\bot},z)\right\vert ^{2} & =\mu,\hspace {0.5cm}\mathbf{r}\in V_{+}\nonumber\\ \frac{1}{2}\hbar\omega_{z}+\frac{1}{2}m\omega_{\bot}^{2}r_{\bot}^{2}+g\kappa_{2}^{-1}N\bar{n}_{1}|\varphi(r_{\bot})|^{2} & =\mu,\hspace {0.5cm}\mathbf{r}\in V_{-}\label{eq11b}$$ with $\psi=0$ elsewhere. In the above equations, $V_{+}\equiv\left\{ (r_{\bot},z)\colon\,r_{\bot}^{2}/R_{\mathrm{TF}}^{2}+z^{2}/Z^{2}\leq 1\,\wedge\,|z|>z_{0}\right\} $ and $V_{-}\equiv\left\{ (r_{\bot},z)\colon\,r_{\bot}\leq R\;\wedge\;|z|\leq z_{0}\right\} $, where $z_{0}=a_{z}$, $R_{\mathrm{TF}}=\sqrt{2\mu/\hbar\omega_{z}}\sqrt{\lambda }\,a_{\bot}$, $R=\sqrt{2(\mu/\hbar\omega_{z}-1/2)}\sqrt{\lambda}\,a_{\bot}$, and $Z=\sqrt{2\mu/\hbar\omega_{z}}\,a_{z}$. More precisely, one expects $\kappa_{2}^{-1}\rightarrow\sqrt{2/\pi}$ in the perturbative regime ($\chi _{2}\ll1$), while $\kappa_{2}^{-1}\rightarrow1$ in the TF regime ($\chi_{2}\gg1$). The final results are not very sensitive to the specific functional form of $\kappa_{2}^{-1}$. 
We thus propose one of the simplest possibilities: $$\begin{aligned} \kappa_{2}^{-1}(\chi_{2}) & \equiv\sqrt{2/\pi}+\Theta(\chi_{2}-0.1)\nonumber\\ \times & \left( 1-\sqrt{2/\pi}\right) \left( 1-\frac{R_{\mathrm{TF}}(\chi_{2}=0.1)}{R_{\mathrm{TF}}(\chi_{2})}\right) ,\label{eq12}$$ where $\Theta(x)$ is the Heaviside function and $R_{\mathrm{TF}}(\chi _{2})=(15\chi_{2})^{1/5}a_{\bot}$ is the TF radius. The normalization of $\psi$ yields $$\frac{1}{15}\overline{Z}^{5}+\frac{1}{8}(\kappa_{2}-1)\frac{\overline{R}^{4}}{\lambda^{2}}-\frac{\overline{R}^{2}}{6\lambda}-\frac{1}{15}=\frac {Na}{\lambda^{2}a_{z}}, \label{eq13}$$ where $\overline{Z}\equiv Z/a_{z}$, $\overline{R}\equiv R/a_{\bot}$, and $\overline{Z}^{2}-\overline{R}^{2}/\lambda=1$. The chemical potential is $\overline{\mu}\equiv\mu/\hbar\omega_{z}=\frac{1}{2}(1+\overline{R}^{2}/\lambda)$. For $\chi_{2}\gg1$ Eq. (\[eq13\]) leads to the usual TF results, while for $\chi_{2}\ll1$ (mean-field quasi-2D regime), one obtains $\overline{\mu }=1/2+(2\sqrt{2/\pi}\chi_{2})^{1/2}$ and $\overline{R}=\lambda^{1/2}(8\sqrt{2/\pi}\chi_{2})^{1/4}$. An approximate solution that satisfies Eq. 
(\[eq13\]) with a residual error less than $0.95\%$ for any $\chi_{2}\in\lbrack0,\infty)$ is given  by$$\overline{R}_{\lambda}\equiv\overline{R}/\sqrt{\lambda}=\left[ \left( 1/15\chi_{2}\right) ^{8/5}+\left( \kappa_{2}/8\chi_{2}\right) ^{2}\right] ^{-1/8} \label{eq14}$$ After some calculation one finds the following expressions for the mean-field interaction energy $\overline{\epsilon}_{\mathrm{int}}\equiv\epsilon _{\mathrm{int}}/\hbar\omega_{z}$ and the condensate density per unit area $n_{2}(r_{\bot})$: $$\overline{\epsilon}_{\mathrm{int}}=\frac{1}{8\chi_{2}}\left( \frac {8\overline{Z}^{7}}{105}+\xi\frac{\overline{R}_{\lambda}^{6}}{6}-\frac{\overline{R}_{\lambda}^{4}}{3}-\frac{4\overline{R}_{\lambda}^{2}}{15}-\frac{8}{105}\right) \label{eq15}$$ $$n_{2}(r_{\bot})=\frac{\xi\left[ 2\overline{\mu}_{z}(r_{\bot})-1\right] }{4\pi aa_{z}}+\frac{\left[ 2\overline{\mu}_{z}(r_{\bot})\right] ^{3/2}-1}{6\pi aa_{z}}, \label{eq16}$$ where $\xi\equiv(\kappa_{2}-1)$ and $2\overline{\mu}_{z}(r_{\bot})\equiv1+\overline{R}_{\lambda}^{2}\left( 1-r_{\bot}^{2}/R^{2}\right) $. \[ptb\] [Fig3.eps]{} In Fig. \[Fig3\] we show the ground-state properties of arbitrary disk-shaped condensates with $\lambda\gg2$, obtained from our analytical formulas \[Eqs. (\[eq4ab\]) and (\[eq14\])–(\[eq16\])\] (solid lines), along with exact numerical results (open circles). In conclusion, modifying the usual TF approximation conveniently to account for the zero-point energy contribution, we have derived general analytical expressions for the ground-state properties of spherical, cigar-shaped, and disk-shaped condensates that reduce to the correct analytical formulas in both the TF and the mean-field perturbative regimes and remain valid and accurate in between these two limiting cases. Mean-field quasi-1D and -2D condensates appear as simple particular cases of our formulation. This work has been supported by MEC (Spain) and FEDER fund (EU) (Contract No. Fis2005-02886). [99]{} M. H. 
Anderson *et al.*, Science **269**, 198 (1995). K. B. Davis *et al.*, Phys. Rev. Lett. **75**, 3969 (1995). C. C. Bradley *et al.*, Phys. Rev. Lett. **78**, 985 (1997). G. Baym and C. J. Pethick, Phys. Rev. Lett. **76**, 6 (1996). For a review see, for example, F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. **71**, 463 (1999). A. L. Fetter, J. Low Temp. Phys. **106**, 643 (1997). P. Schuck and X. Viñas, Phys. Rev. A **61**, 43603 (2000). Given $P(R)=\chi$, we define the residual error associated with the approximate solution $R_{\varepsilon}$ as $[P(R_{\varepsilon})-\chi]/\chi$. D. S. Petrov *et al.*, Phys. Rev. Lett. **85**, 3745 (2000). V. Dunjko, V. Lorent, and M. Olshanii, Phys. Rev. Lett. **86**, 5413 (2001). C. Menotti and S. Stringari, Phys. Rev. A **66**, 043610 (2002). M. Olshanii, Phys. Rev. Lett. **81**, 938 (1998). D. S. Petrov *et al.*, Phys. Rev. Lett. **84**, 2551 (2000).
--- abstract: 'We set up a connection between the theory of spherical designs and the question of minima of Epstein’s zeta function. More precisely, we prove that a Euclidean lattice, all layers of which hold a $4$-design, achieves a local minimum of Epstein’s zeta function, at least at any real $s>\frac{n}{2}$. We deduce from this a new proof of Sarnak and Strömbergsson’s theorem asserting that the root lattices $\mathbb{D}_4$ and $\mathbb{E}_8$, as well as the Leech lattice $\Lambda_{24}$, achieve a strict local minimum of Epstein’s zeta function at any $s>0$. Furthermore, our criterion enables us to extend their theorem to all the so-called *extremal modular lattices* (up to certain restrictions) using a theorem of Bachoc and Venkov, and to other classical families of lattices ([[*e.g.* ]{}]{}the Barnes-Wall lattices).' address: | Institut de Mathématiques de Bordeaux, Université Bordeaux I, 351, cours de la Libération\ 33405 Talence, France author: - Renaud Coulangeon title: 'Spherical designs and zeta functions of lattices.' --- Introduction. {#introduction. .unnumbered} ============= A spherical design is a finite set of points on a sphere which is *well-distributed*, in the sense that it allows numerical integration of functions on the sphere up to a certain accuracy, the so-called *strength* of the design. More precisely, $X \subset {\mathbb{S}}^{n-1}$ is a $t$-design if for every homogeneous polynomial $f$ of degree $\leq t$ $$\int_{{\mathbb{S}}^{n-1}}f(x)dx=\frac{1}{|X|}\sum _{x \in X}f(x)$$ where ${\mathbb{S}}^{n-1}$ stands for the unit sphere in ${\mathbb{R}}^n$ endowed with its canonical measure $dx$, normalized so that $\int_{{\mathbb{S}}^{n-1}}dx=1$. One classical way to build such designs is to consider the set of vectors of a given length in a Euclidean lattice $L$ in ${\mathbb{R}}^n$, rescaled so as to lie in ${\mathbb{S}}^{n-1}$. 
It has long been observed that there is a link between the ability of getting designs of high strength in that way and classical properties of the underlying lattice e.g. density, symmetries, theta series. In this connection one can quote Venkov’s remarkable theorem asserting that if the set of *minimal vectors* (non zero vectors of minimal length) of a lattice is a $4$-design, then the lattice is extreme in Voronoï’s sense [[*i.e.* ]{}]{}it achieves a local maximum of the packing density function [@V]. There are many examples of lattices for which not only the set of minimal vectors holds a design, but all sets of vectors of any given length actually do. This happens for instance with the so-called *extremal modular lattices*, as shown by Bachoc and Venkov using theta series with spherical coefficients (see [@BV] and Section \[ex\] below). Another instance of this phenomenon is when the automorphism group of the lattice is “big enough” to satisfy Goethals and Seidel’s theorem (see [@GS Théorème 6.1.] or Section \[ex\] below). In all these cases, since not only minimal vectors, but all layers are involved, one would expect further consequences for the associated packing beyond local optimality of the density. One aim of the present paper is to provide an interpretation of this phenomenon in terms of Epstein’s $\zeta$-function. The Epstein zeta function of a lattice $L$ is defined, for $s\in{\mathbb{C}}$ with ${\mathrm{Re}}(s) > \frac{n}{2}$, as $$\zeta(L,s):= \sum_{x \in L - \{0\}} ||x||^{-2s}$$ and admits a meromorphic continuation to the complex plane with a simple pole at $s=\frac{n}{2}$. The question as to which $L$, for fixed $s >0$ ($s \neq \frac{n}{2}$) minimizes $\zeta(L,s)$, has a long history, starting with Sobolev’s work on numerical integration ([@So]), and a series of subsequent papers by Delone and Ryshkov among others. 
Of course, this question makes sense only if one restricts to lattices of fixed covolume, say $1$, since $$\forall \lambda > 0 \ \ \zeta(\lambda L,s)= \lambda^{-2s}\zeta(L,s).$$ From now on, we denote $\mathcal L_n ^{\circ}$ the set of lattices of determinant (covolume) $1$. Another, undoubtedly more important, reason to investigate this question is its connection with Riemannian geometry: if $X$ is a compact Riemannian manifold, its height $h(X)$ is defined as $\zeta_{X}'(0)$, where $\zeta_X(s)$ is the so-called *zeta regularisation* of the determinant of the Laplacian, [[*i.e.* ]{}]{}$\zeta_{X}(s)=\sum_{\lambda_j \neq 0}\lambda_j ^{-s}$, where the $\lambda_j$ are the eigenvalues of the Laplacian on $X$. When $X$ is a flat torus ${\mathbb{R}}^n/L$, with $L$ a full rank lattice in ${\mathbb{R}}^n$, then $\zeta_{X}(s)$ is the same as $\zeta(L^{*},s)$, up to a constant (see [@Ch] and [@Sa]). In this context, a natural question is to find lattices achieving a minimum of this height function restricted to flat tori (the existence of such a minimum is shown in [@Ch]), which amounts to minimizing $\zeta'(L^{*},0)$ for $L \in \mathcal L_n ^{\circ}$. We say that a lattice $L_0 \in \mathcal L_n ^{\circ}$ is $\zeta$-*extreme* at $s \in {\mathbb{R}}$ if it achieves a strict local minimum of the function $L \mapsto \zeta(L,s)$, $L \in \mathcal L_n ^{\circ}$. Delone and Ryshkov obtained a characterization of lattices in $\mathcal L_n ^{\circ}$ that are $\zeta$-*extreme* at $s$ for any large enough $s$ ([@DR Theorem 4]). One of the conditions is that all layers of the lattice hold a $2$-design (see Section \[basics\]). One would naturally ask for an explicit value $s_0$ such that $L_0$ is $\zeta$-*extreme* at $s$ for any $s \geq s_0$, but unfortunately, it is not possible to derive such an $s_0$ from Delone and Ryshkov’s theorem, nor from its proof. 
However, using more sophisticated tools, Sarnak and Strömbergsson proved in [@Sa-Sa] the following theorem. The $\mathbb{D}_4$ lattice (rescaled so as to have determinant $1$), the $\mathbb{E}_8$-lattice and the Leech Lattice $\Lambda_{24}$ are $\zeta$-*extreme* at $s$ for any $s > 0$, and the associated tori achieve a strict local minimum of the height function on the set of flat tori of covolume $1$ and dimension $4$, $8$ and $24$ respectively. Their proof relies essentially on a certain property of the automorphism group of those lattices which is shown to imply the desired property, at least for $s>\frac{n}{2}$ (the proof for $s$ in the “critical strip” $0<s<\frac{n}{2}$ is more involved and requires some extra arguments). Inspired by Delone and Ryshkov’s theorem, one may ask for an explanation of this result in terms of spherical designs. This is precisely the aim of our main theorem, which we now state. \[mt\] Let $L \in \mathcal L_n ^{\circ}$ be such that all its layers hold a $4$-design. Then $L$ is $\zeta$-extreme at $s$ for any $s >\frac{n}{2}$, and the torus associated with its dual $L^{*}$ achieves a strict local minimum of the height on the set of $n$-dimensional flat tori of covolume $1$. If moreover $\zeta(L,s) <0$ for $0<s<\frac{n}{2}$, then $L$ is $\zeta$-extreme at $s$ for any $s>0$, $s \neq \frac{n}{2}$. This theorem applies to $\mathbb{D}_4$, $\mathbb{E}_8$ and $\Lambda_{24}$, and somehow clarifies Sarnak and Strömbergsson’s proof. Moreover, it should apply to a wider class of lattices, for which the group theoretic tools used in [@Sa-Sa] are not available, but for which one can prove however that the $4$-design properties hold for all layers. 
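Although the theorem is a statement about local minima, even a crude truncated computation already illustrates the distinguished role of such lattices. The Python sketch below (the value $s=3$, the cutoff radius and the basis-free enumeration are our own illustrative choices) compares truncated values of $\zeta(L,s)$ at $s=3>\frac{n}{2}$ for $\mathbb{D}_4$ and the cubic lattice $\mathbb{Z}^4$, both rescaled to covolume $1$:

```python
import itertools

def trunc_zeta(norms2, scale2, s):
    # truncated Epstein zeta function: sum of (scale2 * |x|^2)^(-s)
    return sum((scale2 * n) ** (-s) for n in norms2)

s = 3.0          # any real s > n/2 = 2 would do
R2 = 16.0        # cutoff on the squared length of the rescaled vectors

# Z^4 already has covolume 1: nonzero x with |x|^2 <= R2
z4 = [sum(c * c for c in x)
      for x in itertools.product(range(-4, 5), repeat=4)
      if any(x) and sum(c * c for c in x) <= R2]

# D4 = {x in Z^4 : x_1 + ... + x_4 even} has covolume 2; rescaling by
# 2^(-1/4) gives covolume 1 and multiplies squared lengths by 2^(-1/2)
d4 = [sum(c * c for c in x)
      for x in itertools.product(range(-4, 5), repeat=4)
      if any(x) and sum(x) % 2 == 0
      and sum(c * c for c in x) * 2 ** (-0.5) <= R2]

zeta_z4 = trunc_zeta(z4, 1.0, s)
zeta_d4 = trunc_zeta(d4, 2 ** (-0.5), s)
print(zeta_d4, zeta_z4)
```

With this cutoff the truncated sums come out to roughly $12.0$ for $\mathbb{D}_4$ against $14.2$ for $\mathbb{Z}^4$; this is of course only a global comparison of two lattices, not a substitute for the local analysis of the theorem.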
In particular we prove that essentially all the extremal modular lattices (up to certain restrictions on both the dimension and the level) share with $\mathbb{D}_4$, $\mathbb{E}_8$ and $\Lambda_{24}$ the property of being $\zeta$-extreme at $s$ for any $s> \frac{n}{2}$ (see Proposition \[ml\] in Section \[ex\]). The paper is organized as follows: in Section \[basics\] we collect some preliminary results about lattices, spherical designs and Epstein’s $\zeta$ function. Section \[proof\] contains the proof of the main theorem, of which we present several examples of application in Section \[ex\]. In Section \[mtheta\], we derive a statement about the minima of theta functions, similar to our main theorem for the Epstein zeta function. Notation {#notation .unnumbered} -------- We denote $x\cdot y$ the usual scalar product of the vectors $x$ and $y$ in ${\mathbb{R}}^n$, and $\Vert x\Vert$ the associated norm. We write vectors in ${\mathbb{R}}^n$ as column vectors. The transpose of a matrix is denoted by the superscript ${}'$. Also, if $A$ is a square $n$-by-$n$ matrix, and $x$ a vector in ${\mathbb{R}}^n$, the notation $A[x]$ stands for the product $x'Ax$. The set of $n$-by-$n$ symmetric matrices with real entries is denoted $S_n({\mathbb{R}})$. It is endowed with its canonical scalar product $$\left\langle A,B\right\rangle :={\operatorname{Tr}}AB \ \ \ A,B \in S_n({\mathbb{R}}).$$ If $A \in S_n({\mathbb{R}})$ is fixed, one associates with any $x \in {\mathbb{R}}^n$ such that $A[x] \neq 0$ the $n$-by-$n$ symmetric matrix $$\widehat{x} _A:=\dfrac{xx'}{A[x]}.$$ One has $\left\langle H,\widehat{x} _A\right\rangle =\dfrac{H[x]}{A[x]}$ for any $H \in S_n({\mathbb{R}})$. Basics. {#basics} ======= Lattices and quadratic forms. ----------------------------- Throughout the paper, we denote $\mathcal L_n$ (resp. $\mathcal L_n^{\circ} $) the set of Euclidean lattices (resp. of covolume $1$) in ${\mathbb{R}}^n$, and $\mathcal P_n$ (resp. 
$\mathcal P_n^{\circ} $) the cone of positive definite quadratic forms (resp. of determinant $1$) in $n$ variables. We identify a quadratic form in $\mathcal P_n$ with its matrix in the canonical basis of ${\mathbb{R}}^n$. The map $P \mapsto P'P$ induces a bijection from $O_n({\mathbb{R}})~\setminus~{\mathbf G\mathbf L}_n({\mathbb{R}})$ onto $\mathcal P_n$ and we thus identify these two sets. Similarly, we identify $\mathcal L_n$ with the quotient ${\mathbf G\mathbf L}_n({\mathbb{R}})/{\mathbf G\mathbf L}_n({\mathbb{Z}})$, associating with the lattice $L=P{\mathbb{Z}}^n$ the coset $P {\mathbf G\mathbf L}_n({\mathbb{Z}}) \in {\mathbf G\mathbf L}_n({\mathbb{R}})/{\mathbf G\mathbf L}_n({\mathbb{Z}})$. There is a well-known “dictionary” between Euclidean lattices and positive definite quadratic forms which is summarized in the following diagram $$\xymatrix{& {\mathbf G\mathbf L}_n({\mathbb{R}})\ar[ld] \ar[rd]\\ \mathcal P_n=O_n({\mathbb{R}}) \setminus {\mathbf G\mathbf L}_n({\mathbb{R}}) \ar[rd] && {\mathbf G\mathbf L}_n({\mathbb{R}})/{\mathbf G\mathbf L}_n({\mathbb{Z}}) =\mathcal L_n \ar[ld]\\&\mathcal P_n/{\mathbf G\mathbf L}_n({\mathbb{Z}}) =O_n({\mathbb{R}}) \setminus \mathcal L_n &}$$ This enables one to formulate every definition and statement in either of these languages but, depending on the context, one point of view is often better than the other. In particular, the proof of the main theorem is easier to write in terms of quadratic forms. Let $L=P{\mathbb{Z}}^n$ be a lattice in ${\mathbb{R}}^n$, and $A=P'P$ the corresponding quadratic form. We define the sequence $m_1(L) < m_2(L) < \cdots$ of squared lengths of non-zero vectors in $L$, arranged in increasing order (this is not the same, in general, as the *successive minima* of $L$).
The *$k$-th layer* of $L$, for $k \in {\mathbb{N}}\setminus \left\lbrace 0\right\rbrace$, is defined as $$M_k(L):=\left\lbrace x \in L \mid x\cdot x = m_k(L)\right\rbrace$$ and we set $$a_k(L)=\vert M_k(L) \vert.$$ One defines similarly the sequence $\left( m_k(A)\right) _{k \in {\mathbb{N}}\setminus \left\lbrace 0\right\rbrace}$ of successive values achieved by the quadratic form $A$ on ${\mathbb{Z}}^n \setminus \left\lbrace 0\right\rbrace$, as well as the associated layers $M_k(A):=\left\lbrace x \in {\mathbb{Z}}^n \mid A[x] = m_k(A)\right\rbrace$. Spherical designs. {#designs} ------------------ We collect in this section some properties that we will need in the sequel. We take as original definition of a spherical $t$-design the one we gave in the introduction of this text. For the sake of completeness, we first recall a proposition to be found in [@V], which provides several characterizations of spherical designs. We recall that a polynomial $P(x_1, \cdots,x_n)$ is harmonic if $\Delta P =0$, where $\Delta$ is the usual Laplace operator $\Delta=\sum_{i=1}^n\dfrac{\partial^2}{\partial x_i^2}$. \[Venkov [[@V Théorème 3.2.]]{}\]\[venkov\] Let $X$ be a finite subset of ${\mathbb{S}}^{n-1}$ and $t$ an even positive integer. Assume that $X$ is symmetric about $0$, [[*i.e.* ]{}]{}$X=-X$. Then the following properties are equivalent : 1. $X$ is a $t$-design. 2. For every non-constant harmonic polynomial $P(x)$ of degree $\leq t$, $\sum_{x \in X}P(x)=0$. 3. There exists a constant $c$ such that $\forall \alpha \in {\mathbb{R}}^n \sum_{x \in X} (x \cdot \alpha) ^t=c (\alpha \cdot \alpha) ^{\frac{t}{2}}$. See [@V Théorème 3.2.]. The only difference is that the condition $\forall \alpha \in {\mathbb{R}}^n \sum_{x \in X} (x \cdot \alpha)^i =0$, where $i=t-1$, which should appear in (2), is automatically satisfied here since $X$ is symmetric about the origin. The next proposition is the key to the proof of our main theorem.
It is a formulation of the property that all layers of a lattice $L$ hold a $2$- (resp. $4$-) design, in terms of its zeta function. \[crit\] Let $L=P{\mathbb{Z}}^n$ be a lattice in ${\mathbb{R}}^n$, and $A=P'P$. 1. The following conditions are equivalent 1. All layers of $L$ hold a $2$-design. 2. For all $s \in {\mathbb{C}}$ with ${\mathrm{Re}}s> \frac{n}{2}$, $\sum_{y \in L \setminus \{0\}}\dfrac{yy'}{\Vert y \Vert ^{2(s+1)}}=\dfrac{\zeta(L,s)}{n}I_n$. 3. For all $s \in {\mathbb{C}}$ with ${\mathrm{Re}}s> \frac{n}{2}$, $\sum_{x \in {\mathbb{Z}}^n \setminus \{0\}} \dfrac{\widehat{x} _A}{A[x]^s}=\dfrac{\zeta(A,s)}{n}A^{-1}$. 2. The following conditions are equivalent 1. All layers of $L$ hold a $4$-design. 2. For all $s \in {\mathbb{C}}$ with ${\mathrm{Re}}s> \frac{n}{2}$, for all $H \in S_ n ({\mathbb{R}})$, $$\sum_{y \in L \setminus \{0\}} \dfrac{H[y]^2}{\Vert y \Vert ^{2(s+2)}}=\dfrac{\zeta(L,s)}{n(n+2)} (({\operatorname{Tr}}H)^2 + 2 {\operatorname{Tr}}H^2).$$ 3. For all $s \in {\mathbb{C}}$ with ${\mathrm{Re}}s> \frac{n}{2}$, for all $H \in S_n ({\mathbb{R}}) $, $$\sum_{x \in {\mathbb{Z}}^n \setminus \{0\}} \dfrac{\left\langle H, \widehat{x} _A\right\rangle ^{2}}{A[x]^s}=\dfrac{\zeta(A,s)}{n(n+2)}(({\operatorname{Tr}}A^{-1}H)^2 + 2 {\operatorname{Tr}}(A^{-1}H)^2).$$ 1\. The equivalence between $(b)$ and $(c)$ is straightforward, using the dictionary lattices/quadratic forms, so it is enough to prove the equivalence between $(a)$ and $(b)$. Assume that all layers of $L$ are $2$-designs. 
From proposition \[venkov\], this means that for all $k \in {\mathbb{N}}\setminus \{0\}$, there exists a constant $c_k$ such that $$\label{2d} \forall \alpha \in {\mathbb{R}}^n \quad \sum_{y \in M_k(L)} (y \cdot \alpha) ^2=c_k \, (\alpha \cdot \alpha).$$ This equation may be viewed as an equality between quadratic forms in $\alpha$, which, once written in matrix form, reads $$\label{2dbis} \sum_{y \in M_k(L)} yy'=c_k I_n .$$ It remains to compute the constant $c_k$, which is achieved by taking the trace. This yields $$c_k=\dfrac{a_k(L) m_k(L)}{n}.$$ Dividing the relation for the $k$-th layer by $m_k(L)^{s+1}$ and adding up the contributions of all $k \in {\mathbb{N}}\setminus \{0\}$, we get the desired relation $$\sum_{y \in L \setminus \{0\}} \dfrac{yy'}{\Vert y \Vert ^{2(s+1)}}=\dfrac{\zeta(L,s)}{n}I_n.$$ Assume conversely that (b) holds. This is an equality between Dirichlet series, so it must hold coefficientwise. Identifying the coefficients of $m_k(L)^{-s}$ on both sides of (b) yields a relation similar to (\[2dbis\]), hence a $2$-design relation for the set of vectors of squared length $m_k(L)$. 2\. Again, it is enough to prove the equivalence between $(a)$ and $(b)$. Assume first that all layers of $L$ are $4$-designs. For fixed $H \in S_n({\mathbb{R}})$ we set $P_H(x):=\left\langle H, xx' \right\rangle^2=H[x]^2$, $x \in {\mathbb{R}}^n$. It is a homogeneous polynomial of degree $4$ in $x$. From [@V Théorème 2.1.], it decomposes as $$\label{harm} P_H(x)=P_4(x)+\Vert x \Vert ^2 P_2(x)+\Vert x \Vert ^4 P_0(x),$$ where $P_i(x)$ is a harmonic polynomial of degree $i$, [[*i.e.* ]{}]{}$\Delta P_i=0$, depending on $H$. From proposition \[venkov\](2), the property that each layer of $L$ holds a $4$-design implies that for $i=2,4$ and for all $k \in {\mathbb{N}}\setminus \{0\}$ $$\sum_{x \in M_k(L)}P_i(x) =0.$$ Consequently, using (\[harm\]) we obtain $$\forall k \in {\mathbb{N}}\setminus \{0\}, \quad \sum_{x \in M_k(L)}P_H(x) = \sum_{x \in M_k(L)}\Vert x \Vert ^4 P_0(x)=m_k(L)^2 \# M_k(L) \cdot P_0$$ since $P_0(x)=P_0$ is a constant.
So what we finally need is to compute $P_0$ in (\[harm\]). Applying $\Delta$ twice to the right-hand side of (\[harm\]), we obtain, since the $P_i$ are harmonic, $$\label{first} \Delta^2 P_H(x)= 8n(n+2)P_0.$$ On the other hand, one can compute $\Delta^2 P_H(x)$ directly from the definition of $P_H(x)=H[x]^2$. Easy calculations yield $$\Delta H[x] = 2 {\operatorname{Tr}}H \text{ and } \Delta H[x]^2= 4 ({\operatorname{Tr}}H) H[x] +8 H^2[x],$$ whence finally $$\label{second} \Delta^2 P_H(x)=\Delta ^2 H[x]^2= 8 ({\operatorname{Tr}}H)^2 + 16 {\operatorname{Tr}}H^2.$$ Comparing (\[first\]) and (\[second\]), we get $$P_0 = \dfrac{ ({\operatorname{Tr}}H)^2 + 2 {\operatorname{Tr}}H^2}{n(n+2)}.$$ Adding up the contributions of all $k \in {\mathbb{N}}\setminus \{0\}$ yields formula (b). Assume conversely that (b) holds. Applying it to $H=\alpha \alpha'$, for a given $\alpha \in {\mathbb{R}}^n$, we obtain $$\sum_{y \in L \setminus \{0\}} \dfrac{(y \cdot \alpha)^4}{\Vert y \Vert ^{2(s+2)}}=\dfrac{\zeta(L,s)}{n(n+2)} 3(\alpha \cdot \alpha)^2$$ and identifying the coefficient of $m_k(L)^{-s}$ on both sides for all $k \in {\mathbb{N}}\setminus \{0\}$, we get $$\sum_{y \in M_k(L)} (y \cdot \alpha)^4=\dfrac{3a_k(L) m_k(L)^2}{n(n+2)}(\alpha \cdot \alpha)^2,$$ whence the conclusion. Lattices satisfying any of the equivalent conditions 1(a) and 1(b) in Proposition \[crit\] are called “strongly critical” in [@DR]. This property partly characterizes lattices that are $\zeta$-extreme at $s$ for any large enough $s$. More precisely, one may reformulate [@DR Theorem 4] as \[dr\] The following conditions for $L \in \mathcal L_n^{\circ} $ are equivalent. 1. There exists $s_0>0$ such that $L$ is $\zeta$-extreme at $s$ for any $s >s_0$. 2. $L$ is perfect, and all layers of $L$ hold a $2$-design. (recall that a lattice $L$ is [perfect]{} if $\sum_{x\in M_1(L)}{\mathbb{R}}xx'~=~S_n({\mathbb{R}})$). Proof of Theorem \[mt\].
{#proof} ======================== We view $\mathcal P_n^{\circ} $ as a differentiable submanifold of $S_n({\mathbb{R}})$. The tangent space $\mathcal T _A $ at any point $A$ identifies with the set $\{H \in S_n({\mathbb{R}}) \mid {\left<}A^{-1}, H {\right>}= 0\}$. Moreover, the exponential map $H \mapsto e_A(H)=A\exp (A^{-1}H)$ induces a local diffeomorphism from $\mathcal T _A $ onto $\mathcal P_n^{\circ} $. Consequently we have to study the local behaviour of the map $H \mapsto \zeta(e_A(H),s)$, $H \in \mathcal T _A $ for fixed $s>0$. A simple calculation, based on the Taylor expansion of the exponential, yields $$\begin{aligned} \label{taylor} \zeta(e_A(H),s)&=& \zeta(A,s)-s\left\langle H, \sum {}' \dfrac{\widehat{x} _A}{A[x]^s}\right\rangle \\ \nonumber & & {}+\frac{s}{2}\left[ (s+1)\sum {}'\dfrac{\left\langle H, \widehat{x} _A\right\rangle ^{2}}{A[x]^s}-\left\langle HA^{-1}H,\sum {}' \dfrac{\widehat{x} _A}{A[x]^s}\right\rangle \right] + o(\Vert H^2 \Vert),\end{aligned}$$ in which the abbreviated notation $\sum {}'$ stands for $\sum_{x \in {\mathbb{Z}}^n \setminus \{0\}}$. As explained in the previous section (Proposition \[crit\]), the property that all layers of $L$ are $2$-designs is equivalent to the relation $$\sum {}' \dfrac{\widehat{x} _A}{A[x]^s}=\dfrac{\zeta(A,s)}{n}A^{-1}.$$ For $H \in \mathcal T _A$ it implies $$\left\langle H, \sum {}' \dfrac{\widehat{x} _A}{A[x]^s}\right\rangle =0$$ and $$\left\langle HA^{-1}H,\sum {}' \dfrac{\widehat{x} _A}{A[x]^s}\right\rangle =\dfrac{\zeta(A,s)}{n} {\operatorname{Tr}}(A^{-1}H)^2.$$ Next we use the assumption that all layers of $L$ are $4$-designs to compute the term $\sum {}'\dfrac{\left\langle H, \widehat{x} _A\right\rangle ^{2}}{A[x]^s}$. 
From Proposition \[crit\] we have, for $H \in \mathcal T _A$, $$\sum {}' \dfrac{\left\langle H, \widehat{x} _A\right\rangle ^{2}}{A[x]^s}=\dfrac{\zeta(A,s)}{n(n+2)}(({\operatorname{Tr}}A^{-1}H)^2 + 2 {\operatorname{Tr}}(A^{-1}H)^2)= \dfrac{2\zeta(A,s)}{n(n+2)} {\operatorname{Tr}}(A^{-1}H)^2.$$ Inserting the last three formulas into (\[taylor\]), we obtain $$\label{finaleq} \zeta(e_A(H),s) =\zeta(A,s)\left[1+\dfrac{s(s-\frac{n}{2})}{n(n+2)}{\operatorname{Tr}}(A^{-1}H)^2\right] + o(\Vert H^2 \Vert).$$ Consequently, the assertion that $A$ achieves a strict local minimum on $\mathcal P_n^{\circ} $ of the map $A \mapsto \zeta(A,s)$ is equivalent to the fact that $\zeta(A,s)\dfrac{s(s-\frac{n}{2})}{n(n+2)}>0$. This is clearly the case if $s>\dfrac{n}{2}$, while for $0<s<\frac{n}{2}$ this is equivalent to $\zeta(A,s)<0$. As for the assertion on the height function, we just have to differentiate (\[finaleq\]) with respect to $s$ to get $$\begin{aligned} \frac{d}{ds}\zeta(e_A(H),s) _{\vert s=0}&=&\zeta'(A,0) +\zeta(A,0) \dfrac{-1}{2(n+2)}{\operatorname{Tr}}(A^{-1}H)^2+ o(\Vert H^2 \Vert)\\&=&\zeta'(A,0) +\dfrac{1}{2(n+2)}{\operatorname{Tr}}(A^{-1}H)^2+ o(\Vert H^2 \Vert),\end{aligned}$$ since $\zeta(A,0) =-1$, whence the conclusion. This finishes the proof of Theorem \[mt\]. Examples. {#ex} ========= In order to avoid rescaling systematically all the lattices appearing in the examples below to covolume $1$, we will use the slightly abusive formulation “$L$ is $\zeta$-extreme” to mean that “$L$ *rescaled to covolume* $1$ is $\zeta$-extreme”. Similarly, we say that “the torus associated with $L$ achieves a local minimum of the height function” to mean “a local minimum on the set of flat tori *of the same covolume* $\det L$”. Before we give some explicit examples, we wish to draw a first comparison between our criterion for a lattice to be $\zeta$-extreme and Sarnak and Strömbergsson’s.
For the proof of [@Sa-Sa Theorem 1], one considers the space ${\operatorname{Sym}}^f {\operatorname{Sym}}^2 ({\mathbb{R}}^n)$ for $f=0,1,2, \dots$ endowed with the standard action of $O(n)$, and defines $f(L)$ to be the largest integer such that ${\operatorname{Sym}}^f {\operatorname{Sym}}^2 ({\mathbb{R}}^n)^{O(n)} = {\operatorname{Sym}}^f {\operatorname{Sym}}^2 ({\mathbb{R}}^n)^{{\operatorname{Aut}}(L)}$. Then it is proven that if $f(L) \geq 2$, then $L$ is $\zeta$-extreme for $s>\frac{n}{2}$. As noticed by the authors, the determination of $f(L)$ is related to the somewhat more classical problem of determining the largest integer $t(L)$ such that ${\operatorname{Sym}}^t({\mathbb{R}}^n)^{O(n)}={\operatorname{Sym}}^t({\mathbb{R}}^n)^{{\operatorname{Aut}}(L)}$, a question which is itself connected with the existence of spherical designs in lattices. To be more precise, one has the following result \[gs\] The following conditions for a finite subgroup $G$ of $O(n)$ are equivalent : 1. ${\operatorname{Sym}}^t({\mathbb{R}}^n)^{O(n)}={\operatorname{Sym}}^t({\mathbb{R}}^n)^{G}$. 2. Any orbit $G \cdot a$ of a point $a \in {\mathbb{S}}^{n-1}$ is a $t$-design. The combination of this with Theorem \[mt\] leads to the following corollary \[cmt\] If $t(L) \geq 4$ then $L$ is $\zeta$-extreme for $s>\frac{n}{2}$, and the torus associated with $L^{*}$ achieves a local minimum of the height function. For any $k \in {\mathbb{N}}\setminus \left\lbrace 0\right\rbrace$ and any $x \in M_k(L)$, the orbit of $x$ under ${\operatorname{Aut}}(L)$ is a $4$-design, so that $M_k(L)$ is itself a $4$-design, as a union of $4$-designs. Note that the assumption that $t(L) \geq 4$ is equivalent to $t(L) \geq 5$, since $t(L)$ is easily seen to be odd. Sarnak and Strömbergsson pointed out that $$\label{ft} f(L) \leq \dfrac{t(L)-1}{2},$$ so that $f(L) \geq 2$ actually implies $t(L) \geq 5$.
However, they also observed that (\[ft\]) is in general a strict inequality, so that the assumption $f(L) \geq 2$ is in general stronger than the assumption of Corollary \[cmt\], which is itself stronger than the assumption of Theorem \[mt\]. A good illustration of the combination of our criterion with Goethals and Seidel’s theorem is obtained with the family of Barnes-Wall lattices. Let us briefly recall their definition : if $n=2^k$, we consider an orthonormal basis $\left( e_u \right) _{u \in {\mathbb{F}}_{2^k}}$ of ${\mathbb{R}}^n$ indexed by the elements of ${\mathbb{F}}_{2^k}$ and set $$BW_n:=\left\langle 2^{\lfloor\frac{k-d+1}{2}\rfloor}\sum_{u \in U}e_u\right\rangle _{{\mathbb{Z}}} \subset {\mathbb{R}}^n$$ where $U$ runs through the set of affine subspaces of ${\mathbb{F}}_{2^k}$ (viewed as an affine space over ${\mathbb{F}}_{2}$) and $d$ stands for the dimension of $U$. This defines for any $n$ an isodual lattice, [[*i.e.* ]{}]{}a lattice isometric to its dual. These lattices are very interesting inasmuch as they form one of the very few infinite families of lattices for which explicit computations can be made (density, kissing number, automorphism group etc.), although they do not provide, in dimension $\geq 32$, the best known lattice packings. Various explicit descriptions of the automorphism group of $BW_n$ are known (see for instance [@BE]), and its polynomial invariants are computed in [@Ba]. Altogether, this leads to the following proposition : For $k\geq 3$, the Barnes-Wall lattice $BW_{2^k}$ is $\zeta$-extreme for any $s>\frac{n}{2}$ and the associated torus achieves a strict local minimum of the height function. From [@Ba Corollary 5.1.], we see that $t(BW_{2^k}) \geq 6$, whence the conclusion using corollary \[cmt\] (notice that for an isodual lattice $L$, the tori associated with $L$ or its dual are the same, up to scaling). Other examples of lattices $L$ satisfying $t(L) \geq 4$ (and thus our main theorem) are the root lattices $\mathbb E_6$ and $\mathbb E_7$, as well as their duals.
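The design condition entering Corollary \[cmt\] can also be checked by brute force in small cases. The following script (an illustrative sketch of ours, not part of the argument above) verifies numerically that the $24$ minimal vectors of $\mathbb{D}_4$ satisfy the moment criterion of Proposition \[venkov\](3) with $t=4$; a short computation with $\alpha=e_1$ shows that the constant is $c=12$.

```python
import itertools

# Minimal vectors of D4: all vectors with entries (+-1, +-1, 0, 0) in some
# order; there are 6 choices of positions times 4 sign patterns = 24 vectors.
min_vecs = set()
for pos in itertools.combinations(range(4), 2):
    for s1 in (1, -1):
        for s2 in (1, -1):
            v = [0, 0, 0, 0]
            v[pos[0]], v[pos[1]] = s1, s2
            min_vecs.add(tuple(v))
assert len(min_vecs) == 24

def fourth_moment(alpha):
    """Sum over the minimal vectors of (x . alpha)^4, divided by (alpha . alpha)^2."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return sum(dot(x, alpha) ** 4 for x in min_vecs) / dot(alpha, alpha) ** 2

# The ratio is the same constant c = 12 for every test direction alpha,
# which is exactly the 4-design condition of Proposition [venkov](3).
ratios = [fourth_moment(a) for a in [(1, 0, 0, 0), (1, 1, 1, 1), (1, 2, 3, 4)]]
assert all(abs(r - 12.0) < 1e-9 for r in ratios)
```

The same check applies verbatim to any layer of a lattice whose vectors can be listed explicitly.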
Finally, a list of lattices in dimension $\leq 26$ to which this argument applies is provided in [@Ba table 1] (note that only the lattices pertaining to what is called case (1) and (3) there are suitable). In the examples below, we prove that some lattices $L$ are $\zeta$-extreme without computing either $f(L)$ or $t(L)$. Instead, we refer to the paper [@BV] by Bachoc and Venkov, where the existence of spherical designs in certain lattices is proven using modular forms. The lattices dealt with in that paper pertain to Quebbemann’s theory of modular lattices (see [@Q] or [@Sch-Sch]), from which we recall the main definitions and results. A lattice $L$ in $\mathcal L_n$ is $\ell$-*modular* ($\ell>0$) if 1. $L$ is *even*, [[*i.e.* ]{}]{}$x\cdot x \in 2{\mathbb{Z}}$ for all $x \in L$. 2. $L$ is isometric to $\sqrt{\ell}L^{*}$. Assume furthermore that $\ell \in \left\lbrace 1,2,3,5,7,11,23\right\rbrace$. Then ([@Sch-Sch Theorem 2.1.]) the minimum of an $\ell$-modular lattice $L$ satisfies $$\label{extr} \min L \leq 2\left(1+\left\lfloor \dfrac{n(1+\ell)}{48}\right\rfloor\right).$$ An $\ell$-modular lattice $L$ for which equality holds in (\[extr\]) is called *extremal*. The theta series, and more generally the theta series with spherical coefficients, of $\ell$-modular lattices belong to a certain algebra of modular forms, which can be described explicitly when $\ell$ belongs to the set above. From this description, Bachoc and Venkov deduce various results about the existence of spherical designs in extremal modular lattices (see [@BV Corollary 3.1.]). Applying their result together with Theorem \[mt\], one easily derives the following proposition : \[ml\] Let $L$ be an extremal $\ell$-modular lattice of dimension $n$ such that $\ell=1$ and $n \equiv 0,8 \mod 24$, or $\ell=2$ and $n \equiv 0,4 \mod 16$, or $\ell=3$ and $n \equiv 0,2 \mod 12$. Then $L$ is $\zeta$-extreme for any $s>\frac{n}{2}$ and the associated torus achieves a strict local minimum of the height function.
From [@BV Corollary 3.1.], all layers of such a lattice hold a $4$-design, whence the conclusion. The proof of this fact uses theta series with spherical coefficients. The previous proposition applies to the $\mathbb{E}_8$ and Leech lattices ($\ell=1$), as well as the $\mathbb{D}_4$ lattice ($\ell=2$), recovering Sarnak and Strömbergsson’s result, at least for $s>\frac{n}{2}$. But this also applies for instance to the hexagonal lattice $\mathbb{A}_2$, to all extremal even unimodular lattices in dimension $32$ and $48$, to the Coxeter-Todd lattice $\mathrm{K}_{12}$ or the Barnes-Wall lattice $\Lambda_{16}$, to cite a few. Note that the occurrence of the hexagonal lattice $\mathbb{A}_2$ in this list is not a surprise, since it is known to achieve a *global* minimum of the map $L \mapsto \zeta(L,s)$ at $s$ for any $s>0$, $s \neq \frac{n}{2}$ (see [@Di], [@En] and [@Ra]). Minima of theta functions. {#mtheta} ========================== In this section, we investigate the question of the minima of theta functions, which is closely related to the subject dealt with so far. Recall that the theta function of a lattice $L$ is defined as $$\label{theta} \Theta_L(z)=\sum _{l \in L}e^{\pi i z \Vert l \Vert^2}, \ \text{ for } z \in {\mathbb{C}}, \ {\mathrm{Im}}z >0.$$ The theta and zeta functions of a lattice are related through the Mellin transform, namely one has, for $s \in {\mathbb{C}}$ with ${\mathrm{Re}}s >\frac{n}{2}$, $$\label{mellin} \Gamma(s) \pi^{-s}\zeta(L,s)=\mathcal M (\Theta_L(iy)-1):=\int_{0}^{+\infty}(\Theta_L(iy)-1)y^{s-1}dy.$$ For fixed $y>0$, we ask for lattices in $\mathcal L_n^{\circ}$ minimizing $\Theta_L(iy)$. Sarnak and Strömbergsson proved [@Sa-Sa Proposition 2] that for any $y>0$, the $\mathbb D_4$ lattice (rescaled to covolume $1$), the $\mathbb{E}_8$ lattice and the Leech lattice achieve a strict local minimum of $\Theta_L(iy)$.
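As a crude numerical illustration of such minimality statements (a sketch of ours, not taken from [@Sa-Sa]; the truncation radius is an assumption, chosen large enough that the neglected tail is negligible at $y=1$), one can compare truncated theta sums of the hexagonal lattice $\mathbb{A}_2$, rescaled to covolume $1$, with those of the square lattice $\mathbb{Z}^2$:

```python
import math

def theta(gram, y, cutoff=40):
    """Truncated theta sum Theta_A(iy) = sum_x exp(-pi*y*A[x]) over |x_i| <= cutoff."""
    a, b, d = gram  # Gram matrix A = [[a, b], [b, d]]
    total = 0.0
    for m in range(-cutoff, cutoff + 1):
        for n in range(-cutoff, cutoff + 1):
            q = a * m * m + 2 * b * m * n + d * n * n
            total += math.exp(-math.pi * y * q)
    return total

# Square lattice Z^2 (det 1) vs. hexagonal lattice rescaled to det 1,
# i.e. the form (2/sqrt(3)) * [[1, 1/2], [1/2, 1]].
square = (1.0, 0.0, 1.0)
s3 = math.sqrt(3.0)
hexagonal = (2.0 / s3, 1.0 / s3, 2.0 / s3)

# At y = 1 the hexagonal lattice gives the smaller theta value, consistent
# with its being a minimizer among lattices of covolume 1.
assert theta(hexagonal, 1.0) < theta(square, 1.0)
```

The gap is already visible at this truncation (roughly $1.16$ versus $1.18$ at $y=1$).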
Following the same line as in the proof of our main theorem, we can prove the following result : \[minitheta\] Let $L_0 \in \mathcal L_n ^{\circ}$ be such that all its layers hold a $4$-design. Then, for any fixed $y>\frac{\frac{n}{2}+1}{\pi m_1(L_0)}$, the map $L \mapsto \Theta(L,iy)$, $L \in \mathcal L_n ^{\circ}$, has a strict local minimum at $L_0$. As before, we give the proof in terms of positive definite quadratic forms. If $B$ is the positive definite symmetric matrix, defined up to ${\mathbf G\mathbf L}_n({\mathbb{Z}})$ equivalence, corresponding to a lattice $L$, one defines $\Theta_B(z)=\Theta_L(z)=\sum_{x \in {\mathbb{Z}}^n}e^{\pi i z B[x]}$, for $z \in {\mathbb{C}}$ with ${\mathrm{Im}}z >0$. Letting $A$ be the positive definite symmetric matrix associated with $L_0$, we parametrize locally the set $\mathcal P_n^{\circ}$ via the exponential map $e_A$ as in the proof of the main theorem, and we are led to study the local behaviour of the map $H \mapsto \Theta_{e_A(H)}(iy)$, $H \in \mathcal T _A$. Under the conditions of the proposition, equation (\[finaleq\]) holds. Applying the inverse Mellin transform to this equation, we get $$\label{imellin} \Theta_{e_A(H)}(iy)=\Theta_{A}(iy)+\dfrac{{\operatorname{Tr}}(A^{-1}H)^2 }{n(n+2)}\mathcal M^{-1}\left( s(s-\frac{n}{2}) \Gamma(s) \pi^{-s}\zeta(A,s)\right) + o(\Vert H^2 \Vert).$$ Using elementary properties of the inverse Mellin transform, we thus find $$\Theta_{e_A(H)}(iy)=\Theta_{A}(iy)+\dfrac{{\operatorname{Tr}}(A^{-1}H)^2 }{n(n+2)}\sum_{x \in {\mathbb{Z}}^n}y\pi A[x]\left(y\pi A[x]-(\frac{n}{2}+1) \right) e^{-\pi y A[x]}+ o(\Vert H^2 \Vert).$$ In order to conclude, it is enough to show that the sum $\sum_{x \in {\mathbb{Z}}^n}y\pi A[x]\left(y\pi A[x]-(\frac{n}{2}+1) \right) e^{-\pi y A[x]}$ is positive, which is obviously the case if $y>\frac{\frac{n}{2}+1}{\pi m_1(L_0)}$, since then each non-zero term of the sum is positive. This proposition applies to all the examples dealt with in the previous section.
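As a small numerical sanity check (again a sketch of ours; the finite truncation only discards non-negative terms in the regime considered), one can evaluate this sum for the hexagonal lattice $\mathbb{A}_2$ rescaled to covolume $1$, where $m_1(L_0)=2/\sqrt{3}$ and the bound of Proposition \[minitheta\] reads $y>2/(\pi m_1(L_0))\approx 0.55$:

```python
import math

def positivity_sum(gram, y, n=2, cutoff=60):
    """Truncated version of sum_x u(u - (n/2 + 1)) e^{-u}, with u = pi*y*A[x]."""
    a, b, d = gram  # Gram matrix A = [[a, b], [b, d]] of an n = 2 form
    total = 0.0
    for m in range(-cutoff, cutoff + 1):
        for k in range(-cutoff, cutoff + 1):
            u = math.pi * y * (a * m * m + 2 * b * m * k + d * k * k)
            total += u * (u - (n / 2 + 1)) * math.exp(-u)
    return total

s3 = math.sqrt(3.0)
hex_gram = (2.0 / s3, 1.0 / s3, 2.0 / s3)  # hexagonal lattice, det 1
m1 = 2.0 / s3
y_threshold = (2 / 2 + 1) / (math.pi * m1)  # = 2/(pi*m1) for n = 2

# Above the threshold, every non-zero term is positive (u > n/2 + 1 there)
# and the x = 0 term vanishes, so the truncated sum must be positive.
for y in (y_threshold * 1.05, 0.8, 1.5):
    assert positivity_sum(hex_gram, y) > 0.0
```

Evaluating the same sum below the threshold is exactly the "more careful analysis" alluded to in the remark that follows.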
A more careful analysis of the sign of the sum $\sum_{x \in {\mathbb{Z}}^n}y\pi A[x]\left(y\pi A[x]-(\frac{n}{2}+1) \right) e^{-\pi y A[x]}$ would allow one to extend the range (in $y$) of validity of the proposition, as done by Sarnak and Strömbergsson in the case of the $\mathbb D_4$, $\mathbb{E}_8$ and Leech lattices. Final remarks. {#fr} ============== We conclude with some remarks and open questions. 1. In the examples quoted above, we applied our main theorem to derive $\zeta$-extremality at $s$ for any $s>\frac{n}{2}$. To get the same result for $0<s<\frac{n}{2}$, one has to prove that $\zeta(L,s) <0$ in that range, which was done by Sarnak and Strömbergsson in the case of $\mathbb{D}_4$, $\mathbb{E}_8$ and $\Lambda_{24}$. Unfortunately, we don’t know of any “uniform” way to prove this property, so that a case-by-case proof would be necessary to deal with the examples of the previous section. However, as pointed out to me by P. Sarnak, it can be shown, using an argument due to A. Terras [@T Theorem 1], that the Epstein zeta functions of the Barnes-Wall lattices $BW_n$ do have a zero in $(0,\frac{n}{2})$ for large enough $n$, so that the extremality does not hold for all $0<s<\frac{n}{2}$. The same argument would also apply to extremal modular lattices of large enough dimension, provided that they exist (for fixed level $\ell$, the dimension of a hypothetical extremal modular lattice is bounded, see [@Sch-Sch Theorem 2.1. (ii)]). 2. The condition that all the layers of a given lattice hold a $4$-design is rather strong. Lattices for which the first layer (minimal vectors) holds a $4$-design, the so-called “strongly perfect lattices”, have been classified in dimensions up to $12$ (see [@V], [@NV1], [@NV2]). It turns out that in dimension $3$, $5$, and $9$, for instance, such a lattice (and *a fortiori* a lattice all the layers of which hold a $4$-design) does not exist.
As for the weaker condition that all the layers of a given lattice hold a $2$-design, which is necessary for the lattice to be $\zeta$-extremal at $s$ for all large enough $s$, according to Delone and Ryshkov’s theorem, it is easy to find examples in any dimension : for instance, all irreducible root lattices have this property. However, it is still not clear that a lattice achieving a global minimum of the function $L \mapsto \zeta(L,s)$ for any $s > \dfrac{n}{2}$ (or even for all large enough $s$) should exist. 3. The situation for the height function is perhaps more intriguing. Indeed, it is known (see [@Ch]) that in a given dimension $n$, the height function, restricted to flat tori, achieves a global minimum. On the other hand, in dimension $3$, $5$, and $9$ for instance, there is no hope of finding this minimum using the criterion of Theorem \[mt\]. Consequently, the *right* characterization of local minima of the height function is still to be found (our condition is too strong). Recall that such a characterization for the local maxima of the density of lattice-sphere packings is known, due to Voronoï (see [@Vor] or [@M Chapter 3]). Acknowledgements. {#acknowledgements. .unnumbered} ================= I would like to thank Peter Sarnak for his comments on a preliminary version of this work, which led in particular to the statement of Proposition \[minitheta\]. [11]{} , Designs, groups and lattices. J. Théor. Nombres Bordeaux [**17**]{} (2005), no. 1, 25–44. , Modular forms, lattices and spherical designs. [*Réseaux euclidiens, designs sphériques et formes modulaires*]{}, 10–86, Monogr. Enseign. Math., 37, Enseignement Math., Geneva, 2001. , Une famille infinie de formes quadratiques entières; leurs groupes d’automorphismes, Ann. Sci. École Norm. Sup. (4) **6** (1973), 17–51. , On a problem of Rankin about the Epstein zeta function, Proc. Glasgow Math. Assoc. [**4**]{} (1959), 73–80, [**6**]{} (1963), 116. , Height of flat tori. Proc. Amer. Math. Soc.
[**125**]{} (1997), no. 3, 723–730. , A contribution to the theory of the extrema of a multi-dimensional $\zeta$-function. Dokl. Akad. Nauk SSSR 173 991–994 (Russian); translated as Soviet Math. Dokl. 8 1967 499–503. , Notes on two lemmas concerning the Epstein-zeta function, Proc. Glasgow Math. Assoc. [**6**]{} (1964), 202–204. , A lemma about the Epstein-zeta function, Proc. Glasgow Math. Assoc. [**6**]{} (1964), 198–201. , Spherical designs. Proc. Sympos. Pure Math. [**34**]{} (1979), 255–272. , [*Perfect lattices in Euclidean spaces.*]{} Grundlehren der Mathematischen Wissenschaften, 327. Springer-Verlag, Berlin, 2003. , The strongly perfect lattices of dimension 10. Colloque International de Théorie des Nombres (Talence, 1999). J. Théor. Nombres Bordeaux [**12**]{} (2000), no. 2, 503–518. , Low-dimensional strongly perfect lattices. I. The 12-dimensional case. Enseign. Math. (2) 51 (2005), no. 1-2, 129–163. , Modular lattices in Euclidean spaces. J. Number Theory 54 (1995), no. 2, 190–202. , A minimum problem for the Epstein zeta function, Proc. Glasgow Math. Assoc. [**1**]{} (1953), 149–158. , Determinants of Laplacians; heights and finiteness. [*Analysis, et cetera*]{}, 601–622, Academic Press, Boston, MA, 1990. , Minima of Epstein’s Zeta Function and Heights of Flat Tori , Invent. Math. 165 (2006), 115–151. , Extremal lattices. Algorithmic algebra and number theory (Heidelberg, 1997), 139–170, Springer, Berlin, 1999. , Formulas for mechanical cubatures in $n$-dimensional space, Dokl. Akad. Nauk SSSR 137 (1961) 527–530. , The minima of quadratic forms and the behavior of Epstein and Dedekind zeta functions, J. Number Theory 12 (1980), no. 2, 258–272. , Réseaux et designs sphériques. [*Réseaux euclidiens, designs sphériques et formes modulaires*]{}, 10–86, Monogr. Enseign. Math., 37, Enseignement Math., Geneva, 2001. 
, Nouvelles applications des paramètres continus à la théorie des formes quadratiques : 1. Sur quelques propriétés des formes quadratiques parfaites, J. Reine Angew. Math. [**133**]{} (1908), 97–178.
--- abstract: 'Intercluster filaments negligibly contribute to the weak lensing signal in general relativity (GR), $\gamma_{N}\sim 10^{-4}-10^{-3}$. In the context of relativistic modified Newtonian dynamics (MOND) introduced by Bekenstein, however, a single filament inclined by $\approx 45^\circ$ from the line of sight can cause substantial distortion of background sources pointing towards the filament’s axis ($\kappa=\gamma=(1-A^{-1})/2\sim 0.01$); this is rigorous for infinitely long uniform filaments, but also qualitatively true for short filaments ($\sim 30$ Mpc), and even in regions where the projected matter density of the filament is equal to zero. Since galaxies and galaxy clusters are generally embedded in filaments or are projected on such structures, this contribution complicates the interpretation of the weak lensing shear map in the context of MOND. While our analysis is of mainly theoretical interest providing order-of-magnitude estimates only, it seems safe to conclude that when modeling systems with anomalous weak lensing signals, e.g. the “bullet cluster” of Clowe et al., the “cosmic train wreck” of Abell 520 from Mahdavi et al., and the “dark clusters” of Erben et al., [*filamentary structures might contribute*]{} in a significant and likely complex fashion. On the other hand, [*our predictions of a (conceptual) difference in the weak lensing signal could, in principle, be used to falsify MOND/TeVeS*]{} and its variations.' author: - 'Martin Feix, Dong Xu, HuanYuan Shan, Benoit Famaey, Marceau Limousin, HongSheng Zhao and Andy Taylor' bibliography: - 'ref.bib' title: 'Is Gravitational Lensing by Intercluster Filaments Always Negligible?'
--- Introduction {#intro} ============ Without resorting to cold dark matter (CDM), the modified Newtonian dynamics (MOND) paradigm [@Mond3; @mondnew] is known to reproduce galaxy scaling relations like the Tully-Fisher relation [@tully], the Faber-Jackson law [@faber] and the fundamental plane [@fundamental], as well as the rotation curves of individual galaxies over five decades in mass [@spiral1; @mondref1; @mondref2; @mondref3; @mondref4; @mondref5; @escape]. In particular, the recent kinematic analysis of tidal dwarf galaxies by [@debris] is very hard to explain within the classical CDM framework while it is in accordance with MOND [@tidal1; @tidal2]. In addition, observations of a tight correlation between the mass profiles of baryonic matter and dark matter in relatively isolated (field) galaxies at all radii [@insight2; @insight] are most often interpreted as supporting MOND. Nevertheless, in rich clusters of galaxies, the MOND prescription is not enough to explain the observed discrepancy between visible and dynamical mass [@neutrinos2; @tevesfit; @asymmetric]. At very large radii, the discrepancy is about a factor of $2$, meaning that there should be as much dark matter (mainly in the central parts) as observed baryons in MOND clusters. One solution is that neutrinos have a mass at the limit of detection, i.e. $\sim2$ eV, which can solve the bulk of the problem of the missing mass in galaxy clusters, but other issues remain [@group]. These $2$ eV neutrinos have also been invoked to fit the angular power spectrum of the cosmic microwave background (CMB) in relativistic MOND [@tevesneutrinocosmo], and are thus part of the only consistent MOND cosmology presented so far. In the following, we will refer to this model as the MOND hot dark matter ($\mu$HDM) cosmology [@tevesfit].
On the other hand, strange features have recently been discovered in galaxy clusters, which are hard to explain, such as the “dark matter core” devoid of galaxies at the center of the “cosmic train wreck” cluster Abell 520 [@abell520] and others [@darkcluster; @bullet]. Here, we consider the possibility that such features could be due to the gravitational lensing effects generated by an intercluster filament in a universe based on tensor-vector-scalar gravity [TeVeS; @teves], one possible relativistic extension of MOND [cf. @tv1; @tv2; @vector]. However, we are not performing a detailed lensing analysis of any particular cluster in the presence of filaments, but rather provide a proof of concept that the influence of filaments could be much less negligible in a MONDian universe than within the framework of general relativity (GR). Filaments are among the most prominent large-scale structures of the universe. From simulations in $\Lambda$CDM cosmology, we know that almost any two neighboring clusters are connected by a straight filament with a length of approximately $20-30$ Mpc [@LCDMfilament]. For instance, the dynamics of field galaxies, which are generally embedded in such filaments, as well as their weak lensing properties are persistently influenced by such structures, generally encountering accelerations of about $0.01-0.1\times 10^{-10}$ m s$^{-2}$. Filaments also cover a fair fraction of the sky, much larger than the covering factor of galaxy clusters. Thus, there is a good chance that filaments might be superimposed with other objects on a given line of sight, hence affecting the analysis of observational data like, for example, weak lensing shear measurements.
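The quoted acceleration scale is easy to make plausible with a back-of-the-envelope estimate: for a uniform cylinder, the Newtonian field at its surface is $g = 2G\lambda/R_f$ with line density $\lambda = \rho\,\pi R_f^2$. The following sketch is our own order-of-magnitude estimate (the overdensity $\delta\sim 10$–$30$ relative to the mean matter density, the radius $R_f = 2.5\,h^{-1}$ Mpc, and the cosmological parameters $h=0.7$, $\Omega_m=0.3$ are assumptions consistent with the values quoted in the text):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
Mpc = 3.0857e22        # m
H0 = 70e3 / Mpc        # Hubble constant in s^-1, assuming h = 0.7
Omega_m = 0.3          # assumed matter density parameter

rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)   # critical density
rho_mean = Omega_m * rho_crit                # mean matter density

R_f = 2.5 / 0.7 * Mpc  # filament core radius, 2.5 h^-1 Mpc

for delta in (10, 20, 30):                        # assumed filament overdensity
    lam = delta * rho_mean * math.pi * R_f ** 2   # line density, kg/m
    g = 2 * G * lam / R_f                         # Newtonian field at r = R_f
    # falls in the 0.01-0.1 x 10^-10 m/s^2 range quoted above,
    # i.e. well below the MOND scale a0 ~ 1.2e-10 m/s^2
    assert 1e-12 < g < 1e-11
```

Accelerations this far below $a_0$ are precisely the regime in which MONDian corrections to the lensing signal are expected to be largest.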
Such recent studies prompted us to investigate the possibility that, in the context of MOND, end-on filamentary structures could be responsible for creating anomalous features in reconstructions of weak lensing convergence maps such as the peculiar “dark matter core" devoid of galaxies in Abell 520 [@abell520]. Short straight filaments are structures which are, at best, only partially virialized in two directions perpendicular to their axis. According to [@LCDMfilament], a filament generally corresponds to an overdensity of about $10-30$, having a cigar-like shape. Furthermore, filamentary structures tend to have a low-density gradient along their axis and, in the perpendicular directions, they have a nearly uniform core which tapers to zero at larger radii, usually about $2-5$ times their core radius. Since filaments are typically much longer than their diameter, we shall approximately treat them as infinite uniform cylinders of radius $R_{f}=2.5$ $h^{-1}$ Mpc. Lacking a MOND/TeVeS structure formation $N$-body simulation (with or without substantially massive neutrinos), we shall adopt the naive assumption that filamentary structures have roughly the same properties in MOND and in CDM, which will be justified in §\[app\]. Deriving expressions for the TeVeS deflection angle and setting up a cosmological background, we conclude that the order of magnitude of the TeVeS lensing signal caused by filaments is compatible with that of the previously mentioned observed anomalous systems. In addition, we find that there is a fundamental difference between GR and MOND/TeVeS for cylindrically symmetric lens geometries (see Fig. \[fig1\]); in contrast to GR, the framework of MOND/TeVeS allows us to have image distortion and amplification effects where the projected matter density is equal to zero. As for a more realistic approach, we also consider a model where the filament has a fluctuating density profile perpendicular to its axis. 
Compared to the uniform model, we find that the lensing signal in this case is smaller, but still of the same order, taking into account that the filamentary structures may be inclined to the line of sight by rather small angles ($\theta\lesssim 20^{\circ}$). Finally, we demonstrate the impact of filaments onto the convergence map of other objects by considering superposition of such structures with a toy cluster along the line of sight. Again, our results show an additional contribution comparable to that of a single isolated filament. Modeling a Filamentary Lens {#model} =========================== We investigate the effect of gravitational lensing caused by a straight filament connecting two galaxy clusters in both GR and TeVeS gravity, henceforth using units with $c=1$. As a first simple approach, we shall take the filament’s matter density profile to equal an infinitely elongated and uniform cylinder which is illustrated in Figure \[fig1\]. The cylinder’s line density, $$\lambda = M/L = \rho \pi R_f^2, \label{eq:linedef}$$ is taken to be constant, where $M$ is the total mass, $L$ denotes the length along the symmetry axis, $R_f$ is the cylinder’s radius, and $\rho$ is the volume density. A photon traveling perpendicular to the filament’s axis will change its propagation direction when passing by the cylinder due to the local gravitational field which is assumed to be a weak perturbation to flat spacetime, i.e. all further calculations may be carried out within the non-relativistic approximation. In this case, it is well-known [@gl] that the photon’s deflection angle can be expressed as $$\vec{\hat{\alpha}} = 2\int\limits_{-\infty}^{\infty}{\vec\nabla}_{\bot}\Phi dl, \label{eq:0}$$ where $\Phi$ is the total gravitational potential, $\vec\nabla_{\bot}$ denotes the two-dimensional gradient operator perpendicular to light propagation and integration is performed along the unperturbed light path (Born’s approximation). In our example (see Fig. 
\[fig1\]), the filament’s axis is aligned with the $x$-axis, and light rays propagating along the $z$-direction are dragged into the $\pm y$-directions due to the symmetry of the resulting gravitational field. Keeping this configuration and introducing cylindrical coordinates, we may rewrite equation as $$\hat\alpha(y) = 4y\int\limits_{y}^{\infty}\dfrac{\Phi^{'}}{\sqrt{r^{2}-y^{2}}}dr, \label{eq:0a}$$ where the prime denotes the derivative with respect to the cylindrical radial coordinate $r$, i.e. $A^{'}=dA/dr$. Note that even in the context of MOND/TeVeS, we may still assume that most of the light bending occurs within a small range around the lens compared to the distances between observer and lens and between lens and source, thus enabling us to fully adopt the GR lensing formalism. In gravitational lensing, it is convenient to introduce the deflection potential $\Psi(\vec\theta)$ [@gl]: $$\Psi(\vec\theta) = 2\frac{D_{ls}}{D_{s}D_{l}}\int\Phi(D_{l}\vec\theta,z)dz, \label{eq:0b}$$ where we have used $\vec\theta=\vec\xi/D_{l}$. Here $\vec\xi$ is the two-dimensional position vector in the lens plane, and $D_{s}$, $D_{l}$, and $D_{ls}$ are the (angular diameter) distances between source and observer, lens and observer, and lens and source, respectively. If a source is much smaller than the angular scale on which the lens properties change, the lens mapping can locally be linearized. Thus, the distortion of an image can be described by the Jacobian matrix $$\mathcal{A}(\vec\theta) = \frac{\partial\vec\beta}{\partial\vec\theta} = \begin{pmatrix} 1-\kappa-\gamma_{1} & -\gamma_{2}\\ -\gamma_{2} & 1-\kappa+\gamma_{1} \end{pmatrix}, \label{eq:0c}$$ where $\vec\beta$=$\vec\eta/D_{s}$ and $\vec\eta$ denotes the two-dimensional position of the source. 
The convergence $\kappa$ is directly related to the deflection potential $\Psi$ through $$\kappa = \frac{1}{2}\Delta_{\vec\theta}\Psi \label{eq:0d}$$ and the shear components $\gamma_{1}$ and $\gamma_{2}$ are given by $$\begin{split} \gamma_{1} &= \frac{1}{2}\left(\frac{\partial^{2}\Psi}{\partial\theta_{1}^{2}}-\frac{\partial^{2}\Psi}{\partial\theta_{2}^{2}}\right),\quad\gamma_{2} = \frac{\partial^{2}\Psi}{\partial\theta_{1}\partial\theta_{2}},\\ \gamma &= \sqrt{\gamma_{1}^{2}+\gamma_{2}^{2}}. \end{split} \label{eq:0e}$$ Because of Liouville’s theorem, gravitational lensing preserves the surface brightness, but it changes the apparent solid angle of a source. The resulting flux ratio between image and source can be expressed in terms of the amplification $A$, $$A^{-1} = (1-\kappa)^{2}-\gamma^{2}. \label{eq:0f}$$ Considering the symmetry properties of our cylindrical lens model and the configuration in Figure \[fig1\], equation further simplifies to $$\kappa (y) = \frac{1}{2}\frac{D_{l}D_{ls}}{D_{s}}\frac{{\partial\hat\alpha (y)}}{{\partial y}}, \label{eq:0g}$$ with the convergence $\kappa$ being related to the quantities $\gamma$ ($\gamma^{2}=\gamma_{1}^{2}$ and $\gamma_{2}=0$) and $A$ as follows: $$\kappa = \gamma =\frac{1-A^{-1}}{2}. \label{eq:0h}$$ Furthermore, let us introduce the complex reduced shear $g$ given by $$g = \frac{\gamma_{1}+i\gamma_{2}}{1-\kappa}. \label{eq:0i}$$ This quantity is the expectation value of the ellipticity $\chi$ of galaxies weakly distorted by the lensing effect, thus corresponding to the signal which can actually be observed. In our case, we find that the absolute value of the reduced shear is $|g|=\gamma/(1-\kappa)$, and assuming that $\kappa=\gamma\ll 1$, we obtain $|g|\sim\kappa=\gamma$. Note that the above result is independent of the particular law of gravity. 
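Since the convergence of eq. (0g) involves only a derivative of the deflection-angle profile, it can be evaluated numerically for any $\hat\alpha(y)$. The short sketch below (Python; the function name, distance ratios and test values are our own illustration, not from the paper) does this with a central finite difference and checks two limiting cases: a constant $\hat\alpha$ must give $\kappa=0$, while a linear profile $\hat\alpha=ky$ gives the constant $\kappa=\tfrac{1}{2}(D_{l}D_{ls}/D_{s})\,k$.

```python
import numpy as np

def kappa_from_alpha(alpha, y, D_l, D_ls, D_s, h=1e-6):
    """Convergence of eq. (0g) from a deflection-angle profile alpha(y),
    using a central finite difference for d(alpha)/dy."""
    dalpha_dy = (alpha(y + h) - alpha(y - h))/(2.0*h)
    return 0.5*(D_l*D_ls/D_s)*dalpha_dy

D_l, D_ls, D_s = 1.0, 1.0, 2.0   # illustrative distance ratios

# constant alpha (as in the Newtonian exterior solution): kappa vanishes
assert abs(kappa_from_alpha(lambda y: 0.7, 1.0, D_l, D_ls, D_s)) < 1e-9

# linear alpha = k*y: kappa = 0.5*(D_l*D_ls/D_s)*k everywhere
k = 3.0
assert abs(kappa_from_alpha(lambda y: k*y, 1.0, D_l, D_ls, D_s)
           - 0.5*(D_l*D_ls/D_s)*k) < 1e-6
```

The same finite-difference route applies unchanged to the MONDian profiles derived below, for which the analytic convergence becomes lengthy.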
Newtonian Case -------------- The Newtonian gravitational field of our filament model is given by $$g_N (r) = \lvert\vec\nabla\Phi_{N}(r)\rvert = \left\{ \begin{array}{ll} {\dfrac{{G\lambda}}{{2\pi}}\dfrac{r}{{R_f^2 }},} & \hbox{$r < R_f$,} \\ &\\ {\dfrac{{G\lambda}}{{2\pi}}\dfrac{1}{r},} & \hbox{$r \ge R_f$,} \\ \end{array}\right . \label{eq:1}$$ with $\lambda$ being the previously defined line density given by equation . ![image](figures/rho_uni_muhdm.pdf){width="85.00000%"}\ ![image](figures/alpha_uni_muhdm.pdf){width="85.00000%"} ![image](figures/g_uni_muhdm.pdf){width="85.00000%"}\ ![image](figures/kappa_uni_muhdm.pdf){width="85.00000%"} ![image](figures/rho_uni_bary.pdf){width="85.00000%"}\ ![image](figures/alpha_uni_bary.pdf){width="85.00000%"} ![image](figures/g_uni_bary.pdf){width="85.00000%"}\ ![image](figures/kappa_uni_bary.pdf){width="85.00000%"} **(I)** For $ R_f \le y$, evaluating integral yields $$\hat\alpha_N (y) = G\lambda ={\rm const}. \label{eq:1a}$$ Inserting the above into equation , we may obtain the corresponding convergence field. As expected, $\kappa_{N}$ equals zero outside the cylinder’s projected matter density. **(II)** For $ y< R_f$, the deflection angle has to be calculated from $$\hat\alpha_N (y) = \frac{2G\lambda y}{\pi}\left\{\int\limits_{y}^{R_{f}} { {\frac{rdr}{R_{f}^{2}\sqrt{r^{2}-y^{2}}}} } + \int\limits_{R_{f}}^{\infty} { {\frac{dr}{r \sqrt{r^{2}-y^{2}}}} } \right\}. \label{eq:2}$$ Carrying out the integrations in equation , we finally end up with the following expression: $$\hat\alpha_N (y) = \frac{2G\lambda}{\pi}\left (\dfrac{y\sqrt{R_{f}^{2}-y^{2}}}{R_{f}^{2}}+\arcsin\left (\dfrac{y}{R_{f}}\right )\right ). \label{eq:3}$$ Using equation , the convergence in this case turns out to be $$\kappa_N (y) = 2\frac{{D_l D_{ls} }}{{D_s }}\frac{G\lambda}{{\pi R_f^2 }}\sqrt {R_f^2 - y^2 }. \label{eq:4}$$ MONDian Case ------------ Now we shall consider light deflection within the framework of TeVeS gravity. 
Choosing a certain smooth form of the free interpolating function $\mu$ [for further details see @teves] which has been used by [@lenstest], the total gravitational acceleration may be written in the following way: $$g_M (r) = \lvert\vec\nabla\Phi_{M}(r)\rvert = g_N (r) + \sqrt {g_N (r)a_0 }, \label{eq:5}$$ with $r$ again being the cylindrical radial coordinate and $\Phi_{M}(r)$ the total non-relativistic gravitational potential in TeVeS. The constant $a_0 = 1.2\times 10^{-10}$ m s$^{-2}$ characterizes the acceleration scale at which MONDian effects start to become important compared to Newtonian contributions. Since filaments are among the lowest-density structures in the universe, their internal (Newtonian) gravity is very small. Therefore, the MONDian influence enhances the gravitational field by a factor on the order of $\sqrt{a_{0}/g_{N}}$, which is extremely large in such objects. For this reason, we may expect a substantial difference concerning the lensing signal caused by filamentary structures in TeVeS. Equipped with equations , and we are ready to proceed with the analysis of our cylindrical filament model: **(I)** For $ R_f \le y$, the deflection angle is given by $$\begin{split} \hat\alpha _M (y) &= \hat\alpha _N (y) +\sqrt{\dfrac{8G\lambda a_{0}}{\pi}}y\int\limits_{y}^{\infty}\frac{dr}{\sqrt{r}\sqrt{r^{2}-y^{2}}} \\ &= G\lambda + \frac{{\Gamma (1/4)}}{{\Gamma (3/4)}}\sqrt{2G\lambda a_{0}y}. \end{split} \label{eq:6}$$ In this case, the convergence reads as follows: $$\kappa _M (y) = \frac{{D_l D_{ls} }}{{D_s }}\frac{{\Gamma (1/4)}}{{\Gamma (3/4)}}\sqrt{\frac{{G\lambda a_{0}}}{{8y}}}. \label{eq:7}$$ **(II)** For $ y< R_f$, the integral has to be split into several parts, similarly to equation . 
Using elementary calculus, we finally arrive at $$\begin{split} &\hat\alpha _M (y) = \hat\alpha_{N}(y)\\ & + \sqrt{\dfrac{2G\lambda a_{0}}{\pi}}\dfrac{y^{3/2}}{R_{f}}\left\lbrack 4\sqrt{\dfrac{R_{f}^{2}-y^{2}}{R_{f}y}}-\mathcal{B}_{\left (y^{2}/R_{f}^{2},1\right )}\left (\frac{3}{4},\frac{1}{2}\right )\right\rbrack \\ & +\sqrt{\dfrac{2G\lambda a_{0}y}{\pi}}\mathcal{B}_{\left (0,y^{2}/R_{f}^{2}\right )}\left (\frac{1}{4},\frac{1}{2}\right ), \end{split} \label{eq:8}$$ where $\hat\alpha_{N}(y)$ is the Newtonian deflection angle given by equation and $\mathcal{B}_{(p,q)}(a,b)$ is the generalized incomplete Beta function defined by $$\mathcal{B}_{(p,q)}(a,b) = \int\limits_{p}^{q}t^{a-1}(1-t)^{b-1}dt, \quad Re(a),Re(b)>0. \label{eq:9}$$ As the expression for the convergence $\kappa_{M}$ turns out to be quite lengthy, we omit it here. From equations and , we find that while the deflection angle $\alpha_{M}$ outside the cylinder’s projection increases with the square root of the impact parameter $y$ ($\alpha_{N}={\rm const}$), the convergence $\kappa_{M}$ decreases with the inverse square root of $y$ ($\kappa_{N}=0$). In fact, this reveals quite a fundamental difference between MOND/TeVeS and GR; since $\kappa_{N}=0$, we also have $\gamma_{N}=0$ and $A_{N}=1$ according to equation , meaning that there will be no distortion effects, as well as no change in the total flux between source and image, i.e. wherever the projected matter density is zero, the lens mapping will turn into the identity. However, this is no longer true in the context of MOND/TeVeS as the convergence and the shear field do not vanish (see Fig. \[fig2a\]). Obviously, this is a case where the MONDian influence does not merely enhance effects that are already present in GR, but rather creates something new, which, in principle, could be used to distinguish between laws of gravity (see §\[discussion\]). 
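Both closed-form deflection angles can be checked against direct numerical quadrature of their defining integrals. The sketch below (Python with SciPy; the unit choice $G\lambda = R_f = a_0 = 1$ is our own illustrative convention) verifies the interior Newtonian result of eq. (3) against eq. (2), and the exterior MONDian term of eq. (6) against the integral it was derived from.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

G_lam, R_f, a0 = 1.0, 1.0, 1.0   # illustrative units: G*lambda = R_f = a0 = 1

def alpha_N_closed(y):
    """Newtonian deflection angle, eqs. (1a) and (3)."""
    if y >= R_f:
        return G_lam
    return 2*G_lam/np.pi*(y*np.sqrt(R_f**2 - y**2)/R_f**2 + np.arcsin(y/R_f))

def alpha_N_quad(y):
    """Direct quadrature of eq. (2) for y < R_f; the r = y endpoint is an
    integrable 1/sqrt singularity that quad handles adaptively."""
    inner = quad(lambda r: r/(R_f**2*np.sqrt(r**2 - y**2)), y, R_f)[0]
    outer = quad(lambda r: 1.0/(r*np.sqrt(r**2 - y**2)), R_f, np.inf)[0]
    return 2*G_lam*y/np.pi*(inner + outer)

def mond_term_closed(y):
    """Extra TeVeS/MOND contribution in eq. (6), valid for y >= R_f."""
    return gamma(0.25)/gamma(0.75)*np.sqrt(2*G_lam*a0*y)

def mond_term_quad(y):
    f = lambda r: 1.0/(np.sqrt(r)*np.sqrt(r**2 - y**2))
    I = quad(f, y, 2*y)[0] + quad(f, 2*y, np.inf)[0]
    return np.sqrt(8*G_lam*a0/np.pi)*y*I

for y in (0.2, 0.5, 0.9):
    assert abs(alpha_N_closed(y) - alpha_N_quad(y)) < 1e-6
for y in (1.0, 2.5, 7.0):
    assert abs(mond_term_closed(y) - mond_term_quad(y)) < 1e-6
```

The agreement confirms the Gamma-function reduction of the outer integral as well as the continuity of $\hat\alpha_{N}$ at $y=R_{f}$, where eq. (3) smoothly reaches the constant exterior value $G\lambda$.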
Varying the inclination angle $\theta$ of the filament’s axis to the line of sight, the lensing properties derived in this section have to be rescaled by a factor of $\sin^{-1}\theta$ in both GR and TeVeS. Model Application {#app} ================= In their $\Lambda$CDM large-scale structure simulation, [@LCDMfilament] have shown that there are close cluster pairs with a separation of $5 h^{-1}$ Mpc or less which are always connected by a filament. At separations between $15$ and $20$ $h^{-1}$ Mpc, still about a third of cluster pairs are connected by a filament. On average, more massive clusters are connected to more filaments than less massive ones. In addition, the current simulation indicates that the most massive clusters form at the intersections of the filamentary backbone of large-scale structure. For straight filaments, the radial profiles show a fairly well-defined radius $R_f$ beyond which the profiles closely follow an $r^{-2}$ power law, with $R_f$ being around $2.0$ $h^{-1}$ Mpc for the majority of filaments. The enclosed overdensity within $R_f$ varies from a few times up to $25$ times the mean density, independent of the filament’s length. Along the filaments’ axes, material is not distributed uniformly. Towards the clusters, the density rises, indicating the presence of the cluster infall regions. ![image](figures/rho_osci_muhdm.pdf){width="85.00000%"}\ ![image](figures/alpha_osci_muhdm.pdf){width="85.00000%"} ![image](figures/g_osci_muhdm.pdf){width="85.00000%"}\ ![image](figures/kappa_osci_muhdm.pdf){width="85.00000%"} ![image](figures/rho_osci_bary.pdf){width="85.00000%"}\ ![image](figures/alpha_osci_bary.pdf){width="85.00000%"} ![image](figures/g_osci_bary.pdf){width="85.00000%"}\ ![image](figures/kappa_osci_bary.pdf){width="85.00000%"} As previously stated, we will assume that filamentary structures have similar properties in MOND/TeVeS and in a CDM dominated universe based on GR. 
Our assumption is based on the $\mu$HDM cosmology (see §\[intro\]) and on the fact that filaments are generic and have similar characteristics in hot dark matter (HDM) and CDM scenarios [@knebe1; @knebe2; @wdm]. For instance, neutrinos are known to collapse into sheets and filaments in HDM simulations. Concerning our filament model introduced in Sec. \[model\], we therefore take the filament’s radius as $R_{f}=2.5$ $h^{-1}$ Mpc, and taking the overdensity within the filament to be $20$ times the intergalactic mean density $\rho_{0}$, we set $\delta=20$, with $\delta$ being the density contrast defined by $$\delta = \frac{\rho-\rho_{0}}{\rho_{0}}. \label{eq:contrast}$$ Again, note that choosing the $\mu$HDM cosmology implies that filaments do not solely consist of baryonic matter but need an additional matter component, i.e. neutrinos, within the MOND paradigm, which is inferred from the previously mentioned discrepancies between dynamical and visible mass on galaxy cluster scale [@neutrinos2] as well as from the need for such a component to explain the CMB [@tevesneutrinocosmo]. On the other hand, analyzing the Perseus-Pisces segment, [@MONDfilaments] concluded that a MONDian description of filaments would not need any additional non-baryonic mass component. Due to rather large systematic uncertainties, however, this result remains highly speculative and does not rule out our approach where filamentary structures have higher densities. Nevertheless, we will also include this case, where filaments consist of baryonic matter only, into our analysis. Since the absolute density of a filament in this situation is approximately by a factor $10-100$ smaller than in $\mu$HDM, we do expect the MONDian influence to become even more important. 
Encouraged by the MOND simulation of [@knebe3], we shall stick to the assumption that both shapes and relative densities of filaments are similar to the CDM case when considering a universe made out of baryonic matter only, thus keeping the choice $\delta=20$. In order to calculate the intergalactic mean density and the necessary angular diameter distances for lensing, we still need to set up a cosmological model in TeVeS. Depending on whether or not we assume massive neutrinos to be present in our universe, there will be a different cosmological background. If neutrinos are taken into account, then, for simplicity, we shall use the flat $\mu$HDM cosmology based on the parameters of [@tevesneutrinocosmo], $$\Omega_{m}=0.22,\quad \Omega_{\Lambda}=0.78. \label{eq:10}$$ Considering a universe with baryons only, we choose a flat minimal-matter cosmology which is described by $$\Omega_{m}=0.05,\quad \Omega_{\Lambda}=0.95. \label{eq:10a}$$ Furthermore, we shall set $h=0.7$ and calculate the model-dependent intergalactic mean density $\rho_{0}$ according to $$\rho_{0} = \Omega_{m}\rho_{c}(1+z_{l})^{3}, \label{eq:10b}$$ where $\rho_{c}=3H_{0}^{2}/8\pi G$ is the critical density and $z_{l}$ is the lens redshift, i.e. the filament’s redshift. Although equation has problems in explaining CMB observations due to its prediction of the last scattering sound horizon, both of the above cosmological models will be sufficient for assigning the distances of lenses and sources at redshifts $z \lesssim 3$ in the context of gravitational lensing. However, note that the simplicity of these models does not affect the upcoming analysis as we will limit ourselves to order-of-magnitude estimates only. Concerning the framework of standard Newtonian gravity, we shall use a flat $\Lambda$CDM cosmology with $\Omega_{m}=0.3$ and $\Omega_{\Lambda}=0.7$, allowing a consistent comparison to our results in MOND/TeVeS. 
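For concreteness, the angular diameter distances that enter the lensing prefactor $D_{l}D_{ls}/D_{s}$ can be computed directly from the flat background models above. The sketch below (Python; the implementation is our own illustration) evaluates $D_{l}$, $D_{s}$ and $D_{ls}$ for the $\mu$HDM parameters of eq. (10) with $h=0.7$ and the lens/source redshifts $z_{l}=1$, $z_{s}=3$ used in the next sections.

```python
import numpy as np
from scipy.integrate import quad

Om, OL, h = 0.22, 0.78, 0.7        # flat muHDM background, eq. (10)
c_km_s, H0 = 299792.458, 100.0*h   # H0 in km/s/Mpc
D_H = c_km_s/H0                    # Hubble distance in Mpc

def E(z):
    """Dimensionless Hubble rate for a flat matter + Lambda background."""
    return np.sqrt(Om*(1.0 + z)**3 + OL)

def comoving(z):
    return D_H*quad(lambda zp: 1.0/E(zp), 0.0, z)[0]

def ang_diam(z1, z2=None):
    """Angular diameter distance; in a flat universe the distance between
    two redshifts is the difference of comoving distances over (1 + z2)."""
    if z2 is None:
        return comoving(z1)/(1.0 + z1)
    return (comoving(z2) - comoving(z1))/(1.0 + z2)

z_l, z_s = 1.0, 3.0
D_l, D_s, D_ls = ang_diam(z_l), ang_diam(z_s), ang_diam(z_l, z_s)
```

With these parameters one finds $D_{l}$ and $D_{s}$ of roughly $1.7$ Gpc each and $D_{ls}$ somewhat below $1$ Gpc; swapping in the minimal-matter or $\Lambda$CDM parameters only requires changing `Om` and `OL`.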
The $\mu$HDM Scenario {#appne} --------------------- Using the cosmological parameters specified in equation within MOND/TeVeS and considering a filament which is inclined by an angle $\theta=90^{\circ}$ to the line of sight, both the Newtonian and the MONDian deflection angle as well as the corresponding convergence are plotted in the bottom left and bottom right panel of Figure \[fig2a\], with the filament placed at redshift $z_{l}=1$ and background sources at $z_{s}=3$. Whereas the Newtonian signal is rather small, $\kappa_{N}\lesssim 10^{-3}$, the filament can create a convergence on the order of $\kappa\sim 0.01$ in MOND/TeVeS, even in the outer regions where $\kappa_{N}=0$ if we take into account that it can have other orientations, i.e. a different inclination angle $\theta$. For example, a nearly end-on filament, $\theta=10^\circ$, has a lensing power $6$ times larger than that of a face-on filament, $\theta=90^\circ$. Using equation , we therefore infer that a single MOND/TeVeS filament may generate a shear signal which is on the same order as the convergence, $\gamma\sim 0.01$, as well as an amplification bias at a $2\%$ level, $A^{-1}\sim 1.02$. In addition, we present the density $\rho(r)$ and the radial evolution of the total gravitational acceleration $g(r)$ for MOND and Newtonian dynamics in the top left and top right panel of Figure \[fig2a\], respectively. Note again that, for consistency, the Newtonian results are based on a flat $\Lambda$CDM cosmology with $\Omega_{m}=0.3$ and $\Omega_{\Lambda}=0.7$. The Baryons-only Scenario {#appba} ------------------------- Now let us switch to the baryonic cosmological background given by equation . Keeping all remaining parameters exactly the same as in the last section, the corresponding results are presented in Figure \[fig2b\]. 
Although the convergence is slightly smaller than in the $\mu$HDM case, roughly by a factor of $1.5-2$, we find that also in this case single filamentary structures are capable of producing a lensing signal which is of the same order, $\kappa\sim\gamma\sim 0.01$. Again, this is even true outside the “edges" of the filament’s projected matter density, accounting for the fact that the inclination angle $\theta$ may vary, $0^{\circ}\leq\theta\leq 90^{\circ}$. Oscillating Density Model {#oscillator} ========================= Matter density fluctuations are steadily present throughout the universe. Thus, as a more realistic approach, we shall use a fluctuating density profile to describe a filament and its surrounding area including voids, i.e. regions in the universe where the local matter density is below the intergalactic mean density. To keep our analysis on a simple level, let us write the density fluctuation as ($r$ still denotes radial coordinate in cylindrical coordinates) $$\delta(r) = \left\{ \begin{array}{ll} {\delta_0 \left (\dfrac{\pi r}{R_f}\right )^{-1}\sin\left (\dfrac{\pi r}{R_f}\right ),} & \hbox{$r < 2R_f$,} \\ &\\ {0,} & \hbox{$r \ge 2R_f$,} \\ \end{array}\right . \label{eq:12}$$ where $\delta(r)$ is the density contrast defined in equation , $\delta_0=4$ is the density fluctuation amplitude (this value ensures a positive overall matter density), and $R_{f}=2.5$ $h^{-1}$ Mpc is again the filament’s characteristic radius. Multiplying with the mean density $\rho_{0}$ and integrating along the radial direction, we find that the mass per unit length enclosed by an infinite cylinder of radius $r$ reads as (note that we neglect the contribution due to the mean density background) $$\frac{M(r)}{L} = \left\{ \begin{array}{ll} {\dfrac{2\rho_0 \delta_0 R_f^2}{\pi}\left\lbrack 1-\cos\left (\dfrac{\pi r}{R_f}\right )\right\rbrack ,} & \hbox{$r < 2R_f$,} \\ &\\ {0,} & \hbox{$r \ge 2R_f$,} \\ \end{array}\right . 
\label{eq:13}$$ where $\rho_0$ is the mean intergalactic matter density given by equation . From equation , we directly see that the Newtonian gravitational acceleration in this case is $$g_N(r)=\frac{GM(r)}{2 \pi L}\frac{1}{r}. \label{eq:14}$$ Using equations , and , we are now able to numerically calculate the lensing properties of this configuration. Choosing lens and source redshift again as $z_l=1$ and $z_s=3$, respectively, and assuming the cosmological background models previously introduced in Sec. \[app\], the resulting deflection angle as well as the convergence are shown in the bottom panel of Figures \[fig3\] (flat $\mu$HDM cosmology) and \[fig4\] (flat minimal-matter cosmology), assuming $\theta=90^{\circ}$. Here the occurrence of negative $\kappa$-values simply reflects the fact that our model generates a local underdensity, $1+\delta(r)<1$, with the overall matter density $\rho$ being non-negative at any radius. Compared to the Newtonian case where $\kappa_{N}\lesssim 10^{-4}$, we again find that a face-on TeVeS filament may cause a significantly larger lensing signal, which is now on the order of $\kappa\sim\gamma\sim 10^{-3}$ within both TeVeS cosmologies. As the results of the $\mu$HDM and the minimal-matter cosmology approximately differ by a factor $1.5-2$ just as in §\[app\], the order-of-magnitude lensing effects caused by TeVeS filaments are also in this case more or less cosmologically model-independent. Close to the filament’s axis, where $\kappa\sim 4\times 10^{-3}$, one can actually have a lensing signal $\kappa =\gamma =0.01$ assuming that the inclination angle is small, $\theta\lesssim 20^{\circ}$. Although such angles correspond to rather special configurations, we may conclude that also for our simple oscillation model, single TeVeS filaments [*can*]{} generate a lensing signal $\sim 0.01$, which is similar to our result in §\[app\]. However, note that the above discussion is based on the choice of equation and $\delta_{0}=4$. 
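The enclosed line mass of eq. (13) follows from integrating the fluctuation profile of eq. (12) over the cylinder cross section, a step that is easy to verify numerically. The sketch below (Python; the unit choice $\rho_{0}=R_{f}=1$ with $\delta_{0}=4$ is our own illustration) compares the closed form against direct quadrature and checks that the enclosed fluctuation mass vanishes at $r=2R_{f}$, consistent with the profile carrying no net mass beyond that radius.

```python
import numpy as np
from scipy.integrate import quad

rho0, delta0, R_f = 1.0, 4.0, 1.0   # illustrative units

def line_mass_closed(r):
    """Eq. (13): fluctuation mass per unit length inside radius r < 2*R_f."""
    return 2.0*rho0*delta0*R_f**2/np.pi*(1.0 - np.cos(np.pi*r/R_f))

def line_mass_quad(r):
    # np.sinc(x) = sin(pi*x)/(pi*x), exactly the radial shape of eq. (12)
    rho_fluct = lambda rp: rho0*delta0*np.sinc(rp/R_f)
    return quad(lambda rp: 2.0*np.pi*rp*rho_fluct(rp), 0.0, r)[0]

for r in (0.5, 1.0, 1.7):
    assert abs(line_mass_closed(r) - line_mass_quad(r)) < 1e-8
assert abs(line_mass_closed(2.0*R_f)) < 1e-12   # zero net mass at r = 2*R_f
```

Feeding the quadrature result into the Newtonian acceleration of eq. (14), and then into eq. (5), reproduces the gravitational fields plotted in Figures \[fig3\] and \[fig4\].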
Considering a higher overdensity along its axis, even a face-on filament described by a similar fluctuating profile could easily create a shear field $\gamma\sim 0.01$ for $y\lesssim R_{f}$. Superimposing Filaments with Other Objects {#superpos} ========================================== ![image](figures/super1.pdf){width="90.00000%"}\ ![image](figures/super3.pdf){width="90.00000%"} ![image](figures/super2.pdf){width="90.00000%"}\ ![image](figures/super4.pdf){width="90.00000%"} [cccccc]{} & P.A. & Incl. & Shift from Origin &\ Plane & (deg) & (deg) & (kpc) & Redshift $z$\ 2 & 90 & 12 & (0,-150) & 0.25\ 3 & 45 & 45 & (600,0) & 0.30\ To demonstrate the contribution of filamentary structures to the lensing map of other objects, e.g. galaxy clusters, we superimpose two differently orientated filaments with a toy cluster along the line of sight, assuming the previously introduced $\mu$HDM cosmology and different redshifts for each component. If all objects are sufficiently far away from each other ($\gtrsim 100$Mpc), we may approximately treat them as isolated lenses at a certain redshift slice, i.e. the corresponding deflection angles can be calculated separately. Thus, we may resort to the well-known multiplane lens equation [@blandford; @gl]: $$\vec\eta = \frac{d_{s}}{d_{1}}\vec\xi_{1}-\sum_{i=1}^{n}d_{is}\vec{\hat\alpha}_{i}(\vec\xi_{i}), \label{eq:15}$$ where $n$ is the number of lens planes, $d_{ij}$ corresponds to the angular diameter distance between the $i$-th and the $j$-th plane, and $\vec\xi_{i}$ is recursively given by $$\vec\xi_{i} = \frac{d_{i}}{d_{1}}\vec\xi_{1}-\sum_{j=1}^{i-1}d_{ji}\vec{\hat\alpha}_{j}(\vec\xi_{j}),\quad 2\leq i \leq n. 
\label{eq:16}$$ Comparing equation to the lens equation for a single lens plane, we identify the total deflection angle, $$\vec{\hat\alpha}_{tot}(\vec\xi_{1}) = \vec{\hat\alpha}_{1}(\vec\xi_{1})+\sum_{i=2}^{n}\frac{d_{is}}{d_{1s}}\vec{\hat\alpha}_{i}(\vec\xi_{i}) =\vec{\hat\alpha}_{c}+\vec{\hat\alpha}_{f}, \label{eq:17}$$ where $\vec{\hat\alpha}_{c}$ and $\vec{\hat\alpha}_{f}$ are the deflection angle of an isolated cluster at $z_{1}$ and an additional contribution due to the superimposed filaments, respectively. Analogous to the case of a single plane, further lensing quantities such as the total convergence and the total shear can be calculated from equation , using the general relations introduced in §\[model\]. For simplicity, we shall assume that the cluster’s TeVeS potential follows the “quasi-isothermal" profile of [@tevesfit]: $$\Phi(\vec r) = v^{2}\log\sqrt{1+\frac{|\vec r-\vec r_{0}|^{2}}{p^{2}}}, \label{eq:18}$$ where $v$ is the asymptotic circular velocity, $p$ is a scale length, and $\vec{r}_{0}$ is the center’s position. Concerning the numerical setup, we set $v^2=2\times 10^{6}$ (km s$^{-1})^2$ and $p=200$ kpc, fixing the cluster’s redshift to $z_{1}=0.2$. Furthermore, we choose the uniform filament model discussed in §\[model\] and assume that filaments have a constant overdensity of $\delta = 20$ as well as the same characteristic radius $R_{f}=2.5$ $h^{-1}$ Mpc. While the cluster is centered at the origin ($\xi_{x}=\xi_{y}=0$), the two filaments are set up according to the parameters given in Table \[table1\]. Finally, we place the source plane at a redshift of $z_{s}=1$. Note that this specific setting corresponds to a more realistic lensing configuration compared to our analysis in the sections above, with our choice again being motivated by results based on a $\Lambda$CDM universe. 
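The multiplane recursion of eqs. (15)-(17) is straightforward to implement. The minimal two-plane sketch below (Python; the deflection-angle functions and distance values are placeholders of our own, not the cluster and filament models of the paper, and the distances are not required to form a consistent cosmology for these checks) makes the bookkeeping explicit and verifies two limits: the mapping reduces to the identity when all deflections vanish, and a single constant deflection on the first plane reproduces the one-plane lens equation.

```python
import numpy as np

def two_plane_map(theta, alpha1, alpha2, d1, d2, ds, d12, d1s, d2s):
    """Map an image direction theta (2-vector, radians) to the source
    direction beta for two lens planes, following eqs. (15)-(16).
    alpha1 and alpha2 are deflection-angle functions of the impact vectors."""
    xi1 = d1*np.asarray(theta, dtype=float)
    xi2 = (d2/d1)*xi1 - d12*np.asarray(alpha1(xi1), dtype=float)   # eq. (16)
    eta = (ds/d1)*xi1 - d1s*np.asarray(alpha1(xi1), dtype=float) \
          - d2s*np.asarray(alpha2(xi2), dtype=float)               # eq. (15)
    return eta/ds

# placeholder angular diameter distances (Mpc) -- purely illustrative
d1, d2, ds, d12, d1s, d2s = 1000.0, 1500.0, 3000.0, 400.0, 1800.0, 1300.0

# check 1: with no deflection, the mapping is the identity
no_deflect = lambda xi: np.zeros(2)
theta = np.array([1.0e-5, -2.0e-5])
assert np.allclose(two_plane_map(theta, no_deflect, no_deflect,
                                 d1, d2, ds, d12, d1s, d2s), theta)

# check 2: a constant deflection a on plane 1 alone shifts the source
# position by (d1s/ds)*a, exactly the single-plane lens equation
a = np.array([3.0e-6, 1.0e-6])
beta = two_plane_map(theta, lambda xi: a, no_deflect,
                     d1, d2, ds, d12, d1s, d2s)
assert np.allclose(beta, theta - d1s/ds*a)
```

Extending the recursion to $n$ planes only requires accumulating the $\vec\xi_{i}$ in a loop; the total deflection of eq. (17) then follows by comparing the result with the single-plane form.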
From the top right panel of Figure \[fig5\], we see that the filaments’ contribution to the total convergence map, $\Delta\kappa=\kappa_{tot}-\kappa_{c}$ ($\kappa_{c}$ is the cluster’s convergence map in absence of any filamentary structures along the line of sight) is comparable to our previous findings, with the signal again being on the order of $0.01$. Also, note the distortion effects caused by the cluster and the peak close to the region where the two filaments overlap. Obviously, the contribution pattern depends on the actual configuration as well as on the type and amount of the considered objects along the line of sight, and can generally be quite complex. In addition, we present the changes in the reduced shear components, $\Delta g_{1}=\gamma_{tot,1}/(1-\kappa_{tot})-\gamma_{c,1}/(1-\kappa_{c})$ and $\Delta g_{2}=\gamma_{tot,2}/(1-\kappa_{tot})-\gamma_{c,2}/(1-\kappa_{c})$, due to the filaments’ presence in the bottom panel of Figure \[fig5\]. At this point, we should emphasize that we have considered the impact of filamentary structures alone. Depending on their particular position along the line of sight, additional (foreground) objects such as galaxies, galaxy clusters and/or voids might (locally) contribute on a comparable level or even exceed the signal caused by filaments. Of course, this further complicates the interpretation of the corresponding lens mapping, and we conclude that, in general, extracting the filaments’ contribution can pose quite a challenge. Conclusions {#discussion} =========== In this work, we have analyzed the gravitational lensing effect by filamentary structures in TeVeS, a relativistic formulation of the MOND paradigm. 
For this purpose, we have set up two different cosmological models in TeVeS: the so-called $\mu$HDM cosmology including massive neutrinos on the order of $2$ eV which have already been proposed as a remedy for the discrepancies between dynamical and visible mass on cluster scales [@neutrinos2] as well as for the CMB [@tevesneutrinocosmo], and a simple minimal-matter cosmology accounting for a universe which is made up of baryons alone. Encouraged by several HDM simulations and the fact that filamentary structures are generic, we have assumed that the properties of such structures, i.e. their shape and relative densities, are similar in CDM and MOND/TeVeS scenarios, independent of the particular cosmological background used. Modeling these filaments as infinite uniform mass cylinders, we have derived analytic expressions for their lensing properties in MOND/TeVeS and Newtonian/GR gravity. Regardless of the cosmological background used, we have shown that TeVeS filaments can account for quite a substantial contribution to the weak lensing convergence and shear field, $\kappa \sim \gamma \sim 0.01$, as well as to the amplification bias, $A^{-1}\sim 1.02$, which is even true outside but close $(y\sim 2R_{f})$ to the projected “edges" of the filament’s matter density. Exploring a simple oscillating density model of a filament and its surrounding area, we have found that the lensing signal in this case is generally smaller, but can still be of the same order, taking into account that the filamentary structures may be inclined to the line of sight by rather small angles ($\theta\lesssim 20^{\circ}$). In addition, we find that there is a fundamental difference between GR and MOND/TeVeS for idealized cylindrically symmetric lens geometries: wherever the projected matter density is zero, there will be no distortion as well as no amplification effects in GR, i.e. image and source will look exactly the same. 
In the context of MOND/TeVeS, however, this changes as one can have such effects in these regions. Finally, we have demonstrated the impact of filaments onto the convergence map of other objects by considering superposition with a toy cluster along the line of sight. Again, our results have shown an additional contribution comparable to that of a single isolated filament and that the contribution pattern of filaments can generally be quite complex. Here we have considered the lensing signal generated by single filaments alone. Simulating the cosmic web in a standard $\Lambda$CDM cosmology, [@dolag] have found a shear signal $\gamma\sim 0.01-0.02$ along filamentary structures, which seems quite similar to what MOND/TeVeS can do. Note, however, that this signal is entirely dominated by the simulation’s galaxy clusters, with the filament’s signal being much smaller, approximately on the order of $10^{-4}-10^{-3}$. Although our analysis is mainly of theoretical interest, the above result points to an interesting possibility concerning recent measurements of weak lensing shear maps. For instance, the weak shear signal in the “dark matter peak" of Abell 520 [@abell520] is roughly at a level of $0.02$, which is comparable to what filaments could produce in MOND/TeVeS, but not in Newtonian gravity (also cf. [@wedding]). Therefore, we conclude that filamentary structures might actually be able to cause such anomalous lensing signals within the framework of MOND/TeVeS. In principle, the predicted difference in the weak lensing signal could also be used to test the validity of modified gravity. As several attempts to detect filaments by means of weak lensing methods have failed so far, e.g. the analysis of Abell $220$ and $223$ by [@dietrich], this might already be a first hint of possible problems within MOND/TeVeS gravity. 
On the other hand, shear signals around $\gamma\sim 0.01$ are still too small to be reliably detected by today’s weak lensing observations, and, lacking $N$-body structure formation simulations in the framework of MOND/TeVeS, we cannot even be sure about how filaments form and what they look like in a MONDian universe compared to the CDM case. Clearly, more investigation is needed to gain a better understanding of the impact of filamentary structures. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Johan Fynbo, Jens Hjorth and other members of the Dark Cosmology Centre for very stimulating discussions; special thanks go to Steen Hansen for organizing a Dark Matter Workshop in Copenhagen which inspired us to write this paper. D.X. thanks Kristian Pedersen and Yi-Peng Jiang for helpful discussions. M.F. thanks Matthias Bartelmann and Cosimo Fedeli for valuable comments on the manuscript. We also thank the anonymous referee for useful suggestions. B.F., M.F. and H.S.Z. acknowledge hospitality at the Dark Cosmology Centre. B.F. is a research associate of the FNRS, H.S.Z. acknowledges partial support from the National Natural Science Foundation of China (NSFC; Grant No. 10428308) and a UK PPARC Advanced Fellowship. M.F. is supported by a scholarship from the Scottish Universities Physics Alliance (SUPA). The Dark Cosmology Centre is funded by the Danish National Research Foundation.
--- abstract: 'Transitions in atoms and molecules provide an ideal test ground for constraining or detecting a possible variation of the fundamental constants of nature. In this Perspective, we review molecular species that are of specific interest in the search for a drifting proton-to-electron mass ratio $\mu$. In particular, we outline the procedures that are used to calculate the sensitivity coefficients for transitions in these molecules and discuss current searches. These methods have led to a rate of change in $\mu$ bounded to $6 \times 10^{-14}$/yr from a laboratory experiment performed in the present epoch. On a cosmological time scale the variation is limited to $|\Delta\mu/\mu| < 10^{-5}$ for look-back times of 10-12 billion years and to $|\Delta\mu/\mu| < 10^{-7}$ for look-back times of 7 billion years. The last result, obtained from high-redshift observation of methanol, translates into $\dot{\mu}/\mu = (1.4 \pm 1.4) \times 10^{-17}$/yr if a linear rate of change is assumed.' author: - Paul Jansen - 'Hendrick L. Bethlem' - Wim Ubachs title: 'Perspective: Tipping the scales - search for drifting constants from molecular spectra' --- Introduction\[sec:introduction\] ================================ The fine-structure constant, $\alpha \approx 1/137$, which determines the overall strength of the electromagnetic force, and the proton-to-electron mass ratio, $\mu=m_p/m_e \approx 1836$, which relates the strengths of the forces in the strong sector to those in the electro-weak sector[@Flambaum2004], are the only two dimensionless parameters that are required for the description of the gross structure of atomic and molecular systems [@Born1935]. The values of these two constants ensure that protons are stable, that a large number of heavy elements could form in the late evolution stage of stars, and that complex molecules based on carbon chemistry exist [@Hogan2000]. 
If these constants had only slightly different values, even by fractions of a percent, our Universe would have looked entirely different. The question whether this *fine tuning* is coincidental or whether the constants can be derived from a – yet unknown – theory beyond the Standard Model of physics is regarded as one of the deepest mysteries in science. One solution to this enigma may be that the values of the fundamental constants of nature vary in time, or obtain different values in distinct parts of the (multi)-Universe. Searches for drifting constants are motivated by this perspective. Theories that predict spatial-temporal variations of $\alpha$ and $\mu$ can be divided into three classes. The first class comprises a special type of quantum field theories that permit variation of the coupling strengths. Bekenstein postulated a scalar field for the permittivity of free space; this quintessential field then compensates the energy balance in varying $\alpha$ scenarios to accommodate energy conservation as a minimum requirement for a theory [@Bekenstein1982]. Based on this concept various forms of dilaton theories with coupling to the electromagnetic part of the Lagrangian were devised, combined with cosmological models for the evolution of matter (including dark matter) and dark energy under the assumptions of General Relativity. Such scenarios provide a natural explanation for variation of fundamental constants over cosmic history, i.e., as a function of red-shift parameter $z$. The variation will freeze out under conditions where the dark energy content has taken over from the matter content in the Universe, a situation that has been reached almost completely [@Sandvik2002]. These theories provide a rationale for searches of drifting constants at large look-back times toward the origin of the Universe, even if laboratory experiments in the modern epoch were to rule out such variations. 
The second class of theories connects drifting constants to the existence of higher dimensions as postulated in many versions of modern string theory [@Aharony2000]. Kaluza-Klein theories, first devised in the 1920s, showed that formulations of electromagnetism in higher dimensions resulted in different effective values of $\alpha$ after compactification to the four observed dimensions. Finally, the third class of theories, known as Chameleon scenarios, postulates that additional scalar fields acquire mass depending on the local matter density [@Khoury2004]. Experimental searches for temporal variation of fundamental constants were put firmly on the agenda of contemporary physics by the ground-breaking study by Webb *et al.*[@Webb1999] An indication of a varying $\alpha$ was detected by comparing metal absorptions at high redshift with corresponding transitions that were measured in the laboratory. As the observed transitions have in general a different dependence on $\alpha$, a variation manifests itself as a frequency shift of a certain line with respect to another. This is the basis of the Many-Multiplet-Method for probing a varying fine structure constant.[@Dzuba1999] The findings triggered numerous laboratory tests that compare transitions measured in different atoms and molecules over the course of a few years and thus probe a much shorter time scale for drifting constants. In later work, Webb and co-workers found indications of a spatial variation of $\alpha$ in terms of a dipole across the Universe.[@Webb2011; @King2012] Spectroscopy provides a search ground for probing drifts in both $\alpha$ and $\mu$. While electronic transitions, including spin-orbit interactions, are sensitive to $\alpha$, vibrational, rotational and tunneling modes in molecules are sensitive to $\mu$. 
Hyperfine effects, such as in the Cs-atomic clock[@FlambaumTedesco2006; @Berengut2011] and the 21-cm line of atomic hydrogen[@Tzanavaris2005], depend on both $\alpha$ and $\mu$, as do $\Lambda$-doublet transitions in molecules[@Kozlov2009]. The same holds for combined high-redshift observations of a rotational transition in CO and a fine structure transition in atomic carbon,[@Levshakov2012] placing a tight constraint on the variation of the combination $\alpha^2\mu$ at a redshift as high as $z=5.2$. Within the framework of Grand Unification, theories have been developed that relate drifts in $\mu$ and $\alpha$ via $$\frac{\Delta\mu}{\mu} = R \frac{\Delta\alpha}{\alpha} \label{GUT}$$ where the proportionality constant $R$ should be large, on the order of $20-40$, even though its sign is not predicted.[@Calmet2002; @Flambaum2004] This would imply that $\mu$ is a more sensitive test ground than $\alpha$ when searching for varying constants. The sensitivity of a spectroscopic experiment searching for a temporal variation of $\mu$ (and similarly for $\alpha$) can be expressed as $$\left (\frac{\partial \mu}{\partial t} \right )\left / \mu = \left (\frac{\partial\nu}{\nu} \right ) \right / \left (K_\mu \Delta t\right ), \label{eq:detect_variation_int}$$ assuming a linear drift. Here $({\partial{\mu}}/{\partial t})/{\mu}$ is the fractional rate of change of $\mu$, ${\partial \nu}/{\nu}$ is the fractional frequency precision of the measurement, $K_\mu$ is the inherent sensitivity of a transition to a variation of $\mu$, and $\Delta t$ is the time interval that is probed in the experiment. For a sensitive test, one needs transitions that are observed with a good signal to noise and narrow linewidth, and that exhibit high $K_{\mu}$. In order to detect a possible variation of $\mu$ at least two transitions possessing a different sensitivity are required. Note that, for detecting a variation of $\mu$, it is not necessary to actually determine its value. 
In fact, in most cases this is impossible, since the exact relation between the value of $\mu$ and the observed molecular transitions is not known. Only for the simplest systems, such as and , has it recently become feasible to directly extract information on the value of $\mu$ from spectroscopic measurements [@SchillerKorobov2005; @KorobovZhong2012]. So far, the numerical value of the proton-electron mass ratio, $\mu = 1836.152\,672\,45\,(75)$, is known at a fractional accuracy of $4.1 \times 10^{-10}$ and included in CODATA[@Mohr2012; @FootnoteA], while constraints on the fractional change of $\mu$ are below $10^{-14}$/yr, as will be discussed in this paper. The most stringent independent test of the time variation of $\mu$ in the current epoch was set by comparing vibrational transitions in with a cesium fountain over the course of two years. The transitions were measured with a fractional accuracy of $\sim$10$^{-14}$ and have a sensitivity of $K_\mu=-\tfrac{1}{2}$, whereas the sensitivity coefficient of the transition is $K_\mu\approx -1$[@FlambaumTedesco2006; @Berengut2011], resulting in a limit on the variation of $\Delta \mu/\mu$ of $5.6 \times 10^{-14}$/yr.[@Shelkovnikov2008] In order to improve the constraints – or to detect a time-variation – attention has shifted to molecular species that possess transitions with greatly enhanced sensitivity coefficients. Unfortunately, the transitions that have an enhanced sensitivity are often rather exotic, i.e., transitions involving highly excited levels in complex molecules that pose considerable challenges to experimentalists and are difficult or impossible to observe in galaxies at high red-shift. Nevertheless, a number of promising systems have been identified that might lead to competitive laboratory and astrophysical tests in the near future. In this Perspective we review the current status of laboratory and astrophysical tests on a possible time-variation of $\mu$. 
In particular we outline the procedures for determining the sensitivity coefficients for the different molecular species. Reviews on the topic of varying constants were presented by Uzan [@Uzan2003], approaching the subject from a perspective of fundamental physics, and by Kozlov and Levshakov [@KozlovLevshakov2013], approaching the topic from a molecular spectroscopy perspective. Definition of sensitivity coefficients ====================================== The induced frequency shift of a certain transition as a result of a drifting constant is – at least to first order – proportional to the fractional change in $\alpha$ and $\mu$ and is characterized by its sensitivity coefficients $K_\alpha$ and $K_\mu$ via $$\frac{\Delta\nu}{\nu} = K_{\alpha}\frac{\Delta\alpha}{\alpha}+K_{\mu}\frac{\Delta\mu}{\mu}, \label{eq:KlphaKmu}$$ where $\Delta\nu/\nu=(\nu_\text{obs}-\nu_0)/\nu_0$ is the fractional change in the frequency of the transition and $\Delta\mu/\mu=(\mu_\text{obs}-\mu_0)/\mu_0$ is the fractional change in $\mu$, both with respect to their current-day values. From Eq.  we can derive an expression for $K_\mu$ (and similarly for $K_\alpha$) $$K_\mu = \frac{\mu}{E_e-E_g}\left (\frac{d E_e}{d\mu} -\frac{d E_g}{d\mu}\right ), \label{eq:Kmu}$$ where $E_g$ and $E_e$ refer to the energy of the ground and excited state, respectively. Note that the concept of a ground state may be extended to any lower state in a transition, even if this corresponds to a metastable state or a short-lived excited state in a molecule. This definition of $K_\mu$ yields opposite signs to that used in Refs. \[\]. Although electronic transitions in atoms are sensitive to $\alpha$, they are relatively immune to a variation of $\mu$. 
For instance, the frequency of the radiation emitted by a hydrogen-like element with nuclear charge $Ze$ and mass number $A$ in a transition between levels $a$ and $b$ is given by $$\nu_{ab}=Z^2\frac{\mu_\text{red}}{m_e}R_\infty \left (\frac{1}{n_a^2}-\frac{1}{n_b^2} \right ), \label{eq:Rydberg}$$ where $\mu_\text{red}=A m_p m_e/(A m_p+m_e)$ and $R_\infty$ is the Rydberg constant. In order to find the sensitivity coefficients of these transitions we apply Eq.  and obtain $$K_\mu = \frac{1}{1+A\mu}, \label{eq:RydbergKmu}$$ resulting in sensitivity coefficients of $5.4\times 10^{-4}$ for the transitions of the Lyman series in atomic hydrogen ($A=1$). Let us now turn to transitions in molecules. Within the framework of the Born-Oppenheimer approximation, the total energy of a molecule is given by a sum of uncoupled energy contributions, hence, we may rewrite Eq.  as $$K_\mu \approx \frac{\sum\nolimits_i K_\mu^i\Delta E_i}{\sum\nolimits_i \Delta E_i}, \label{eq:toy}$$ where the summation index $i$ runs over the different energy contributions, such as electronic, vibrational, and rotational energy. It is generally assumed that the neutron-to-electron mass ratio follows the same behavior as the proton-to-electron mass ratio and no effects depending on quark structure persist[@Dent2007]. Under this assumption all baryonic matter may be treated equally and $\mu$ is proportional to the mass of the molecule. Hence, from the well-known isotopic scaling relations we find $K_\mu^\text{el}=0$, $K_\mu^\text{vib}=-\tfrac{1}{2}$, and $K_\mu^\text{rot}=-1$. The inverse dependence of the sensitivity coefficient on the transition frequency suggests that $K_\mu$ is enhanced for near-degenerate transitions, i.e., when the different energy contributions in the denominator of Eq.  cancel each other. This enhancement is proportional to the energy that is being cancelled and to the difference in the sensitivity coefficients of the energy terms. 
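These scaling relations are easy to check numerically. The following sketch (not part of the original analysis; the near-degenerate contributions are invented toy numbers) evaluates Eq. (\[eq:RydbergKmu\]) for the hydrogen Lyman series and the weighted-sum rule of Eq. (\[eq:toy\]) for a transition in which a vibrational and a rotational interval nearly cancel:

```python
# Sketch of Eq. (RydbergKmu) and the near-degeneracy enhancement of
# Eq. (toy). The "contributions" below are illustrative toy numbers.

MU = 1836.15267245  # proton-to-electron mass ratio (CODATA value)

def k_mu_hydrogenlike(A):
    """Sensitivity of an electronic transition in a hydrogen-like
    system with mass number A, Eq. (RydbergKmu)."""
    return 1.0 / (1.0 + A * MU)

def k_mu_combined(contributions):
    """Weighted sensitivity of a transition whose frequency is a sum
    of energy contributions, Eq. (toy): list of (K_mu^i, Delta E_i)."""
    num = sum(k * dE for k, dE in contributions)
    den = sum(dE for _, dE in contributions)
    return num / den

# Lyman series in atomic hydrogen (A = 1):
print(f"{k_mu_hydrogenlike(1):.2e}")  # prints 5.44e-04

# Toy near-degeneracy: a vibrational interval (K = -1/2) of 101 almost
# cancelled by a rotational interval (K = -1) of -100 (arbitrary units),
# leaving a residual frequency of 1:
print(k_mu_combined([(-0.5, 101.0), (-1.0, -100.0)]))  # prints 49.5
```

With contributions of order 100 cancelling to a residual of 1, the combined $|K_\mu|$ grows to $\sim$50, illustrating the inverse-frequency enhancement described above.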
Since in general $E_\text{el}\gg E_\text{vib}\gg E_\text{rot}$, cancellations between electronic, vibrational and rotational energies are unexpected. Nevertheless, transitions with enhanced sensitivity due to a cancellation of vibrational and electronic energies have been identified in  [@DeMille2008] and  [@Beloy2011]. Whereas cancellations between electronic and vibrational energies are purely coincidental, near-degeneracies occur as a rule in more complex molecules such as molecular radicals or poly-atomic molecules. These molecules have additional energy contributions that are comparable in magnitude to rotational and vibrational energies and exhibit a different functional dependence on $\mu$. For instance, molecules in electronic states with non-zero electronic angular momentum have fine-structure splittings that are comparable to vibrational splittings in heavy molecules[@FlambaumKozlov2007] and to rotational splittings in light molecules [@BethlemUbachs2009]. Likewise, molecules that possess nuclear spin have hyperfine splittings that can be comparable to rotational splittings[@Flambaum2006]. In polyatomic molecules, splittings due to classically-forbidden large-amplitude motions, such as inversion [@vanVeldhoven2004; @FlambaumKozlov2007NH3; @KozlovLevshakov2011] or internal rotation[@Jansen2011PRL; @KozlovLevshakov2013], can be comparable to rotational splittings. Finally, the Renner-Teller splitting, which originates from the interaction between electronic and vibrational angular momenta in linear polyatomic molecules, can be comparable to rovibrational splittings[@Kozlov2013].\ As discussed in the introduction, the sensitivity of a test depends both on the sensitivity coefficient and the fractional precision of the measured transition (see Eq. (\[eq:detect\_variation\_int\])). For enhancements originating from cancellations between different modes of energy the sensitivity scales as the inverse frequency – i.e., when two energy terms in the denominator of Eq. 
(\[eq:Kmu\]) are very similar the sensitivity coefficient becomes large while the transition frequency becomes small. The resolution of astrophysical observations is usually limited by Doppler broadening, which implies that the fractional precision, $\delta \nu/\nu$, is independent of the frequency. Thus, for astrophysical tests the advantage of low frequency transitions with enhanced sensitivity is evident. For laboratory tests, the motivation for choosing low frequency transitions is less obvious. Due to the advances in frequency comb and optical clock techniques, the fractional precision of optical transitions has become superior to that in the microwave domain.[@Chou2010; @Nicholson2012] It was therefore argued by Zelevinsky *et al.*[@Zelevinsky2008] and others that the best strategy for testing the time-variation of fundamental constants is to measure as large an energy interval as possible and accept the rather limited sensitivity coefficient that is associated with it. It may be true that optical clocks have a better fractional accuracy, but microwave measurements still have a smaller absolute uncertainty. For instance, the most accurate optical clock, based on a transition in Al$^+$ at 267 nm (1.12 PHz), has a fractional accuracy of $2.3 \times 10^{-17}$, which corresponds to an absolute uncertainty of 27 mHz [@Rosenband2008], while the most accurate microwave clock, based on a transition in Cesium at 9.2 GHz, has a fractional accuracy of $2 \times 10^{-16}$, corresponding to an absolute uncertainty of 2 $\mu$Hz [@Bize2005]. It thus makes sense to measure transitions in the microwave region, but only if favorable enhancement schemes are available. An additional advantage is that, in some well-chosen cases, transitions with opposite sensitivity coefficients can be used to eliminate systematic effects. The remainder of this paper can be divided into two parts. In the first part, consisting of Secs. 
\[sec:hydrogen\] and \[sec:radicals\], the use of diatomic molecules in studies of a time-varying $\mu$ is discussed. In particular, Sec. \[sec:hydrogen\] reviews the calculation of sensitivity coefficients for rovibronic transitions in molecular hydrogen and carbon monoxide and describes how these transitions are used to constrain temporal variation of $\mu$ on a cosmological time scale. Section \[sec:radicals\] shows that the different mass dependence of rotational and spin-orbit constants results in ‘accidental’ degeneracies for specific transitions. The second part of the paper consists of Secs. \[sec:inversion\] to \[sec:internal\_rotation\] and discusses the use of polyatomic molecules, in particular those that possess a classically-forbidden tunneling motion. Testing the time independence of $\mu$ using diatomic molecules {#sec:diatomics} =============================================================== Transitions in molecular hydrogen and carbon monoxide\[sec:hydrogen\] --------------------------------------------------------------------- Molecular hydrogen has been the target species of choice for $\mu$ variation searches on a cosmological time scale, in particular at higher redshifts ($z > 2$). The wavelengths of the Lyman and Werner absorption lines in and can be detected in high-redshifted interstellar clouds and galaxies in the line of sight of quasars and may be compared with accurate measurements of the same transitions performed in laboratories on earth. While Thompson proposed using high-redshift lines as a search ground for a varying proton-electron mass ratio [@Thompson1975], Varshalovich and Levshakov first calculated $K_{\mu}$ sensitivity coefficients for the molecule[@Varshalovich1993]. 
Later updated values for sensitivity coefficients of were obtained in a semi-empirical fashion, based on newly established spectroscopic data [@Reinhold2006; @Ubachs2007], and via *ab initio* calculations.[@Meshkov2006] In the semi-empirical approach, rovibrational level energies of the relevant electronic states are fitted to a Dunham expansion[@Dunham1932] $$E(\nu,J)= \sum_{k,l}Y_{kl}\left (\nu +\tfrac{1}{2} \right )^k \left [J\left (J+1 \right ) - \Lambda^2 \right ]^l, \label{eq:Dunhamex}$$ where $\Lambda$ is the projection of the orbital angular momentum on the molecular axis, i.e., $\Lambda = 0$ and $1$ for $\Sigma$ and $\Pi$ states, respectively, and $Y_{kl}$ are the fitting parameters. The advantage of the Dunham representation of molecular states is that the coefficients scale to first order as $Y_{kl}\propto \mu_\text{red}^{-(l+k/2)}$, with $\mu_\text{red}$ the reduced mass of the molecule[@Dunham1932; @Ubachs2007]. The coefficients from the Dunham expansion can thus be used to determine the sensitivity coefficients through $$\begin{gathered} \frac{d E}{d \mu} = \sum_{k,l}\frac{d Y_{kl}}{d\mu}\left (\nu +\tfrac{1}{2} \right )^k \left [J\left (J+1 \right ) - \Lambda^2 \right ]^l,\\ \text{with } \frac{d Y_{kl}}{d\mu}\approx -\frac{Y_{kl}}{\mu} \left (l+\frac{k}{2} \right ). \label{eq:Danhamder}\end{gathered}$$ By inserting Eqs.  and  into Eq. , sensitivity coefficients are obtained within the Born-Oppenheimer approximation. The mass dependence of the potential minima of ground and excited states is partly accounted for by including the adiabatic correction. 
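As an illustration of this semi-empirical procedure (at the Born-Oppenheimer level, i.e. without the adiabatic and nonadiabatic corrections treated below), the following sketch evaluates the Dunham expansion and its $\mu$-derivative and combines them into $K_\mu$; the $Y_{kl}$ values are invented toy numbers, not fitted H$_2$ constants:

```python
# Sketch of the Dunham-based evaluation of K_mu (Eqs. Dunhamex,
# Danhamder, Kmu). Toy Y_kl coefficients, not real molecular constants.

MU = 1836.15267245

def dunham_E_and_dEdmu(Y, v, J, Lam=0):
    """Return (E, dE/dmu) for level (v, J) from Dunham coefficients
    Y[(k, l)], using dY_kl/dmu ~ -(Y_kl/mu)(l + k/2)."""
    x = J * (J + 1) - Lam**2
    E = dE = 0.0
    for (k, l), Ykl in Y.items():
        term = Ykl * (v + 0.5)**k * x**l
        E += term
        dE += -(term / MU) * (l + k / 2.0)
    return E, dE

def k_mu(Y_g, lvl_g, Y_e, lvl_e, Lam_e=1):
    """K_mu = mu * (dE_e/dmu - dE_g/dmu) / (E_e - E_g), Eq. (Kmu)."""
    Eg, dEg = dunham_E_and_dEdmu(Y_g, *lvl_g)
    Ee, dEe = dunham_E_and_dEdmu(Y_e, *lvl_e, Lam=Lam_e)
    return MU * (dEe - dEg) / (Ee - Eg)

# Toy ground (Sigma) and excited (Pi) states in cm^-1: Y10 ~ vibration
# (K = -1/2), Y01 ~ rotation (K = -1), excited-state Y00 ~ electronic
# term value (K = 0).
Yg = {(1, 0): 4401.0, (0, 1): 60.8}
Ye = {(0, 0): 90000.0, (1, 0): 1358.0, (0, 1): 20.0}
print(k_mu(Yg, (0, 1), Ye, (0, 1)))  # ~0.0098
```

The small positive result reflects the dominance of the $\mu$-independent electronic term, consistent with the modest $K_\mu$ values quoted for the Lyman and Werner bands below.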
Neglecting the dependence on the nuclear potential, its effect is approximated to that of the normal mass shift or Bohr shift[@Bohr1913], $R_\text{H}/R_\infty=m_p/(m_p+m_e)$, on the levels of an electron bound to an core, due to the finite mass of the latter $$\Delta E_\text{ad} = -\frac{\Delta E_\infty}{2(\mu+1)}=-\frac{\Delta E(\mu)}{2\mu+1}, \label{eq:DEad}$$ where $\Delta E(\mu)$ is the difference of the empirical $Y_{00}$ values of the (deperturbed) $B ^1\Sigma^+_u$ or $C ^1\Pi_u$ state and the $X ^1\Sigma^+_g$ ground state. The mass dependence of Eq.  introduces an additional term that should be included in the parenthesis of Eq.  representing the adiabatic correction $$\frac{d}{d\mu}\Delta E_\text{ad} = -\frac{\Delta E_\text{ad}}{\mu+1}. \label{eq:DEadder}$$ In order to account for nonadiabatic interaction, mixing between different electronic states should be included. In Refs. \[\] a model is adopted in which the multi-dimensional problem is approximated by incorporating only the interaction of the dominant electronic states. The values for the resulting interaction matrix elements are obtained from a fit to the experimental data. This procedure provides both the deperturbed level energies to which the Dunham coefficients are fitted, as well as the superposition coefficients of the mixed states, $c_i$. The sensitivity coefficients for the perturbed states are given by $$K_\mu=\sum_i c_i^2 K_\mu^i, \label{eq:Kmuperturbed}$$ where $i=0$ refers to the state under consideration and $K_\mu^i$ are the sensitivity coefficients of the perturbing states. In particular for some levels where a strong interaction between $B\,^1\Sigma_u^+$ and $C\,^1\Pi_u$ states occurs the non-adiabatic interaction contributes significantly to the values of $K_{\mu}$. ![Representative high-resolution spectrum. Recording of the $R(0)$ line in the $B-X(4,0)$ band of (upper) with etalon markers (lower) and an -saturation spectrum (middle) for calibration. 
The line marked with an asterisk (\*) is the $\mathrm{a}_2$ hyperfine component of the $B-X(8,4)\,R(49)$ transition line in at $15\,808.13518$[$\mathrm{cm}^{-1}$]{} used as an absolute reference [@Xu2000]. Note that the and etalon spectra are taken at the fundamental, whereas the XUV axis shown is the 6th harmonic. \[fig:H2-spec-line\]](./Fig01-HD_spectrum.pdf){width="1\columnwidth"} This semi-empirical (SE) procedure yields $K_{\mu}$ coefficients for the Lyman lines (in the $B\,^1\Sigma_u^+$ - $X\,^1\Sigma_g^+$ system) and Werner lines (in the $C\,^1\Pi_u$ - $X\,^1\Sigma_g^+$ system) in the range (-0.05, +0.02). These results agree with values obtained from *ab initio* calculations (AI) within $\Delta K_{\mu} = K_{\mu}^{AI} - K_{\mu}^{SE} < 3 \times 10^{-4}$, i.e. at the 1% level, providing confidence that a reliable set of sensitivity coefficients for H$_2$ is available. For the HD molecule a set of $K_{\mu}$ coefficients was obtained via *ab initio* calculations.[@Ivanov2010]\ A full set of accurate laboratory wavelengths was obtained in spectroscopic studies with the Amsterdam narrowband extreme ultraviolet (XUV) laser setup. Coherent and tunable radiation at wavelengths $92-112$nm is produced starting from a Nd:YVO$_4$-pumped continuous wave (CW) ring dye laser, subsequent pulse amplification in a three-stage traveling-wave pulsed dye amplifier, frequency doubling in a KDP-crystal to produce UV-light, and third harmonic generation in a pulsed jet of Xe gas [@Ubachs1997]. The spectroscopy of the strong dipole-allowed transitions in the Lyman bands and Werner bands was performed in a configuration with a collimated beam of molecules perpendicularly crossing the overlapping XUV and UV beams via the method of $1+1$ resonance-enhanced photo-ionization. 
Calibration of the absolute frequency scale in the XUV was established via comparison of the CW-output of the ring laser with on-line recording of saturated absorption lines of and fringes of a Fabry-Perot interferometer, which was stabilized against a reference laser. Wavelength uncertainties, for the major part related to residual Doppler effects, AC-Stark induced effects and frequency chirp in the pulsed dye amplifier, as well as to statistical effects, were carefully addressed, leading to calibrated transition frequencies of the Lyman and Werner band lines in the range $92-112$nm at an absolute accuracy of $0.004$[$\mathrm{cm}^{-1}$]{} or $0.000004$nm, corresponding to a relative accuracy of $5 \times 10^{-8}$. A detailed description of the experimental procedures and of the results is given in a sequence of papers [@Philip2004a; @Philip2004b; @Ivanov2008b]. Similar investigations of the XUV-laser spectrum of HD were performed in view of the fact that HD lines were also observed in high-redshift spectra towards quasar sources [@Hollenstein2006; @Ivanov2008]. Additional spectroscopic studies of were performed assessing the level energies in these excited states in an indirect manner, thereby verifying and even improving the transition frequencies in the Lyman and Werner bands [@Salumbides2008; @Bailly2010]. The data set of laboratory wavelengths obtained for both and , has reached an accuracy that can be considered exact for the purpose of comparison with quasar data, where accuracies are never better than $10^{-7}$. A typical recording of an HD line is shown in Fig. \[fig:H2-spec-line\]. A full listing of all relevant parameters on the laboratory absorption spectrum of and , including information on the intensities, is made available in digital form in the supplementary material of Ref. \[\]. ![Comparison between the spectrum of Q2123-005 in the $3097-3106$Å range observed with HIRES-Keck[@Malec2010] (upper panel) and UVES-VLT[@vanWeerdenburg2011] (lower panel). 
For both panels, fits to the molecular hydrogen lines are shown as the solid green lines and their velocity components are indicated by the tick marks that are shown above the spectrum. Tick marks indicating the positions of Lyman-$\alpha$ lines and lines are shown with a slight offset. spectral line identifications are shown at the bottom. Residuals from the fit are shown above the observed spectra. \[fig:keck\_vlt\]](./Fig02-compare_keck_vlt.pdf){width="1\columnwidth"} High quality data on high redshift absorbing systems, in terms of signal-to-noise (S/N) and resolution, is available only for a limited number of objects. In view of the transparency window of the earth’s atmosphere ($\lambda > 300$nm) absorbing systems at $z>2$ will reveal a sufficient number of lines to perform a $\Delta\mu/\mu$ constraining analysis. The systems observed and analyzed so far are: Q0347-383 at $z_{\rm{abs}}= 3.02$, Q0405-443 at $z_{\rm{abs}}= 2.59$, Q0528-250 at $z_{\rm{abs}}= 2.81$, Q2123-005 at $z_{\rm{abs}}= 2.05$, and Q2348-011 at $z_{\rm{abs}}= 2.42$. Note that the objects denoted by “Q” are background quasars, which in most studies focusing on H$_2$ spectra are considered as background light sources, and are indicated by their approximate right ascension (in hours, minutes and seconds) as a first coordinate and by their declination (in degrees, arcminutes and arcseconds, north with “+” and south with “-”) as a second coordinate. Hence Q0347-383 refers to a bright quasar located at RA =03:49:43.64 and dec =-38:10:30.6 in so-called J2000 coordinates (the slight discrepancies in numbers relate to the fact that most quasars were discovered some 30 years ago, in the epoch when the B1950 coordinate system was in use; hence they derive their names from the older, shifted coordinate frame). These coordinates imply that Q0347-383 is observable during night-time observations in October and a few months before and after. 
This quasar source is known to be located at $z_{\rm{emis}}=3.21$ from a Lyman-$\alpha$ intensity peak in its emission spectrum, while the absorbing galaxy containing one or more clouds with H$_2$ is at $z_{\rm{abs}}=3.02$. From the analysis of the redshifted H$_2$ spectrum a 7-digit accuracy value for the redshift is obtained, in the case of Q0347-383 $z_{\rm{abs}}=3.024\,899\,0 (12)$ [@Reinhold2006]. Such an accurate determination of $z_{\rm{abs}}$ is required for the $\mu$-variation analysis, since it sets the exact value of the Doppler shift of the absorbing cloud. Relevant parameters for the analysis are the column density, which should be sufficient to yield absorption of at least the lowest $J$-levels, hence $N$() $> 10^{14}$cm$^{-2}$ and lower than $10^{19}$cm$^{-2}$ to avoid full saturation of the lines, and the brightness of the background quasar which should produce a high S/N in a reasonable amount of observing time. The absorbing system toward Q2123-005 has the favorable condition that the quasar background source ($R_\text{mag}= 15.8$) is the brightest of all bearing systems observed so far. This system has been observed from both the Very Large Telescope (Paranal, Chile), equipped with the Ultraviolet-Visible Echelle Spectrometer (UVES), and with the Keck Telescope (Hawaii, USA), equipped with the HIRES spectrometer. For a comparison of observed spectra see Fig. \[fig:keck\_vlt\]. The results from the analyses, $\Delta\mu/\mu = (5.6 \pm 5.5_{stat} \pm 2.9_{syst}) \times 10^{-6}$ for the Keck spectrum[@Malec2010] and $\Delta\mu/\mu = (8.5 \pm 3.6_{stat} \pm 2.2_{syst}) \times 10^{-6}$ for the VLT spectrum[@vanWeerdenburg2011], are tightly constraining and in good agreement with each other. This agreement eases concerns about systematic effects associated with each of the instruments. The brightness of the other background quasars is typically $R_\text{mag}=17.5$, while Q2348-011 is the weakest with $R_\text{mag}=18.3$. 
The latter only delivered a poor constraint for reasons of low brightness and from a second damped-Lyman absorber taking away many lines by its Lyman cutoff [@Bagdonaite2012]. ![image](./Fig03-CH_CO.pdf){width="95.00000%"} Since the number of suitable absorber systems at high redshift is rather limited, additional schemes are required to improve the current constraint on $\mu$ variation at redshifts $z>1$. Recent observations of vacuum ultraviolet transitions in carbon monoxide at high redshift[@Srianand2008; @Noterdaeme2009; @Noterdaeme2010; @Noterdaeme2011] make it a promising target species for probing variation of $\mu$. An additional advantage of the $A-X$ bands is that their wavelengths range from $130-154$nm, that is, at longer wavelengths than Lyman-$\alpha$, so that the spectral features in typical quasar spectra will fall outside the region of the so-called Lyman-$\alpha$ forest (provided that the emission redshift of the quasar $z_\text{em}$ is not too far from the redshift $z_\text{abs}$ of the intervening galaxy exhibiting the molecular absorption). The occurrence of the Lyman-$\alpha$ forest lines is a major obstacle in the search for $\mu$ variation via molecular hydrogen lines. In order to prepare for a $\mu$-variation analysis, accurate laboratory measurements on the $A-X$ system of CO were performed, using laser-based excitation and Fourier-transform absorption spectroscopy [@Salumbides2012], yielding transition frequencies at an accuracy better than $\Delta\lambda/\lambda = 3 \times 10^{-7}$. 
Also a calculation of $K_{\mu}$ sensitivity coefficients was performed, which required a detailed analysis of the structure of the $A^1\Pi$ state of CO and its perturbation by a number of nearby lying singlet and triplet states.[@Niu2013] Near-degeneracies in diatomic radicals\[sec:radicals\] ------------------------------------------------------ In the previous section we discussed sensitivity coefficients for transitions in diatomics with closed-shell electronic states, that is, molecules that have zero electronic orbital angular momentum. Let us now turn to diatomic open-shell molecules in a $^2\Pi$ electronic state that have a nonzero projection of orbital angular momentum along the molecular axis. The overall angular momentum $\mathbf{J}$ depends on the coupling between the orbital angular momentum $\mathbf{L}$, the spin angular momentum $\mathbf{S}$, and the rotational angular momentum $\mathbf{R}$. Depending on the energy scales that are associated with these momenta, the coupling between the vectors is described by the different Hund’s cases. When only rotation and spin-orbit coupling are considered, the Hamiltonian matrix for a $^2\Pi$ electronic state in a Hund’s case (a) basis is given by[@BrownCarrington] $$\begin{gathered} \begin{pmatrix} \tfrac{1}{2}A+Bz & -B\sqrt{z} \\ -B\sqrt{z} & -\tfrac{1}{2}A+B\left (z+2 \right ) \end{pmatrix},\\\text{with }z=\left (J+\tfrac{1}{2} \right )^2 -1 \label{eq:SOrotHmatrix}\end{gathered}$$ where $A$ and $B$ refer to the spin-orbit and rotational constant, respectively. For a given value of $J$, the lower energy level is labelled as $F_1$ and the upper as $F_2$. 
The eigenfunctions of the Hamiltonian matrix are $$\ket{F_2}=a_J\ket{\tfrac{3}{2}}-b_J\ket{\tfrac{1}{2}}\text{ and } \ket{F_1}=b_J\ket{\tfrac{3}{2}}+a_J\ket{\tfrac{1}{2}},$$ where $$a_J^2=\frac{X+(A-2B)}{2X},\text{ and } b_J^2=\frac{X-(A-2B)}{2X},$$ and $$X=\sqrt{(A-2B)^2+4B^2z}.$$ It is instructive to analyze the sensitivity coefficients of transitions within these molecules as a function of $A/B$. These transitions can be divided into two categories: transitions within a spin-orbit manifold and transitions between adjacent spin-orbit manifolds. In the limit of large $|A/B|$, transitions within an $\Omega$ manifold become purely rotational, having $K_\mu=-1$, while transitions between different $\Omega$ manifolds become purely electronic and therefore have $K_\mu=0$. When $A\sim Bz$, the spin-orbit manifolds become mixed and the sensitivity of the different types of transitions lies between $0$ and $-1$[@DeNijs2012]. Three distinct situations, illustrated for a single transition in the left-hand side of Fig. \[fig:CHandCO\], can be identified: (i) when $A=0$, all transitions have a sensitivity coefficient of $-1$; (ii) when $A=2B$, $a_J=b_J=1/\sqrt{2}$ and the spin-orbit manifolds are completely mixed, which also results in sensitivity coefficients of $K_\mu=-1$; (iii) finally, when $A=4B$, the levels $F_1(J)$ and $F_2(J-1)$ are degenerate for each value of $J$. This case (b) ‘behavior’ (zero spin-orbit splitting) gives rise to an enhancement of the sensitivity coefficient for transitions that connect these two states. However, it was shown by de Nijs *et al.*[@DeNijs2012] that the same conditions that lead to the enhancement of the sensitivity coefficients also suppress the transition strength, leading them to conclude that one-photon transitions between different spin-orbit manifolds of molecular radicals are either insensitive to a variation of $\mu$ or too weak to be of relevance in astrophysical searches for variation of $\mu$. 
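The algebra above is easy to check numerically. The sketch below diagonalizes the matrix of Eq. (\[eq:SOrotHmatrix\]) with numpy, verifies that the level splitting equals $X$, and confirms the $F_1(J)=F_2(J-1)$ degeneracy at $A=4B$ (all energies in units of $B$; the parameter values are arbitrary illustrations):

```python
import numpy as np

def pi2_levels(A, B, J):
    """Eigenvalues (F1, F2), with F1 <= F2, of the 2x2 Hamiltonian for a
    2Pi state in a Hund's case (a) basis, Eq. (SOrotHmatrix)."""
    z = (J + 0.5) ** 2 - 1.0
    H = np.array([[0.5 * A + B * z, -B * np.sqrt(z)],
                  [-B * np.sqrt(z), -0.5 * A + B * (z + 2.0)]])
    return np.linalg.eigvalsh(H)

A, B = 10.0, 1.0
for J in (0.5, 1.5, 2.5, 3.5):
    z = (J + 0.5) ** 2 - 1.0
    X = np.sqrt((A - 2 * B) ** 2 + 4 * B ** 2 * z)
    F1, F2 = pi2_levels(A, B, J)
    assert abs((F2 - F1) - X) < 1e-12   # splitting equals X

# At A = 4B, F1(J) is degenerate with F2(J - 1):
F1_J = pi2_levels(4.0, 1.0, 2.5)[0]
F2_Jm1 = pi2_levels(4.0, 1.0, 1.5)[1]
print(F1_J, F2_Jm1)   # both equal B(J^2 - 1/4) = 6.0
```

At $A=4B$ both levels reduce to $B(J^2-\tfrac{1}{4})$, which is the degeneracy exploited in the enhancement schemes discussed below.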
![Relation between $A/B$ and the value of $J$ at which a near degeneracy occurs for diatomic molecules in doublet and triplet $\Pi$ states. The curves were calculated using a simplified model that neglects lambda-doubling and hyperfine splitting. \[fig:2Pi3Pi\_res\]](./Fig04-2Pi_AB.pdf){width="1\columnwidth"} This problem disappears when two-photon transitions are considered, as was done by Bethlem and Ubachs[@BethlemUbachs2009] for CO in its metastable $a\,^3\Pi$ state, which is perhaps the best studied excited triplet system of any molecule.[@Freund1965; @Wicke1972; @Saykally1987; @Yamamoto1988] On the right-hand side of Fig. \[fig:CHandCO\], sensitivity coefficients for the $J_\Omega^p = 6_1^\pm - 8_0^\pm$ transitions in CO are shown as a function of $A/B$, calculated using a simplified Hamiltonian matrix for the $a\,^3\Pi$ state. Note that the “+” and “-” signs refer to $\Lambda$-doublet components of opposite parity. Crosses, also shown in the figure, indicate sensitivity coefficients that were calculated using a full molecular Hamiltonian.[@DeNijs2011] From the figure it can be seen that resonances occur near $A/B\sim 25$, which is close to the $A/B$ values for the and isotopologues. When combined, the $6_1^+\rightarrow 8_0^+$ transition in and the $8_0^-\rightarrow 6_1^-$ transition in have a sensitivity that is almost 500 times that of a pure rotational transition. An experiment to measure these transitions in a double-resonance molecular beam machine using two-photon microwave absorption is currently under construction in our laboratory.[@BethlemUbachs2009] The relation between $A/B$ and the value of $J$ at which a resonance is expected for two-photon transitions in diatomic molecules in doublet and triplet $\Pi$ states is shown in Fig. \[fig:2Pi3Pi\_res\]. From this figure it is easily seen that no such resonances occur in and , because for these molecules the value of $A/B$ results in a fine-structure splitting that is smaller than the rotational splitting. 
Most other molecules have resonances that occur only for relatively high values of $J$, making these systems difficult to access experimentally. For molecules with $^2\Pi$ electronic states, we see that only , , and have near degeneracies for $J<10$, whereas is the only molecule in a $^3\Pi$ electronic state with a resonance at low $J$.\ In the present discussion only rotational transitions between different spin-orbit manifolds were considered. Darling first suggested that $\Lambda$-doublet transitions in OH could serve as a probe for a time variation of $\alpha$ and $\mu$.[@Darling2003] These transitions were measured at high accuracy in a Stark-decelerated molecular beam by Hudson *et al.* [@Hudson2006] It was shown by Kozlov[@Kozlov2009] that $\Lambda$-doublet transitions in particular rotational levels of OH and CH have an enhanced sensitivity to $\mu$ variation, as a result of an inversion of the $\Lambda$-doublet ordering. For OH the largest enhancement occurs in the $J=9/2$ level of the $\Omega=3/2$ manifold, which lies 220 cm$^{-1}$ above the ground state and gives rise to $K_{\mu} \sim 10^{3}$. For CH the largest enhancement occurs in the $J=3/2$ level of the $\Omega=3/2$ manifold, which lies only 18 cm$^{-1}$ above the ground state; here, however, the enhancement is only on the order of 10. Recently, Truppe *et al.*[@Truppe2013] used Ramsey’s separated-zone oscillatory-field technique to measure the 3.3 and 0.7GHz $\Lambda$-doublet transitions in CH with relative accuracies of $9\times 10^{-10}$ and $3\times 10^{-8}$, respectively. By comparing their line positions with astronomical observations of CH (and OH) from sources in the local galaxy, they were able to constrain a $\mu$ dependence on matter density effects (chameleon scenario) at $\Delta\mu/\mu<2.2\times 10^{-7}$. 
Large amplitude motion in polyatomic molecules
==============================================

Tunneling inversion \[sec:inversion\]
-------------------------------------

In its electronic ground state, the ammonia molecule has the form of a regular pyramid, whose apex is formed by the nitrogen atom, while the base consists of an equilateral triangle formed by the three hydrogen atoms. Classically, the lowest vibrational states possess insufficient energy to allow the nitrogen atom to be found in the plane of the hydrogen atoms, as can be seen from the potential energy curve in Fig. \[fig:ammonia\_potential\]. If the barrier between the two potential wells were of infinite height, the two wells would be totally disconnected and each energy eigenvalue of the system would be doubly degenerate. However, as the barrier is finite, quantum-mechanical tunneling of the nitrogen atom through the plane of the hydrogen atoms couples the two wells. This tunneling motion lifts the degeneracy, and the energy levels are split into doublets. The tunneling through the barrier, which has a height of 2023 cm$^{-1}$, is responsible for an energy splitting of 0.8 cm$^{-1}$ in the ground vibrational state and of 36 cm$^{-1}$ in the first excited vibrational state. These energies are much smaller than the energy corresponding to the normal vibrational motion in a single well ($\tilde{\nu}_0=950$ cm$^{-1}$), since the inversion of the molecule is severely hindered by the presence of the potential barrier. 
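The doublet structure produced by a finite barrier is easy to reproduce with a one-dimensional model. The sketch below solves a quartic double well on a grid (arbitrary units with $\hbar^2/2m=1$; the potential is a generic stand-in, not the actual ammonia potential):

```python
import numpy as np

# H = -d^2/ds^2 + V(s), with V a double well: minima at s = +/-1,
# barrier height V(0) = 40 (arbitrary units, hbar^2/2m = 1).
N, L = 800, 3.0
s = np.linspace(-L, L, N)
h = s[1] - s[0]
V = 40.0 * (s**2 - 1.0) ** 2

# Second-order finite-difference kinetic energy plus diagonal potential.
H = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2 + np.diag(V)
E = np.linalg.eigvalsh(H)[:4]

doublet = E[1] - E[0]       # tunneling splitting of the lowest doublet
quantum = E[2] - E[0]       # roughly one vibrational quantum
print(f"splitting {doublet:.3f}, vibrational spacing {quantum:.1f}")
```

The tunneling splitting comes out far smaller than the vibrational spacing, the same hierarchy as in ammonia (0.8 cm$^{-1}$ versus 950 cm$^{-1}$), and it grows rapidly when the barrier is lowered.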
An analytical expression for the inversion frequency has been calculated by Dennison and Uhlenbeck [@Dennison1932], who used the Wentzel-Kramers-Brillouin approximation to obtain $$\begin{gathered} \omega_\mathrm{inv}=\frac{\omega_0}{\pi}e^{-G},\text{ with}\\\quad G=\frac{1}{\hbar}\int_{-s_0}^{s_0}\left [2\mu_{\text{red}} \left ( U(s) - E\right ) \right ]^{\tfrac{1}{2}}ds, \label{eq:WKBsplitting}\end{gathered}$$ with $\omega_0$ the energy of the vibration in one of the potential minima and $E$ the total vibrational energy. Townes and Schawlow already noted that “if the reduced mass is increased by a factor of 2, such as would be roughly done by changing from NH$_3$ to ND$_3$, $\nu_\text{inv}$ decreases by $e^{6\left ( \sqrt{2}-1\right )}$ or a factor of 11.”[@TownesSchawlow1975] Van Veldhoven *et al.*[@vanVeldhoven2004] and Flambaum and Kozlov[@FlambaumKozlov2007NH3] pointed out that the strong dependence of the inversion splitting on the reduced mass of the ammonia molecule can be exploited to probe a variation of $\mu$. ![Potential energy curve and lowest vibrational energy levels for the electronic ground state of NH$_3$ as a function of the distance between the nitrogen atom and the plane of the hydrogen atoms, $s$. The classical turning points for the ground vibrational state, $\pm s_0$, are indicated as well. Due to tunneling through the potential barrier each vibrational level is split into a symmetric and an antisymmetric component. \[fig:ammonia\_potential\]](./Fig05-ammonia_potential_v2.pdf){width="1\columnwidth"} To a first approximation the Gamow factor, $G$, is proportional to $\mu_\text{red}^{1/2}$ and the $\mu$ dependence of Eq.  can be expressed through $$\nu_\text{inv}=\frac{a_0}{\sqrt{\mu_\text{red}}}e^{-a_1\sqrt{\mu_\text{red}}}, \label{eq:invfit}$$ where $a_0$ and $a_1$ are fitting constants. The sensitivity coefficient for the inversion frequency is thus given by $$K_\mu^\text{inv}=-\tfrac{1}{2}a_1\sqrt{\mu_\text{red}}-\tfrac{1}{2}. 
\label{eq:Kmuinvfit}$$ From a fit through the inversion frequencies of the different isotopologues of ammonia we find $a_0 = 68$ and 88THzamu$^{1/2}$ and $a_1 = 4.7$ and 3.9amu$^{-1/2}$ for the $\nu_2=0$ and $\nu_2=1$ inversion modes, respectively. For NH$_3$, this results in sensitivity coefficients $K_\mu^\text{inv}=-4.2$ and $-3.6$. Alternatively, an expression for the sensitivity coefficients may be obtained from the derivative of Eq. . By explicitly taking the $\mu$ dependence of the vibrational energy term in the exponent of Eq.  into account, Flambaum and Kozlov derived[@FlambaumKozlov2007NH3] $$K_\mu^\text{inv} =-\frac{1}{2}\left (1 + G + \frac{G}{2}\frac{\omega_0}{\Delta U-\tfrac{1}{2}\omega_0} \right ).$$ This expression yields $K_\mu^\text{inv}=-4.4$ and $-3.4$, respectively, in fair agreement with the results obtained from the fit through the isotopologue data. ![Level diagram of the lower rotational energy levels of the $\nu_2=0$ and $\nu_2=1$ states of NH$_3$. Each level is characterized by the rotational quantum numbers $J_K$ and the symmetry label, $+/-$, of the rovibronic wave function. The inversion doubling in the $\nu_2=0$ state has been exaggerated for clarity. The dashed lines indicate symmetry-forbidden levels. \[fig:ammonia\_Kladders\]](./Fig06-K-laddersNH3_v2.pdf){width="0.9\columnwidth"} Astronomical observations of the inversion splitting of NH$_3$, redshifted to the radio range of the electromagnetic spectrum, led to stringent constraints at the level of $(-3.5\pm 1.2)\times 10^{-7}$ at $z=0.69$[@Kanekar2011] and $(0.8\pm 4.7)\times 10^{-7}$ at $z=0.89$[@Henkel2009]. These constraints were derived by comparing the inversion lines of ammonia with pure rotation lines of [@Henkel2009] and and [@Kanekar2011] and rely on the assumption that these different molecular species reside at the same redshift. The relatively high sensitivity of the inversion frequency in ammonia also allows for a test of the time independence of $\mu$ in the current epoch. 
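Eq. (\[eq:Kmuinvfit\]) is easily evaluated. The sketch below uses the fit constants quoted above together with a simple estimate of the reduced mass for the umbrella motion, $\mu_\text{red}\approx 3m_\text{H}m_\text{N}/(3m_\text{H}+3m_\text{H}\to m_\text{N})$, taken here as $3m_\text{H}m_\text{N}/(3m_\text{H}+m_\text{N})\approx 2.49$ amu (this choice of reduced mass is an assumption of the sketch):

```python
import math

m_H, m_N = 1.00783, 14.00307               # atomic masses, amu
mu_red = 3 * m_H * m_N / (3 * m_H + m_N)   # ~2.49 amu, umbrella-motion estimate

def K_inv(a1):
    """Sensitivity coefficient of the inversion frequency,
    K = -a1 sqrt(mu_red)/2 - 1/2, Eq. (Kmuinvfit)."""
    return -0.5 * a1 * math.sqrt(mu_red) - 0.5

K0, K1 = K_inv(4.7), K_inv(3.9)            # nu2 = 0 and nu2 = 1 fit constants
print(f"K_mu(nu2=0) ~ {K0:.2f}, K_mu(nu2=1) ~ {K1:.2f}")  # ~ -4.2 and -3.6
```

With this reduced mass the fitted constants reproduce the quoted sensitivity coefficients of $-4.2$ and $-3.6$ for the two inversion modes.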
A molecular fountain based on a Stark-decelerated beam of ammonia molecules has been suggested as a novel instrument to perform such a measurement [@Bethlem2008]. By comparing the inversion splitting with an appropriate frequency standard, $\Delta\mu/\mu$ can be constrained or a possible drift may be detected.

Near degeneracies between inversion and rotation energy {#sec:inversion_degeneracies}
-------------------------------------------------------

The sensitivity coefficients for the inversion frequency in the different isotopologues of ammonia are two orders of magnitude larger than those found for rovibronic transitions in molecular hydrogen and carbon monoxide, and one order of magnitude larger than that of a pure vibrational transition. Yet, Eq.  predicts even higher sensitivities if the inversion splitting becomes comparable to the rotational splitting, as this may introduce accidental degeneracies. Such degeneracies do not occur in the vibrational ground state of ammonia, but may happen in excited $\nu_2$ vibrational states. In Fig. \[fig:ammonia\_Kladders\], a rotational energy diagram of ammonia in the $\nu_2=0$ and $\nu_2=1$ states is shown. As can be seen in this figure, the larger inversion splitting in the $\nu_2=1$ state results in smaller energy differences between different rotational states within each $K$ manifold. This is in particular the case for the $J_K^s=1_1^-$ and $2_1^-$ levels, which have an energy difference of only 140GHz. Using Eq.  we find $K_\mu = 18.8$ for this inversion-rotation transition. As NH$_3$ and ND$_3$ are symmetric top molecules, transitions that have $\Delta K\neq 0$ are not allowed, and this reduces the number of possible accidental degeneracies. Kozlov *et 
al.*[@Kozlov2010] investigated transitions in the $\nu_2=0$ state of the asymmetric isotopologues of ammonia (NH$_2$D, ND$_2$H), in which transitions with $\Delta K \neq 0$ are allowed, but found no sensitive transitions, mainly because the inversion splitting in the $\nu_2=0$ mode is much smaller than the rotational splitting. It is interesting to note that “forbidden” transitions with $\Delta K=\pm 3$ gain amplitude in the $\nu_2=1$ state of ammonia due to perturbative mixing of the (accidentally) near-degenerate $J_K^s=3_0^+$ and $3_3^-$ levels [@Laughton1976]. Using Eq.  to estimate the sensitivity coefficient of the 2.9GHz transition between these two levels, we find $K_\mu=-938$. However, since these levels both have positive overall parity, a two-photon transition is required to measure this transition directly. The hydronium ion (H$_3$O$^+$) has a structure similar to that of ammonia but experiences a much smaller barrier to inversion. As a consequence, the inversion splitting in the ground vibrational state of hydronium is much larger than for ammonia. Kozlov and Levshakov[@KozlovLevshakov2011] found that pure inversion transitions in hydronium have a sensitivity of $K_\mu^\text{inv}=-2.5$ and, in addition, identified several mixed transitions with sensitivity coefficients ranging from $K_\mu=-9.0$ to $+5.7$. Mixed transitions in the asymmetric hydronium isotopologues H$_2$DO$^+$ and D$_2$HO$^+$ possess sensitivity coefficients ranging from $K_\mu=-219$ to $+11$[@Kozlov2011].

Internal rotation: from methanol to methylamine\[sec:internal\_rotation\]
-------------------------------------------------------------------------

While inversion doublets of ammonia-like molecules exhibit large sensitivity coefficients, even larger sensitivity coefficients arise in molecules that exhibit hindered internal rotation, in which one part of a molecule rotates with respect to the remainder. This is another example of a classically forbidden tunneling motion that is frequently encountered in polyatomic molecules. 
The interaction between such hindered rotation, also referred to as torsion, and the overall rotation of the molecule, as well as its quantum-mechanical description, has been investigated since the 1950s.[@Kivelson1954; @LinSwalen1959; @Herschbach1959; @Kirtman1962; @Lees1968; @Lees1973] In this section we outline the procedure for obtaining the sensitivity coefficients in internal rotor molecules containing a $C_{3v}$ symmetry group and show that a particular combination of molecular parameters can be identified that results in the highest sensitivity coefficients. The fact that methanol possesses transitions with enhanced sensitivity coefficients was discovered independently by Jansen *et al.*[@Jansen2011PRL] and by Levshakov *et al.*[@LevshakovKozlov2011]\ One of the simplest molecules that exhibit hindered internal rotation is methanol (CH$_3$OH). Methanol, schematically depicted on the right-hand side of Fig. \[fig:methanol\_potential\], consists of a methyl group (CH$_3$) with a hydroxyl group (OH) attached. The overall rotation of the molecule is described by three rotational constants $A$, $B$, and $C$, associated with the moments of inertia $I_a$, $I_b$, and $I_c$, respectively, along the three principal axes of the molecule. The total angular momentum of the molecule is given by the quantum number $J$, while the projection of $J$ onto the molecule-fixed axis is given by $K$. ![Variation of the potential energy of methanol with the relative rotation of the CH$_3$ group with respect to the OH group, and a schematic representation of the molecule. Shown are the $J=1$, $|K|=1$ energies of the lowest torsion-vibrational levels. \[fig:methanol\_potential\]](./Fig07-potential_and_structure){width="1\columnwidth"} In addition to the overall rotation, the flexible C$-$O bond allows the methyl group to rotate with respect to the hydroxyl group, denoted by the relative angle $\gamma$. This internal rotation is not free but hindered by a threefold potential barrier,[@Swalen1955] shown on the left-hand side of Fig. 
\[fig:methanol\_potential\], with minima and maxima that correspond to the staggered and eclipsed configurations of the molecule, respectively. The vibrational levels in this well are denoted by $\nu_t$. When we neglect the slight asymmetry of the molecule as well as higher-order terms in the potential and centrifugal distortions, the lowest-order Hamiltonian can be written as $$\begin{gathered} H = \frac{1}{2} \frac{P_a^2}{I_a} + \frac{1}{2} \frac{P_b^2}{I_b} + \frac{1}{2} \frac{P_c^2}{I_c} + \frac{1}{2} \frac{1}{I_\text{red}} p_{\gamma}^2 + \frac{1}{2} V_3 (1 - \cos {3\gamma}),\\ \text{ with} \quad I_\text{red} = \frac{I_{a1} I_{a2}}{I_a}. \label{eq:H_IR}\end{gathered}$$ The first three terms describe the overall rotation around the $a$, $b$, and $c$ axes, respectively. The fourth term describes the internal rotation around the $a$ axis, with $I_\text{red}$ the reduced moment of inertia along the $a$ axis, $I_{a2}$ the moment of inertia of the methyl group along its own symmetry axis, and $I_{a1}$ the part of $I_a$ that is attributed to the OH group; $I_{a1} = I_a - I_{a2}$. Note that in the derivation of Eq. (\[eq:H\_IR\]) an axis transformation was applied in order to remove the coupling between internal and overall rotation. The fifth term is the lowest-order term arising from the torsional potential. If the potential were infinitely high, the threefold barrier would result in three separate harmonic potentials, whereas the absence of the potential barrier would result in doubly degenerate free-rotor energy levels. In the case of a finite barrier, quantum-mechanical tunneling mixes the levels in the different wells of the potential. As a result, each rotational level is split into three levels of different torsional symmetry, labeled as $A$, $E1$, or $E2$. 
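The torsional part of Eq. (\[eq:H\_IR\]) can be diagonalized directly in a free-rotor basis $e^{im\gamma}$, in which $F m^2$ sits on the diagonal and the $\cos 3\gamma$ term couples $m$ to $m\pm 3$. The sketch below does this with approximate methanol constants ($F\approx 27.6$ cm$^{-1}$, $V_3\approx 373$ cm$^{-1}$; both assumed values), and then estimates the sensitivity of the resulting $A$-$E$ splitting by rescaling $F\propto 1/\mu$:

```python
import numpy as np

def torsional_levels(F, V3, mmax=40):
    """Eigenvalues of H_tors = F p^2 + (V3/2)(1 - cos 3 gamma) in the
    free-rotor basis e^{i m gamma}: F m^2 + V3/2 on the diagonal,
    -V3/4 coupling m to m +/- 3."""
    m = np.arange(-mmax, mmax + 1).astype(float)
    H = np.diag(F * m**2 + 0.5 * V3)
    for i in range(len(m) - 3):
        H[i, i + 3] = H[i + 3, i] = -0.25 * V3
    return np.linalg.eigvalsh(H)

F, V3 = 27.6, 373.0                  # cm^-1, approximate methanol values
E = torsional_levels(F, V3)
split = E[1] - E[0]                  # A-E splitting of the nu_t = 0 level
print(f"A-E splitting ~ {split:.1f} cm^-1")

# Under mu -> mu(1 + eps) the kinetic constant F scales as F/(1 + eps),
# while the barrier V3 (an electronic property) stays fixed.
eps = 1e-5
Ep = torsional_levels(F / (1.0 + eps), V3)
K_mu = ((Ep[1] - Ep[0]) - split) / (split * eps)
print(f"K_mu of the torsional splitting ~ {K_mu:.2f}")
```

The splitting comes out of order 9 cm$^{-1}$, and the numerical derivative gives a sensitivity near $-2.5$, the value quoted below for a pure torsional transition in methanol.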
Following Lees [@Lees1973], $E1$ and $E2$ symmetries are labeled by the sign of $K$; i.e., levels with $E1$ symmetry are denoted by a positive $K$ value, whereas levels with $E2$ symmetry are denoted by a negative $K$ value. For $K\neq 0$, $A$ levels are further split into $+/-$ components by the molecular asymmetry. For $K = 0$, only single $E$ and $A^{+}$ levels exist. The splitting between the different symmetry levels is related to the tunneling frequency between the different torsional potential wells and is therefore very sensitive to the reduced moment of inertia, similar to the inversion of the ammonia molecule. It was shown by Jansen *et al.*[@Jansen2011PRL; @Jansen2011] that a pure torsional transition in methanol would have a sensitivity coefficient of $K_\mu=-2.5$. However, pure torsional transitions are forbidden, since the levels involved possess different torsional symmetries. Sensitivity coefficients for allowed transitions in methanol and other internal rotor molecules can be obtained by calculating the level energies as a function of $\mu$ and taking the numerical derivative, in accordance with Eq. (\[eq:Kmu\]). This can be achieved by scaling the different parameters in the molecular Hamiltonian according to their $\mu$ dependence. The physical interpretation of the lowest-order constants is straightforward and their scaling relations can be derived unambiguously. Higher-order parameters pose a problem, since their physical interpretation is not always clear. Jansen *et al.*[@Jansen2011PRL; @Jansen2011] derived the scaling relations for these higher-order constants by considering them as effective products of lower-order torsional and rotational operators. Ilyushin *et al.* showed that the scaling of the higher-order constants contributes only marginally to the sensitivity coefficient of a transition. 
[@Ilyushin2012] Jansen *et al.*[@Jansen2011PRL; @Jansen2011] employed the state-of-the-art effective Hamiltonian that is implemented in the [belgi]{} code[@Hougen1994] together with a set of 119 molecular parameters.[@Xu1999; @Xu2008] Similar calculations were performed by Levshakov *et al.* using a simpler model containing only six molecular parameters.[@LevshakovKozlov2011] The two results are in excellent agreement, and the sensitivity coefficients for transitions in methanol range from $-42$ for the $5_1\rightarrow 6_0 A^+$ transition at 6.6GHz to $+53$ for the $5_2\rightarrow 4_3 A^+$ transition at 10.0GHz. ![Observed spectrum of the $3_{-1} - 2_0E$ methanol transition observed in the gravitationally lensed object PKS1830-211 with the Effelsberg radio telescope.[@Bagdonaite2013] \[fig:methanolobs\]](./Fig08-methanol_PKS_v3.pdf){width="1\columnwidth"} The large number of both positive and negative sensitivity coefficients makes methanol a preferred target system for probing a possible variation of $\mu$, since it makes it possible to test the variation of $\mu$ using transitions in a single molecular species, thereby avoiding the many systematic effects that plague tests based on comparing transitions in different molecules. Following the recent detection of methanol in the gravitationally lensed object PKS1830-211 (PKS refers to the Parkes catalog of celestial objects, with 1830 and $-211$ the RA and dec coordinates, as for quasars; the PKS1830-211 system is a radio-loud quasar at $z_{\rm{emis}}=2.51$) in an absorbing galaxy at a redshift of $z_{\rm{abs}}=0.89$ [@Muller2011], Bagdonaite *et al.*[@Bagdonaite2013] used four transitions that were observed in this system with the 100m radio telescope in Effelsberg to constrain $\Delta\mu/\mu$ at $(0.0\pm 1.0)\times 10^{-7}$ at a look-back time of 7 billion years. 
A spectrum of the $3_{-1} - 2_0E$ methanol line, the line with the largest sensitivity to $\mu$ variation observed at high redshift, is shown in Fig. \[fig:methanolobs\]. The enhancements discussed for methanol generally occur in any molecule that contains an internal rotor with $C_{3v}$ symmetry. Jansen *et al.* constructed a simple model that predicts whether a molecule with such a $C_{3v}$ group is likely to have large sensitivity coefficients.[@Jansen2011PRL] This “toy” model decomposes the energy of the molecule into a pure rotational and a pure torsional part, *cf.* Eq. . The rotational part is approximated by the well-known expression for the rotational energy levels of a slightly asymmetric top $$E_\text{rot}(J,K)=\frac{1}{2}\left (B+C \right )J\left (J+1 \right )+\left (A-\frac{B+C}{2} \right )K^2,$$ with $A$, $B$, and $C$ the rotational constants along the $a$, $b$, and $c$ axes of the molecule, respectively. The torsional energy contribution is approximated by a Fourier expansion as [@LinSwalen1959] $$E_\text{tors}(K)=F \left [a_0+a_1\cos\left \{ \frac{2\pi}{3}\left (\rho K +\sigma \right )\right \} \right ], \label{eq:Etors}$$ ![The product $f(s)\sin\left (\tfrac{\pi}{3}\rho \right )$, which is a measure of the maximum value of $K_\mu$ (see text). Also shown are data points for molecules containing an internal rotor with $C_{3v}$ symmetry for which the sensitivity coefficients have been calculated. \[fig:toy\_model\]](./Fig09-toy_modelv2){width="1\columnwidth"} where $F\simeq\frac{1}{2}\hbar^2 I_{\text{red}}^{-1}$ is the constant of the internal rotation, $\rho\simeq I_{a2}/I_a$ is a dimensionless constant reflecting the coupling between internal and overall rotation, and $\sigma=0,\pm 1$ is a constant relating to the torsional symmetry. The expansion coefficients $a_0$ and $a_1$ depend on the shape of the torsional potential. 
Since we are mainly interested in the torsional energy difference, $a_0$ cancels, and $a_1$ is obtained from $$a_1 = A_1s^{B_1}e^{-C_1\sqrt{s}},$$ with $A_1=-5.296$, $B_1=1.111$, and $C_1=2.120$ [@Jansen2011]. The dimensionless parameter $s=4V_3/9F$, with $V_3$ the height of the barrier, is a measure of the effective potential. The sensitivity of a pure torsional transition is given by $K_\mu^\text{tors}=(B_1-1)-\tfrac{1}{2}C_1\sqrt{s}$. Inserting the different terms in Eq.  reveals that the sensitivity coefficient of a transition is roughly proportional to $f(s)\sin\left (\tfrac{\pi}{3}\rho \right )$, with $f(s)=-2a_1\left (K_\mu^\text{tors}+1 \right )$. This function is plotted in Fig. \[fig:toy\_model\] for several values of $\rho$. The curves can be regarded as the maximum sensitivity one may hope to find in a molecule with a certain $F$ and transition energy $h\nu$. The maximum sensitivity peaks at $s=4$ and $\rho=1$. From the figure it is seen that only methanol, and to a lesser extent methyl mercaptan, lie close to this maximum. Indeed, the highest sensitivities are found in these molecules. 
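The toy-model formulas can be evaluated in a few lines. With approximate methanol constants ($V_3\approx 373$ cm$^{-1}$, $F\approx 27.6$ cm$^{-1}$; assumed values) one recovers both the effective barrier parameter $s\approx 6$ and the pure-torsional sensitivity $K_\mu^\text{tors}\approx -2.5$:

```python
import math

A1, B1, C1 = -5.296, 1.111, 2.120         # fit constants from the text

def toy_model(F, V3):
    """Barrier parameter s = 4 V3 / 9F, Fourier coefficient a1, and the
    sensitivity K_tors of a pure torsional transition in the toy model."""
    s = 4.0 * V3 / (9.0 * F)
    a1 = A1 * s**B1 * math.exp(-C1 * math.sqrt(s))
    K_tors = (B1 - 1.0) - 0.5 * C1 * math.sqrt(s)
    return s, a1, K_tors

s, a1, K_tors = toy_model(27.6, 373.0)    # approximate methanol values
print(f"s ~ {s:.2f}, a1 ~ {a1:.3f}, K_tors ~ {K_tors:.2f}")
```

Scanning $F$ and $V_3$ over the ranges typical of light internal rotors reproduces the curves of Fig. \[fig:toy\_model\], with the sensitivity maximized near $s=4$.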
It is unlikely that other molecules are more sensitive than methanol, since the requirement of a large value of $\rho$ and a relatively low effective barrier favors light molecules.\

[l l l c c]{}
 & electronic state & origin & $K_\mu$ & Ref.\
Diatomic molecules & & & &\
H$_2$ & $B ^1\Sigma^+_u \leftarrow X ^1\Sigma^+_g/C ^1\Pi_u \leftarrow X ^1\Sigma^+_g$ & $E_\text{el}/ E_\text{vib}$ & $-0.054<K_\mu<+0.019$ & \[\]\
HD & $B ^1\Sigma^+_u \leftarrow X ^1\Sigma^+_g/C ^1\Pi_u \leftarrow X ^1\Sigma^+_g$ & $E_\text{el}/ E_\text{vib}$ & $-0.052<K_\mu<+0.012$ & \[\]\
 & $X ^2\Pi$ & $E_\text{fs}/E_\text{rot}/E_\Lambda$ & $-6.2<K_\mu<+2.7$ & \[\]\
 & $X ^2\Pi$ & $E_\text{fs}/E_\text{rot}/E_\Lambda$ & $-67<K_\mu<+18$ & \[\]\
 & $X ^2\Pi$ & $E_\text{fs}/E_\text{rot}/E_\Lambda$ & $-460<K_\mu<-0.50$ & \[\]\
 & $X ^2\Pi$ & $E_\text{fs}/E_\text{rot}/E_\Lambda$ & $-38.9<K_\mu<+6.81$ & \[\]\
 & $X ^2\Pi$ & $E_\text{fs}/E_\text{rot}/E_\Lambda$ & $-4.24<K_\mu<-0.95$ & \[\]\
 & $a ^4\Sigma^- \leftarrow X ^2\Pi \,(\nu=0,1)$ & $E_\text{el}/E_\text{rot}$ & $-185.8 < K_\mu < +126.9$ & \[\]\
CO & $A ^1\Pi \leftarrow X ^1\Sigma^+$ & $E_\text{el}/ E_\text{vib}$ & $-0.071<K_\mu<+0.003$ & \[\]\
CO & $a ^3\Pi$ & $E_\text{fs}/ E_\text{rot}$ & $-334<K_\mu<+128$ & \[\]\
Polyatomic molecules & & & &\
NH$_3$ & $\tilde{X}$ & $E_\text{inv}$ & $-4.2$ & \[\]\
ND$_3$ & $\tilde{X}$ & $E_\text{inv}$ & $-5.6$ & \[\]\
NH$_2$D/ND$_2$H & $\tilde{X}$ & $E_\text{inv}/ E_\text{rot}$ & $-1.54<K_\mu<+0.10$ & \[\]\
H$_3$O$^+$ & $\tilde{X}$ & $E_\text{inv}$ & $-2.5$ & \[\]\
H$_3$O$^+$ & $\tilde{X}$ & $E_\text{inv}/ E_\text{rot}$ & $-9.0<K_\mu<+5.7$ & \[\]\
H$_2$DO$^+$/D$_2$HO$^+$ & $\tilde{X}$ & $E_\text{inv}/ E_\text{rot}$ & $-219<K_\mu<+11.0$ & \[\]\
 & $\tilde{X}$ & $E_\text{inv}/E_\text{rot}$ & $-36.5<K_\mu<+13.0$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-88<K_\mu<+330$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-14.8<K_\mu<+12.2$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-3.7 < K_\mu < -0.5$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-1.34 < K_\mu < +0.06$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-1.07<K_\mu<-0.03$ & \[\]\
 & $\tilde{X}$ & $E_\text{tors}/ E_\text{rot}$ & $-1.36<K_\mu<-0.27$ & \[\]\
*l-* & $\tilde{X} ^2\Pi$ & $E_\text{RT}/ E_\text{vib}/ E_\text{rot}$ & $-19<K_\mu<+742$ & \[\]\
CH$_3$NH$_2$ & $\tilde{X}$ & $E_\text{inv}/E_\text{tors}/E_\text{rot}$ & $-19<K_\mu<+24$ & \[\]\

  : Overview of molecular species of relevance for $\mu$-variation studies, the origin of the transitions involved, and the range of calculated sensitivity coefficients $K_\mu$. []{data-label="tab:moleculelist"}

![Current astrophysical constraints on $\Delta\mu/\mu$ based on H$_2$, NH$_3$, and CH$_3$OH data. The constraints at higher redshift were derived from optical transitions of H$_2$ in the line of sight of 5 different quasars (Q0528-250[@King2008; @King2011], Q2123-0050[@Malec2010; @vanWeerdenburg2011], Q0347-383[@King2008; @WendtMolaro2012], Q2348-011[@Bagdonaite2012], and Q0405-443[@King2008]) and typically yield $\Delta\mu/\mu \lesssim 10^{-5}$. At intermediate redshift, the most stringent tests are based on microwave and radio-frequency transitions in methanol (PKS1830-211[@Bagdonaite2013]) and ammonia (B0218+357[@Murphy2008; @Kanekar2011] and PKS1830-211[@Henkel2009]) and constrain $\Delta\mu/\mu$ at the $10^{-7}$ level. \[fig:mu\_vs\_z\]](./Fig10-mu_vs_z){width="1\columnwidth"} We have seen that molecules that undergo inversion or internal rotation may possess transitions that are extremely sensitive to a possible variation of $\mu$. A molecule that exhibits both types of motion, and has also been observed in PKS1830-211[@Muller2011], is methylamine (CH$_3$NH$_2$): it combines hindered internal rotation of the methyl (CH$_3$) group with respect to the amino (NH$_2$) group with tunneling associated with wagging of the amino group[@Tsuboi1964]. The coupling between the internal rotation and the overall rotation in methylamine is rather strong, resulting in a large value of $\rho$, which is favorable for obtaining large enhancements of the sensitivity coefficients. Ilyushin *et 
al.*[@Ilyushin2012] have calculated sensitivity coefficients for many transitions in methylamine and found that the transitions can be grouped into pure rotation transitions with $K_\mu=-1$, pure inversion transitions with $K_\mu\approx-5$, and mixed transitions with $K_\mu$ ranging from $-19$ to $+24$.

Summary and outlook\[sec:summary\]
==================================

In this paper we discussed several molecular species that are currently being used in studies aimed at constraining or detecting a possible variation of the proton-to-electron mass ratio. These molecules, together with a range of other species of relevance for $\mu$-variation studies, are listed in Table \[tab:moleculelist\]. From this table it can be seen that the highest sensitivities are found in open-shell free radicals and polyatomic molecules, due to the systematic occurrence of near-degenerate energy levels in these molecules. Several of these molecules have already been observed at high redshift; others have been observed in the interstellar medium of our local galaxy, providing a prospect of observation at high redshift in the future, whereas still other molecules, in particular low-abundance isotopic species, might be suitable systems for tests of $\mu$ variation in the present epoch. Astrophysical and laboratory studies are complementary, as they probe $\mu$ variation on different time scales. The most stringent constraint in the current epoch sets $|\dot{\mu}/\mu|<6\times 10^{-14}\,$yr$^{-1}$ and was obtained from comparing rovibrational transitions in SF$_6$ with a Cs fountain clock.[@Shelkovnikov2008] On a cosmological time scale, at the highest observable redshifts, molecular hydrogen remains the target species of choice, limiting a cosmological variation of $\mu$ to below $|\Delta\mu/\mu|<1\times 10^{-5}$.[@Malec2010; @vanWeerdenburg2011] Current constraints derived from astrophysical data are summarized graphically in Fig. \[fig:mu\_vs\_z\]. 
At somewhat lower redshifts ($z\sim 1$), constraints derived from highly sensitive transitions in ammonia and methanol probed by radio astronomy are now producing limits on a varying $\mu$ of $|\Delta\mu/\mu| < 10^{-7}$.[@Muller2011; @Henkel2009; @Kanekar2011; @Bagdonaite2013] This result, obtained from observations of methanol at redshift $z=0.89$, represents the most stringent bound on a varying constant found so far.[@Bagdonaite2013; @Bagdonaite2013a] Its redshift corresponds to a look-back time of 7 Gyrs (half the age of the Universe), and it translates into $\dot{\mu}/\mu = (1.4 \pm 1.4) \times 10^{-17}$/yr if a linear rate of change is assumed. As it is likely that $\mu$ changes faster than or at the same rate as $\alpha$, *cf.* Eq. (\[GUT\]), this result is even more constraining than the bounds on varying constants obtained with optical clocks in the laboratory.[@Rosenband2008]

We thank Julija Bagdonaite, Adrian de Nijs, and Edcel Salumbides (VU Amsterdam) as well as Julian Berengut (UNSW Sydney) for helpful discussions. This research has been supported by the FOM-program ‘Broken Mirrors & Drifting Constants’. P. J. and W. U. acknowledge financial support from the Templeton Foundation. H. L. B. acknowledges financial support from NWO via a VIDI grant and from the ERC via a Starting Grant.
[107]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\ 12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty [****, ()](\doibase 10.1103/PhysRevD.69.115006) [****,  ()](\doibase 10.1007/BF03045991) [****, ()](\doibase 10.1103/RevModPhys.72.1149) [****,  ()](\doibase 10.1103/PhysRevD.25.1527) [****, ()](\doibase 10.1103/PhysRevLett.88.031302) [****, ()](\doibase http://dx.doi.org/10.1016/S0370-1573(99)00083-6) [****,  ()](\doibase 10.1103/PhysRevLett.93.171104) [****,  ()](\doibase 10.1103/PhysRevLett.82.884) [****,  ()](\doibase 10.1103/PhysRevLett.82.888) [****,  ()](\doibase 10.1103/PhysRevLett.107.191101) [****,  ()](\doibase 10.1111/j.1365-2966.2012.20852.x) [****,  ()](\doibase 10.1103/PhysRevC.73.055501) [****, ()](\doibase 10.1103/PhysRevA.84.042510) [****,  ()](\doibase 10.1103/PhysRevLett.95.041301) [****,  ()](\doibase 10.1103/PhysRevA.80.022118) [****,  ()](\doibase 10.1051/0004-6361/201219042) [****,  ()](\doibase 10.1007/s10052-002-0976-0) [****,  ()](\doibase 10.1103/PhysRevA.71.032505) [****, ()](\doibase 10.1103/PhysRevA.86.044501) [****,  ()](\doibase 10.1063/1.4724320) @noop [ ]{} [****, ()](\doibase 10.1103/PhysRevLett.100.150801) [****, ()](\doibase 10.1103/RevModPhys.75.403),  [****,  ()](\doibase 10.1002/andp.201300010) [****,  ()](\doibase 10.1103/PhysRevLett.96.151101) [****, ()](http://dx.doi.org/10.1016/j.jms.2006.12.004) [****,  ()](\doibase 10.1103/PhysRevA.86.022510) [****,  ()](\doibase 10.1088/1475-7516/2007/01/013) [****,  ()](\doibase 10.1103/PhysRevLett.100.043202) [****,  ()](\doibase 10.1103/PhysRevA.83.062514) [****,  ()](\doibase 10.1103/PhysRevLett.99.150801) [****, ()](\doibase 10.1039/B819099B) [****,  ()](\doibase 10.1103/PhysRevA.73.034101) [****,  ()](\doibase 10.1140/epjd/e2004-00160-9) [****,  ()](\doibase 10.1103/PhysRevLett.98.240801) [****,  
()](http://stacks.iop.org/0004-637X/726/i=2/a=65) [****,  ()](\doibase 10.1103/PhysRevLett.106.100801) [****,  ()](\doibase 10.1103/PhysRevA.87.032104) [****,  ()](\doibase 10.1103/PhysRevLett.104.070802) [****,  ()](\doibase 10.1103/PhysRevLett.109.230801) [****,  ()](\doibase 10.1103/PhysRevLett.100.043201) [****,  ()](\doibase 10.1126/science.1154622) [****,  ()](\doibase 10.1088/0953-4075/38/9/002) @noop [****,  ()]{} [****,  ()](http://www.jetpletters.ac.ru/ps/1187/article_17908.shtml) [****,  ()](\doibase 10.1134/S0021364006080017) [****, ()](\doibase 10.1103/PhysRev.41.721) [****,  ()](\doibase 10.1038/092231d0) [****,  ()](\doibase http://dx.doi.org/10.1006/jmsp.2000.8085) [****,  ()](\doibase 10.1080/00268971003649307) [****,  ()](\doibase 10.1364/JOSAB.14.002469) [****,  ()](\doibase 10.1139/v04-042) [****,  ()](\doibase 10.1007/s00340-004-1470-1) [****, ()](http://stacks.iop.org/0953-4075/41/i=3/a=035702) [****,  ()](http://stacks.iop.org/0953-4075/39/i=8/a=L02) [****,  ()](\doibase 10.1103/PhysRevLett.100.093007) [****,  ()](\doibase 10.1103/PhysRevLett.101.223001) [****,  ()](\doibase 10.1080/00268970903413350) [****,  ()](\doibase 10.1111/j.1365-2966.2009.16227.x) [****,  ()](\doibase 10.1103/PhysRevLett.106.180802) [****, ()](\doibase 10.1111/j.1365-2966.2011.20319.x) [****,  ()](\doibase 10.1103/PhysRevA.86.032501) [****,  ()](\doibase 10.1103/PhysRevA.84.052509) [****, ()](\doibase 10.1051/0004-6361:200809727) [****,  ()](\doibase 10.1051/0004-6361/200912330) [****,  ()](\doibase 10.1051/0004-6361/201015147) [****,  ()](\doibase 10.1051/0004-6361/201016140) [****,  ()](\doibase 10.1080/00268976.2013.793889) @noop [**]{} (, ) [****, ()](\doibase 10.1063/1.1697141) [****,  ()](\doibase 10.1063/1.1677113) [****,  ()](\doibase 10.1063/1.453473) [****,  ()](\doibase 10.1063/1.455091) [****,  ()](\doibase 10.1103/PhysRevLett.91.011301) [****, ()](\doibase 10.1103/PhysRevLett.104.070802) [****,  ()](\doibase 10.1038/ncomms.3600) [****, ()](\doibase 
10.1103/PhysRev.41.313) @noop [**]{} (, ) [****,  ()](http://stacks.iop.org/2041-8205/728/i=1/a=L12) [****,  ()](\doibase 10.1051/0004-6361/200811475) [****,  ()](\doibase 10.1140/epjst/e2008-00809-5) [****,  ()](http://stacks.iop.org/0953-4075/43/i=7/a=074003) [****,  ()](\doibase 10.1016/0022-2852(76)90354-4) [****,  ()](\doibase 10.1103/PhysRevA.83.052123) [****, ()](\doibase 10.1063/1.1739886) [****, ()](\doibase 10.1103/RevModPhys.31.841) [****, ()](\doibase 10.1063/1.1730343) [****, ()](\doibase 10.1063/1.1733049) [****, ()](\doibase 10.1063/1.1668221) [****,  ()](\doibase 10.1086/152368) [****,  ()](http://stacks.iop.org/0004-637X/738/i=1/a=26) [****, ()](\doibase 10.1063/1.1742449) [****,  ()](\doibase 10.1103/PhysRevA.84.062505) [****,  ()](\doibase 10.1103/PhysRevA.85.032505) [****,  ()](\doibase DOI: 10.1006/jmsp.1994.1047),  [****,  ()](\doibase 10.1063/1.478272) [****,  ()](\doibase DOI: 10.1016/j.jms.2008.03.017),  [****,  ()](\doibase 10.1126/science.1224898) [****,  ()](\doibase 10.1051/0004-6361/201117096) [****,  ()](\doibase 10.1103/PhysRevA.84.042120) [****,  ()](\doibase 10.1103/PhysRevA.87.052509) [****,  ()](\doibase 10.1103/PhysRevLett.101.251304) [****,  ()](\doibase 10.1111/j.1365-2966.2011.19460.x) [****, ()](\doibase 10.1051/0004-6361/201218862) [****,  ()](\doibase 10.1126/science.1156352) [****,  ()](\doibase 10.1063/1.1726344) [****,  ()](\doibase 10.1103/PhysRevLett.111.231101)
---
abstract: 'In this paper we study alternative tableaux introduced by Viennot [@VienCamb]. These tableaux are in simple bijection with permutation tableaux, defined previously by Postnikov [@Postnikov]. We exhibit a simple recursive structure for alternative tableaux. From this decomposition, we can easily deduce a number of enumerative results. We also give bijections between these tableaux and certain classes of labeled trees. Finally, we exhibit a bijection with permutations, and relate it to some other bijections that already appeared in the literature.'
address: |
    Faculty of Mathematics, University of Vienna, Nordbergstra[ß]{}e 15, 1090 Vienna, Austria.\
    E-mail: [philippe.nadeau@univie.ac.at]{}.
author:
- Philippe Nadeau
title: 'The structure of alternative tableaux.'
---

Introduction {#introduction .unnumbered}
============

Alternative tableaux are certain fillings of Ferrers diagrams introduced by Xavier Viennot [@VienCamb], in simple bijection with permutation tableaux introduced by Postnikov [@Postnikov] in his study of the totally positive part of the Grassmannian. These permutation tableaux have since been considered by several authors. Alternative tableaux are related to the stationary distribution of a certain Markov process from statistical physics, the Asymmetric Simple Exclusion Process (ASEP). More precisely, they are connected to a [*Matrix Ansatz*]{} describing a general solution for this stationary distribution (see Proposition \[prop:connection\]). Formulas for this distribution were first computed by Sasamoto, Uchiyama and Wadati [@Uchi], and involve the famous Askey-Wilson orthogonal polynomials. An understanding of alternative tableaux thus seems to be a key to getting a better grasp of these polynomials, for which no combinatorial interpretation is known. The articles [@CW; @CW1; @CW2] develop such connections between tableaux and some special cases of the ASEP model.
Another line of research is related to permutations; the starting point here is that permutation tableaux of size $n$ are counted simply by $n!$. They were first studied from a combinatorial point of view in [@SW], in which a bijection between these tableaux and permutations of $\{1,\ldots,n\}$ was described in detail. Since then, other bijections have been described [@Bu; @CN; @V]. These tableaux also showed up in the work of Lam and Williams [@LW], where permutation tableaux were shown to fit naturally into the [*type A*]{} case of the classification of Coxeter systems.

In this article, we show that alternative tableaux admit a natural recursive structure, which is best expressed when considering alternative tableaux as labeled combinatorial objects. The central part of this work is thus Section \[sect:struct\], in which we exhibit this recursive decomposition. These structural results are then applied in the following sections, first by giving enumerative results in a very straightforward manner, and then by encoding the recursive decomposition by certain classes of labeled trees, which are themselves in bijection with permutations.

Let us give a more precise outline of the paper. We introduce some elementary definitions and properties concerning alternative and permutation tableaux in Section \[sect:first\]. We then give our main results on the structure of alternative tableaux in Section \[sect:struct\], the recursive structure being a consequence of Proposition \[prop:cut\] and Theorem \[th:decomp\] in particular. Using this decomposition, we prove several enumeration results in Section \[sect:enum\]: this gives in particular elementary proofs of certain results of [@CN; @CW2; @LW], as well as some new results. We then describe how the recursive decomposition is naturally associated with certain labeled trees in Section \[sect:alttrees\].
Finally we exhibit in Section \[sect:perm\] a bijection from alternative tableaux to permutations, and stress its connection to bijections which have already appeared in the literature.

Tableaux {#sect:first}
========

Shapes and tableaux
-------------------

We call *shape* a staircase diagram (also called Ferrers shape) with possibly empty rows or columns, cf. Figure \[fig:shape\]. The [*length*]{} of a shape is the number of rows plus the number of columns of the shape. Note that a shape is determined by its south east border, which is the path from the top right corner of the shape to its bottom left corner; it is the path labeled by the integers $1,2,\ldots,13$ on the left of Figure \[fig:shape\]. There are thus $2^n$ shapes of length $n$, since one can choose to go down or left at each step.

![A shape with its standard labeling. \[fig:shape\]](Shape.pdf){height="4cm"}

Rows and columns of shapes will be labeled by integers in the following manner: let $S$ be a shape of length $n$, and $L=\{i_1<i_2<\ldots<i_n\}$ a set of integers. Then $S$ is *labeled by $L$* if the numbers $i_\ell$ are attached to the rows and columns, in increasing order following the south east border of $S$ from top right to bottom left. If $L=\{1,\ldots,n\}$, then we say that $S$ has the *standard labeling*; this is the case of the shape of Figure \[fig:shape\]. Note that although the labeling is always defined with respect to the south east border, we will actually write the labels on the top and left side for improved readability, as shown on the right of Figure \[fig:shape\]. Given a labeled shape, we will sometimes say row $i$ or column $j$ when we actually refer to the row with the label $i$ or the column with the label $j$. Then the cell lying at the intersection of row $i$ and column $j$ is denoted by $(i,j)$, where we have $i<j$ necessarily by definition of the labelings.
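These conventions are easy to make concrete. In the sketch below (Python is our choice here, not notation from the paper) a shape is encoded by its south east border word, one letter per step, `R` for a row and `C` for a column; a row $i$ and a column $j$ then intersect in a cell exactly when $i<j$:

```python
from itertools import product

def rows_cols(border):
    """Standard labeling: labels 1..n follow the south east border from
    top right to bottom left; 'R' marks a row step, 'C' a column step."""
    rows = [k for k, step in enumerate(border, start=1) if step == "R"]
    cols = [k for k, step in enumerate(border, start=1) if step == "C"]
    return rows, cols

def cells(border):
    """Cells (i, j) of the shape: row i and column j meet exactly when i < j."""
    rows, cols = rows_cols(border)
    return [(i, j) for i in rows for j in cols if i < j]

# 2^n shapes of length n: one row/column choice per border step
assert len(list(product("RC", repeat=4))) == 2 ** 4

print(cells("RCC"))   # a one-row, two-column shape -> [(1, 2), (1, 3)]
```

Note that a column labeled $1$ can never carry a cell, since no row label is smaller.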
We now define two possible ways to fill these shapes, called permutation tableaux and alternative tableaux: the two are in bijection by Viennot’s Theorem \[th:permalt\]. What we will show in this paper is that it is better to study alternative tableaux when it comes to discovering the intrinsic structure of these combinatorial objects.

A [*permutation tableau*]{} $T$ is a shape with a filling of each of its cells by $0$ or $1$ such that the following properties hold: (i) each column contains at least one $1$, and (ii) there is no cell filled by a $0$ which has simultaneously a $1$ above it in the same column and a $1$ to its left in the same row.

Note that permutation tableaux cannot have any empty columns because of the first condition. A $0$ in a permutation tableau is [*restricted*]{} if there is a $1$ above it in the same column; it is [*rightmost restricted*]{} if it is the rightmost such $0$ in its row. A row is [*unrestricted*]{} if it does not contain a restricted $0$. A $1$ is [*superfluous*]{} if, somewhere above it in the same column, there is another cell containing a $1$. A permutation tableau is represented on the left of Figure \[fig:permtabaltab\]; rows $0,4,11$ and $13$ are unrestricted, the cells in the top row filled by $1$ appear in columns $1,2,5$ and $12$, and there are four superfluous ones, in cells $(4,5),(4,12),(7,8)$ and $(11,12)$.

\[def:altab\] An [*alternative tableau*]{} is a shape with a partial filling of the cells with left arrows $\leftarrow$ and up arrows $\uparrow$, such that all cells to the left of a left arrow, or above an up arrow, are empty. In other words, all cells pointed at by an arrow must be empty.

In an alternative tableau, a [*free row*]{} is a row with no left arrow, and a [*free column*]{} is a column with no up arrow. Thus rows (respectively columns) that are not free are in bijection with left (resp. up) arrows.
A [*free cell*]{} is a cell which is not filled, and such that there exists no left arrow to its right and no up arrow under it; in other words, the cell is empty and no arrow points toward it. We will let $frow(T),fcol(T)$ and $fcell(T)$ denote the number of free rows, free columns and free cells of a given tableau. For the tableau $T_0$ which is represented on the right of Figure \[fig:permtabaltab\], the free rows are $4,11$ and $13$ while the free columns are $1,2,5$ and $12$. There are four free cells, namely $(4,5),(4,12),(7,8)$ and $(11,12)$. Thus we have $frow(T_0)=3$ and $fcol(T_0)=fcell(T_0)=4$. We can now state the fundamental result of Xavier Viennot, showing that alternative tableaux are actually a new simple encoding of permutation tableaux: \[th:permalt\] There is a bijection $\alpha$ between permutation tableaux of length $n+1$ and alternative tableaux of length $n$. If $P$ is labeled by $L$, and $L'$ is the set $L$ minus its smallest element, then we label $\alpha(P)$ by $L'$, and we have: - columns of $P$ with a $1$ in their top row correspond to free columns of $\alpha(P)$; - unrestricted rows of $P$ (the top one excepted) correspond to free rows of $\alpha(P)$, - and cells of $P$ filled with superfluous $1$ correspond to free cells of $\alpha(P)$. [**Proof:** ]{}We just give a description of the bijection and its inverse, and refer to [@VienCamb] for more details about the proof. Given a permutation tableau $P$, transform all non superfluous $1$ to up arrows, and all rightmost restricted $0$ to left arrows; then erase all the remaining $0$ and $1$, and finally remove the first row from $P$; the result is $\alpha(P)$. An illustration is given on Figure \[fig:permtabaltab\]. For the inverse bijection, given an alternative tableau $T$, add a new top row on top of it, and fill by $1$ all cells of this row that lie above a free column of $T$; then change all up arrows and free cells to $1$, and all remaining cells to $0$. 
The resulting tableau is $\alpha^{-1}(T)$. $\square~$

![Bijection between permutation tableaux and alternative tableaux. \[fig:permtabaltab\]](permtabaltab.pdf){height="5cm"}

Alternative tableaux and the ASEP
---------------------------------

An important application of permutation tableaux, due to Corteel and Williams in a series of papers [@CW; @CW1; @CW2], is related to a certain model of statistical mechanics, the ASEP: we will briefly discuss some of the connections. The ASEP (Asymmetric Simple Exclusion Process) is a model that can be described as the following Markov chain (see [@DS]). Let $\alpha,\beta,\gamma,\delta,q$ be real numbers in $[0,1]$, and $n$ a nonnegative integer. The states of the Markov chain are the $2^n$ words of length $n$ on the symbols $\circ$ and $\bullet$. Positions in the words represent sites, which can be either empty ($\circ$) or occupied by a particle ($\bullet$). The transition probabilities $p(s_1,s_2)$ between two states $s_1$ and $s_2$ model the way particles can jump from site to site, enter or exit the system:

- If $s_1=A\bullet\circ B$ and $s_2=A\circ\bullet B$, then $p(s_1,s_2)=\frac{1}{n+1}$ and $p(s_2,s_1)=\frac{q}{n+1}$.
- If $s_1=A\bullet$ and $s_2=A\circ$, then $p(s_1,s_2)=\beta$ and $p(s_2,s_1)=\delta$.
- If $s_1=\circ B$ and $s_2=\bullet B$, then $p(s_1,s_2)=\alpha$ and $p(s_2,s_1)=\gamma$.
- If $s_1\neq s_2$ do not correspond to any of these cases, then we set $p(s_1,s_2)=0$.
- Finally, we naturally have $p(s_1,s_1)=1-\sum_{s_2\neq s_1} p(s_1,s_2)$.

The model is [*simple*]{} because there can be at most one particle in each site. We illustrate this model schematically in Figure \[fig:asep\].
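For a concrete feel for the chain, the rules above can be coded directly and the stationary distribution approximated by power iteration. This is an illustration only, not from the paper; the parameter values are arbitrary and chosen small enough that every diagonal entry $p(s,s)$ stays nonnegative:

```python
from itertools import product

def hops(s, n, alpha, beta, gamma, delta, q):
    """Transition probabilities out of state s (a 0/1 tuple, 1 = particle),
    following the rules in the text: bulk hops at 1/(n+1) and q/(n+1),
    boundary moves at alpha, beta, gamma, delta."""
    out = {}
    def add(t, p):
        out[t] = out.get(t, 0.0) + p
    for k in range(n - 1):
        if s[k] == 1 and s[k + 1] == 0:          # particle hops to the right
            add(s[:k] + (0, 1) + s[k + 2:], 1 / (n + 1))
        if s[k] == 0 and s[k + 1] == 1:          # particle hops to the left
            add(s[:k] + (1, 0) + s[k + 2:], q / (n + 1))
    # left boundary: entry with alpha, exit with gamma
    add(((1,) if s[0] == 0 else (0,)) + s[1:], alpha if s[0] == 0 else gamma)
    # right boundary: exit with beta, entry with delta
    add(s[:-1] + ((0,) if s[-1] == 1 else (1,)), beta if s[-1] == 1 else delta)
    return out

def stationary(n, alpha, beta, gamma, delta, q, iters=20000):
    """Approximate the stationary distribution by power iteration."""
    states = list(product((0, 1), repeat=n))
    pi = {s: 1 / len(states) for s in states}
    for _ in range(iters):
        new = {s: 0.0 for s in states}
        for s in states:
            out = hops(s, n, alpha, beta, gamma, delta, q)
            new[s] += pi[s] * (1 - sum(out.values()))   # the p(s, s) term
            for t, p in out.items():
                new[t] += pi[s] * p
        pi = new
    return pi

pi = stationary(3, alpha=0.1, beta=0.1, gamma=0.05, delta=0.05, q=0.5)
print(round(sum(pi.values()), 6))   # -> 1.0
```

Power iteration suffices here because the self-loop probabilities make the chain aperiodic; for larger $n$ one would solve $\pi P=\pi$ directly instead.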
![\[fig:asep\] Illustration of the ASEP model](Asep)

It was shown by Derrida *et al.*[@Derrida1] that this model has a unique stationary distribution, and moreover that this distribution can be computed through the following [*Matrix Ansatz*]{}: suppose that we can find two matrices $D,E$, a column vector $V$ and a row vector $W$ such that the following relations hold: $$\label{eq:matrel} \begin{cases} DE=qED+D+E\\ ({\beta}D-{\delta}E)V= V \\ W({\alpha}E-{\gamma}D)=W \end{cases}$$ Now let $s$ be a state of the ASEP, and let $s(E,D)$ be the word of length $n$ in $D$ and $E$ obtained through the substitutions $\bullet\mapsto D$, $\circ\mapsto E$: for instance, the state $\circ\bullet\bullet\circ\bullet$ is associated to the word $EDDED$. Then Derrida *et al.* show:

\[prop:proba\] If $D,E,V,W$ satisfy the relations , and words in $D,E$ are interpreted as matrix products, then the probability $P_n(s)$ to be in state $s$ in the stationary distribution is given by $$\label{eq:matrixansatz} P_n(s)= \frac{Ws(E,D)V}{Z_n}~~~\text{with}~~~Z_n=W(D+E)^nV$$

If we have a word $w$ in the letters $E$ and $D$, we can create a shape by reading the word from left to right and interpreting each $D$ as a south step and each $E$ as an east step, thus defining the south east boundary of a shape ${\lambda}(w)$; for instance the shape of Figure \[fig:shape\] is associated to the word $EEDDEDDEEDDED$; if $s$ is a state of the ASEP, we will write simply ${\lambda}(s)$ for the shape ${\lambda}(s(E,D))$.

Now we can state the connection with alternative tableaux, first noticed by Corteel and Williams in a series of papers [@CW; @CW1] and expressed in terms of permutation tableaux, and later reformulated by Viennot [@VienCamb] in the following way:

\[prop:connection\] If $D,E$ are matrices that verify the first relation in , and $w=w(E,D)$ is any word in $E,D$, then we have the following identity: $$w=\sum_T q^{fcell(T)}E^{fcol(T)}D^{frow(T)}$$ where the sum is over all alternative tableaux of shape ${\lambda}(w)$.
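Concretely, the map $\lambda$ only has to record which border steps are rows and which are columns. A small sketch (the convention that the $k$-th letter of $w$ receives label $k$, with $D$ giving a row and $E$ a column, is inferred from the example word above and the labeling of Figure \[fig:shape\]):

```python
def shape_labels(w):
    """lambda(w): reading the word left to right, the k-th letter gives
    label k on the south east border -- a row for D (south step), a
    column for E (east step)."""
    rows = [k for k, c in enumerate(w, start=1) if c == "D"]
    cols = [k for k, c in enumerate(w, start=1) if c == "E"]
    return rows, cols

rows, cols = shape_labels("EEDDEDDEEDDED")   # the word of Figure [fig:shape]
print(cols)   # -> [1, 2, 5, 8, 9, 12]
```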
In [@Derrida1] it was in fact shown that in the case ${\gamma}={\delta}=0$ there exist matrices $D,E$ verifying . But note that in this particular case, the vectors $W$ and $V$ become respectively left and right eigenvectors for $D$ and $E$. So from Propositions \[prop:proba\] and \[prop:connection\] we obtain the following:

When ${\gamma}={\delta}=0$ we have: $$P_n(s)=\frac{\sum_{T\text{~of shape~}{\lambda}(s)} q^{fcell(T)}{\alpha}^{-fcol(T)}{\beta}^{-frow(T)}}{\sum_{T\text{~of size~}n} q^{fcell(T)}{\alpha}^{-fcol(T)}{\beta}^{-frow(T)}}$$

In the ASEP model where ${\gamma},{\delta}$ are general but $q=1$, Corteel and Williams found a similar expression for the stationary probabilities in terms of certain enriched alternative tableaux, see Corollary 4.2 in [@CW2], by slightly generalizing the Matrix Ansatz.

The structure of alternative tableaux {#sect:struct}
=====================================

We define different operations on tableaux, and use them to exhibit a natural recursive structure on alternative tableaux.

First properties of alternative tableaux {#sub:firstprop}
----------------------------------------

We denote by ${\mathcal{A}}(n)$ the set of alternative tableaux of length $n$, and by ${\mathcal{A}}_{i,j}(n)$ those with $i$ free rows and $j$ free columns. We also denote by ${\mathcal{A}}_{i,*}(n)$ and ${\mathcal{A}}_{*,j}(n)$ the sets of tableaux having $i$ free rows and $j$ free columns respectively.

### Transposition {#subsub:tr}

We let $tr$ be the operation of transposing a tableau, which is the reflection across the [*main diagonal*]{}, i.e. the line going south east from the top left corner; in this reflection, we naturally exchange up and left arrows. We have the following immediate result, which we state as a proposition for future reference:

\[prop:transp\] Transposition is an involution on alternative tableaux. For all $n,i,j\geq 0$, it exchanges ${\mathcal{A}}_{i,j}(n)$ and ${\mathcal{A}}_{j,i}(n)$.
In fact it is easily checked that the transposition operation coincides with the involution $I$ defined in Section 7 of [@CW1] for permutation tableaux: more precisely, if $P$ is a permutation tableau, then we have $tr\circ\alpha(P)=\alpha\circ I(P)$, where $\alpha$ is the bijection between permutation tableaux and alternative tableaux of Theorem \[th:permalt\]. Note then that the trivial result on alternative tableaux from Proposition \[prop:transp\] demanded a much greater effort in [@CW1], where the authors worked with permutation tableaux.

### Packed tableaux

As already noticed, the arrows of a tableau are in bijection with its non free rows and columns. This implies immediately that the tableaux in ${\mathcal{A}}_{i,j}(n)$ have exactly $n-i-j$ arrows. The maximum of $n$ arrows, i.e. the case $i=j=0$, cannot actually be attained if $n>0$: indeed, if for instance a tableau has no free row, then the leftmost column of this tableau cannot contain any up arrow and is thus free. A total of $n-1$ arrows in a tableau can actually be reached, and such tableaux in fact constitute a fundamental class as we will see:

\[def:packed\] A *packed tableau* of length $n>0$ is a tableau with $n-1$ arrows. Equivalently, it is a member of either ${\mathcal{A}}_{0,1}(n)$ or ${\mathcal{A}}_{1,0}(n)$.

In fact, the following proposition shows that the unique free column of a tableau in ${\mathcal{A}}_{0,1}(n)$ is necessarily the leftmost one:

\[prop:packed\] If $n>1$ and $T$ is a tableau in ${\mathcal{A}}_{0,1}(n)$, the top left cell $c$ of $T$ contains a left arrow.

[**Proof:** ]{}First note that the tableau $T$ has at least one cell (so that $c$ is well defined), since tableaux with no cells have at least one free row or two free columns (note that we chose $n>1$). $T$ has no free rows, so there is a left arrow in the top row in particular. The column where this arrow lies is necessarily free, because any up arrow in it would violate the alternative tableau property.
But the leftmost column of $T$ is also free, because the presence of any up arrow in it would force the row where this arrow lies to be free, which is excluded. As $T$ has just one free column, this implies that $c$ indeed contains a left arrow. $\square~$

By transposition, the top left cell of tableaux in ${\mathcal{A}}_{1,0}(n), n>1$ is filled by an up arrow. But there is a simpler way to go bijectively from ${\mathcal{A}}_{0,1}(n)$ to ${\mathcal{A}}_{1,0}(n)$ when $n>1$: simply [*change the filling of the cell in the top left corner from $\leftarrow$ to $\uparrow$*]{}. This also explains why we decided to give these two sets of tableaux the same name: up to this arrow and the case $n=1$, they are identical.

### Cutting rows and columns {#subsub:cut}

For a nonempty alternative tableau with at least one column and no empty rows, we let $cut_c$ be the operation of deleting its leftmost column (so that all row lengths decrease by one); we define $cut_r$ similarly for deleting the topmost row. When the tableau from which we start is labeled, we naturally obtain a labeled tableau as a result by simply keeping the labels of the remaining rows and columns in each case. This is illustrated on Figure \[fig:cutr\].

![Cutting the first row of a tableau. \[fig:cutr\]](cutr.pdf){height="4cm"}

Given a tableau $T$ in ${\mathcal{A}}$, add a new top row to it, and fill by up arrows all cells in this row that lie above the free columns of $T$; this construction will be called $block_c$ to reflect the fact that no free column remains after its application. We define $block_r$ symmetrically. Then we have the following properties:

\[prop:cut\] For all $i,n\geq 0$, the operation $cut_r$ is a bijection between ${\mathcal{A}}_{i+1,0}(n+1)$ and ${\mathcal{A}}_{i,*}(n)$; its inverse is $block_c$. For all $j,n\geq 0$, the operation $cut_c$ is a bijection between ${\mathcal{A}}_{0,j+1}(n+1)$ and ${\mathcal{A}}_{*,j}(n)$; its inverse is $block_r$.
[**Proof:** ]{}We just prove the claim concerning $cut_r$, the one for $cut_c$ being equivalent after transposition. Tableaux in ${\mathcal{A}}_{i+1,0}(n+1)$ have no empty columns (because such columns are free), and their first row is free, since a left arrow in a cell from this row would force the corresponding column to be free. This shows that the restrictions of $cut_r$ and $block_c$ are well defined. It is immediate that $cut_r\circ block_c$ is the identity on ${\mathcal{A}}_{i,*}(n)$. Now given a tableau $T$ in ${\mathcal{A}}_{i+1,0}(n+1)$, we noticed that it has no left arrows in its top row, and that the up arrows in this row occur exactly in the columns that are free in $cut_r(T)$. This implies that $block_c \circ cut_r$ is the identity on ${\mathcal{A}}_{i+1,0}(n+1)$, and the proposition is proved. $\square~$

Taking the union of the sets above over all $i$ and over all $j$ respectively, we get bijections between ${\mathcal{A}}_{*,0}(n+1)$ and ${\mathcal{A}}(n)$, and between ${\mathcal{A}}_{0,*}(n+1)$ and ${\mathcal{A}}(n)$. The special cases $i=0$ and $j=0$ are also of interest, and we get the following corollary:

\[cor:relations\] For all $n\geq 0$, the operation $cut_r$ is a bijection between ${\mathcal{A}}_{*,0}(n+1)$ and ${\mathcal{A}}(n)$, and between ${\mathcal{A}}_{1,0}(n+1)$ and ${\mathcal{A}}_{0,*}(n)$; $cut_c$ is a bijection between ${\mathcal{A}}_{0,*}(n+1)$ and ${\mathcal{A}}(n)$, and between ${\mathcal{A}}_{0,1}(n+1)$ and ${\mathcal{A}}_{*,0}(n)$. Therefore for all $n\geq 0$, we have $$\label{eq:conseqcut} A(n)=A_{0,*}(n+1)=A_{*,0}(n+1)=A_{0,1}(n+2)=A_{1,0}(n+2)$$

Splitting a tableau
-------------------

Now we exhibit a more complicated decomposition, which can be traced back to the last part of Burstein’s work [@Bu]. Nevertheless, Burstein did not exhibit a complete recursive decomposition, mainly because he was working with permutation tableaux, which are less easy to manipulate than alternative tableaux.
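The counting identities of Corollary \[cor:relations\], together with the count $A(n)=(n+1)!$ that follows from Theorem \[th:permalt\] and the $n!$ count of permutation tableaux, lend themselves to brute-force verification for small $n$. The sketch below is an illustration only (Python, with a shape encoded as a word over $\{R,C\}$ along the south east border — an implementation choice, not notation from the paper):

```python
from itertools import product
from math import factorial

def shape_cells(border):
    # labels 1..n follow the south east border; a row i and a column j
    # intersect in a cell exactly when i < j
    rows = [k for k, s in enumerate(border, 1) if s == "R"]
    cols = [k for k, s in enumerate(border, 1) if s == "C"]
    return rows, cols, [(i, j) for i in rows for j in cols if i < j]

def tableaux(n):
    """Yield (free rows, free columns) over all alternative tableaux of length n."""
    for border in product("RC", repeat=n):
        rows, cols, cs = shape_cells(border)
        for fill in product(".<^", repeat=len(cs)):
            f = dict(zip(cs, fill))
            # a left arrow forbids non-empty cells to its left (same row,
            # larger column label); an up arrow forbids non-empty cells
            # above it (same column, smaller row label)
            if any(a == "<" and any(f[i, j2] != "." for j2 in cols if j2 > j)
                   for (i, j), a in f.items()):
                continue
            if any(a == "^" and any(f[i2, j] != "." for i2 in rows if i2 < i)
                   for (i, j), a in f.items()):
                continue
            frow = sum(1 for i in rows if all(f[i, j] != "<" for j in cols if j > i))
            fcol = sum(1 for j in cols if all(f[i, j] != "^" for i in rows if i < j))
            yield frow, fcol

A = lambda n: sum(1 for _ in tableaux(n))
A01 = lambda n: sum(1 for fr, fc in tableaux(n) if (fr, fc) == (0, 1))

assert [A(n) for n in range(5)] == [factorial(k) for k in range(1, 6)]   # A(n) = (n+1)!
assert all(A(n) == A01(n + 2) for n in (0, 1, 2))   # part of eq. (conseqcut)
print([A(n) for n in range(4)])   # -> [1, 2, 6, 24]
```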
Let $T$ be an alternative tableau of size $n$, labeled by $L$, and let $i_0$ be the label of one of its free rows (we suppose there is such a row). We compute iteratively a set of labels $T(i_0)$ in the following way: first we set $X:=\{i_0\}$. Then we add to $X$ all columns $j$ such that there is an arrow in the cell $(i_0,j)$. Afterwards, we add to $X$ all rows $i$ such that there is an arrow in $(i,j)$ for one of the columns $j$ in $X$ added at a previous stage. And so on: we keep adding row and column labels alternately until there are no new rows or columns to add. The procedure is finite since the set $L$ is finite, and in the end we set $T(i_0):=X$.

[**Example:**]{} consider the free row labeled $4$ in the alternative tableau $T_0$ on the right of Figure \[fig:permtabaltab\]; so $T_0(4)$ contains $4$. Now the cell $(4,9)$ is the only cell on row $4$ containing an up arrow, so we add $9$ to the set $T_0(4)$. In this column $9$ there are two left arrows, in cells $(6,9)$ and $(7,9)$, so $T_0(4)$ also contains the row labels $6$ and $7$. There is no other arrow on row $7$, and there is one on row $6$ in the cell $(6,8)$, so $8$ also belongs to $T_0(4)$. Since there is no other arrow in column $8$, we have finally $T_0(4)=\{4,6,7,8,9\}$.

Another equivalent characterization of $T(i_0)$ is the following, which we take as a definition:

Given a tableau $T$ labeled by $L$ and a free row or column $i_0\in L$, $T(i_0)$ is the smallest set $X\subseteq L$ (wrt. inclusion) which contains $i_0$, and is such that, for every cell $(i,j)$ filled by an arrow, $i$ belongs to $X$ if and only if $j$ belongs to $X$.

If $Free(T)$ stands for the set of labels of free rows and columns of $T$, then we have a collection of subsets of $L$ given by $\{T(k), k\in Free(T)\}$.

\[lem:packed1\] Let $T\in {\mathcal{A}}$, and $k\in Free(T)$. Then the elements of $T(k)$ other than $k$ label non free rows and columns of $T$.
[**Proof:** ]{}By the iterative definition of $T(k)$, a row label $i\neq k$ belongs to $T(k)$ if there exists a column label $j\in T(k)$ and a left arrow in the cell $(i,j)$. In particular, row $i$ is not free in $T$. The proof is similar for column labels. $\square~$

\[lem:partition\] Let $T$ be a tableau labeled by $L$. Then the sets $T(k), k\in Free(T)$ form a partition of $L$.

[**Proof:** ]{}Suppose first that there exists an integer $p$ in $L$ that does not belong to any subset $T(k)$. We assume without loss of generality that $p$ is the label of a row, and choose the minimal such $p$. First, $p$ cannot label a free row (since it would belong to $T(p)$), so there exists $j>p$ such that $(p,j)$ contains a left arrow $\leftarrow$. Now $j$ is not the label of a free column, since otherwise $p$ would belong to $T(j)$; so there exists a row label $p'$ such that $(p',j)$ contains an up arrow $\uparrow$. We have $p'<p$, because otherwise the up arrow in $(p',j)$ would point towards the left arrow in $(p,j)$. But then $p'$ belongs to a set $T(k)$ by minimality of $p$, which entails that $j$ and $p$ also belong to this set, contradicting the hypothesis that $p$ belongs to no such set. We have thus shown that the sets $T(k), k\in Free(T)$ cover $L$; we now have to prove that they are disjoint. Let $p$ belong to a set $T(k)$; we will show that we can uniquely determine $k$ from $p$. If $p$ is free, then $k=p$ because there is only one free row or column in $T(k)$ by Lemma \[lem:packed1\]. Now suppose $p$ is not free, and let us assume that $p$ labels a row: there exists a (necessarily unique) column $j\in T(k)$ with $(p,j)$ containing a left arrow. Now if $j$ is free, we know that $k=j$ and we are done. Otherwise we have an up arrow in $(i,j)$ for a unique $i\in T(k)$. If $i$ is free, then $k=i$ and we are done; otherwise we continue this process, stopping when we hit upon a free row or column, which we know is the index $k$.
To conclude, we just need to be sure that the process will end: this is indeed the case because the row labels that we encounter are strictly decreasing (and the column labels strictly increasing). $\square~$ Given a tableau $T$ with label set $L$, and any subset $A\subseteq L$, one can form a new tableau by selecting in $T$ only the rows and columns with labels in $A$: Let $T$ be a tableau labeled by $L$, and $A\subseteq L$. The tableau $T[A]$ is defined as the tableau labeled by the subset $A$, where $l\in A$ labels a row (respectively a column) in $T[A]$ if it labels a row (resp. a column) in $T$, and such that the cell $(i,j)\in T[A]$ has the same filling as the cell $(i,j)$ in $T$. We write $T[k]:=T[T(k)]$ for simplicity if $k$ is a free row or a free column. Then from Lemma \[lem:packed1\] we deduce immediately: \[prop:freeTk\] Let $T$ be a labeled tableau and $k\in Free(T)$. The tableau $T[k]$ is a packed tableau in which $k$ labels the only free row or column. Merging tableaux ---------------- Since we described a way to split a tableau into smaller tableaux, it is natural to try to reconstruct the original tableau, so we need to define a way to merge tableaux together. Let $T$ and $T'$ be two alternative tableaux labeled on *disjoint* integer sets $L$ and $L'$. Then $T''={\operatorname{merge}}(T,T')$ is a labeled tableau defined as follows: its label set is $L''=L\cup L'$, where $k\in L''$ labels a row in $T''$ if and only if it labels a row in either $T$ or $T'$. Then the cell $(i,j)\in T''$ is filled with a left arrow if one of the following two cases occur: either $i,j\in L$ and $(i,j)$ is a left arrow in $T$, or $i,j\in L'$ and $(i,j)$ is a left arrow in $T'$. Up arrows in $T''$ are defined similarly, and the other cells are left empty. Note that the empty cells of ${\operatorname{merge}}(T,T')$ correspond either to empty cells in $T$ or $T'$, or to cells $(i,j)$ for which one of $i,j$ belongs to $L$ and the other to $L'$. 
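In computational terms, the set $T(k)$ is simply the connected component of $k$ in the graph with one edge per arrow cell $(i,j)$, and ${\operatorname{merge}}$ is a plain union of labels and arrows. A minimal sketch, assuming a tableau is stored as (row labels, column labels, arrow cells); the arrows in the test are the ones traced in the $T_0(4)$ example above, and checking that a merged result is a valid alternative tableau is the content of Proposition \[prop:merge\]:

```python
def component(label, arrows):
    """T(label): the smallest X containing `label` such that, for every
    arrow cell (i, j), i is in X iff j is in X -- i.e. the connected
    component of `label` in the graph with one edge per arrow."""
    adj = {}
    for i, j in arrows:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    seen, todo = {label}, [label]
    while todo:
        for y in adj.get(todo.pop(), ()):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def merge(t1, t2):
    """Merge two labeled tableaux with disjoint label sets; a tableau is
    stored as (row_labels, col_labels, arrows), and merging is plain
    union -- cells mixing labels of the two pieces stay empty."""
    (r1, c1, a1), (r2, c2, a2) = t1, t2
    assert (r1 | c1).isdisjoint(r2 | c2)
    return (r1 | r2, c1 | c2, a1 | a2)

# arrows of T_0 met while computing T_0(4): up arrow in (4,9), left
# arrows in (6,9) and (7,9), and the arrow in (6,8)
arrows = {(4, 9), (6, 9), (7, 9), (6, 8)}
print(sorted(component(4, arrows)))   # -> [4, 6, 7, 8, 9]
```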
An example of merging is given on Figure \[fig:merge\]. ![Merging of two tableaux. \[fig:merge\]](Merge.pdf){width="\textwidth"} We make a slight abuse of notation in writing ${\operatorname{merge}}(T,T')$, since the operation of merging depends crucially on the labels and not merely on the tableaux. This will not cause any problem in the rest of the paper, since we will always use it when the labels of the tableaux are clear from the context. We now record some immediate properties of the merging procedure in the following proposition: \[prop:merge\] Given $T,T'$ as above, the tableau $T''={\operatorname{merge}}(T,T')$ is an alternative tableau. If $k\in Free(T'')$, then either $k\in Free(T)$ and $T''[k]=T[k]$, or $k\in Free(T')$ and $T''[k]=T'[k]$. Moreover, the map ${\operatorname{merge}}$ is *symmetric*, i.e. ${\operatorname{merge}}(T,T')={\operatorname{merge}}(T',T)$; it is also *associative*, in the sense that if $T_1,T_2,T_3$ are tableaux labeled by pairwise disjoint sets, then ${\operatorname{merge}}(T_1,{\operatorname{merge}}(T_2,T_3))={\operatorname{merge}}({\operatorname{merge}}(T_1,T_2),T_3)$. The last two properties allow us to extend the domain of definition of ${\operatorname{merge}}$: given a finite collection of tableaux $C=(T_i)_{i\in I}$ with pairwise disjoint label sets, we can merge all tableaux in $C$ by defining $${\operatorname{merge}}(C)={\operatorname{merge}}(T_{i_1},{\operatorname{merge}}(T_{i_2},\ldots, {\operatorname{merge}}(T_{i_{t-1}},T_{i_t}))),$$ where $i_1,\ldots,i_t$ is any ordering of the index set $I$; this is well defined thanks to the properties of symmetry and associativity.
This will be applied in the remaining sections, first to easily obtain old and new enumerative results, and then to give bijections between alternative tableaux, certain classes of trees, and permutations of integers. \[th:decomp\] Let $i,j$ be nonnegative integers, and $L$ be a label set. The function ${\operatorname{split}}:T\mapsto \{T[k],~k\in Free(T)\}$ is a bijection between: 1. Tableaux in ${\mathcal{A}}_{i,j}$ labeled by $L$, and 2. Sets of $i+j$ packed tableaux, with $i$ of them in ${\mathcal{A}}_{1,0}$ and $j$ in ${\mathcal{A}}_{0,1}$, all labeled in such a way that their $i+j$ label sets form a partition of $L$. The inverse bijection is the operation ${\operatorname{merge}}$. [**Proof:** ]{}First, the fact that ${\operatorname{split}}$ is well defined is a consequence of Lemma \[lem:partition\] and Proposition \[prop:freeTk\]. Moreover, ${\operatorname{split}}\circ {\operatorname{merge}}$ is the identity function, thanks to Proposition \[prop:merge\]. What remains to be proven is that ${\operatorname{merge}}\circ {\operatorname{split}}$ is the identity on ${\mathcal{A}}(n)$: that is, we need to show that given a tableau $T$, merging the labeled tableaux $\{T[k],~k\in Free(T)\}$ gives back the tableau $T$. Let us then denote by $T'$ the tableau ${\operatorname{merge}}\circ {\operatorname{split}}(T)={\operatorname{merge}}((T[k])_{k\in Free(T)})$, and show that we have $T'=T$. We note immediately that the (labeled) shapes of $T$ and $T'$ coincide, so we have to show that the contents of all cells are identical. Let then $c$ (respectively $c'$) be the content of a cell $(i,j)$ in $T$ (*resp.* in $T'$). If $i$ and $j$ are labels of the same tableau $T[k]$ (for a certain $k$), then $c$ is the content of $(i,j)$ in $T[k]$ by the definition of ${\operatorname{split}}$; but by definition of ${\operatorname{merge}}$, this is also equal to $c'$.
Otherwise, $i$ and $j$ belong respectively to tableaux $T[k]$ and $T[k']$ with $k\neq k'$, and in this case $c$ is necessarily empty by Lemma \[lem:partition\]; and by the definition of ${\operatorname{merge}}$ again, $c'$ is also empty. Thus $T=T'$ and the result is proved. $\square~$ ![The decomposition ${\operatorname{split}}$. \[fig:decomptab\]](decomptab.pdf){width="\textwidth"} An immediate corollary of the theorem is the following, which gives a different way of decomposing tableaux: \[cor:decompo\] There is a bijection $\operatorname{divide}$ between $(i)$ tableaux in ${\mathcal{A}}_{i,j}(n)$ labeled by a set $L$, and $(ii)$ pairs of tableaux $(P,Q)\in{\mathcal{A}}_{i,0}\times {\mathcal{A}}_{0,j}$ labeled by sets $L_P$ and $L_Q$ such that $\{L_P,L_Q\}$ is a partition of $L$. [**Proof:** ]{}Let $T$ be a tableau in ${\mathcal{A}}_{i,j}(n)$ labeled by a set $L$. First use the bijection ${\operatorname{split}}$ of the previous theorem, and, among the tableaux obtained, separate the ones in ${\mathcal{A}}_{1,0}$ and the ones in ${\mathcal{A}}_{0,1}$; merge each of these two collections separately to obtain the tableaux $P$ and $Q$ of the corollary. $\square~$ There is a more direct way to obtain the same bijection: consider the subsets of labels $A=\cup_k T(k)$ and $B=\cup_l T(l)$, where $k$ (respectively $l$) ranges over the labels of the free rows of $T$ (resp. the free columns). Then define simply $P:=T[A]$ and $Q:=T[B]$. ![The tableaux $(P,Q)=\operatorname{divide}(T_0)$ for the tableau $T_0$ of Figure \[fig:permtabaltab\], left. \[fig:divide\]](Divide.pdf){width="\textwidth"} Enumeration {#sect:enum} =========== We will show that, using the structure of alternative tableaux discovered in Section \[sect:struct\], it is easy to prove various enumeration results, starting with the plain enumeration of alternative tableaux according to their size.
Labeled combinatorial classes {#sub:combclass} ----------------------------- From the decompositions of Theorem \[th:decomp\] and Corollary \[cor:decompo\], one can easily write down equations for the combinatorial class ${\mathcal{A}}$ of alternative tableaux, in the manner of Flajolet and Sedgewick [@Flaj]. Indeed, Theorem \[th:decomp\] says that the number of tableaux labeled on a set $L$ is the same as the number of ways to partition $L$ and then choose, for each block $b$ of this partition, a tableau labeled on $b$ belonging to either ${\mathcal{A}}_{0,1}$ or ${\mathcal{A}}_{1,0}$; in the language of [@Flaj], this is written: $$\label{eq:bla1} {\mathcal{A}}=SET({\mathcal{A}}_{0,1}+{\mathcal{A}}_{1,0}).$$ Similarly, a consequence of Corollary \[cor:decompo\] is $$\label{eq:bla2} {\mathcal{A}}={\mathcal{A}}_{0,*}\star {\mathcal{A}}_{*,0}.$$ This means that an alternative tableau labeled on $L$ is obtained by choosing two alternative tableaux $P,Q$ in ${\mathcal{A}}_{0,*}$ and ${\mathcal{A}}_{*,0}$, labeled respectively by $L_P$ and $L_Q$, which are disjoint and whose union is equal to $L$. The advantage of describing our theorems in this way is that there is an automatic way to write down equations for the corresponding exponential generating functions, with the added possibility of taking into account certain parameters; this is what we will do in the rest of this section. As a matter of fact, the natural framework for the study of alternative tableaux is arguably the theory of [*species on a totally ordered set*]{}, cf. [@BLL Chapter 5]. This is not needed for the results in this work and therefore we will not develop this approach. The number of alternative tableaux. {#sub:number} ----------------------------------- We first give a simple proof of the well-known fact that alternative tableaux of size $n$ are enumerated by $(n+1)!$; not surprisingly, that is how the original permutation tableaux got their name.
Let $A(z)$,$B(z)$ and $C(z)$ be the exponential generating functions of tableaux in ${\mathcal{A}}$, ${\mathcal{A}}_{0,*}$ and ${\mathcal{A}}_{0,1}$ according to their length, that is: $$A(z)=\sum_{n\geq 0} A(n)\frac{z^n}{n!},~ B(z)=\sum_{n\geq 0} A_{0,*}(n)\frac{z^n}{n!},\text{~and~} C(z)=\sum_{n\geq 0} A_{0,1}(n)\frac{z^n}{n!}.$$ On the one hand, Corollary \[prop:cut\] implies the following relations on generating functions: $$\label{eq:diffrel} B'(z)=A(z)\text{~~and~~}C''(z)=A(z).$$ On the other hand, note that ${\mathcal{A}}_{1,0}$ and ${\mathcal{A}}_{*,0}$ have respectively the generating functions $C(z)$ and $B(z)$: this is immediate by transposition (cf. Proposition \[prop:transp\]). So we can use the combinatorial equations  and  to obtain the functional equations $A(z)=B(z)^2$ and $A(z)=\exp(2C(z))$, by an application of the principles found in [@Flaj Chapter II]. Together with , we get the differential equations $$B'(z)=B(z)^2 \quad\text{and}\quad C''(z)=\exp(2C(z)).$$ With the obvious initial conditions $B(0)=1,C(0)=0, C'(0)=1$, the solutions to these are respectively $B(z)=\frac{1}{1-z}$ and $C(z)=-\log(1-z)$. Taking coefficients, we obtain $A_{0,*}(n)=n!$ and $A_{0,1}(n)=(n-1)!$, which both give us $A(n)=(n+1)!$ by Corollary \[cor:relations\]. To sum up we have: \[prop:simpleenum\] We have the following expressions: $$A(z)=\frac{1}{(1-z)^2},~~B(z)=\frac{1}{1-z},\text{~and~}C(z)=-\log(1-z)$$ Refined enumeration ------------------- In fact we can do much better by introducing some statistics. Let $A_{i,j}(n,k)$ be the number of tableaux in ${\mathcal{A}}_{i,j}(n)$ with $k$ rows, where we allow $i=*$ or $j=*$. We define the corresponding generating functions $A_{i,j}(z,u)=\sum_{n,k\geq 0}A_{i,j}(n,k)\frac{z^n}{n!}u^k$ and $A(z,u,x,y)=\sum_{i,j\geq 0}x^iy^jA_{i,j}(z,u)$. 
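The closed forms of Proposition \[prop:simpleenum\] can be checked directly on Taylor coefficients; the plain-Python sketch below verifies the differential equation $B'(z)=B(z)^2$, the product identity $A(z)=B(z)^2$, and the resulting counts $A_{0,*}(n)=n!$ and $A(n)=(n+1)!$ up to order $10$.

```python
from math import factorial

# Ordinary Taylor coefficients of B(z) = 1/(1-z) and A(z) = 1/(1-z)^2.
N = 10
B = [1] * (N + 1)                     # [z^n] 1/(1-z) = 1
A = [n + 1 for n in range(N + 1)]     # [z^n] 1/(1-z)^2 = n+1

# B'(z) = B(z)^2, coefficient by coefficient:
# [z^n] B' = (n+1)*B[n+1], while [z^n] B^2 is a Cauchy product.
for n in range(N):
    assert (n + 1) * B[n + 1] == sum(B[k] * B[n - k] for k in range(n + 1))

# The product identity A(z) = B(z)^2:
for n in range(N + 1):
    assert A[n] == sum(B[k] * B[n - k] for k in range(n + 1))

# EGF coefficients: A_{0,*}(n) = n! * B[n] = n!  and  A(n) = n! * A[n] = (n+1)!.
assert all(factorial(n) * B[n] == factorial(n) for n in range(N + 1))
assert all(factorial(n) * A[n] == factorial(n + 1) for n in range(N + 1))
```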
We have then the following refined enumeration: \[th:enum\] $$\label{eq:enum} A(z,u,x,y)=\exp\left(zy(1-u)+(x+y)\ln\left(\frac{1-u}{1-u\exp(z(1-u))}\right)\right)$$ [**Proof:** ]{}By Theorem \[th:decomp\], we know that the number of free rows (respectively free columns) of a tableau is equal to the number of tableaux in ${\mathcal{A}}_{1,0}$ (resp. ${\mathcal{A}}_{0,1}$) under the bijection ${\operatorname{split}}$. We can then use Equation  and insert parameters $x$ and $y$ in it (cf. [@Flaj Chapter III]) and this gives the equation: $$\label{eq:de} A(z,u,x,y)=\exp (xA_{1,0}(z,u))\exp(yA_{0,1}(z,u))$$ Now we have $A_{0,1}(n,k)=A_{1,0}(n,k)$ if $n>1$, by the remark following Proposition \[prop:packed\]. Taking into account $n=1$, we obtain on the level of generating functions $A_{0,1}(z,u)=A_{1,0}(z,u)+z(1-u)$; plugging into Equation  gives $$\label{eq:de2} A(z,u,x,y)=\exp\left((x+y)A_{1,0}(z,u)+zy(1-u)\right).$$ Using the bijections $cut_r$ and $cut_c$, we get the following refinements of Corollary \[cor:relations\] $$A(n,k)=A_{0,*}(n+1,k)=A_{*,0}(n+1,k+1)=A_{1,0}(n+2,k+1),$$ for $n,k\geq 0$. This translates into the following equations for the generating functions, where all derivatives here are taken with respect to the variable $z$: $$\begin{aligned} A'_{0,*}(z,u)&=A(z,u);\label{eq10}\\ A_{*,0}(z,u)&=uA_{0,*}(z,u)+(1-u);\label{eq11}\\ A'_{1,0}(z,u)&=uA_{0,*}(z,u).\label{eq12}\end{aligned}$$ Since the number of rows of ${\operatorname{merge}}(T,T')$ is the sum of the number of rows of $T$ and $T'$, we have the equation $A(z,u)=A_{0,*}(z,u)\cdot A_{0,*}(z,u)$ by Equation . Using Equations  and  we get $A'_{0,*}(z,u)=A_{0,*}(z,u)\cdot (uA_{0,*}(z,u)+1-u)$. 
Taking into account the initial condition $A_{0,*}(0,u)=1$, this differential equation is easily solved and gives us $$\label{eq:ao1} A_{0,*}(z,u)=\frac{(1-u)}{\exp(z(u-1))-u}.$$ Now we use Equation , and by immediate integration of  we obtain $$A_{1,0}(z,u)=\ln\left(\frac{1-u}{1-u\exp(z(1-u))}\right).$$ Now it suffices to replace $A_{1,0}(z,u)$ in  and the result follows. $\square~$ From this theorem, we have the following corollary, first proved in [@CN] by a complicated recurrence: Define the polynomial $A_n(x,y)=\sum_{i,j}A_{i,j}(n)x^iy^j$; then we have the following expression: $$A_n(x,y)=\prod_{i=0}^{n-1}(x+y+i)$$ [**Proof:** ]{}It is easily seen that for $u=1$ the expression inside the logarithm in  boils down to $\frac{1}{1-z}$, so $$\begin{aligned} A(z,1,x,y)& =\exp(-(x+y)\log(1-z))=(1-z)^{-(x+y)}\\ &= \sum_{n\geq 0} (x+y)(x+y+1)\cdots (x+y+n-1) \frac{z^n}{n!}.\end{aligned}$$ It suffices to take the coefficient of $\frac{z^n}{n!}$ on both sides to obtain the result. Note that in fact we just need Equation  from the proof of Theorem \[th:enum\], and then the expression of $A(z,1,x,y)$ follows from the fact that both $A_{1,0}(z)$ and $A_{0,1}(z)$ are equal to $-\log(1-z)$ by Proposition \[prop:simpleenum\]. $\square~$ Decorated tableaux ------------------ In their study of the ASEP model in the case $q=1$, Corteel and Williams [@CW2] managed to express the stationary distribution in terms of alternative tableaux with certain weights. In particular, the so-called [*partition function*]{} can be expressed combinatorially. Following [@CW2], let us call [*decorated alternative tableau*]{} an alternative tableau where each arrow can be in two states, marked and unmarked: a usual alternative tableau with $k$ arrows thus gives rise to $2^k$ different decorated alternative tableaux. \[th:nonfree\] The number of decorated alternative tableaux of length $n$ is equal to $2^n n!$.
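As a numerical sanity check of this count: the claim amounts to the generating-function identity $\exp(C(2z))=1/(1-2z)$, where $C(z)=-\log(1-z)$ is the series of Proposition \[prop:simpleenum\] (the proof below derives it via $\widetilde{C}(z)=C(2z)/2$ and $\widetilde{A}(z)=\exp(2\widetilde{C}(z))$). The sketch verifies this identity on initial coefficients with exact rational arithmetic.

```python
from fractions import Fraction
from math import factorial

N = 10
# Ordinary coefficients of C(2z) = -log(1-2z): [z^n] = 2^n / n for n >= 1.
G = [Fraction(0)] + [Fraction(2**n, n) for n in range(1, N + 1)]

# Formal exponential: F = exp(G) satisfies F' = G'*F, giving the standard
# recurrence F[n] = (1/n) * sum_{k=1}^{n} k*G[k]*F[n-k], with F[0] = 1.
F = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    F[n] = sum(k * G[k] * F[n - k] for k in range(1, n + 1)) / n

# exp(C(2z)) should equal 1/(1-2z), whose ordinary coefficients are 2^n,
# hence whose EGF coefficients are 2^n * n! -- the claimed count.
for n in range(N + 1):
    assert F[n] == 2**n
    assert factorial(n) * F[n] == 2**n * factorial(n)
```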
We give a simple proof of this fact based on the recursive structure of tableaux: [**Proof:** ]{}Let $\widetilde{A}(z)$ (respectively $\widetilde{C}(z)$) be the exponential generating function of decorated tableaux (resp. decorated tableaux such that the underlying alternative tableau belongs to ${\mathcal{A}}_{0,1}$). Note that, by transposition, $\widetilde{C}(z)$ can equivalently be defined by replacing ${\mathcal{A}}_{0,1}$ by ${\mathcal{A}}_{1,0}$. Recall that arrows correspond to non-free rows and columns, so their number is an additive parameter of tableaux with respect to the decomposition ${\operatorname{split}}$. Thus from Equation  we get immediately $$\label{eq:nonfree} \widetilde{A}(z)=\exp(2\widetilde{C}(z)).$$ But since tableaux in $A_{0,1}(n)$ (for $n\geq 1$) have exactly $n-1$ non-free rows and columns, each of them gives rise to $2^{n-1}$ decorated tableaux. In terms of generating functions this means that $\widetilde{C}(z)=\frac{C(2z)}{2}$. Now we know that $C(z)=-\log(1-z)$ by Proposition \[prop:simpleenum\], so after substituting in Equation  we get $$\widetilde{A}(z)=\frac{1}{1-2z}$$ and the result follows by taking the coefficient of $z^n/n!$ on both sides. $\square~$ The proof in [@CW2] is more involved, but has the nice feature of being bijective. It turns out that the proof above can be easily “bijectivized”: There is a bijection between $(i)$ decorated tableaux of length $n$, and $(ii)$ tableaux in $A_{0,*}(n)$ in which any subset of the rows and columns may be marked. Since we will give in Section \[sect:perm\] a bijection between tableaux of $A_{0,*}(n)$ and permutations on $n$ elements, this will indeed give a fully bijective proof of Theorem \[th:nonfree\].
[**Proof:** ]{}Let $T$ be a tableau of length $n$, with standard labeling, and let $P,Q$ be the tableaux respectively in $A_{*,0}$ and $A_{0,*}$ obtained by the procedure $divide$ of Corollary \[cor:decompo\], together with their label sets $L_P$ and $L_Q$: we also naturally let rows and columns of $P$ and $Q$ be marked whenever they were originally marked in $T$. Now define a marked labeled tableau $P'$ as follows: the underlying tableau is $tr(P)$ and the labels are given by $L_P$. For the marks, note that transposition exchanges (free) rows and (free) columns. We keep the marks of $P$ in $P'$ for all non-free rows and columns. Now $P$ has no free columns (so that $P'$ has no free rows), and all its free rows are unmarked by the definition of decorated tableaux: the corresponding free columns in $P'$ are defined to be all marked. $P'$ and $Q$ are two marked, labeled tableaux in $A_{0,*}$, therefore $T':={\operatorname{merge}}(P',Q)$ is a marked, labeled tableau in $A_{0,*}(n)$, and we claim that $T\mapsto T'$ is the desired bijection. Indeed, let us describe the inverse bijection. Given a marked tableau $U$ in $A_{0,*}(n)$, let $(j_1,\ldots,j_k)$ be the labels of the marked free columns, and $(l_1,\ldots, l_t)$ the labels of the unmarked free columns. Then define $R$ (respectively $S$) as the tableau $U[X]$ where $X$ is the subset of labels $\cup U(j_i)$ (resp. $\cup U(l_i)$); these are both labeled, marked tableaux in $A_{0,*}$. Now transpose the tableau $R$, keeping all marks except the ones corresponding to the original labels $(j_1,\ldots,j_k)$, which are deleted. Merge the resulting tableau $R'$ with $S$, and let $U'$ be the resulting labeled, marked tableau: it is clear that it has no marks on free rows and columns, and $U'$ is thus a decorated tableau. It is then easy to see that $U\mapsto U'$ is the desired inverse bijection.
$\square~$ Symmetric tableaux ------------------ We call a tableau [*symmetric*]{} if it is fixed by the operation of transposition defined in Section \[sect:first\]. Clearly symmetric tableaux have even length since they have the same number of rows and columns. We then have the following enumeration \[prop:symmtab\] The number of symmetric tableaux of size $2n$ is $2^nn!$. [**Proof:** ]{}Let $T$ be a symmetric tableau of size $2n$ with standard labeling. If $k$ labels a free row, then $2n+1-k$ labels a free column. In fact, even more is true: the tableau $T[2n+1-k]$ labeled by $T(2n+1-k)$ is the transpose of the tableau $T[k]$ labeled by $T(k)$, and the labels satisfy $T(2n+1-k)=\{2n+1-\ell, \ell \in T(k)\}$. By the bijection of Corollary \[cor:decompo\], symmetric tableaux are thus in one-to-one correspondence with pairs of labeled tableaux $(P,Q)\in {\mathcal{A}}_{*,0}(n)\times {\mathcal{A}}_{0,*}(n)$, where $Q=tr(P)$ and the labels satisfy $L_Q=\{2n+1-\ell, \ell \in L_P\}$ as well as $L_Q=\{1,\ldots,2n\}-L_P$. Thus all symmetric tableaux are obtained in the following manner: pick an alternative tableau $U$ in ${\mathcal{A}}_{*,0}(n)$, and for each pair $\{i,2n+1-i\}, i=1\ldots n$ pick one of the two integers; let $X$ be the set of the chosen integers and $Y$ the complement of $X$ in $\{1,\ldots,2n\}$. Then merge $U$ labeled by $X$ and $tr(U)$ labeled by $Y$: this is a symmetric tableau. Since ${\mathcal{A}}_{*,0}(n)$ has $n!$ elements and there are clearly $2^n$ choices for the labels $X$, the result follows. We illustrate the correspondence $T\mapsto P$ on Figure \[fig:symm\]. $\square~$ ![A symmetric tableau and its associated column packed tableau. \[fig:symm\]](Symmtab.pdf){height="3.5cm"} So symmetric tableaux of size $2n$ are equinumerous with *signed permutations* of $\{1,\ldots,n\}$, which are permutations on $\{1,\ldots,n\}$ in which letters can be [*barred*]{}; we actually describe a bijection at the end of Section \[sub:bijperm\].
In the work of Lam and Williams [@LW], [*permutation tableaux for the type $B_n$*]{} are defined, also enumerated by $2^n n!$. It is actually possible to show that their tableaux are in bijection with symmetric alternative tableaux, by suitably adapting the bijection $\alpha$ from Theorem \[th:permalt\] to the symmetric case. We note that these permutation tableaux for the type $B_n$ were defined as a certain subclass of diagrams that appeared naturally in the context of Coxeter groups of type $B_n$. It is quite surprising that when one starts with alternative tableaux (originally related to the ASEP), and then considers those that are symmetric, one obtains configurations that are in simple bijection with permutation tableaux of type $B_n$. This raises the following question: given a finite Coxeter system $(W,S)$, is there a natural way to associate to each of the elements of $W$ a certain generalized alternative tableau? Alternative trees and forests {#sect:alttrees} ============================= In this section we will give bijections from alternative tableaux to various families of plane trees and forests, bijections which are based on the decompositions of Section \[sect:struct\]. All trees considered are rooted and *plane*, by which we mean as usual that the children of every vertex are linearly ordered. Furthermore, we will consider *labeled* trees and forests, where the labels will be pairwise distinct integers attached to the vertices; these integers form the *label set* of the tree or forest. Given a vertex $v$ in a labeled tree, we say that $v$ is [*minimal*]{} (respectively [*maximal*]{}) if its label is smaller (resp. larger) than the labels of all its descendants.
Plane alternative trees and forests {#sub:pat} ----------------------------------- A plane alternative tree is a labeled rooted plane tree with black and white vertices, such that: - each white vertex is minimal, its children are black and have decreasing labels from left to right; - each black vertex is maximal, its children are white and have increasing labels from left to right. A *plane alternative forest* is a set of plane alternative trees. We represent a plane alternative tree on Figure \[fig:planetree\]. ![A plane alternative tree. \[fig:planetree\]](planetree.pdf){height="4cm"} In this subsection we show that these trees are the natural objects encoding the recursive structure described in Section \[sect:struct\]. First we define the function $\operatorname{Tree}$ which goes from labeled *packed* tableaux to alternative trees. Let $T\in {\mathcal{A}}_{1,0}$ be labeled; then $T':=cut_r(T)$ is an element of ${\mathcal{A}}_{0,m}$ for a certain $m$, by Corollary \[cor:relations\]. Let $T'_1,\ldots,T'_m$ be the labeled tableaux of ${\mathcal{A}}_{0,1}$ given by ${\operatorname{split}}(T')$ (cf. Theorem \[th:decomp\]), and $\ell$ be the label of the top row of $T$. Symmetrically, if $T\in {\mathcal{A}}_{0,1}$, then we let the $T'_i$ be the tableaux of ${\mathcal{A}}_{1,0}$ obtained by applying in succession $cut_c$ and $break$, and $\ell$ be the label of the leftmost column. We define $\operatorname{Tree}(T)$ recursively to be the tree whose root is white (respectively black) and labeled by $\ell$, and whose subtrees attached to the root are $\{\operatorname{Tree}(T'_i)\}_{i=1\ldots m}$, arranged from left to right in decreasing (respectively increasing) order of the labels of their roots if $T\in {\mathcal{A}}_{1,0}$ (resp. $\in{\mathcal{A}}_{0,1}$).
If $T\in {\mathcal{A}}$ is any labeled alternative tableau, and $\{T_i\}$ are the labeled packed tableaux given by ${\operatorname{split}}(T)$ from Theorem \[th:decomp\], we define $\operatorname{Forest}(T)$ as the labeled forest consisting of the trees $\operatorname{Tree}(T_i)$. The forest $\operatorname{Forest}(T_0)$ for the alternative tableau of Figure \[fig:permtabaltab\] is represented on Figure \[fig:arbre\]. ![A plane alternative forest. \[fig:arbre\]](Arbre.pdf){height="3cm"} $\operatorname{Tree}$ is a bijection from packed labeled alternative tableaux of length $n$ to plane alternative trees with $n$ vertices (with the same label set). $\operatorname{Forest}$ is a bijection from labeled alternative tableaux of length $n$ to plane alternative forests with $n$ vertices (with the same label set). [**Proof:** ]{}Note first that the claim about $\operatorname{Forest}$ is an immediate corollary of the result for $\operatorname{Tree}$, thanks to Theorem \[th:decomp\]. The proof is mostly straightforward, and consists simply of noticing that the recursive structure of tableaux given by Theorem \[th:decomp\] and Lemma \[prop:cut\] is naturally encoded by alternative trees. The only point that needs to be checked is the minimality of white vertices (the maximality of black vertices being clearly proved symmetrically): when a white vertex is added in a tree, its label $\ell$ is the first row of a tableau $T$, and the labels of its descendants are the labels of the other rows and columns of $T$, which by definition of the labeling of tableaux are indeed larger than $\ell$. 
So $\operatorname{Tree}$ is well defined, and the inverse (recursive) construction is clear: given a tree $t$ with root labeled $\ell$, construct (recursively) the labeled packed tableau $\operatorname{Tree}^{-1}(t')$ for each root subtree $t'$, then merge all these tableaux to get a tableau $T$, and finish by applying $block_c$ or $block_r$ (according to the root color) to $T$, labeling the new row or column by $\ell$: the result is $\operatorname{Tree}^{-1}(t)$. $\square~$ Arc diagrams ------------ We now introduce [*alternative arc diagrams*]{}, which turn out to be a nice representation of plane alternative forests. An *arc diagram* consists of points aligned horizontally, labeled increasingly from left to right by integers, together with arcs $(i,j), i<j$ where $i,j$ are two of the labels. It is thus a particular representation of a labeled (simple, loopless) graph where the vertices are ordered according to their value. Given an arc diagram, we say that an arc $(i,j)$ is topmost on its right side if there is no arc $(k,j)$ with $k<i$, and that it is topmost on its left side if there is no arc $(i,\ell)$ with $\ell>j$. \[def:arcdiag\] Let $L$ be a label set of size at least $2$, with minimal and maximal elements $m$ and $M$ respectively. An arc diagram with points labeled by $L$ is called [*alternative*]{} if the following three conditions are verified: 1. at each vertex $i$, there are no two arcs $(k,i)$ and $(i,j)$ for some integers $k<i<j$; 2. as an abstract graph, it is a tree; 3. each arc $(i,j)\neq (m,M)$ is topmost on exactly one of its sides. ![Alternative arc diagram. \[fig:altertree\]](Altertree.pdf){height="3cm"} An example is shown on Figure \[fig:altertree\]. Every arc has been oriented from its topmost side for clarity; moreover, a vertex $i$ is colored white when all arcs adjacent to it are of the form $(i,j)$ with $j>i$; it is colored black otherwise.
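The three conditions can be tested mechanically. The sketch below uses our own encoding of a diagram (a list of point labels and a set of arcs $(i,j)$ with $i<j$), and implements "topmost" under our reading of the definition above, namely as the outermost arc at each endpoint; the two small diagrams at the end are made-up examples, not the one of the figure.

```python
# Check whether an arc diagram is alternative: condition (1) no vertex has
# both a left-going and a right-going arc; (2) the abstract graph is a tree;
# (3) every arc except (m, M) is topmost on exactly one of its sides.

def is_alternative(points, arcs):
    m, M = min(points), max(points)
    # (1) at no vertex do arcs leave in both directions
    for v in points:
        goes_left = any(j == v for (i, j) in arcs)
        goes_right = any(i == v for (i, j) in arcs)
        if goes_left and goes_right:
            return False
    # (2) tree: |E| = |V| - 1 and acyclic (checked with a union-find)
    if len(arcs) != len(points) - 1:
        return False
    parent = {v: v for v in points}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for (i, j) in arcs:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False          # a cycle
        parent[ri] = rj
    # (3) exactly one topmost side per arc (m, M) excepted
    for (i, j) in arcs:
        if (i, j) == (m, M):
            continue
        top_right = not any(k < i for (k, jj) in arcs if jj == j)
        top_left = not any(l > j for (ii, l) in arcs if ii == i)
        if top_right == top_left:
            return False
    return True

pts = [0, 1, 2, 3]
assert is_alternative(pts, {(0, 3), (0, 2), (1, 2)})       # a valid toy diagram
assert not is_alternative(pts, {(0, 1), (1, 2), (2, 3)})   # violates condition (1) at point 1
```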
Let $F$ be a plane alternative forest labeled on ${[\![}1,n{]\!]}$, and consider $n+2$ points aligned horizontally with labels $0,1,\ldots,n+1$ from left to right. Add an arc between points $i$ and $j$ for each edge $(i,j)$ of the forest, an arc $(0,b_\ell)$ for each black root $b_\ell$ and an arc $(w_k,n+1)$ for each white root $w_k$. Finally put an arc between $0$ and $n+1$, and let the resulting arc diagram be $\phi(F)$. For instance, the diagram of Figure \[fig:altertree\] corresponds through $\phi$ to the forest of Figure \[fig:arbre\]. The procedure $\phi$ is a bijection from alternative forests to alternative arc diagrams. [**Proof:** ]{}We first check the three conditions in Definition \[def:arcdiag\], then show the bijectivity. So let us be given a set of arcs $\phi(F)$ on $n+2$ points coming from a forest $F$, and show that it is an alternative arc diagram. Condition $(1)$ is trivial for points $0$ and $n+1$; every other point $i$ is the label of a vertex in $F$. If this vertex is black, then all its descendants have smaller labels, and so does its father; therefore in the diagram, all arcs go to the left of $i$. A similar proof shows that all white vertices become points from which all arcs go right. Condition $(2)$ is clear, because $F$ is a forest by hypothesis, and the arcs $(0,b_\ell)$, $(w_k,n+1)$ and $(0,n+1)$ make it into a tree. We finally want to check condition $(3)$; let an arc $e=(i,j)\neq (0,n+1)$, $i<j$, be given. If $i=0$ or $j=n+1$ the result is immediate; now suppose $e$ is topmost in $i$; we will show that it is not topmost in $j$, and by symmetry we will have that if $e$ is topmost in $j$ then it is not topmost in $i$, which will conclude the proof. But $e$ topmost in $i$ means that $j$ is the father of $i$ in $F$; by minimality, the father of $j$ will necessarily be less than $i$, so that $e$ is not topmost in $j$. And if $j$ has no father in $F$, then it is a black root, thus there is an arc $(0,j)$ so that $e$ is not topmost in $j$ in this case either.
Consider now the following construction: given an arc diagram, color in white (respectively black) all points ($\neq 0,n+1$) at which arcs go to the right (resp. to the left). Then destroy all arcs $(0,j)$ and $(i,n+1)$, as well as the arc $(0,n+1)$: the corresponding vertices $j$ and $i$ are then roots of certain trees, which form a forest. It is immediate that this is precisely the inverse of $\phi$. $\square~$ Crossings in alternative arc diagrams. -------------------------------------- There is a very elementary way to describe the composition of $\operatorname{Forest}$ with the bijection $\phi$; let us call $Arc$ this bijection $\phi\circ Forest$ from labeled tableaux to diagrams. Given a tableau $T$ of length $n$ with standard labeling, consider $n+2$ points labeled from $0$ to $n+1$, and draw an arc $(i,j)$ for each cell $(i,j)$ filled with an arrow (up or left). Draw also an arc $(0,j)$ for each free column $j$, an arc $(i,n+1)$ for each free row $i$, and finally an arc $(0,n+1)$; the result is the alternative arc diagram $Arc(T)$. We then have the following result: The construction $Arc$ is a bijection from alternative tableaux of length $n$ to alternative arc diagrams on the labels $\{0,1,\ldots,n+1\}$, and coincides with the composition $\phi\circ Forest$. In an alternative arc diagram, we call *crossing* a pair of arcs $(i',j),(i,j')$ with $i'<i<j<j'$. Such a crossing is an *out-crossing* if these arcs are topmost in $j$ and $i$ respectively. On Figure \[fig:altertree\], crossings correspond to the intersection of two arcs, and out-crossings to the subset of those for which arrows are directed “outwards”, i.e. towards $i'$ and $j'$. In this example, out-crossings occur for $(i,j)$ equal to $(4,5),(4,12),(7,8)$ and $(11,12)$. We now relate out-crossings to the free cells of an alternative tableau, as defined after Definition \[def:altab\], and whose importance is underlined by Proposition \[prop:connection\].
These free cells are also of interest in connection with permutations; see [@CN; @SW] for instance. \[prop:crossings\] Let $T$ be an alternative tableau with standard labeling. A cell $(i,j)$ in $T$ is free if and only if there exists $i',j'$ such that $(i,j'),(i',j)$ is an out-crossing of $Arc(T)$; $i'$ and $j'$ are in this case unique. [**Proof:** ]{}By definition, a cell $(i,j)$ is free if the following two conditions are satisfied: - Row $i$ is free, or there is a left arrow in a cell $(i,j')$ with $j<j'$; - Column $j$ is free, or there is an up arrow in a cell $(i',j)$ with $i'<i$. Note that the indices $j'$ and $i'$ are necessarily unique if they exist. The first condition corresponds in the arc diagram to a unique arc $(i,j')$ topmost in $i$ with $j'>j$, while the second condition corresponds to a unique arc $(i',j)$ topmost in $j$ with $i'<i$, which completes the proof. $\square~$ Note that free cells are not easily visualized when looking at plane alternative forests. As a corollary, we have the following well-known enumeration, of which we give here a new simple bijective proof. [@CN; @CW; @V] Tableaux of size $n$ with no free cells are counted by the Catalan number $C_{n+1}=\frac{1}{n+2}\binom{2n+2}{n+1}$. [**Proof:** ]{}By Theorem \[prop:crossings\], such tableaux are in bijection with alternative arc diagrams on $n+2$ points with no out-crossing. In fact, such diagrams have no crossing at all: suppose there were such a crossing $(i',j),(i,j')$ in $Arc(T)$ with $i'<i<j<j'$. Then in the tableau $T$ there are arrows in both $(i',j)$ and $(i,j')$; but this implies that the cell $(i,j)$ is free, which is absurd because this would mean that there is an out-crossing in $Arc(T)$.
So we have to enumerate alternative arc diagrams with no crossings, and in this case Condition $(3)$ in Definition \[def:arcdiag\] is easily seen to be superfluous; the arc diagrams $Arc(T)$ for $T$ of size $n$ with no free cells are then identified with the well-known [*noncrossing alternating trees*]{} on $n+2$ points. These objects are in a simple bijection with binary trees with $n+2$ leaves, and thus are counted by the Catalan number $C_{n+1}$: this is done in [@Stan2], exercise 6.19 (p) for instance. $\square~$ Binary alternative trees ------------------------ We describe more briefly the trees that appear when one encodes the recursive structure of alternative tableaux reflected by Corollary \[cor:decompo\]; as can be expected, binary trees are obtained. A *binary alternative tree* of size $n$ is a labeled binary tree with $n$ vertices such that each left child is maximal, while each right child is minimal; the root is either minimal or maximal. We will write $\mathcal{B}_{min}$ (respectively $\mathcal{B}_{max}$) for the class of binary alternative trees where the root is minimal (resp. maximal). We remark that these trees were already defined by Burstein [@Bu] in the context of permutation tableaux. They are a variation of the *binary increasing trees*, in which every vertex is minimal; here we distinguish left and right children. ![Binary alternative trees. \[fig:bintrees\]](Bintrees.pdf){height="3cm"} Let $n\geq 0$, and $T$ be a tableau in ${\mathcal{A}}_{0,*}(n)$ labeled by $L$. If $T$ is the empty tableau, set $Bin_{min}(T)=Bin_{max}(T)=\emptyset$. Otherwise, define $m\in L$ to be the label of the first row of $T$, let $T'$ be the labeled tableau $cut_r(T)$, and finally let $P$ and $Q$ be the labeled tableaux given by $(P,Q):=divide(T')\in{\mathcal{A}}_{*,0}\times {\mathcal{A}}_{0,*}$ using the bijection of Corollary \[cor:decompo\].
By induction, we define $Bin_{min}(T)$ as the tree in $\mathcal{B}_{min}$ with a root labeled $m$ which has right subtree equal to $Bin_{min}(P)$ and left subtree equal to $Bin_{max}(Q)$. $Bin_{max}(T)$ is defined similarly for tableaux $T$ in ${\mathcal{A}}_{*,0}(n)$, except that $m$ is the label of the first column of $T$ and $T'=cut_c(T)$. $Bin_{min}$ is a bijection from ${\mathcal{A}}_{0,*}$ to $\mathcal{B}_{min}$, and $Bin_{max}$ is a bijection from ${\mathcal{A}}_{*,0}$ to $\mathcal{B}_{max}$. Now let $T$ be any labeled alternative tableau, and set $$CoupleBin(T):= (Bin_{max}(P),Bin_{min}(Q)),$$ in which $(P,Q)$ are the labeled tableaux given by $divide(T)$. $CoupleBin$ is a bijection from alternative tableaux labeled by a set $L$ to pairs of trees $(b_1,b_2)\in \mathcal{B}_{max}\times \mathcal{B}_{min}$ with respective labels $L_1$ and $L_2$ verifying $L_1\sqcup L_2=L$. Alternative tableaux and permutations {#sect:perm} ===================================== In this section we define a bijection from alternative tableaux to permutations, which relies on the representation of tableaux as trees from Section \[sub:pat\]. We then show that this bijection is equivalent to some other ones that already appeared in the literature. Some definitions ---------------- We define a permutation as a word on the alphabet of integers with *no repeated letters*. For a permutation $w=a_1a_2\cdots a_k$, we define the *support* of $w$ as $supp(w):=\{a_1,\ldots,a_k\}$, i.e. the set of positive integers that appear in it; by definition of a permutation this set has cardinality $k$. A RL-maximum (respectively a RL-minimum) in a permutation is a letter that is greater (resp. smaller) than all the letters to its right. RL stands for “right to left”, a RL-minimum being a letter that is smaller than all those seen before when one reads the word from right to left. Let $w$ be a permutation, and consider its factorization $w_1mw_2$, where $m$ is the smallest element of $supp(w)$. 
A *shifted RL-maximum* of $w$ is a $RL$-maximum of the permutation $w_1$. A *descent* in a permutation $a_1\cdots a_k$ is a letter $a_i$ greater than $a_{i+1}$, and an *ascent* is a letter smaller than the next one; by convention the last letter of a word is considered to be an ascent. Bijection with permutations {#sub:bijperm} --------------------------- We construct here a bijection $\Psi$ from plane alternative forests to permutations; composition with the function $Forest$ will give us a bijection $\Phi_N$ from tableaux of size $n$ to permutations of $\{0,\ldots,n\}$. Let $T$ be a plane alternative tree; we define a permutation $\psi(T)$ recursively. If $T$ is reduced to one vertex labeled $m$, then we set $\psi(T)=m$. Otherwise, let $T_1,\ldots,T_k$ be the subtrees attached to the root (from left to right), and $m$ be the label of the root. Then the permutation attached to $T$ is the word $\psi(T):=\psi(T_1)\ldots \psi(T_k)m$. In other words, we do a [*postorder traversal*]{} of the tree. $\psi$ is a bijection between $(i)$ trees with a black root (respectively white root) labeled on $L$ and $(ii)$ permutations with support $L$ ending with the letter $\max(L)$ (resp. $\min(L)$). [**Proof:** ]{}The key observation is the following: if $T$ is a tree with subtrees $T_1,\ldots,T_k$ as above, then in the permutation $w=\psi(T_1)\ldots \psi(T_k)$, the last letters of the words $\psi(T_i)$ are exactly the RL-minima of $w$ (respectively the RL-maxima of $w$) if $T$ has a black root (resp. a white root). This is proved immediately by induction, since it is a translation of the fact that black vertices are maximal, white vertices are minimal, and that the subtrees of a black vertex (respectively a white vertex) are ordered in the increasing order of their root labels (respectively the decreasing order of these labels). From this remark one can immediately define an inverse to $\psi$. 
$\square~$ Note that from this Lemma and the bijection $Tree$, we have that $A_{0,1}(n)=(n-1)!$ immediately, as proved in Proposition \[prop:simpleenum\]. Now let $F$ be an alternative forest of size $n$ with label set $L$, composed of the trees $T_1,\ldots,T_i$ with white roots, ordered in increasing order of their roots, and $T'_1,\ldots,T'_j$ with black roots in decreasing order of their roots. Let us also fix $x<\min (L)$. Then the permutation $\Psi(F)$ is defined as the concatenation $$\Psi(F):=\psi(T'_1)\cdots \psi(T'_j)\cdot x\cdot \psi(T_1)\cdots \psi(T_i).$$ \[prop:fortoperm\] Let $L$ be a label set, and $x<\min (L)$. Then $\Psi$ is a bijection between plane alternative forests labeled by $L$ and permutations $w$ such that $supp(w)=L\cup\{x\}$. If $\sigma=\Psi(F)$ and $i\in L$, then $i$ labels a white root (respectively a black root, a white vertex, a black vertex) of $F$ if and only if $i$ is a RL-minimum in $\sigma$ (resp. a shifted RL-maximum, an ascent, a descent). [**Proof:** ]{}The proof that $\Psi$ is bijective is essentially the same as the one for $\psi$, and the rest follows immediately from its definition. $\square~$ Now we can define our bijection $\Phi_N:=\Psi\circ Forest$ from labeled alternative tableaux to permutations. Note that it requires fixing not only a label set $L$, but also an integer $x$ smaller than $\min(L)$. When $T\in {\mathcal{A}}$ has the standard labeling, we will naturally take $x=0$, and with this convention we have the following theorem: The bijection $\Phi_N$ is a one-to-one correspondence between alternative tableaux of size $n$ and permutations of $\{0,\ldots,n\}$. Furthermore, if $\sigma=\Phi_N(T)$ and $i\in \{1,\ldots,n\}$, then $i$ labels a row (respectively a column, a free row, a free column) in the standard labeling of $T$ if and only if $i$ is an ascent (resp. a descent, a RL-minimum, a shifted RL-maximum) of $\sigma$. 
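The recursive map $\psi$ used in this section is just a postorder traversal. A minimal sketch in Python, using a hypothetical `(label, subtrees)` nested-tuple encoding of plane trees (the encoding is ours, not from the text):

```python
def psi(tree):
    """Postorder word of a plane tree: concatenate the words of the
    subtrees from left to right, then append the root label."""
    label, subtrees = tree
    word = []
    for t in subtrees:
        word.extend(psi(t))
    word.append(label)
    return word

# A tree with black root 7 and subtrees rooted at 2 and 5 (increasing
# root labels, as required for a black root); the vertex 5 has one child 6.
example = (7, [(2, []), (5, [(6, [])])])
# psi(example) == [2, 6, 5, 7]: the word ends with the maximal label 7,
# and the last letters 2 and 5 of the subtree words are exactly the
# RL-minima of the prefix [2, 6, 5], as in the key observation above.
```
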
Note that Theorem \[th:enum\] thus gives a refined enumeration of permutations of $\{0,\ldots,n\}$ with respect to ascents, RL-minima and shifted RL-maxima. In fact the generating function obtained is the generating function of the Eulerian polynomials (see [@Comtet p.51]). It is easy to see what *symmetric tableaux* become via the bijection $\Phi_N$. Using the decomposition described in the proof of Proposition \[prop:symmtab\], it is equivalent to perform the bijection $\Phi_N$ on tableaux in ${\mathcal{A}}_{*,0}(n)$ labeled by sets $L\subseteq \{1,\ldots,2n\}$ such that $L$ contains exactly one element in each pair $\{i,2n+1-i\}$ for $i=1,\ldots,n$. The permutations obtained by $\Phi_N$ are exactly the words of length $n$ labeled by such sets $L$, preceded by a $0$. Now if we delete this $0$ and replace each entry $2n+1-i$ ($i\leq n$) in this word by a barred letter $\bar{i}$, then we get a bijection with permutations where letters may be barred: \[prop:symmtabtoperm\] The bijection $\Phi_N$ induces a bijection between symmetric alternative tableaux of size $2n$ and signed permutations of size $n$, i.e. permutations on $\{1,\ldots,n\}$ such that each letter may be barred. This gives a bijective proof of the fact that symmetric tableaux of size $2n$ are counted by $2^nn!$. A ubiquitous bijection ---------------------- In this section we point out that the bijection $\Phi_N$ is identical to two bijections that have appeared previously in the literature. ### Corteel and Nadeau’s [*bijection I*]{} In the work of the author with Sylvie Corteel [@CN], two bijections were defined between permutation tableaux and permutations; we show that the first of these bijections is identical to the bijection $\Phi_N$. We recall this bijection $\Phi_{C}$; starting with a tableau $T$, we will define it algorithmically, by successively inserting row and column labels in a word until we reach the desired permutation. 
Initialize the word to the list of the labels of free rows in increasing order, preceded by $0$. Considering the columns of $T$ successively from left to right, perform the following with $j$ the current column label: if the column has no up arrow, insert $j$ to the left of $0$, while if it has an up arrow in position $(i,j)$ then insert $j$ to the left of $i$. In both cases, if $i_1,\ldots ,i_k$ are the labels of the rows containing a left arrow in column $j$, insert $i_1,\ldots ,i_k$ in increasing order to the left of $j$. When the rightmost column has been processed, the word obtained is the desired permutation $\Phi_C(T)$. [**Example:**]{} Let us apply this to the tableau $T_0$ on the left of Figure \[fig:permtabaltab\]; the free rows are labeled by $4,11$ and $13$, so we obtain initially $(0,4,11,13)$. Column number $12$ has no up arrow and a left arrow in row $10$: we get $(10,12,0,4,11,13)$. Column number $9$ has an up arrow in row $4$ and a left arrow in rows $6$ and $7$: we thus obtain $(10,12,0,6,7,9,4,11,13)$. For the remaining columns $8,5,2,1$, we obtain successively $$\begin{aligned} (10,12,0,8,6,7,9,4,11,13),(10,12,3,5,0,8,6,7,9,4,11,13),\\ (10,12,3,5,2, 0,8,6,7,9,4,11,13)\end{aligned}$$ and finally $$\Phi_{C}(T_0)=(10,12,3,5,2,1,0,8,6,7,9,4,11,13).$$ This is the same result as applying $\Phi_N$, and this is indeed no coincidence: The bijection $\Phi_{C}$ coincides with the main bijection $\Phi_N$. [**Proof:** ]{}We will prove that the plane alternative forest corresponding to the permutation $\Phi_{C}(T)$ coincides with the plane alternative forest attached to an alternative tableau $T$, i.e. that we have $\operatorname{Forest}=\Psi^{-1}\circ \Phi_C $. The reasoning goes by induction on the number of columns of $T$. Suppose first that $T$ has no column, and let $i_1<\ldots< i_k$ be the labels of its (necessarily free) rows. 
Then $\Phi_{C}(T)$ is simply the permutation $0,i_1,\ldots,i_k$, and the forest attached to this permutation is nothing else than the completely disconnected graph with $k$ white vertices labeled by $i_1,\ldots,i_k$: this is indeed the forest $\operatorname{Forest}(T)$. Now suppose that $T$ possesses $m>0$ columns, let $j$ be the label of its rightmost column, and define $i_1<\ldots<i_k$ as the row labels of left arrows in column $j$. Let $T_1$ be the tableau obtained by deleting this column (we keep all the labels and arrows of all other rows and columns); by induction, we know that $\sigma_1:=\Phi_{C}(T_1)$ corresponds to the forest $F_1:=\operatorname{Forest}(T_1)$. Let $\sigma:=\Phi_C(T)$ and $F:=\Psi^{-1}(\sigma)$. We distinguish two cases: 1. Column $j$ of $T$ has no up arrow. Then the permutation $\sigma$ is obtained by inserting $i_1\cdots i_kj$ to the left of $0$ in $\sigma_1$. The corresponding forest $F$ is obtained by adding a new black root to $F_1$ labeled $j$, and attaching to it the white vertices $i_1,\ldots ,i_k$ (which were previously isolated). 2. Column $j$ of $T$ has an up arrow in row $i$. Then the permutation $\Phi_{C}(T)$ is obtained by inserting $i_1\cdots i_kj$ to the left of $i$ in the permutation $\sigma_1$. The corresponding forest $F$ is obtained by adding a new black vertex to $F_1$ labeled $j$, making it the leftmost child of $i$, and attaching to it the white vertices $i_1,\ldots ,i_k$. In both cases, the forest $F$ obtained is easily seen to be precisely $\operatorname{Forest}(T)$. This proves by induction that the two functions $\operatorname{Forest}$ and $\Psi^{-1}\circ \Phi_C $ coincide, and thus we get indeed $\Phi_N=\Psi\circ \operatorname{Forest}=\Phi_C$. $\square~$ ### Other bijections After introducing the concept of alternative tableaux in [@VienCamb], Viennot defines a bijection $\Phi_V$ with permutations, which he presents under different equivalent forms. 
One of these consists in starting from a permutation, and the shape of a tableau (computed according to the ascents and descents of the permutation), and proceeds to fill the tableau little by little. Under this form, it is possible to show by induction that it is equivalent to the bijection $\Phi_{N}$, in a similar way to what was done for $\Phi_C$ above. At the end of Burstein’s paper [@Bu], a bijection is also introduced. We will not go into detail, but it is possible to see that his bijection is essentially equivalent to the other ones encountered, up to some elementary transformations of permutation tableaux and of permutations. Finally, there are two other bijections in the literature: Corteel and Nadeau’s *bijection II*, which is at the core of the paper [@CN], and Steingrímsson and Williams’s original bijection [@SW], which is known to be equivalent to one in Postnikov’s preprint [@Postnikov]. It would be interesting to study how these bijections are related to $\Phi_N$. [99]{} F. Bergeron, G. Labelle and P. Leroux, Combinatorial Species and Tree-Like Structures, Cambridge University Press, Cambridge–New York, 1998. A. Burstein, On some properties of permutation tableaux, Annals of Combinatorics, Vol. 11, 2007, Issue 3-4, 355–368. L. Comtet, Advanced Combinatorics, Reidel, Dordrecht, 1974. S. Corteel and P. Nadeau, Bijections for Permutation Tableaux, European Journal of Combinatorics, Vol. 30, Issue 1, 2009, 295–310. S. Corteel and L. Williams, Tableaux combinatorics for the asymmetric exclusion process, Adv. in Appl. Math, Vol. 37, Issue 3, 2007, 293–310. S. Corteel and L. Williams, A Markov chain on permutations which projects to the PASEP, [*Int Math Res Notices*]{}, to appear, 27 pages (2007). S. Corteel and L. Williams, Tableaux combinatorics for the asymmetric exclusion process II, Preprint, 2008, arXiv:math.CO/0810.2916v1. E. Duchi and G. Schaeffer, A combinatorial approach to jumping particles. J. Combin. Theory Ser. A 110 (2005), no. 1, 1–29. P. 
Flajolet and R. Sedgewick, Analytic combinatorics, web edition, 809+xii pages (available from the authors’ web sites; to be published in 2008 by Cambridge University Press). B. Derrida, M. Evans, V. Hakim, V. Pasquier, Exact solution of a 1D asymmetric exclusion model using a matrix formulation, J. Phys. A: Math. Gen. 26 (1993), 1493–1517. T. Lam and L. Williams, Total positivity for cominuscule Grassmannians, New York Journal of Mathematics, Vol. 14, 2008, 53–99. A. Postnikov, Total positivity, Grassmannians, and networks. Preprint, 2006, arXiv:math.CO/0609764. R. Stanley, “Enumerative Combinatorics,” vol. 2, Cambridge University Press, New York/Cambridge, 1999. E. Steingrímsson and L. Williams, Permutation tableaux and permutation patterns, Journal of Combinatorial Theory, Series A, Vol. 114, Issue 2, 2007, 211–234. M. Uchiyama, T. Sasamoto and M. Wadati, Asymmetric Simple Exclusion Process with Open Boundaries and Askey-Wilson Polynomials, J. Phys. A: Math. Gen. 37 (2004), 4985–5002. X. Viennot, Catalan tableaux, permutation tableaux and the asymmetric exclusion process, FPSAC 07, Tianjin, China. X. Viennot, Alternative tableaux, permutations and partially asymmetric exclusion process, Isaac Newton Institute, April 2007.
--- abstract: 'In this letter, we report our experimental results on phase-sensitive amplification (PSA) in a non-degenerate signal-idler configuration using ultra-narrow coherent population oscillations in metastable helium at room temperature. We achieved a high PSA gain of nearly 7 with a bandwidth of 200 kHz by using the system at resonance in a single-pass scheme. Further, the measured minimum gain is close to the ideal value, showing that we have a pure PSA. This is also confirmed by our phase-to-phase transfer curve measurements, illustrating that we have a nearly perfect squeezer, which is interesting for a variety of applications.' author: - 'J. Lugani$^1$' - 'C. Banerjee$^1$' - 'M-A. Maynard$^1$' - 'P. Neveu$^1$' - 'W. Xie$^{1,2}$' - 'R. Ghosh$^3$' - 'F. Bretenaker$^1$' - 'F. Goldfarb$^1$' title: 'Phase-sensitive amplification via coherent population oscillations in metastable helium at room temperature' --- Over the last few years, phase-sensitive amplification (PSA) has been a subject of wide research in a variety of fields due to its unique noise properties. It enables amplification of a weak signal without adding any extra noise, i.e. without degrading its signal-to-noise ratio [@Caves], and thus finds applications in metrology [@metrolgy], the imaging industry [@imaging] and telecommunications [@PA2011]. Further, this noiseless amplification is associated with the generation of squeezed states of light, which makes this parametric process very interesting for quantum optics and quantum information experiments [@Agarwal]. PSA has been successfully achieved in nonlinear crystals and waveguides [@levenson] through three-wave mixing ($\chi^{(2)}$ process), and in fibers [@TongIEEE2012] and alkali vapors such as rubidium [@PLprl; @PLOE1] through four-wave mixing (FWM) ($\chi^{(3)}$ process). 
Although a very large quantum noise reduction has been achieved using crystals [@Hemmer], it is difficult to directly couple the down-converted photons with atomic systems because of their frequency and bandwidth, while it would be useful for many applications in quantum information such as realization of atomic memories, processing of atomic qubits through quantum light, entanglement swapping, etc. Realizing PSA in the same atomic system is thus interesting as the generated non-classical light is automatically within the bandwidth of the atomic transition, spectrally narrow, and can moreover be spatially multimode [@PLOE1; @JingPRL; @Novikova16]. Motivated by these works, we report our results on PSA in metastable helium (He$^{4}$) at room temperature, through coherent population oscillation (CPO) assisted FWM processes in a $\Lambda$ scheme at resonance. In other atomic systems (e.g. alkali vapors), the FWM process relies on the coherence between the two ground states of the $\Lambda$ system [@PLprl; @PLOE1; @JingPRL]. In comparison to this, we use a particular kind of CPO resonance, which involves the dynamics of the population difference of the atoms in the two ground states [@thomas12] to enhance the non-linearity of the system. It enables us to achieve comparable PSA gains with an atomic density of approximately 2 $\times$ 10$^{11}$ cm$^{-3}$, which is at least two orders of magnitude lower than in rubidium [@PLOL1]. Further, $^4$He has other favorable properties such as the absence of nuclear spin, resulting in a simplified energy level structure without any hyperfine levels. This has an important consequence as it eliminates unwanted FWM processes, which usually arise due to transitions involving nearby hyperfine levels and add extra noise and degrade squeezing [@PLOE1]. Thus, using this simple system, we expect to achieve high PSA gains, close to resonance, within the Doppler width, and to implement a perfect squeezer. 
![(a) (Color online) Schematics of the $\Lambda$ structure in helium, both arms are excited by all the beams: pump ($\Omega_p$, red), signal ($\Omega_s$, blue) and idler ($\Omega_i$, green) (b) Different FWM processes possible in the system.[]{data-label="fw"}](scheme.eps){width="12cm"} The experiment is based on the $2^3\mathrm{S}_1 \rightarrow 2^3\mathrm{P}_1$ (D1) transition of He$^{4}$, and the $\Lambda$ system is constituted by two transitions corresponding to $\sigma^+$ and $\sigma^-$ polarizations (see Fig. \[fw\]a). We excite this system with linearly polarized pump, signal, and idler beams with co-polarized signal and idler beams orthogonal to the pump beam. A FWM process takes place when two pump photons are annihilated and a signal and an idler photon are generated, or vice versa: due to the chosen polarizations of the beams, there are four FWM channels possible in this $\Lambda$ system, each conserving energy and momentum (both linear and angular), as shown in fig. \[fw\]b. In process (i), two $\sigma^+$ pump (p) photons are absorbed in the left arm and a $\sigma^+$ signal ($s$) and a $\sigma^+$ idler ($i$) photon are emitted in the same arm. Likewise, in process (ii) all photons are $\sigma^-$ and are on the right arm. These two processes are based on CPO in the coupled open system [@thomas12]. In process (iii) (and (iv)) two pump photons, a $\sigma^+$ and a $\sigma^-$, are absorbed and a $\sigma^+$ ($\sigma^-$) signal and a $\sigma^-$ ($\sigma^+$) idler photon are emitted. Processes (iii) and (iv) involve the excitation of Raman coherence between the two ground states. For the present work, we have not performed any quantum measurements, and thus the system can be modeled using a classical approach. But in view of future applications in the quantum domain, we adopt here a quantum-mechanical approach, and the qualitative behavior of the PSA can be explained using a similar formalism as in [@levenson; @Jingth]. 
Considering the pump as a strong classical field and the signal and idler beams as quantum mechanical operators, the FWM interaction Hamiltonian can be written as [@Jingth] $$\hat{H}=i \hbar \zeta e^{i 2\phi_{p_{in}}} \hat{a_s} \hat{a_i}+ h.c., \label{hami}$$ where $\hat{a_s} (\hat{a_i})$ is the annihilation operator corresponding to the signal (idler), $\zeta$ is the strength of the FWM process, proportional to the third-order susceptibility and to the intensity of the pump beam, and $\phi_{p_{in}}$ is the input phase of the pump. In the Heisenberg picture, using the Hamiltonian (Eq. \[hami\]), the rate equation for $\hat{a_s}$ is given as $\mathrm{d}\hat{a_s}(t)/\mathrm{d}t=(i/\hbar) [\hat{H},\hat{a_s}(t)]$, from which its time evolution can be evaluated as: $$\begin{aligned} \hat{a_s}(t)= \cosh (\zeta t)\hat{a}_{s0} + e^{i 2\phi_{p_{in}}} \sinh (\zeta t) \hat{a}_{i0}^\dagger,\nonumber\\ \hat{a_i}^\dagger (t)= e^{-i 2\phi_{p_{in}}} \sinh (\zeta t) \hat{a}_{s0} + \cosh (\zeta t) \hat{a}_{i0}^\dagger. \label{ops}\end{aligned}$$ Neglecting pump depletion, the PSA gain for the signal is then computed by calculating the ratio between the average number of signal photons at the output $\left\langle \hat{a_s}^\dagger (t) \hat{a_s} (t) \right\rangle$ and the input $\left\langle \hat{a}_{s0}^\dagger \hat{a}_{s0} \right\rangle$ and is found to be $$G_{PSA}=2g - 1 + 2 \sqrt{g(g-1)} \cos{(\Phi)}, \label{eqgain}$$ where $g=(\cosh(\zeta t))^2$ and $\Phi=2\phi_{p_{in}}-\phi_{s_{in}}-\phi_{i_{in}}$ is the relative phase between the three beams: pump, signal and idler. Thus, from the above equation (Eq.(\[eqgain\])), we see that the gain is maximum ($G_{max}$) when $\Phi=0$ and minimum ($G_{min}$) when $\Phi=\pi$, and that for an ideal PSA, $G_{min}=1/G_{max}$ [@levenson]. ![(Color online) Experimental setup: Pump, signal and idler are derived from the same laser with frequencies and amplitudes controlled by two acousto-optic (AO) modulators. 
Signal and idler are non-degenerate but identically linearly polarized (orthogonal to the pump polarization) and follow the same optical path. A polarizing beam splitter (PBS) recombines the beams before the cell. The piezo-actuator in the pump path enables scanning the phase. Photodiode 1 is used to measure the input relative phase between pump and signal/idler before the cell, and the amplified output is detected after the cell by photodiode 2.[]{data-label="exp"}](exp.eps){width="12cm"} Our experimental setup is shown in fig. \[exp\]. The helium cell is 6 cm long, filled with 1 Torr of He$^4$ and is at room temperature. It is placed in a three-layer $\mu$-metal shield to remove magnetic field gradients. Helium atoms are excited to the metastable state by an RF discharge at 27 MHz. The Doppler width corresponding to the optical transition (D1) is around 0.9 GHz (half width at half maximum). For the non-degenerate signal-idler PSA configuration, signal and idler photons have a frequency separation of $2\delta$ and they are symmetrically located on either side of the pump frequency ($\omega_p$) as shown in fig. \[exp\]. Both beams are derived from the same laser at 1.083 $\mu$m and have nearly the same diameter of about 2 mm. They are controlled in frequencies and amplitudes by two acousto-optic modulators (AO) and recombined using a polarizing beam splitter (PBS). The pump power can be varied from 5 mW to 80 mW using a tapered amplifier, while the signal and idler have equal powers at the input of the cell, around 30 $\mu$W. The input relative phase ($\Phi$) between the pump, signal and idler is scanned using a piezo actuator attached to a mirror in the pump path and is measured using the beatnote detected by photodiode 1 before the cell (see fig. \[exp\]). After the cell, polarization optics allows the detection of mainly the amplified signal and idler along with a small amount of coupling. 
Using this residual coupling as the local oscillator, we perform heterodyne detection and measure the output relative phase of the amplified signal/idler with respect to the pump. Thus, at photodiode 2, we detect the beating between the three beams: signal, idler and residual pump, which reads: $$\begin{aligned} I=G_s I_{s_{in}}+G_i I_{i_{in}}+2\sqrt{G_s I_{s_{in}} G_i I_{i_{in}}} \cos{(2\delta t+\Delta \phi_{si})}+ I_{p} \nonumber\\ +2 \sqrt{I_p} (\sqrt{G_s I_{s_{in}}} \cos(\delta t + \Delta \phi_{sp_{out}})+\sqrt{G_i I_{i_{in}}}\cos(\delta t + \Delta \phi_{ip_{out}})), \label{eq1}\end{aligned}$$ where $G_{s(i)}$ is defined to be the signal (idler) gain, as the ratio of the output signal (idler) intensity to the input signal (idler) intensity. $I_{s_{in}}$ and $I_{i_{in}}$ correspond to the input signal and idler intensities, respectively, and $I_p$ is the residual pump intensity. For PSA operation, we send equal intensities of signal and idler with the same phase, i.e. $I_{s_{in}}=I_{i_{in}}$ (and $G_s=G_i=G$). We checked that the relative phase between signal and idler is still 0 at the output when $\delta$ is small enough ($<$ 25 kHz), i.e. $\Delta \phi_{si}=0$. This also results in the same phase for the pump-signal and pump-idler beatnotes at the output, i.e. $ \Delta \phi_{sp_{out}}=\Delta \phi_{ip_{out}} \equiv \Delta \phi_{out}$. Thus Eq.(\[eq1\]) reduces to $$I=2 G I_{s_{in}}+2 G I_{s_{in}} \cos{(2\delta t)}+ I_{p}+ 4 \sqrt{I_p G I_{s_{in}}} \cos{(\delta t)} \cos{(\Delta \phi_{out})}. \label{eqI}$$ In order to evaluate the PSA gain $G$, we perform a Fourier transform of the data, which gives us peaks at frequencies $\delta$ and $2\delta$. The gain is then calculated by computing the ratio of the amplitudes of the peak at frequency $2\delta$ for the cell-on and cell-off conditions. 
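The gain extraction just described can be illustrated numerically: synthesize the beatnote of Eq.(\[eqI\]) and take the ratio of the $2\delta$ Fourier peaks for the cell-on and cell-off cases. All parameter values below (detuning, intensities, the gain $G=4$ to be recovered) are made up for illustration and are not measurements from the experiment; $\delta$ is treated as an ordinary frequency in Hz, hence the explicit $2\pi$ factors:

```python
import numpy as np

# Assumed illustration parameters (not experimental values)
delta = 2e3            # pump-signal detuning, Hz
I_s, I_p = 1.0, 0.25   # input signal intensity, residual pump intensity
dt = 1e-6              # sampling step, s
t = np.arange(0, 0.05, dt)

def beatnote(G, dphi_out=0.0):
    """Detected intensity of Eq. (eqI) for equal signal/idler inputs."""
    return (2*G*I_s + 2*G*I_s*np.cos(2*np.pi*2*delta*t) + I_p
            + 4*np.sqrt(I_p*G*I_s)*np.cos(2*np.pi*delta*t)*np.cos(dphi_out))

def peak(sig, f):
    """Amplitude of the Fourier component closest to frequency f."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    return spec[np.argmin(np.abs(freqs - f))]

# Cell on (G = 4) vs cell off (G = 1): the 2*delta peak ratio recovers G,
# since the 2*delta component has amplitude 2*G*I_s.
G_est = peak(beatnote(4.0), 2*delta) / peak(beatnote(1.0), 2*delta)
```

The record length (0.05 s) is an integer number of periods of both $\delta$ and $2\delta$, so the peaks fall exactly on FFT bins and the ratio is clean.
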
![(Color online) (a) Variation of PSA gain as a function of input relative phase between pump and signal ($\Delta \phi_{in}$) for a pump power of 30 mW (b) Variation of maximum PSA gain ($G_{max}$, black squares) and minimum PSA gain ($G_{min}$, red circles) as a function of pump power. 1/$G_{max}$ (blue triangles) corresponds to the ideal value for $G_{min}$.[]{data-label="psf"}](fig3n.eps){width="14cm"} For PSA, as the relative phase ($\Phi$) between the pump, signal and idler at the input is scanned, the signal successively undergoes amplification ($G>1$) and deamplification ($G<1$). Note that in the present case, by our definition, $\Phi=2\phi_{p_{in}}-\phi_{s_{in}}-\phi_{i_{in}}=2\phi_{p_{in}}-2\phi_{s_{in}}=2\Delta \phi_{in}$, where $\Delta \phi_{in}$ is the relative phase between pump and signal at the input. In the experiment, the piezo actuator attached to the mirror in the pump path scans the relative phase and we study the variation of the gain as shown in fig. \[psf\]a, which is as expected theoretically (Eq.(\[eqgain\])). The maximum obtainable PSA gain depends on the input pump power, the overlap between the spatial modes of the beams and the optical detuning. We have studied the variation of the maximum and minimum gains ($G_{max}$ and $G_{min}$, respectively) as a function of pump power as shown in fig. \[psf\]b. A maximum gain of around 7 can be achieved for 40 mW of pump power and a pump-signal detuning ($\delta$) of 2 kHz. With better alignment and larger optical thickness, we may achieve even larger gains, for example through a multi-pass scheme. From fig. \[psf\]b, it is visible that the measured $G_{min}$ is close to the ideal value (=1/$G_{max}$) for a wide range of pump powers. Further, we have also performed phase insensitive amplification (PIA) in our scheme, by sending only the signal (and no idler) at the input of the cell. 
In this case, the idler is generated at a frequency ($\omega_p-\delta$) from the vacuum fluctuations and the signal is amplified in the process. Under these conditions, the PIA gain can be similarly found using Eq.(\[ops\]) with no idler at the input, i.e. by substituting $\hat{a}_{i0}=0$. The resulting PIA gain is then given as $G_{PIA}=g=(\cosh(\zeta t))^2$ and from Eq.(\[eqgain\]), the relationship between the maximum PSA and PIA gain is: $G_{max}=2G_{PIA}-1+2\sqrt{G_{PIA}(G_{PIA}-1)}$. In the experiment, the PIA gain is measured for different pump powers and is found to be close to its ideal value obtained from the corresponding $G_{max}$ as shown in fig. \[piaf\]a. We have also investigated the variation of gain with the pump-signal detuning, $\delta$ (fig. \[piaf\]b, for a pump power of 30 mW). We define the PSA bandwidth as the maximum value of the $\delta$ separation for which $G_{min}$ is very close to its ideal value (1/$G_{max}$). As shown in fig. \[piaf\]b, the system has a large gain bandwidth of more than 200 kHz. It is to be noted that this agrees well with the bandwidth of the CPO resonance at the corresponding pump power [@thomas12]. For larger $\delta$ separation, the signal frequency goes out of the transparency window and gets absorbed, and thus both maximum and minimum gains tend to drop as shown in fig. \[piaf\]b. The gain bandwidth can be increased by improving the spatial modes of the beams and the alignment at the input. ![(Color online) (a) Variation of maximum PSA gain ($G_{max}$, black squares) and PIA gain ($G_{PIA}$, red circles) and expected PIA gain from $G_{max}$ (blue triangles) as a function of pump power (b) PSA gain spectrum: Variation of $G_{max}$ (black squares) and $G_{min}$ (red circles) and 1/$G_{max}$ (blue triangles) as a function of pump-signal detuning ($\delta$).[]{data-label="piaf"}](fig4n.eps){width="14cm"} We must stress here that unlike other atomic systems, our scheme does not suffer from ‘unwanted’ FWM processes [@PLprl; @PLOE1]. 
This is evidenced from (a) the absence of any additional peaks at any undesired frequency in the Fourier transform of the beatnote pattern detected at photodiode 2, and (b) the fact that, in our system, $G_{min}$ $\approx$ 1/$G_{max}$ for a wide range of experimental parameters. ![(Color online) (a) Variation of the measured output relative phase (blue, dashed) and measured PSA gain (green, solid) as a function of the measured input relative phase ($\Delta\phi_{in}$) (b) Output phase histogram.[]{data-label="ph"}](fig5.eps){width="12cm"} In order to further explore the quality of this phase sensitive amplifier, we investigate the phase-to-phase transfer characteristics of our PSA. Such measurements have been performed earlier in fiber-based PSA in the context of phase regeneration [@PA2011], but to the best of our knowledge, this is the first such report for an atom-based PSA. Such transfer curves can be used to characterize the performance of the amplifier. Figure \[ph\]a shows the experimental transfer curves for a pump power of 30 mW. The blue (dashed) curve shows the phase transfer, i.e. the variation of the output relative phase ($\Delta \phi_{out}$) between the pump and signal with the input relative phase ($\Delta \phi_{in}$), while the green (solid) curve is the corresponding variation of the PSA gain. As we can see, $\Delta \phi_{out}$ is either close to 0 or $\pi$ for a wide range of input phases. For an ideal PSA, when the gain is large, the phase transfer curve is like a square wave, oscillating between 0 and $\pi$, which is called phase squeezing in the telecommunication field [@PA2011]. In terms of a histogram, the output phase is localized around 0 and $\pi$ as shown in fig. \[ph\]b. Such transfer curves are characteristic of a squeezer [@symplec]: the more localized the output phase, the better the performance of the squeezer. 
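The shape of these transfer curves for an ideal PSA can be reproduced with a classical-field model: the output signal field is proportional to $\cosh(r)e^{i\Delta\phi_{in}}+\sinh(r)e^{-i\Delta\phi_{in}}$ when the pump phase is set to zero. A brief sketch, with an arbitrary squeezing parameter $r$ chosen purely for illustration:

```python
import numpy as np

r = 1.2                                   # arbitrary illustration value
g = np.cosh(r)**2                         # g of Eq. (eqgain)
phi_in = np.linspace(-np.pi, np.pi, 721)  # input relative phase scan

# Classical output field of an ideal PSA (pump phase = 0)
E_out = np.cosh(r)*np.exp(1j*phi_in) + np.sinh(r)*np.exp(-1j*phi_in)
phi_out = np.angle(E_out)   # clusters near 0 and pi when the gain is large
gain = np.abs(E_out)**2     # reproduces Eq. (eqgain) with Phi = 2*phi_in

# Consistency checks: Eq. (eqgain), and G_min = 1/G_max for an ideal PSA
assert np.allclose(gain, 2*g - 1 + 2*np.sqrt(g*(g-1))*np.cos(2*phi_in))
```

Plotting `phi_out` against `phi_in` gives the square-wave-like transfer curve discussed above; increasing `r` sharpens the transitions and localizes the output phase around 0 and $\pi$.
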
![(Color online) Phase transfer curves for two different cases: (a) Experimental phase transfer curves for a very good PSA (blue, solid) and for a mixed PSA-PIA (red, dashed) (b) Corresponding theoretical curves.[]{data-label="dpt"}](fig6v.eps){width="12cm"} In fig. \[dpt\]a, we have plotted such experimental phase-to-phase transfer curves for two different cases: in the first case, we send in equal intensities of signal and idler at the input and the measured maximum gain is 5.3. The minimum gain is 0.25, which is close to its ideal value (=0.19). This corresponds to a nearly pure PSA and is represented by the blue solid curve in fig. \[dpt\]a. The second case (red dashed curve) is a mixed PSA-PIA, as the signal and idler intensities at the input are not the same: $I_{s_{in}}/I_{i_{in}}$=1.78. We can see from fig. \[dpt\]a that for a pure PSA, the output phase is quite flat while for the mixed PSA-PIA, the output phase exhibits a higher slope. It should be noted that here, for a better physical understanding and a clear comparison, we have not wrapped the output phase as in fig. \[ph\]a. The corresponding theoretical curves plotted in fig. \[dpt\]b are obtained considering the fields to be classical [@PA2011] and agree well with the experimental curves of fig. \[dpt\]a. The small mismatch of the experimental curve in the case of mixed PSA-PIA with the theoretical curve is probably due to the added uncertainty of the output relative phase between signal and idler, which is not completely preserved in the presence of PIA and limits our data processing. Notwithstanding these minor discrepancies, these phase transfer curves qualitatively give an idea of the purity of the PSA. Indeed, we found that these curves are very sensitive to a small mismatch: the output phase quickly becomes less localized, while $G_{max}\cdot G_{min}$ is still close to 1. 
From these results, we expect a high degree of quantum squeezing in our output, which will be very interesting for performing quantum information processing tasks using metastable helium. Further, in order to completely model the system, one needs to consider the full density matrix of the system and solve the Maxwell-Bloch equations: this work is in progress and will be reported later, but from our preliminary simulation results, we have found that the Raman coherence does not play much of a role in the PSA: it is the CPO-based processes which mainly contribute to the PSA gain. In conclusion, we have demonstrated phase-sensitive amplification in metastable helium using an ultra-narrow CPO resonance with a maximum gain of nearly 7 and a 200 kHz bandwidth, at resonance and within the Doppler width. The measured PSA and PIA gains are consistent with each other and close to the ideal values, illustrating that we have a pure PSA without any additional unwanted FWM processes. Such large gains in the absence of an external cavity have been made possible due to the inherently large CPO-enhanced $\chi^{(3)}$ offered by the system. Further, we have investigated phase-to-phase transfer characteristics which confirm that this system is a very good squeezer. This ensures that we can realize a pure phase-sensitive amplifier which should lead to the generation of non-classical states of light. Since the gain of the PSA is closely related to the degree of squeezing, we believe that we can generate highly squeezed states at low frequencies over some hundreds of kHz. Since optical storage has already been successfully implemented in this system [@MACPO], it opens the way to realizing an efficient quantum memory [@Lvovsky] using metastable helium with two cascaded ${^4}$He cells. Further, the system is quite versatile and can be used to implement PSA in the degenerate signal-idler configuration, giving rise to the possibility of twin beam generation [@PL2].
**Funding** Indo-French CEFIPRA, labex PALM, Délégation Generale à l’Armement (DGA) and the Region Ile-de-France DIM Nano-K, Institut Universitaire de France (IUF), Chinese Scholarship Council (CSC). [10]{} C. M. Caves, Phys. Rev. D **26**, 1817 (1982). K. McKenzie, D. Shaddock, D. McClelland, B. Buchler, and P. K. Lam, Phys. Rev. Lett. **88**, 231102 (2002). I. Sokolov and M. Kolobov, Opt. Lett. **29**, 703 (2004). C. Lundstrom, Z. Tong, M. Karlsson, and P. A. Andrekson, Opt. Lett. **36**, 20149 (2011). G. S. Agarwal, *Quantum optics* (Cambridge University Press, 2013). J. A. Levenson, I. Abram, T. Rivera, and P. Grangier, J. Opt. Soc. Am. B **10**, 2233 (1993). Z. Tong, C. Lundstrom, P. A. Andrekson, M. Karlsson, and A. Bogris, IEEE J. Sel. Top. Quantum Electron **18**, 1016 (2012). N. V. Corzo, A. M. Marino, K. M. Jones, and P. D. Lett, Phys. Rev. Lett. **17**, 043602 (2012). N. Corzo, A. M. Marino, K. M. Jones, and P. D. Lett, Opt. Exp. **17**, 21358 (2011). Z. Qin, L. Cao, H. Wang, A. M. Marino, W. Zhang, and J. Jing, Phys. Rev. Lett. **113**, 023602 (2014). M. Zhang, R. N. Lanning, Z. Xiao, J. P. Dowling, I. Novikova, and E. E. Mikhailov, Phys. Rev. A **93**, 013853 (2016) H. Vahlbruch, M. Mehmet, S. Chelkowski, B. Hage, A. Franzen, N. Lastzka, S. Goßler, K. Danzmann, and R. Schnabel, Phys. Rev. Lett. **100**, 033602 (2008). T. Laupr[ê]{}tre, S. Kumar, P. Berger, R. Faoro, R. Ghosh, F. Bretenaker, and F. Goldfarb, Phys. Rev. A **85**, 051805 (2012). C. F. McCormick, V. Boyer, E. Arimondo, and P. D. Lett, Opt. Lett. **32**, 178 (2007). Y. Fang and J. Jing, New J Phys **17**, 023027 (2015). G. Ferrini, I. Fsaifes, T. Labidi, F. Goldfarb, N. Treps, and F. Bretenaker, J. Opt. Soc. Am. B **31**, 1627 (2014). M.-A. Maynard, F. Bretenaker, and F. Goldfarb, Phys. Rev. A **90**, 061801(R) (2014). J. Appel, E. Figueroa, D. Korystov, M. Lobino, and A. I. Lvovsky, Phys. Rev. Lett. **100**, 093602 (2008). A. M. Marino, V. Boyer, R. C. Pooser, P. D. Lett, K. Lemons, and K. 
M. Jones, Phys. Rev. Lett. **101**, 093602 (2008).
--- author: - | \ Jožef Stefan Institute and Faculty of Mathematics and Physics, University of Ljubljana\ E-mail: title: 'Beyond the Standard Model, guided by Lepton Universality' --- Introduction ============ The Standard Model, our best theory describing the basic constituents and their interactions, has so far passed all the challenges posed by the numerous tests performed at high-energy and high-intensity experiments. The flavour sector of the Standard Model (SM) is the richest in parameters, describing the fermion masses and the mixings among them. The flavour and CP-violating effects have been confirmed by the $B$-factories to conform to the Cabibbo-Kobayashi-Maskawa (CKM) framework. We do know the SM is superseded by some BSM model due to several shortcomings: unexplained neutrino masses, the breakdown of the theory at the Planck scale, and possibly the lack of a dark matter particle. There are also conceptual issues such as the electroweak hierarchy problem and puzzling hierarchies in masses and mixings. The measurements at future $B$-meson experiments can further challenge the flavour sector by making careful tests of the SM predictions. The observables that test lepton flavour universality (LFU) have been essential in SM validation. Violation of the large flavour symmetry of the SM is confined to the Yukawa sector. According to the SM pattern of Yukawas, lepton flavours are conserved and are treated universally by the gauge bosons. Only the Higgs Yukawas and masses distinguish charged lepton flavours. In LFU ratios we compare decay widths or cross sections which differ only in charged lepton flavour. The gauge and CKM factors cancel out in the ratio, whereas hadronic physics parameters entering processes, such as decay constants and form factors, cancel in processes with two-body final states and the LFU ratio can be expressed as a function of particle masses.
In processes with $(\geq 3)$-body kinematics the dependence on form factors is partially cancelled and vanishes in the limit where lepton masses can be neglected. Persistent hints of violation of the LFU predictions of the SM have been reported in semileptonic $B$ meson decays. In the charged-current decay $B \to D^{(*)} \ell \nu$ the two ratios ${R_{D^{(*)}}}= {\mathcal}{B}(B \to D^{(*)} \tau \nu)/{\mathcal}{B}(B \to D^{(*)} \ell \nu)$, with $\ell = e,\mu$, have been measured by BaBar and Belle, while the LHCb experiment measured $R_{D^*}$. The current world average (WA) of ${R_{D^{(*)}}}$ measurements by HFLAV [@Amhis:2019ckw] is based on experimental results [@1205.5442; @1303.0571; @1507.03233; @1506.08614; @1612.00529; @1709.00129; @1708.08856; @1709.02505; @1904.08794] and exhibits a more than $3\sigma$ tension between the SM and the data: $$\label{eq:RDexp} \left. \begin{array}{rcl} R_D^{\mathrm}{WA} & = & 0.340(27)(13) \\ R_{D^*}^{\mathrm}{WA} &= & 0.295(11)(8) \end{array} \right\} \,\, {\mathrm}{Corr}(R_D^{\mathrm}{WA}, R_{D^*}^{\mathrm}{WA}) = -0.38,\qquad \begin{array}{rcl} R_D^{\mathrm}{SM} & = & 0.299(3), \\ R_{D^*}^{\mathrm}{SM} &= & 0.258(5). \end{array}$$ The above SM predictions refer to the HFLAV average of results in [@1606.08030; @1703.05330; @1707.09509; @1707.09977]. Here the large mass of the $\tau$ implies large LFU violation already in the SM. In rare semileptonic decays $B \to K \ell \ell$ where $\ell = e,\mu$ the SM contribution should respect LFU at $q^2 \gtrsim m_\mu^2$ and the ratios ${R_{K^{(*)}}}= {\mathcal}{B}(B \to K^{(*)} \mu^+ \mu^-)/ {\mathcal}{B}(B \to K^{(*)} e^+ e^-)$ are expected to be close to $1$.
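As a rough cross-check of the quoted tension, the two correlated world averages above can be combined into a $\chi^2$ against the SM predictions. This is a simplified sketch (statistical and systematic errors added in quadrature, SM uncertainties treated as uncorrelated), not the HFLAV procedure:

```python
import math

# World averages and SM predictions for R_D and R_D* quoted in the text;
# statistical and systematic errors are added in quadrature.
rd_exp,  rd_err  = 0.340, math.hypot(0.027, 0.013)
rds_exp, rds_err = 0.295, math.hypot(0.011, 0.008)
rho = -0.38                               # experimental correlation
rd_sm,  rd_sm_err  = 0.299, 0.003
rds_sm, rds_sm_err = 0.258, 0.005

# Total covariance; treating the SM errors as uncorrelated is an assumption.
c11 = rd_err**2 + rd_sm_err**2
c22 = rds_err**2 + rds_sm_err**2
c12 = rho * rd_err * rds_err
d1, d2 = rd_exp - rd_sm, rds_exp - rds_sm

det = c11 * c22 - c12**2
chi2 = (d1**2 * c22 - 2 * d1 * d2 * c12 + d2**2 * c11) / det
p = math.exp(-chi2 / 2)                   # p-value for two degrees of freedom
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")
```

The resulting $\chi^2 \approx 12$ for two degrees of freedom corresponds to a $p$-value of about $2\times 10^{-3}$, i.e. the $\gtrsim 3\sigma$ tension quoted above.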
The LHCb experiment measured ${R_{K^{(*)}}}$ in the low invariant-mass bin $[1.1, 6.0]{\mathrm{~GeV}}^2$ of $q^2$, while $R_{K^*}$ was also measured in the ultra-low bin $[0.045,1.1]{\mathrm{~GeV^2}}$: $$\begin{aligned} \phantom{aaa}R_K^{\mathrm}{LHCb}\big|_{[1.1,6]{\mathrm{~GeV}}^2} &= 0.846^{+0.060+0.016}_{-0.054-0.014}~\cite{Aaij:2019wad}, \qquad R_K^{\mathrm}{th}\big|_{[1.1,6]{\mathrm{~GeV}}^2} = 1.00\pm 0.01~\cite{1605.07633},\\ R_{K^*}^{\mathrm}{LHCb}\big|_{[1.1,6]{\mathrm{~GeV}}^2} &= 0.69^{+0.11}_{-0.07}\pm 0.03~\cite{Aaij:2017vbb}, \qquad R_{K^*}^{\mathrm}{th}\big|_{[1.1,6]{\mathrm{~GeV}}^2} = 1.00\pm 0.01 ~\cite{1605.07633},\\ R_{K^*}^{\mathrm}{LHCb}\big|_{<1.1{\mathrm{~GeV}}^2} &= 0.66^{+0.11}_{-0.07}\pm 0.03~\cite{Aaij:2017vbb}, \qquad R_{K^*}^{\mathrm}{th}\big|_{<1.1{\mathrm{~GeV}}^2} = 0.983 \pm 0.014 ~\cite{1605.07633}. \end{aligned}$$ The experimental errors are split into statistical and systematic ones, given in that order. All three results have systematic errors well under control and are consistently below the SM predictions. The combined significance is about $4\sigma$. The two LFU ratios are currently our only potential hints of flavour effects beyond the SM. In this talk we explore the implications of the lepton flavour universality hints for BSM scenarios. Effective theory view ===================== The weak effective Lagrangian for semileptonic decays is an appropriate framework to parameterize general BSM effects in ${R_{D^{(*)}}}$.
At the renormalization scale $\mu = m_b$ the relevant interactions for $b \to c \tau \bar \nu_\tau$ transitions are: $$\label{eq:1} \begin{split} {\mathcal}{L}_{\mathrm}{SL} = \frac{4G_F V_{cb}}{\sqrt{2}} \Big[(1&+g_{V_L}) (\bar c_L \gamma^\mu b_L) (\bar\tau_L \gamma_\mu \nu_{\tau L}) + g_{V_R} (\bar c_R \gamma^\mu b_R) (\bar\tau_L \gamma_\mu \nu_{\tau L}) \\ &+g_{S_L} (\bar c_R b_L) (\bar\tau_R \nu_{\tau L}) + g_{S_R} (\bar c_L b_R) (\bar\tau_R \nu_{\tau L}) +g_{T} (\bar c_R \sigma_{\mu\nu} b_L) (\bar\tau_R \sigma^{\mu\nu} \nu_{\tau L})\Big]. \end{split}$$ Here we will assume that the BSM physics in ${R_{D^{(*)}}}$ does not contribute to $b\to c \ell \bar\nu_\ell$. Using the form factors from the lattice [@1505.03925; @1503.07237; @1311.5071], and using further theoretical [@1703.05330] and experimental results [@1612.07233], the authors of Ref. [@1806.10155] presented compact numerical expressions for ${R_{D^{(*)}}}$: $$\begin{split} \frac{{R_{D^{(*)}}}}{R_{D^{(*)}}^{\mathrm}{SM}} = &1 + a_S^{D^{(*)}} |g_{S_R} + g_{S_L}|^2 + a_P^{D^{(*)}} |g_{S_R} - g_{S_L}|^2 + a_T^{D^{(*)}} |g_T|^2\\ &+ a_{S V_L}^{D^{(*)}} {\mathrm{Re}}[g_{S_R} + g_{S_L}] + a_{P V_L}^{D^{(*)}} {\mathrm{Re}}[g_{S_R} - g_{S_L}] + a_{T V_L}^{D^{(*)}} {\mathrm{Re}}[g_T]. \end{split}$$ Values of the coefficients $a_i^{D^{(*)}}$ can be found in [@1806.10155] or [@1811.09603]. Using these coefficients it was found in Ref. [@1808.08179] that the tensor (real $g_T$) scenario fits the data best, followed by the left-handed scenario (real $g_{V_L}$). The same authors also studied various leptoquark (LQ) scenarios with a combination of scalar and tensor couplings and found two LQ scenarios that accommodate the data: real $g_{S_L} = -4 g_T$, or imaginary $g_{S_L} = 4 g_T$, where both relations hold at the LQ scale and are modified by QCD renormalization at the scale $m_b$. Additional information is available in the $q^2$ shapes of decay spectra as shown by Ref. [@1506.08896].
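The numerical expansion above is a quadratic form in the Wilson coefficients and is straightforward to evaluate. The sketch below uses placeholder values for the $a_i^{D^{(*)}}$ coefficients (the actual values are tabulated in the cited references), so only the structure, not the numbers, should be taken from it:

```python
# Evaluate R_D(*) / R_D(*)^SM from the quadratic form given in the text.
# The a_i coefficients below are PLACEHOLDERS for illustration only; the
# real values are tabulated in arXiv:1806.10155 / arXiv:1811.09603.
def rd_ratio(gSL, gSR, gT, a):
    s, p = gSL + gSR, gSR - gSL          # scalar and pseudoscalar combinations
    return (1.0
            + a["S"] * abs(s) ** 2
            + a["P"] * abs(p) ** 2
            + a["T"] * abs(gT) ** 2
            + a["SV"] * s.real
            + a["PV"] * p.real
            + a["TV"] * gT.real)

a_demo = {"S": 1.0, "P": 1.0, "T": 10.0, "SV": 1.5, "PV": 0.4, "TV": -5.0}

# SM point: all BSM couplings vanish -> ratio is exactly 1.
assert rd_ratio(0j, 0j, 0j, a_demo) == 1.0

# The scalar-LQ relation g_SL = -4 g_T mentioned in the text:
gT = 0.05 + 0j
print(rd_ratio(-4 * gT, 0j, gT, a_demo))
```

Complex couplings (as in the imaginary $g_{S_L} = 4 g_T$ scenario) drop out of the interference terms, which is why only the real parts appear in the linear pieces.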
The Belle experiment also measured in $B \to D^* \tau \nu$ the polarization of the $\tau$ and the longitudinal polarization [@1903.03102] of the $D^*$; the latter observable $F_L^{D^*,{\mathrm}{Belle}} = 0.60 \pm 0.08({\mathrm}{stat}) \pm 0.04({\mathrm}{sys})$ is somewhat above the SM value $0.46\pm 0.04$ [@1606.03164] as well as above most of the BSM models that fit ${R_{D^{(*)}}}$ well [@1606.03164]. Furthermore, it was shown that various LQ models could be separated one from another by more precise measurements of the $\tau$ polarization [@1811.08899]. Scalar interactions $g_{S_L}$, $g_{S_R}$ could also severely disturb the branching fraction of $B_c \to \tau \nu$. The latter can be constrained from the LEP data [@1611.06676; @1708.04072] and, together with the known $B_c$ lifetime, provides a relevant constraint on $g_{S_R} - g_{S_L}$ [@1605.09308; @1811.09603]. The naive scale of NP from ${R_{D^{(*)}}}$ can be inferred to be a few TeV if the effective interactions (e.g. $g_{V_L}$) are assumed to be of order 1. Such large BSM effects might also generate dangerous effects via radiative corrections in precisely measured lepton decays, see e.g. [@1606.00524; @1705.00929]. The ${R_{K^{(*)}}}$ anomalies and related measurements in $b \to s \mu^+ \mu^-$ transitions are described at low energies with $$\label{eq:Leff-RK} {\mathcal}{L}_{b \to s \ell\ell} = \frac{4G_F}{\sqrt{2}}V_{tb} V_{ts}^* \sum_{\substack{i=7,9,10,S,P\\\phantom{i=}9',10',S',P'}} C_i \mathcal{O}_i,$$ where the most relevant operators for the anomalies are $$\label{eq:4} \begin{split} \mathcal{O}_9 &= \frac{e^2}{16\pi^2} (\bar s_L \gamma^\mu b_L) (\bar \mu \gamma_\mu \mu),\\ \mathcal{O}_{10} &= \frac{e^2}{16\pi^2} (\bar s_L \gamma^\mu b_L) (\bar \mu \gamma_\mu \gamma^5 \mu),\\ \mathcal{O}_{S} &= \frac{e^2}{16\pi^2} (\bar s_L b_R) (\bar \mu \mu),\\ \mathcal{O}_{P} &= \frac{e^2}{16\pi^2} (\bar s_L b_R) (\bar \mu \gamma^5 \mu).
\end{split}$$ The fits of ${R_{K^{(*)}}}$ indicate that the left-handed scenario, $C_9 = -C_{10}$, gives a very good description of the data, where we have assumed SM-like couplings to the electrons. Furthermore, if we include many available observables of the $b\to s\ell \ell$ transitions from Belle, BaBar, and LHCb in the global fit, we find that NP couplings to electrons are indeed not necessary, whereas there is a strong indication for non-zero $C_9$ [@1903.09578]. The scale of a generic NP model with order-1 flavour-changing neutral-current couplings explaining ${R_{K^{(*)}}}$ is a few tens of TeV. The requirement of perturbative unitarity in $qq \to \ell \ell$ scattering gives an upper bound on the scale of ${R_{D^{(*)}}}$ of $\lesssim 10{\mathrm{~TeV}}$, and of $\lesssim 100{\mathrm{~TeV}}$ for BSM explaining ${R_{K^{(*)}}}$ [@1706.01868]. Leptoquark models ================= Here we focus on the LQ mediators with suitable properties for ${R_{D^{(*)}}}$ and/or ${R_{K^{(*)}}}$. The phenomenological advantage of LQs is that their natural tree-level contributions are semi-leptonic processes, whereas their SM charges do not allow for tree-level contributions to neutral meson mixing amplitudes. The latter are a major obstacle to $Z'$ models that address ${R_{K^{(*)}}}$. A recent phenomenological evaluation of LQs with respect to their role in LFU observables, flavour constraints, $Z$-pole observables, and high-$p_T$ constraints was undertaken in [@1808.08179] and confirmed that the vector leptoquark $U_1$ in the representation $(3,1,2/3)$ is the only single LQ that explains all observed LFU anomalies. The $U_1$ has been proposed before [@1706.07808] and was later embedded in a necessary UV completion, based on Pati-Salam unified groups for each generation [@1712.01368; @1805.09328; @1903.11517] or in the context of the 4321 model [@1802.04274], among others. Ref. [@1808.08179] found no single scalar LQ appropriate for both LFU anomalies.
$S_3$ with purely left-handed couplings can accommodate ${R_{K^{(*)}}}$, while $R_2$ and $S_1$ are suitable for ${R_{D^{(*)}}}$. At loop level $R_2$ has the desired Lorentz structure of couplings to explain ${R_{K^{(*)}}}$, but the needed couplings are in tension with LEP and LHC constraints [@1704.05835; @1805.04917]. Here we focus on scenarios with two scalar LQs. One possibility is to take $S_1$ and $S_3$, which is the route described in [@1703.09226], while here we will entertain the possibility of employing a pair of $R_2$ and $S_3$ LQs. $R_2$ and $S_3$ from Grand Unified Theory ========================================= The Yukawa couplings of $R_2$ and $S_3$ with the quarks and leptons in the mass basis can be written as [@Becirevic:2018afm] $$\begin{aligned} \label{eq:two} \begin{split} \mathcal{L}_{\mathrm}{Yuk} = &+(V Y_R E_R^\dagger)^{ij} \bar{u}_{Li}\ell_{Rj}R_2^{\frac{5}{3}} + (Y_R E_R^\dagger)^{ij} \bar{d}_{Li}\ell_{Rj} R_2^{\frac{2}{3}} +(U_R Y_L U)^{ij} \bar{u}_{Ri} \nu_{Lj} R_2^{\frac{2}{3}}\\ &- (U_R Y_L)^{ij} \bar{u}_{Ri}\ell_{Lj} R_2^{\frac{5}{3}}+(Y_L U)^{ij} \bar{d}^C_{Li} \nu_{Lj} S_3^{\frac{1}{3}} - \sqrt{2}(V^* Y_L U)^{ij} \bar{u}^C_{Li} \nu_{Lj} S_3^{-\frac{2}{3}}\\ &+\sqrt{2} Y_L^{ij} \bar{d}^C_{Li} \ell_{Lj} S_3^{\frac{4}{3}} +(V^* Y_L)^{ij} \bar{u}^C_{Li} \ell_{Lj} S_3^{\frac{1}{3}}. \end{split} \end{aligned}$$ Here $Y_L$, $Y_R$ are the arbitrary LQ Yukawa matrices, while $R_2^{(Q)}$ and $S_3^{(Q)}$ are the charge-$Q$ eigenstates of the LQs. The unitary matrices $U_{L,R}$, $D_{L,R}$, $E_{L,R}$, and $N_L$ rotate between the mass and gauge bases of up-type quarks, down-type quarks, charged leptons, and neutrinos, respectively. $V \equiv U_L D_L^\dagger = U_L$ is the CKM matrix, $U\equiv E_L N_L^\dagger = N_L^\dagger$ is the PMNS matrix.
The following numerical pattern is assumed for the Yukawa matrices: $$\label{eq:yL-yR} Y_R E_R^\dagger = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & y_R^{b\tau} \end{pmatrix},\qquad U_R Y_L = \begin{pmatrix} 0 & 0 & 0\\ 0 & y_L^{c\mu} & y_L^{c\tau}\\ 0 & 0 & 0 \end{pmatrix},$$ where $U_R$ is a rotation by angle $\theta$ between the second and third generations. The parameters of the model are thus $m_{R_2}$, $m_{S_3}$, $y_R^{b\tau}$, $y_L^{c\mu}$, $y_L^{c\tau}$, and $\theta$. In a low-energy LQ setting there is no reason for the Yukawa couplings of $S_3$ to be related to the ones of $R_2$. In our case we consider the two leptoquarks within an $SU(5)$-based unification model where the scalar sector contains representations of dimension $\bm{45}$ and $\bm{50}$. The SM fermions reside in $\overline{\bm{5}}_{i}$ and $\bm{10}_i$, with $i(=1,2,3)$ counting generations. All the low-energy operators of Eq. \[eq:two\] can be generated with the $SU(5)$ contractions $a^{ij} \bm{10}_i \overline{\bm{5}}_j \overline{\bm{45}}$, and $b^{ij} \bm{10}_i \bm{10}_j \bm{50}$, where $a$ and $b(=b^T)$ are matrices in the flavour space. The former contraction couples $R_2 \in \bm{45}$ ($S_3 \in \bm{45}$) with the right-handed up-type quarks (quark doublets) and leptonic doublets, while the latter generates couplings of $R_2 \in \bm{50}$ with the quark doublets and right-handed charged leptons. To break $SU(5)$ down to the SM we use the scalar representation $\bm{24}$ and write a term in the scalar potential $m\, \bm{45} \, \overline{\bm{50}} \, \bm{24}$. The two $R_2$ leptoquarks that reside in $\bm{45}$ and $\bm{50}$ then mix and allow us to have one light $R_2$ and one heavy $R_2$ in the spectrum, where the latter state completely decouples from the low-energy spectrum for large values of $m$ [@Becirevic:2018afm]. LQs can be dangerous for proton decay if they couple to diquarks.
The $S_3$ leptoquark would not couple to diquarks if the $SU(5)$ contraction $c^{ij} \bm{10}_i \bm{10}_j \bm{45}$ were forbidden or suppressed. Furthermore, $S_3$ must not mix with any other LQ with diquark couplings. Both conditions can be met in a generic $SU(5)$ framework [@Dorsner:2017wwn]. At the high scale $\Lambda = m_{R_2}$ the Wilson coefficients for the charged-current processes are: $$\label{eq:semilep-WC} \begin{split} g_{S_L}(\Lambda) = 4 \, g_T(\Lambda) &= \frac{y_L^{c\tau}\, {y_R^{b\tau}}^{\ast}}{4 \sqrt{2} \, m_{R_2}^2 \, G_F V_{cb}}. \end{split}$$ This is the only flavour combination of charged-current semileptonic processes affected by $R_2$. The ${R_{K^{(*)}}}$ anomaly is accounted for through left-handed tree-level contributions of $S_3$ to the vector and axial-vector Wilson coefficients [@Dorsner:2017ufx] $$\begin{split} \delta C_9 = - \delta C_{10} &=\dfrac{\pi v^2}{ \lambda_t \alpha_{\mathrm{em}}} \frac{\sin 2\theta\, (y_L^{c\mu})^2}{2 m_{S_3}^2}. \end{split}$$ Here the mixing angle enters as $\sin 2\theta$, originating from the matrix $U_R$, and plays an important role in suppressing the effect in $R_{K^{(*)}}$ relative to the one in $R_{D^{(*)}}$. The $1\,\sigma$ interval $C_9=-C_{10} \in (-0.85,-0.50)$ has been obtained by performing a fit to $R_K$, $R_{K^{\ast}}$, and $\mathcal{B}(B_s\to \mu\mu)$. The left-handed (weak triplet) nature of the $S_3$ LQ implies contributions to both neutral- and charged-current semileptonic processes. Among the charged-current observables the LFU ratios $R_{D^{(\ast)}}^{\mu/e} = \mathcal{B}(B\to D^{(\ast)}\mu \bar{\nu})/\mathcal{B}(B\to D^{(\ast)} e \bar{\nu})$ impose severe constraints on $S_3$ couplings. We have also considered $\mathcal{B}(B\to \tau \bar{\nu})$ and the kaon LFU ratio $R^K_{e/\mu}= \Gamma(K^-\to e^- \bar{\nu})/\Gamma(K^-\to \mu^- \bar{\nu})$. The $b\to s \nu\bar \nu$ observables are not taken as constraints in the fit.
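The formulas above can be checked numerically. The sketch below first verifies that undoing the $2$-$3$ rotation $U_R$ on the assumed texture of $U_R Y_L$ indeed produces the $\sin 2\theta\,(y_L^{c\mu})^2/2$ combination of $S_3$ couplings entering $\delta C_9$, and then inverts the $\delta C_9$ expression for the size of $y_L^{c\mu}$. All numerical inputs ($\alpha_{\mathrm{em}}$, $\lambda_t$, the benchmark mixing angle) are standard-value assumptions, not the paper's fit results:

```python
import math
import numpy as np

# (1) The S_3 couplings to down-type quarks are Y_L itself, so the 2-3
# rotation U_R appearing in U_R Y_L spreads y_L^{c mu} over the s- and
# b-quark rows; the product entering b -> s mu mu is sin(2 theta) y^2 / 2.
theta, y_cmu = 0.3, 0.5                # hypothetical benchmark values
c, s = np.cos(theta), np.sin(theta)
U_R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # 2-3 rotation (convention choice)
T = np.zeros((3, 3)); T[1, 1] = y_cmu                # assumed texture of U_R @ Y_L
Y_L = U_R.T @ T                                      # undo the rotation
prod = Y_L[1, 1] * Y_L[2, 1]                         # (s, mu) x (b, mu) coupling product
assert np.isclose(abs(prod), 0.5 * np.sin(2 * theta) * y_cmu**2)

# (2) Invert delta C_9 = pi v^2/(lambda_t alpha_em) * sin(2 theta) y^2/(2 m_S3^2).
v, alpha_em, lam_t = 246.0, 1 / 127.9, -0.0405       # GeV; assumed standard inputs
m_S3, sin2th = 2000.0, 0.1                           # GeV; benchmark mixing angle
c9 = -0.67                                           # middle of the quoted 1 sigma interval
y2 = c9 * lam_t * alpha_em * 2 * m_S3**2 / (math.pi * v**2 * sin2th)
print(f"required y_L^cmu ~ {math.sqrt(y2):.2f}")     # comfortably perturbative
```

With these benchmarks the required coupling comes out around $0.3$, illustrating why a TeV-scale $S_3$ with a small mixing angle can reach the fitted $C_9$ interval without large Yukawas.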
Instead we predict $R_{\nu\nu}^{(\ast)}$ and compare it to experimental bounds, $R_{\nu\nu}<3.9$ and $R_{\nu\nu}^{\ast}<2.7$ [@Grygier:2017tzo]. The loop-induced neutral-current constraints affect the couplings of both LQs. We have taken into account the $B_s-\bar{B}_s$ mixing frequency, which is modified by the $S_3$ box-diagram, proportional to $\sin^2 2\theta \left[(y_L^{c\mu})^2 + (y_L^{c\tau})^2\right]^2/m_{S_3}^2$, and the upper limits on the lepton-flavour-violating $\tau$ decays $\mathcal{B}(\tau\to\mu\phi)$, $\mathcal{B}(\tau\to\mu\gamma)$. The $Z$-boson couplings to leptons measured at LEP [@ALEPH:2005ab] are also modified at loop level by both LQs. ![Results of the flavour fit in the plane of $g_{S_L}$, the scalar coupling entering the transition $b\to c\tau \bar{\nu}_\tau$. The $1\,\sigma (2\,\sigma)$ fitted regions are rendered in red (orange). Separate constraints from $R_D$ and $R_{D^\ast}$ to $2\,\sigma$ accuracy are shown by the blue and purple regions, respectively. The LHC exclusions are depicted by the gray regions. The dashed circle denotes the $pp \to \tau \nu$ constraint.[]{data-label="fig:gSplotRDst"}](gSscan) Taking into account the aforementioned flavour observables we have performed a fit for the parameters $y_R^{b\tau}$, $y_L^{c\mu}$, $y_L^{c\tau}$, and $\theta$, while fixing the masses to $m_{R_2} = 0.8{\mathrm{~TeV}}$ and $m_{S_3} = 2{\mathrm{~TeV}}$. The opposite signs of the interference terms in $R_D$ and $R_{D^*}$ require a complex Wilson coefficient $g_{S_L}$ (Fig. \[fig:gSplotRDst\]); we have put the complex phase in $y_R^{b\tau}$. The SM is excluded at $3.6\,\sigma$, while the best-fit point provides good agreement with $R_{D^{(\ast)}}$ and $R_{K^{(\ast)}}$. Note that the required large imaginary part in $y_R^{b\tau}$ could in principle be tested in future experiments measuring the neutron EDM [@Dekens:2018bci]. The best-fit point is consistent with the LHC constraints [@Becirevic:2018afm] superimposed in gray on the same plot.
The $pp \to \tau \nu$ constraint in the effective theory approximation excludes the region outside the dashed circle [@1811.07920] in Fig. \[fig:gSplotRDst\]. The consistency of the model with low-energy data requires $\mathcal{B}(B\to K\mu\tau)$ to be bounded, and at $1\,\sigma$ we obtain $1.1 \times 10^{-7}\lesssim \mathcal{B}( B\to K \mu^\pm \tau^\mp) \lesssim 6.5\times 10^{-7}$, whereas related decay modes are predicted to be $\mathcal{B}(B\to K^\ast \mu\tau)\approx 1.9\times \mathcal{B}(B\to K \mu \tau)$ and $\mathcal{B}(B_s\to \mu\tau)\approx 0.9 \times \mathcal{B}(B\to K \mu \tau)$. Another important prediction is a $\gtrsim 50\%$ enhancement of $\mathcal{B}(B\to K ^{(\ast)} \nu \nu)$, which will be further tested at Belle 2. Remarkably, these two observables are highly correlated (Fig. \[fig:prediction\]). Furthermore, we derive a lower bound on $\mathcal{B}(\tau \to \mu \gamma)$, just below the current experimental limit, $\mathcal{B}(\tau \to \mu \gamma) \gtrsim 1.5\times 10^{-8}$. Finally, our description of the $R_{D^{(\ast)}}$ anomaly requires relatively light LQ states, not far from the TeV scale, and these LQs are necessarily accessible at the LHC, as we discuss in Ref. [@Becirevic:2018afm]. ![Predicted $\mathcal{B}(B\to K \mu \tau)$ is plotted against predicted $R_{\nu\nu}=\mathcal{B}(B\to K^{(\ast)} \nu \bar{\nu})/\mathcal{B}(B\to K^{(\ast)} \nu \bar{\nu})^{\mathrm{SM}}$ for the $1\,\sigma$ (red) and $2\,\sigma$ (orange) regions of Fig. \[fig:gSplotRDst\]. The black line denotes the current experimental limit, $R_{\nu\nu}^{\ast}<2.7$ [@Grygier:2017tzo].[]{data-label="fig:prediction"}](BKnunu-BKtaumu.pdf) ![The most important LHC limits for each LQ process at a projected luminosity of 100 fb$^{-1}$. The red region is excluded by the high-$p_T$ di-tau search by ATLAS [@Aaboud:2017sjh], the green and turquoise exclusion regions come from LQ pair production searches by CMS [@Sirunyan:2017yrk; @CMS:2017kmd; @Sirunyan:2018nkj].
The region of Yukawa couplings above the black line is excluded due to their non-perturbative values below the GUT scale (see [@Becirevic:2018afm] for more details). The yellow contour denotes the $1\,\sigma$ region of the fit to the low-energy observables.[]{data-label="fig:LHCbound"}](R2_S3_LHC_bounds_POST-MORIOND_2019.pdf){width="0.7\hsize"} Summary and outlook =================== Hints of lepton universality violation in ${R_{K^{(*)}}}$ and ${R_{D^{(*)}}}$, inconsistent with the SM, have triggered a gold rush in the flavour community that resulted in many proposed models. Leptoquarks are one possibility, probably the one phenomenologically best suited to the observed puzzles, which all dwell in the leptoquarks’ natural habitat of semileptonic processes. The $U_1$ vector leptoquark accommodates all observed lepton universality anomalies and has to be accompanied by its own mass-generation mechanism stemming from the UV. We have proposed a two-scalar-LQ model that accommodates the observed LFU ratios in $B$-meson decays and is compatible with other low-energy constraints as well as with direct searches at the LHC. The model has an $SU(5)$ origin that relates the Yukawa couplings of the two LQs through a mixing angle. The model remains perturbative up to the unification scale. We propose signals of the two light LQs at the LHC and spell out predictions for several flavour observables. We predict and correlate $\mathcal{B}(B\to K\mu\tau)$ with $\mathcal{B}(B\to K ^{(\ast)} \nu \nu)$, as well as derive a lower bound for $\mathcal{B}(\tau\to\mu\gamma)$, which should be in reach of the Belle 2 experiment. [99]{} Y. S. Amhis [*et al.*]{} \[HFLAV Collaboration\], arXiv:1909.12524 \[hep-ex\]. J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. Lett.  [**109**]{}, 101802 (2012) doi:10.1103/PhysRevLett.109.101802 \[arXiv:1205.5442 \[hep-ex\]\]. J. P. Lees [*et al.*]{} \[BaBar Collaboration\], Phys. Rev. D [**88**]{}, no.
7, 072012 (2013) doi:10.1103/PhysRevD.88.072012 \[arXiv:1303.0571 \[hep-ex\]\]. M. Huschle [*et al.*]{} \[Belle Collaboration\], Phys. Rev. D [**92**]{}, no. 7, 072014 (2015) doi:10.1103/PhysRevD.92.072014 \[arXiv:1507.03233 \[hep-ex\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett.  [**115**]{}, no. 11, 111803 (2015) Erratum: \[Phys. Rev. Lett.  [**115**]{}, no. 15, 159901 (2015)\] doi:10.1103/PhysRevLett.115.159901, 10.1103/PhysRevLett.115.111803 \[arXiv:1506.08614 \[hep-ex\]\]. S. Hirose [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.  [**118**]{}, no. 21, 211801 (2017) doi:10.1103/PhysRevLett.118.211801 \[arXiv:1612.00529 \[hep-ex\]\]. S. Hirose [*et al.*]{} \[Belle Collaboration\], Phys. Rev. D [**97**]{}, no. 1, 012004 (2018) doi:10.1103/PhysRevD.97.012004 \[arXiv:1709.00129 \[hep-ex\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett.  [**120**]{}, no. 17, 171802 (2018) doi:10.1103/PhysRevLett.120.171802 \[arXiv:1708.08856 \[hep-ex\]\]. arXiv:1709.02505. A. Abdesselam [*et al.*]{} \[Belle Collaboration\], arXiv:1904.08794 \[hep-ex\]. D. Bigi and P. Gambino, Phys. Rev. D [**94**]{}, no. 9, 094008 (2016) doi:10.1103/PhysRevD.94.094008 \[arXiv:1606.08030 \[hep-ph\]\]. F. U. Bernlochner, Z. Ligeti, M. Papucci and D. J. Robinson, Phys. Rev. D [**95**]{}, no. 11, 115008 (2017) Erratum: \[Phys. Rev. D [**97**]{}, no. 5, 059902 (2018)\] doi:10.1103/PhysRevD.95.115008, 10.1103/PhysRevD.97.059902 \[arXiv:1703.05330 \[hep-ph\]\]. D. Bigi, P. Gambino and S. Schacht, JHEP [**1711**]{}, 061 (2017) doi:10.1007/JHEP11(2017)061 \[arXiv:1707.09509 \[hep-ph\]\]. S. Jaiswal, S. Nandi and S. K. Patra, JHEP [**1712**]{}, 060 (2017) doi:10.1007/JHEP12(2017)060 \[arXiv:1707.09977 \[hep-ph\]\]. R. Aaij [*et al.*]{} \[LHCb Collaboration\], Phys. Rev. Lett.  [**122**]{}, no. 19, 191801 (2019) doi:10.1103/PhysRevLett.122.191801 \[arXiv:1903.09252 \[hep-ex\]\]. M. Bordone, G. Isidori and A. Pattori, Eur. Phys. J.
--- abstract: 'We introduce a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. Our method is a natural generalization of gradient descent to the two-player setting where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Using numerical experiments and rigorous analysis, we provide a detailed comparison to methods based on *optimism* and *consensus* and show that our method avoids making any unnecessary changes to the gradient dynamics while achieving exponential (local) convergence for (locally) convex-concave zero sum games. Convergence and stability properties of our method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. In our numerical experiments on non-convex-concave problems, existing methods are prone to divergence and instability due to their sensitivity to interactions among the players, whereas we never observe divergence of our algorithm. The ability to choose larger stepsizes furthermore allows our algorithm to achieve faster convergence, as measured by the number of model evaluations.' 
author: - | Florian Sch[ä]{}fer\ Computing and Mathematical Sciences\ California Institute of Technology\ Pasadena, CA 91125\ `florian.schaefer@caltech.edu` Anima Anandkumar\ Computing and Mathematical Sciences\ California Institute of Technology\ Pasadena, CA 91125\ `anima@caltech.edu` bibliography: - 'refs.bib' title: Competitive Gradient Descent --- \[section\] \[theorem\] \[theorem\][Lemma]{} \[theorem\][Remark]{} Introduction ============ **Competitive optimization:** Whereas traditional optimization is concerned with a single agent trying to optimize a cost function, competitive optimization extends this problem to the setting of multiple agents each trying to minimize their own cost function, which in general depends on the actions of all agents. The present work deals with the case of two such agents: $$\begin{aligned} \label{eqn:game} &\min_{x \in {\mathbb{R}}^m} f(x,y),\ \ \ \min_{y \in {\mathbb{R}}^n} g(x,y)\end{aligned}$$ for two functions $f,g:{\mathbb{R}}^m \times {\mathbb{R}}^n \longrightarrow {\mathbb{R}}$.\ In single agent optimization, the solution of the problem consists of the minimizer of the cost function. In competitive optimization, the right definition of *solution* is less obvious, but often one is interested in computing Nash– or strategic equilibria: Pairs of strategies, such that no player can decrease their costs by unilaterally changing their strategies. If $f$ and $g$ are not convex, finding a global Nash equilibrium is typically impossible and instead we hope to find a “good” local Nash equilibrium. **The benefits of competition:** While competitive optimization problems arise naturally in mathematical economics and game/decision theory [@nisan2007algorithmic], they also provide a highly expressive and transparent language to formulate algorithms in a wide range of domains. 
In optimization [@bertsimas2011theory] and statistics [@huber2009robust] it has long been observed that competitive optimization is a natural way to encode robustness requirements of algorithms. More recently, researchers in machine learning have been using multi-agent optimization to design highly flexible objective functions for reinforcement learning [@liu2016proximal; @pfau2016connecting; @pathak2017curiosity; @wayne2014hierarchical; @vezhnevets2017feudal] and generative models [@goodfellow2014generative]. We believe that this approach still has a lot of untapped potential, but its full realization depends crucially on the development of efficient and reliable algorithms for the numerical solution of competitive optimization problems. **Gradient descent/ascent and the cycling problem:** For differentiable objective functions, the most naive approach to solving  is gradient descent ascent (GDA), whereby both players independently change their strategy in the direction of steepest descent of their cost function. Unfortunately, this procedure features oscillatory or divergent behavior even in the simple case of a bilinear game ($f(x,y) = x^{\top} y = -g(x,y)$) (see Figure \[fig:bilinear\_strong\]). In game-theoretic terms, GDA lets both players choose their new strategy optimally with respect to the last move of the other player. Thus, the cycling behaviour of GDA is not surprising: It is the analogue of *“Rock! Paper! Scissors! Rock! Paper! Scissors! Rock! Paper!...”* in the eponymous hand game. While gradient descent is a reliable basic *workhorse* for single-agent optimization, GDA cannot play the same role for competitive optimization. At the moment, the lack of such a *workhorse* greatly hinders the broader adoption of methods based on competition.
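The cycling of GDA on the bilinear game can be reproduced in a few lines. The following sketch (plain Python; the stepsize $\eta = 0.2$ and starting point $(0.5,0.5)$ are illustrative choices) iterates the GDA update on $f(x,y) = xy$, $g = -f$ for scalar players and tracks the distance to the equilibrium $(0,0)$:

```python
import math

def gda_step(x, y, eta=0.2):
    """One step of gradient descent ascent (GDA) on the bilinear
    zero-sum game f(x, y) = x*y, g = -f, for scalar players:
    grad_x f = y and grad_y g = -x."""
    return x - eta * y, y + eta * x

x, y = 0.5, 0.5          # illustrative starting point
radii = []
for _ in range(100):
    x, y = gda_step(x, y)
    radii.append(math.hypot(x, y))

# Each step multiplies the distance to the equilibrium (0, 0)
# by sqrt(1 + eta^2) > 1, so GDA spirals outward for any eta.
print(radii[0], radii[-1])
```

Since the update is an exact rotation-expansion, the outward spiral occurs for every positive stepsize, matching the discussion above.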
**Existing works:** Most existing approaches to stabilizing GDA follow one of three lines of attack.\ In the special case $f = -g$, the problem can be written as a minimization problem $\min_{x} F(x)$, where $F(x) {\coloneqq}\max_{y} f(x,y)$. For certain structured problems, [@gilpin2007gradient] use techniques from convex optimization [@nesterov2005excessive] to minimize the implicitly defined $F$. For general problems, the two-scale update rules proposed in [@goodfellow2014generative; @heusel2017gans; @metz2016unrolled] can be seen as an attempt to approximate $F$ and its gradients.\ In GDA, players pick their next strategy based on the last strategy picked by the other players. Methods based on *follow the regularized leader* [@shalev2007convex; @grnarova2017online], *fictitious play* [@brown1951iterative], *predictive updates* [@yadav2017stabilizing], *opponent learning awareness* [@foerster2018learning], and *optimism* [@rakhlin2013online; @daskalakis2017training; @mertikopoulos2019optimistic] propose more sophisticated heuristics that the players could use to predict each other’s next move. Algorithmically, many of these methods can be considered variations of the *extragradient method* [@korpelevich1977extragradient](see also [@facchinei2003finite]\[Chapter 12\]). 
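As a point of reference for these predictive methods, here is a minimal sketch of the extragradient method on the same bilinear game $f(x,y) = xy$, $g = -f$; the stepsize and iteration count are illustrative choices, not taken from any of the cited works:

```python
import math

def extragradient_step(x, y, eta=0.2):
    """One extragradient step on the bilinear zero-sum game
    f(x, y) = x*y, g = -f.  Both players first take a trial
    (prediction) step and then update using the gradients
    evaluated at the predicted point."""
    x_pred = x - eta * y      # minimizer's trial step (grad_x f = y)
    y_pred = y + eta * x      # maximizer's trial step (grad_y g = -x)
    return x - eta * y_pred, y + eta * x_pred

x, y = 0.5, 0.5
for _ in range(200):
    x, y = extragradient_step(x, y)

# The squared distance to (0, 0) shrinks by (1 - eta^2 + eta^4)
# per step, so the iterates converge where plain GDA spirals outward.
print(math.hypot(x, y))
```

The single prediction step is enough to turn the expanding rotation of GDA into a contraction on this game.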
Finally, some methods directly modify the gradient dynamics, either by promoting convergence through gradient penalties [@mescheder2017numerics], or by attempting to disentangle convergent *potential* parts from rotational *Hamiltonian* parts of the vector field [@balduzzi2018mechanics; @letcher2019differentiable; @gemp2018global].\ **Our contributions:** Our main *conceptual* objection to most existing methods is that they lack a clear game-theoretic motivation, but instead rely on the ad-hoc introduction of additional assumptions, modifications, and model parameters.\ Their main *practical* shortcoming is that to avoid divergence the stepsize has to be chosen inversely proportional to the magnitude of the interaction of the two players (as measured by $D_{xy}^2 f$, $D_{xy}^2 g$).\ On the one hand, the small stepsize results in slow convergence. On the other hand, a stepsize small enough to prevent divergence will not be known in advance in most problems. Instead it has to be discovered through tedious trial and error, which is further aggravated by the lack of a good diagnostic for improvement in multi-agent optimization (which is given by the objective function in single agent optimization).\ We alleviate the above mentioned problems by introducing a novel algorithm, *competitive gradient descent* (CGD) that is obtained as a natural extension of gradient descent to the competitive setting. Recall that in the single player setting, the gradient descent update is obtained as the optimal solution to a regularized linear approximation of the cost function. In the same spirit, the update of CGD is given by the Nash equilibrium of a regularized *bilinear* approximation of the underlying game. The use of a bilinear– as opposed to linear approximation lets the local approximation preserve the competitive nature of the problem, significantly improving stability. We prove (local) convergence results of this algorithm in the case of (locally) convex-concave zero-sum games. 
We also show that stronger interactions between the two players only improve convergence, without requiring an adaptation of the stepsize. In comparison, the existing methods need to reduce the stepsize to match the increase of the interactions to avoid divergence, which we illustrate on a series of polynomial test cases considered in previous works.\ We begin our numerical experiments by trying to use a GAN on a bimodal Gaussian mixture model. Even in this simple example, trying five different (constant) stepsizes under RMSProp, the existing methods diverge. The typical solution would be to decay the learning rate. However even with a constant learning rate, CGD succeeds with all these stepsize choices to approximate the main features of the target distribution. In fact, throughout our experiments we *never* saw CGD diverge. In order to measure the convergence speed more quantitatively, we next consider a nonconvex matrix estimation problem, measuring computational complexity in terms of the number of gradient computations performed. We observe that all methods show improved speed of convergence for larger stepsizes, with CGD roughly matching the convergence speed of optimistic gradient descent [@daskalakis2017training], at the same stepsize. However, as we increase the stepsize, other methods quickly start diverging, whereas CGD continues to improve, thus being able to attain significantly better convergence rates (more than two times as fast as the other methods in the noiseless case, with the ratio increasing for larger and more difficult problems). For small stepsize or games with weak interactions on the other hand, CGD automatically invests less computational time per update, thus gracefully transitioning to a cheap correction to GDA, at minimal computational overhead. 
We believe that the robustness of CGD makes it an excellent candidate for the fast and simple training of machine learning systems based on competition, hopefully helping them reach the same level of automation and ease-of-use that is already standard in minimization based machine learning. Competitive gradient descent {#sec:cgd} ============================ We propose a novel algorithm, which we call *competitive gradient descent* (CGD), for the solution of competitive optimization problems $\min_{x \in {\mathbb{R}}^m} f(x,y),\ \min_{y \in {\mathbb{R}}^n} g(x,y)$, where we have access to function evaluations, gradients, and Hessian-vector products of the objective functions. [^1] **How to linearize a game:** To motivate this algorithm, we remind ourselves that gradient descent with stepsize $\eta$ applied to the function $f:{\mathbb{R}}^m \longrightarrow {\mathbb{R}}$ can be written as $$x_{k+1} = {\operatorname{argmin}}\limits_{x \in {\mathbb{R}}^m} (x^{\top} - x_{k}^{\top}) \nabla_x f(x_k) + \frac{1}{2\eta} \|x - x_{k}\|^2.$$ This models a (single) player solving a local linear approximation of the (minimization) game, subject to a quadratic penalty that expresses her limited confidence in the global accuracy of the model. The natural generalization of this idea to the competitive case should then be given by the two players solving a local approximation of the true game, both subject to a quadratic penalty that expresses their limited confidence in the accuracy of the local approximation.\ In order to implement this idea, we need to find the appropriate way to generalize the linear approximation in the single agent setting to the competitive setting: *How to linearize a game?* **Linear or Multilinear:** GDA answers the above question by choosing a linear approximation of $f,g: {\mathbb{R}}^m \times {\mathbb{R}}^n \longrightarrow {\mathbb{R}}$.
This seemingly natural choice has the flaw that linear functions can not express any interaction between the two players and are thus unable to capture the competitive nature of the underlying problem. From this point of view it is not surprising that the convergent modifications of GDA are, implicitly or explicitly, based on higher order approximations (see also [@li2017limitations]). An equally valid generalization of the linear approximation in the single player setting is to use a *bilinear* approximation in the two-player setting. Since the bilinear approximation is the lowest order approximation that can capture some interaction between the two players, we argue that the natural generalization of gradient descent to competitive optimization is not GDA, but rather the update rule $(x_{k+1},y_{k+1}) = (x_k,y_k) + (x,y)$, where $(x,y)$ is a Nash equilibrium of the game [^2] $$\begin{aligned} \begin{split} \label{eqn:localgame} \min_{x \in {\mathbb{R}}^m} x^{\top} \nabla_x f &+ x^{\top} D_{xy}^2 f y + y^{\top} \nabla_y f + \frac{1}{2\eta} x^{\top} x \\ \min_{y \in {\mathbb{R}}^n} y^{\top} \nabla_y g &+ y^{\top} D_{yx}^2 g x + x^{\top} \nabla_x g + \frac{1}{2\eta} y^{\top} y. \end{split}\end{aligned}$$ Indeed, the (unique) Nash equilibrium of the Game  can be computed in closed form. \[thm:uniqueNash\] Among all (possibly randomized) strategies with finite first moment, the only Nash equilibrium of the Game  is given by $$\begin{aligned} \label{eqn:nash} &x = -\eta \left( {\operatorname{Id}}- \eta^2 D_{xy}^2f D_{yx}^2 g \right)^{-1} \left( \nabla_{x} f - \eta D_{xy}^2f \nabla_{y} g \right) \\ &y = -\eta \left( {\operatorname{Id}}- \eta^2 D_{yx}^2g D_{xy}^2 f \right)^{-1} \left( \nabla_{y} g - \eta D_{yx}^2g \nabla_{x} f \right), \end{aligned}$$ given that the matrix inverses in the above expression exist. [^3] Let $X,Y$ be randomized strategies. 
By subtracting and adding ${\mathbb{E}}[X]^2/(2\eta), {\mathbb{E}}[Y]^2/(2\eta)$, and taking expectations, we can rewrite the game as $$\begin{aligned} &\min_{{\mathbb{E}}[X] \in {\mathbb{R}}^m} {\mathbb{E}}[X]^{\top} \nabla_x f + {\mathbb{E}}[X]^{\top} D_{xy}^2 f {\mathbb{E}}[Y] + {\mathbb{E}}[Y]^{\top} \nabla_y f + \frac{1}{2\eta} {\mathbb{E}}[X]^{\top} {\mathbb{E}}[X] + \frac{1}{2\eta} {\operatorname{Var}}[X]\\ &\min_{{\mathbb{E}}[Y] \in {\mathbb{R}}^n} {\mathbb{E}}[Y]^{\top} \nabla_y g + {\mathbb{E}}[Y]^{\top} D_{yx}^2 g {\mathbb{E}}[X] + {\mathbb{E}}[X]^{\top} \nabla_x g + \frac{1}{2\eta} {\mathbb{E}}[Y]^{\top} {\mathbb{E}}[Y] + \frac{1}{2\eta} {\operatorname{Var}}[Y].\end{aligned}$$ Thus, the objective value for both players can always be improved by decreasing the variance while keeping the expectation the same, meaning that the optimal value will always (and only) be achieved by a deterministic strategy. We can then replace the ${\mathbb{E}}[X], {\mathbb{E}}[Y]$ with $x,y$, set the derivative of the first expression with respect to $x$ and of the second expression with respect to $y$ to zero, and solve the resulting system of two equations for the Nash equilibrium $(x,y)$. According to Theorem \[thm:uniqueNash\], the Game  has exactly one optimal pair of strategies, which is deterministic. Thus, we can use these strategies as an update rule, generalizing the idea of local optimality from the single– to the multi agent setting and obtaining Algorithm \[alg:CGD\]. **What I think that they think that I think ... 
that they do**: Another game-theoretic interpretation of CGD follows from the observation that its update rule can be written as $$\label{eqn:whatIthink} \begin{pmatrix} \Delta x\\ \Delta y \end{pmatrix} = - \begin{pmatrix} {\operatorname{Id}}& \eta D_{xy}^2 f \\ \eta D_{yx}^2 g & {\operatorname{Id}}\end{pmatrix}^{-1} \begin{pmatrix} \nabla_{x} f\\ \nabla_{y} g \end{pmatrix}.$$ Applying the expansion $ \lambda_{\max} (A) < 1 \Rightarrow \left( {\operatorname{Id}}- A \right)^{-1} = \lim_{N \rightarrow \infty} \sum_{k=0}^{N} A^k$ to the above equation, we observe that the first partial sum ($N = 0$) corresponds to the optimal strategy if the other player’s strategy stays constant (GDA). The second partial sum ($N = 1$) corresponds to the optimal strategy if the other player thinks that the other player’s strategy stays constant (LCGD, see Figure \[fig:ingredients\]). The third partial sum ($N = 2$) corresponds to the optimal strategy if the other player thinks that the other player thinks that the other player’s strategy stays constant, and so forth, until the Nash equilibrium is recovered in the limit. For small enough $\eta$, we could use the above series expansion to solve for $(\Delta x, \Delta y)$, which is known as Richardson iteration and would recover high order LOLA [@foerster2018learning]. However, expressing it as a matrix inverse will allow us to use optimal Krylov subspace methods to obtain far more accurate solutions with fewer gradient evaluations.\ **Rigorous results on convergence and local stability:** We will now show some basic convergence results for CGD, the proofs of which we defer to the appendix. Our results are restricted to the case of a zero-sum game ($f = -g$), but we expect that they can be extended to games that are dominated by competition. 
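The correspondence between the matrix inverse and the partial sums of the series can be checked numerically. The sketch below uses randomly generated placeholder data for the gradients and the mixed Hessian block of a zero-sum game (so $D_{yx}^2 g = -(D_{xy}^2 f)^{\top}$); it illustrates the update rule above, not the full algorithm, and omits the overall stepsize factor on the gradients, as in the displayed equation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eta = 3, 3, 0.05

# Placeholder local data of a zero-sum game (g = -f): the gradients
# and the mixed Hessian block at the current iterate are random
# stand-ins, not taken from any particular model.
grad_x_f = rng.standard_normal(m)
grad_y_g = rng.standard_normal(n)
Dxy_f = rng.standard_normal((m, n))
Dyx_g = -Dxy_f.T              # holds for any zero-sum game

# Block matrix of the update rule.
M = np.block([[np.eye(m), eta * Dxy_f],
              [eta * Dyx_g, np.eye(n)]])
b = np.concatenate([grad_x_f, grad_y_g])

# Exact CGD update: solve the linear system rather than inverting M.
delta_cgd = -np.linalg.solve(M, b)

# Partial sums of the Neumann series (Id - A)^{-1} b with A = Id - M.
A = np.eye(m + n) - M
partial, term = np.zeros_like(b), b.copy()
for k in range(200):
    partial += term
    if k == 0:
        delta_gda = -partial.copy()    # N = 0: recovers GDA
    if k == 1:
        delta_lcgd = -partial.copy()   # N = 1: recovers LCGD
    term = A @ term

print(np.linalg.norm(-partial - delta_cgd))   # series converges to CGD
```

For small $\eta$ the spectral radius of $A$ is below one, so the partial sums converge to the exact update; in practice a Krylov solver reaches the same accuracy in far fewer matrix-vector products.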
To simplify notation, we define $$\bar{D} {\coloneqq}({\operatorname{Id}}+ \eta^2 D_{xy}^2f D_{yx}^2f)^{-1} \eta^2 D_{xy}^2f D_{yx}^2f, \quad \tilde{D} {\coloneqq}({\operatorname{Id}}+ \eta^2 D_{yx}^2f D_{xy}^2f)^{-1} \eta^2 D_{yx}^2f D_{xy}^2f.$$ We furthermore define the spectral function $h_{\pm}(\lambda) {\coloneqq}\min(3\lambda, \lambda)/2$. \[thm:convergence\] If $f$ is twice continuously differentiable with $L$-Lipschitz continuous mixed Hessian, $f$ is convex-concave or $D_{xx}^2 f, D_{yy}^2f$ are $L$-Lipschitz continuous, and the diagonal blocks of its Hessian are bounded as $\eta \|D_{xx}^2 f\|, \eta \|D_{yy}^2 f\| \leq 1$, we have $$\begin{aligned} &\left\|\nabla_x f\left(x_{k+1}, y_{k+1}\right) \right\|^2 + \left\|\nabla_y f\left(x_{k+1}, y_{k+1}\right) \right\|^2 - \left\|\nabla_{x} f \right\|^2 - \left\|\nabla_{y} f \right\|^2 \leq \\ &- \nabla_{x} f^{\top} \left( \eta h_{\pm} \left(D_{xx}^2 f\right) + \bar{D} -32 L \eta^2\|\nabla_{x} f\| \right) \nabla_x f - \nabla_{y} f^{\top} \left( \eta h_{\pm}\left(-D_{yy}^2 f\right) + \tilde{D} -32 L \eta^2 \|\nabla_y f\| \right) \nabla_{y}f \end{aligned}$$ Under suitable assumptions on the curvature of $f$, Theorem \[thm:convergence\] implies results on the convergence of CGD. Under the assumptions of Theorem \[thm:convergence\], if for $\alpha > 0$ $$\Bigl( \eta h_{\pm} \left(D_{xx}^2 f\right) + \bar{D} -32 L \eta^2\|\nabla_{x} f(x_0, y_0)\| \Bigr), \Bigl( \eta h_{\pm}\left(-D_{yy}^2 f\right) + \tilde{D} -32 L \eta^2 \|\nabla_y f(x_0, y_0)\| \Bigr) \succeq \alpha {\operatorname{Id}},$$ for all $(x,y) \in {\mathbb{R}}^{m + n}$, then CGD started in $(x_0, y_0)$ converges at exponential rate with exponent $\alpha$ to a critical point. Furthermore, we can deduce the following local stability result.
Let $(x^*,y^*)$ be a critical point ($(\nabla_{x} f, \nabla_{y} f) = (0,0)$) and assume furthermore that $\lambda_{\min} {\coloneqq}\min \left(\lambda_{\min} \left(\eta D_{xx}^2 f + \bar{D}\right), \lambda_{\min} \left(-\eta D_{yy}^2 f + \tilde{D} \right) \right) > 0$ and $f \in C^2(\mathbb{R}^{m + n})$ with Lipschitz continuous mixed Hessian. Then there exists a neighbourhood $\mathcal{U}$ of $(x^*,y^*)$, such that CGD started in $(x_{1},y_{1}) \in \mathcal{U}$ converges to a point in $\mathcal{U}$ at an exponential rate that depends only on $\lambda_{\min}$. The results on local stability for existing modifications of GDA, including those of [@mescheder2017numerics; @daskalakis2017training; @mertikopoulos2019optimistic] (see also [@liang2018interaction]) all require the stepsize to be chosen inversely proportional to an upper bound on $\sigma_{\max} (D_{xy}^2f)$ and indeed we will see in our experiments that the existing methods are prone to divergence under strong interactions between the two players (large $\sigma_{\max}(D_{xy}^2f)$). In contrast to these results, our convergence results *only improve* as the interaction between the players becomes stronger. **Why not use $D_{xx}^2f$ and $D_{yy}^2g$?:** The use of a bilinear approximation that contains some, but not all, second order terms is unusual and begs the question why we do not include the diagonal blocks of the Hessian in Equation , resulting in the damped and regularized Newton’s method $$\label{eqn:newton} \begin{pmatrix} \Delta x\\ \Delta y \end{pmatrix} = - \begin{pmatrix} {\operatorname{Id}}+ \eta D_{xx}^2 f & \eta D_{xy}^2 f \\ \eta D_{yx}^2 g & {\operatorname{Id}}+ \eta D_{yy}^2 g \end{pmatrix}^{-1} \begin{pmatrix} \nabla_{x} f\\ \nabla_{y} g \end{pmatrix}.$$ For the following reasons we believe that the bilinear approximation is preferable both from a practical and conceptual point of view.
- *Conditioning of matrix inverse:* One advantage of competitive gradient descent is that in many cases, including all zero-sum games, the condition number of the matrix inverse in Algorithm \[alg:CGD\] is bounded above by $\eta^2 \|D_{xy}\|^2$. If we include the diagonal blocks of the Hessian in a non-convex-concave problem, the matrix can even be singular as soon as $\eta \|D_{xx}^2 f\| \geq 1$ or $\eta \|D_{yy}^2 g\| \geq 1$. - *Irrational updates:* We can only expect the update rule  to correspond to a local Nash equilibrium if the problem is convex-concave or $\eta \|D_{xx}^2 f\|, \eta \|D_{yy}^2 g\| < 1$. If these conditions are violated it can instead correspond to the players playing their *worst* as opposed to best strategy based on the quadratic approximation, leading to behavior that contradicts the game-interpretation of the problem. - *Lack of regularity:* For the inclusion of the diagonal blocks of the Hessian to be helpful at all, we need to make additional assumptions on the regularity of $f$, for example by bounding the Lipschitz constants of $D_{xx}^2 f$ and $D_{yy}^2g$. Otherwise, their value at a given point can be totally uninformative about the global structure of the loss functions (consider as an example the minimization of $x \mapsto x^2 + \epsilon^{3/2} \sin(x/\epsilon)$ for $\epsilon \ll 1$). Many problems in competitive optimization, including GANs, have the form $f(x,y) = \Phi(\mathcal{G}(x), \mathcal{D}(y)), g(x,y) = \Theta(\mathcal{G}(x), \mathcal{D}(y))$, where $\Phi, \Theta$ are *smooth* and *simple*, but $\mathcal{G}$ and $\mathcal{D}$ might only have first order regularity. In this setting, the bilinear approximation has the advantage of fully exploiting the first order information of $\mathcal{G}$ and $\mathcal{D}$, without assuming them to have higher order regularity. 
This is because the bilinear approximations of $f$ and $g$ then contains only the first derivatives of $\mathcal{G}$ and $\mathcal{D}$, while the quadratic approximation contains the second derivatives $D_{xx}^2 \mathcal{G}$ and $D_{yy}^2 \mathcal{D}$ and therefore needs stronger regularity assumptions on $\mathcal{G}$ and $\mathcal{D}$ to be effective. - *No spurious symmetry:* One reason to favor full Taylor approximations of a certain order in single-player optimization is that they are invariant under changes of the coordinate system. For competitive optimization, a change of coordinates of $(x,y) \in {\mathbb{R}}^{m +n}$ can correspond, for instance, to taking a decision variable of one player and giving it to the other player. This changes the underlying game significantly and thus we do *not* want our approximation to be invariant under this transformation. Instead, we want our local approximation to only be invariant to coordinate changes of $x \in {\mathbb{R}}^m$ and $y \in {\mathbb{R}}^{n}$ *in separation*, that is to block-diagonal coordinate changes on ${\mathbb{R}}^{m+n}$. *Mixed* order approximations (bilinear, biquadratic, etc.) have exactly this invariance property and thus are the natural approximation for two-player games. While we are convinced that the right notion of first order competitive optimization is given by quadratically regularized bilinear approximations, we believe that the right notion of second order competitive optimization is given by *cubically* regularized *biquadratic* approximations, in the spirit of [@nesterov2006cubic]. Consensus, optimism, or competition? {#sec:comparison} ==================================== We will now show that many of the convergent modifications of GDA correspond to different subsets of four common ingredients. 
*Consensus optimization* (ConOpt) [@mescheder2017numerics] penalises the players for non-convergence by adding the squared norm of the gradient at the next location, $\gamma \|(\nabla_x f(x_{k+1},y_{k+1}), \nabla_y f(x_{k+1},y_{k+1}))\|^2$, to both players’ loss functions (here $\gamma \geq 0$ is a hyperparameter). As we see in Figure \[fig:ingredients\], the resulting gradient field has two additional Hessian corrections. [@balduzzi2018mechanics; @letcher2019differentiable] observe that any game can be written as the sum of a *potential game* (that is easily solved by GDA), and a *Hamiltonian game* (that is easily solved by ConOpt). Based on this insight, they propose *symplectic gradient adjustment* that applies (in its simplest form) ConOpt only using the skew-symmetric part of the Hessian, thus alleviating the problematic tendency of ConOpt to converge to spurious solutions. The same algorithm was independently discovered by [@gemp2018global], who also provide a detailed analysis in the case of linear-quadratic GANs.\ [@daskalakis2017training] proposed to modify GDA as $$\begin{aligned} \label{eqn:updateOGDA} \Delta x &= - \left( \nabla_x f(x_{k},y_{k}) + \left( \nabla_x f(x_{k},y_{k}) - \nabla_x f(x_{k-1},y_{k-1}) \right) \right) \\ \Delta y &= - \left( \nabla_y g(x_{k},y_{k}) + \left( \nabla_y g(x_{k},y_{k}) - \nabla_y g(x_{k-1},y_{k-1}) \right) \right),\end{aligned}$$ which we will refer to as optimistic gradient descent ascent (OGDA). By interpreting the differences appearing in the update rule as finite difference approximations to Hessian vector products, we see that (to leading order) OGDA corresponds to yet another second order correction of GDA (see Figure \[fig:ingredients\]).
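The OGDA update above is stated without an explicit stepsize; reintroducing one, a minimal sketch of OGDA on the bilinear game $f(x,y) = \alpha xy$, $g = -f$ (with the illustrative values $\alpha = 1$, $\eta = 0.1$) reads:

```python
import math

def ogda_step(x, y, x_prev, y_prev, alpha=1.0, eta=0.1):
    """One OGDA step, with an explicit stepsize eta, on the
    bilinear zero-sum game f(x, y) = alpha*x*y, g = -f, where
    grad_x f = alpha*y and grad_y g = -alpha*x."""
    x_new = x - eta * (2 * alpha * y - alpha * y_prev)
    y_new = y - eta * (-2 * alpha * x + alpha * x_prev)
    return x_new, y_new

# Seed the two-step recursion with one plain GDA step.
x_prev, y_prev = 0.5, 0.5
x, y = x_prev - 0.1 * y_prev, y_prev + 0.1 * x_prev
for _ in range(3000):
    x, y, x_prev, y_prev = *ogda_step(x, y, x_prev, y_prev), x, y

print(math.hypot(x, y))   # decays toward the equilibrium (0, 0)
```

For this weakly interacting choice of $\alpha$ the finite-difference "prediction" term is enough to make the iterates spiral inward; larger $\alpha$ at the same stepsize breaks this, as the experiments below illustrate.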
It will also be instructive to compare the algorithms to linearized competitive gradient descent (LCGD), which is obtained by skipping the matrix inverse in CGD (which corresponds to taking only the leading order term in the limit $\eta D_{xy}^2f \rightarrow 0$) and also coincides with first order LOLA [@foerster2018learning]. As illustrated in Figure \[fig:ingredients\], these six algorithms amount to different subsets of the following four terms. $$\begin{aligned} & \text{GDA: } &\Delta x = &&&- \nabla_x f&\\ & \text{LCGD: } &\Delta x = &&&- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f&\\ & \text{SGA: } &\Delta x = &&&- \nabla_x f& &- \gamma D_{xy}^2 f \nabla_y f& & & \\ & \text{ConOpt: } &\Delta x = &&&- \nabla_x f& &- \gamma D_{xy}^2 f \nabla_y f& &- \gamma D_{xx}^2 f \nabla_x f& \\ & \text{OGDA: } &\Delta x \approx &&&- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f& &+\eta D_{xx}^2 f \nabla_x f& \\ & \text{CGD: } &\Delta x = &\left({\operatorname{Id}}+ \eta^2 D_{xy}^2 f D_{yx}^2 f\right)^{-1}&\bigl( &- \nabla_x f& &-\eta D_{xy}^2 f \nabla_y f& & & \bigr) \end{aligned}$$ 1. \[item:grad\] The *gradient term* $-\nabla_{x}f$, $\nabla_{y}f$ which corresponds to the most immediate way in which the players can improve their cost. 2. \[item:comp\] The *competitive term* $-D_{xy}^2 f \nabla_yf$, $D_{yx}^2 f \nabla_x f$ which can be interpreted either as anticipating the other player to use the naive (GDA) strategy, or as decreasing the other player’s influence (by decreasing their gradient). 3. \[item:consensus\] The *consensus term* $ \pm D_{xx}^2 f \nabla_x f$, $\mp D_{yy}^2 f \nabla_y f$ that determines whether the players prefer to decrease their gradient ($\pm = +$) or to increase it ($\pm = -$). The former corresponds to the players seeking consensus, whereas the latter can be seen as the opposite of consensus.\ (It also corresponds to an approximate Newton’s method. [^4]) 4.
\[item:equilibrium\] The *equilibrium term* $({\operatorname{Id}}+ \eta^2 D_{xy}^2 f D_{yx}^2 f)^{-1}$, $({\operatorname{Id}}+ \eta^2 D_{yx}^2 f D_{xy}^2 f)^{-1}$, which arises from the players solving for the Nash equilibrium. This term lets each player prefer strategies that are less vulnerable to the actions of the other player. Each of these terms is responsible for a different feature of the corresponding algorithm, which we can illustrate by applying the algorithms to three prototypical test cases considered in previous works.

- We first consider the bilinear problem $f(x,y) = \alpha xy$ (see Figure \[fig:bilinear\_strong\]). It is well known that GDA will fail on this problem, for any value of $\eta$. For $\alpha = 1.0$, all the other methods converge exponentially towards the equilibrium, with ConOpt and SGA converging at a faster rate due to the stronger gradient correction ($\gamma > \eta$). If we choose $\alpha = 3.0$, OGDA, ConOpt, and SGA fail. The former diverges, while the latter two begin to oscillate widely. If we choose $\alpha = 6.0$, all methods but CGD diverge.

- In order to explore the effect of the consensus Term \[item:consensus\], we now consider the convex-concave problem $f(x,y) = \alpha(x^2 - y^2)$ (see Figure \[fig:quad\]). For $\alpha = 1.0$, all algorithms converge at an exponential rate, with ConOpt converging the fastest and OGDA the slowest. The consensus-promoting term of ConOpt accelerates convergence, while the competition-promoting term of OGDA slows it down. As we increase $\alpha$ to $\alpha = 3.0$, OGDA and ConOpt start to diverge, while the remaining algorithms still converge at an exponential rate. Upon increasing $\alpha$ further to $\alpha = 6.0$, all algorithms diverge.

- We further investigate the effect of the consensus Term \[item:consensus\] by considering the concave-convex problem $f(x,y) = \alpha( -x^2 + y^2)$ (see Figure \[fig:quad\]). 
The critical point $(0,0)$ does not correspond to a Nash equilibrium, since both players are playing their *worst possible strategy*. Thus it is highly undesirable for an algorithm to converge to this critical point. However, for $\alpha = 1.0$, ConOpt does converge to $(0,0)$, which provides an example of the consensus regularization introducing spurious solutions. The other algorithms, instead, diverge towards infinity, as would be expected. In particular, we see that SGA is correcting the problematic behavior of ConOpt, while maintaining its better convergence rate in the first example. As we increase $\alpha$ to $\alpha \in \{3.0,6.0\}$, the radius of attraction of $(0,0)$ under ConOpt decreases and thus ConOpt diverges from the starting point $(0.5,0.5)$ as well. The first experiment shows that the inclusion of the competitive Term \[item:comp\] is enough to solve the cycling problem in the bilinear case. However, as discussed after Theorem \[thm:convergence\], the convergence results for the existing methods in the literature break down as the interactions between the players become too strong (for the given $\eta$). The first experiment illustrates that this is not just a lack of theory, but corresponds to an actual failure mode of the existing algorithms. 
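The failure thresholds above can be checked by hand in the scalar bilinear game $f(x,y) = \alpha x y$, where the CGD update from the display above has a closed form (a numpy sketch under these assumptions, not the authors' code):

```python
import numpy as np

eta, alpha = 0.2, 6.0  # the regime in which all methods but CGD diverge
x, y = 0.5, 0.5        # CGD iterate
xg, yg = 0.5, 0.5      # plain GDA iterate, for comparison

for _ in range(100):
    gx, gy = alpha * y, alpha * x          # gradients of f; D_xy^2 f = alpha
    pre = 1.0 / (1.0 + eta**2 * alpha**2)  # scalar equilibrium preconditioner
    dx = -eta * pre * (gx + eta * alpha * gy)  # player x minimizes f
    dy = eta * pre * (gy - eta * alpha * gx)   # player y maximizes f
    x, y = x + dx, y + dy
    xg, yg = xg - eta * alpha * yg, yg + eta * alpha * xg  # GDA

cgd_norm, gda_norm = float(np.hypot(x, y)), float(np.hypot(xg, yg))
```

In complex notation the CGD map multiplies $x + iy$ by $(1 + i\eta\alpha)/(1 + \eta^2\alpha^2)$, which has modulus $1/\sqrt{1 + \eta^2\alpha^2} < 1$ for every $\alpha$, whereas the GDA map has modulus $\sqrt{1 + \eta^2\alpha^2} > 1$.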
The experimental results in Figure \[fig:matrixGAN\] further show that for input dimensions $m,n > 1$, the advantages of CGD cannot be recovered by simply changing the stepsize $\eta$ used by the other methods.\ While introducing the competitive term is enough to fix the cycling behaviour of GDA, OGDA and ConOpt (for small enough $\eta$) add the additional consensus term to the update rule, with opposite signs.\ In the second experiment (where convergence is desired), OGDA converges in a smaller parameter range than GDA and SGA, while only diverging slightly faster in the third experiment (where divergence is desired).\ ConOpt, on the other hand, converges faster than GDA in the second experiment for $\alpha = 1.0$; however, it diverges faster for the remaining values of $\alpha$ and, more problematically, it converges to a spurious solution in the third experiment for $\alpha = 1.0$.\ Based on these findings, the consensus term with either sign does not seem to systematically improve the performance of the algorithm, which is why we suggest using only the competitive term (that is, LOLA/LCGD, SGA, or CGD).

![The first 50 iterations of GDA, LCGD, ConOpt, OGDA, and CGD with parameters $\eta = 0.2$ and $\gamma = 1.0$. The objective function is $f(x,y) = \alpha x^{\top}y$ for, from left to right, $\alpha \in \{1.0, 3.0, 6.0\}$. (Note that ConOpt and SGA coincide on a bilinear problem.)[]{data-label="fig:bilinear_strong"}](figures/bilinear_strong_alpha1.png "fig:") ![](figures/bilinear_strong_alpha3.png "fig:") ![](figures/bilinear_strong_alpha6.png "fig:")

![We measure the (non-)convergence to equilibrium in the separable convex-concave problem ($f(x,y) = \alpha( x^2 - y^2 )$, left three plots) and the concave-convex problem ($f(x,y) = \alpha( -x^2 + y^2 )$, right three plots), for $\alpha \in \{1.0,3.0,6.0\}$. The y-axis measures $\log_{10}(\|(x_{k},y_{k})\|)$ and the x-axis the number of iterations $k$. Note that convergence is desired for the first problem, while *divergence* is desired for the second problem.[]{data-label="fig:quad"}](figures/quadratic_equi_alpha1.png "fig:") ![](figures/quadratic_equi_alpha3.png "fig:") ![](figures/quadratic_equi_alpha6.png "fig:") ![](figures/quadratic_noequi_alpha1.png "fig:") ![](figures/quadratic_noequi_alpha3.png "fig:") ![](figures/quadratic_noequi_alpha6.png "fig:")

Implementation and numerical results
====================================

We briefly discuss the implementation of CGD.\ **Computing Hessian vector products:** First, our algorithm requires products of the mixed Hessian, $v \mapsto D_{xy}^2 f\, v$ and $v \mapsto D_{yx}^2 g\, v$, which we want to compute using automatic differentiation. As was already observed by [@pearlmutter1994fast], Hessian vector products can be computed at minimal overhead over the cost of computing gradients, by combining forward and reverse mode automatic differentiation. To this end, a function $x \mapsto \nabla_y f(x,y)$ is defined using reverse mode automatic differentiation. The Hessian vector product can then be evaluated as $D_{xy}^2 f\, v = \left. \frac{\partial}{\partial{h}} \nabla_y f(x + h v, y) \right|_{h = 0}$, using forward mode automatic differentiation. Many AD frameworks, like Autograd (<https://github.com/HIPS/autograd>) and ForwardDiff (<https://github.com/JuliaDiff/ForwardDiff.jl>, [@revels2016forward]) together with ReverseDiff (<https://github.com/JuliaDiff/ReverseDiff.jl>), support this procedure. In settings where we are only given access to gradient evaluations but cannot use automatic differentiation to compute Hessian vector products, we can instead approximate them using finite differences.

**Matrix inversion for the equilibrium term**: Similar to a *truncated Newton’s method* [@nocedal2006numerical], we propose to use iterative methods to approximate the inverse-matrix vector products arising in the equilibrium term \[item:equilibrium\]. We will focus on zero-sum games, where the matrix is always symmetric positive definite, making the conjugate gradient (CG) algorithm the method of choice. For nonzero-sum games we recommend using GMRES or BiCGSTAB (see for example [@saad2003iterative] for details). 
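The finite-difference fallback for Hessian vector products mentioned above can be sketched as follows; for the bilinear test function $f(x,y) = x^{\top} A y$ the mixed Hessian is known exactly, so the approximation can be checked (illustrative code, with all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, m))
x, y = rng.standard_normal(n), rng.standard_normal(m)

def grad_y(x, y):
    # gradient of f(x, y) = x^T A y with respect to y
    return A.T @ x

def mixed_hessian_vec(x, y, v, h=1e-6):
    # central finite-difference approximation of the directional derivative
    # of x -> grad_y f(x, y) along v, i.e. the mixed Hessian applied to v
    return (grad_y(x + h * v, y) - grad_y(x - h * v, y)) / (2.0 * h)

v = rng.standard_normal(n)
approx = mixed_hessian_vec(x, y, v)
exact = A.T @ v  # for bilinear f the mixed Hessian is the constant matrix A
```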
We suggest terminating the iterative solver after a given relative decrease of the residual is achieved ($\| M x - y \| \leq \epsilon \|x\|$ for a small parameter $\epsilon$, when solving the system $Mx = y$). In our experiments we choose $\epsilon = 10^{-6}$. Given the strategy $\Delta x$ of one player, the optimal counter strategy $\Delta y$ can be found without solving another system of equations. Thus, we recommend in each update to solve for the strategy of only one of the two players using Equation , and then use the optimal counter strategy for the other player. The computational cost can be further improved by using the last round’s optimal strategy as a warm start for the inner CG solve. An appealing feature of the above algorithm is that the number of iterations of CG adapts to the difficulty of solving the equilibrium term \[item:equilibrium\]. If it is easy, we converge rapidly and CGD thus *gracefully reduces to LCGD*, at only a small overhead. If it is difficult, we might need many iterations, but correspondingly the problem would be very hard without the preconditioning provided by the equilibrium term.

**Experiment: Fitting a bimodal distribution:** We use a simple GAN to fit a Gaussian mixture model with two modes, in two dimensions (see supplement for details). We apply SGA, ConOpt ($\gamma = 1.0$), OGDA, and CGD for stepsizes $\eta \in \{0.4, 0.1, 0.025, 0.005\}$ together with RMSProp ($\rho = 0.9$). In each case, CGD produces a reasonable approximation of the input distribution without any mode collapse. In contrast, all other methods diverge after some initial cycling behaviour. Reducing the steplength to $\eta = 0.001$ did not seem to help either. While we do not claim that the other methods cannot be made to work with proper hyperparameter tuning, this result substantiates our claim that CGD is significantly more robust than existing methods for competitive optimization. 
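The truncated solve described above can be sketched with a textbook conjugate-gradient loop, with the equilibrium matrix accessed only through matrix-vector products and a relative-residual stopping rule (parameter names and the stopping convention relative to the right-hand side are ours):

```python
import numpy as np

def cg(matvec, b, x0=None, eps=1e-6, maxiter=100):
    # Conjugate gradients for M x = b, M symmetric positive definite.
    # x0 allows warm starting from the previous round's solution.
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= eps * np.linalg.norm(b):
            break
        Mp = matvec(p)
        step = rs / (p @ Mp)
        x += step * p
        r -= step * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Equilibrium matrix M = Id + eta^2 N N^T, formed only implicitly
rng = np.random.default_rng(1)
N = rng.standard_normal((5, 5))
eta = 0.2
matvec = lambda v: v + eta**2 * (N @ (N.T @ v))
b = rng.standard_normal(5)
sol = cg(matvec, b)
```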
For more details and visualizations of the whole trajectories, consult the supplementary material.

![For all methods, initially the players cycle between the two modes (first column). For all methods but CGD, the dynamics eventually become unstable (middle column). Under CGD, the mass eventually distributes evenly among the two modes (right column). (The arrows show the update of the generator and the colormap encodes the logit output by the discriminator.)](figures/plot_iter_207_grad_824.png "fig:") ![](figures/plot_iter_315_grad_1256.png "fig:") ![](figures/plot_iter_83_grad_1060.png "fig:") ![](figures/plot_iter_214_grad_852.png "fig:") ![](figures/plot_iter_316_grad_1260.png "fig:") ![](figures/plot_iter_98_grad_2028.png "fig:")

**Experiment: Estimating a covariance matrix:** To show that CGD is also competitive in terms of computational complexity, we consider the noiseless case of the covariance estimation example used by [@daskalakis2017training] \[Appendix C\]. We study the tradeoff between the number of evaluations of the forward model (thus accounting for the inner loop of CGD) and the residual, and observe that for comparable stepsizes the convergence rate of CGD is similar to that of the other methods. However, since CGD remains convergent for larger stepsizes, it can beat the other methods by more than a factor of two (see supplement for details).

![We plot the decay of the residual after a given number of model evaluations, for increasing problem sizes and $\eta \in \{0.005, 0.025, 0.1, 0.4\}$. Experiments that are not plotted diverged.[]{data-label="fig:matrixGAN"}](apfigures/cvest_d_20.png "fig:") ![](apfigures/cvest_d_40.png "fig:") ![](apfigures/cvest_d_60.png "fig:")

Conclusion and outlook
======================

We propose a novel and natural generalization of gradient descent to competitive optimization. Besides its attractive game-theoretic interpretation, the algorithm shows improved robustness properties compared to the existing methods, which we study using a combination of theoretical analysis and computational experiments. We see four particularly interesting directions for future work. First, we would like to further study the practical implementation and performance of CGD, developing it to become a useful tool for practitioners to solve competitive optimization problems. Second, we would like to study extensions of CGD to the setting of more than two players. As hinted in Section \[sec:cgd\], a natural candidate would be to simply consider multilinear quadratically regularized local models, but the practical implementation and evaluation of this idea is still open. Third, we believe that second order methods can be obtained from biquadratic approximations with cubic regularization, thus extending the cubically regularized Newton’s method of [@nesterov2006cubic] to competitive optimization. Fourth, a convergence proof in the nonconvex case analogous to [@lee2016gradient] is still out of reach in the competitive setting. A major obstacle to this end is the identification of a suitable measure of progress (which is given by the function value in the single agent setting), since norms of gradients cannot be expected to decay monotonically for competitive dynamics in non-convex-concave games.

### Acknowledgments {#acknowledgments .unnumbered}

A. Anandkumar is supported in part by Bren endowed chair, Darpa PAI, Raytheon, and Microsoft, Google and Adobe faculty fellowships. F. 
Sch[ä]{}fer gratefully acknowledges support by the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning) and by Amazon AWS under the Caltech Amazon Fellows program. We thank the reviewers for their constructive feedback, which has helped us improve the paper.

Proofs of convergence
=====================

To shorten the expressions below, we set $a {\coloneqq}\nabla_{x}f(x_{k}, y_{k})$, $b {\coloneqq}\nabla_{y} f(x_{k}, y_{k})$, $H_{xx} {\coloneqq}D_{xx}^2 f(x_{k}, y_{k})$, $H_{yy} {\coloneqq}D_{yy}^2 f(x_{k}, y_{k})$, $N {\coloneqq}D_{xy}^2 f(x_{k}, y_{k})$, ${\tilde{N}}{\coloneqq}\eta N$, ${\tilde{M}}{\coloneqq}{\tilde{N}}^{\top} {\tilde{N}}$, and ${\bar{M}}{\coloneqq}{\tilde{N}}{\tilde{N}}^{\top}$. Letting $(x,y)$ be the update step of CGD and using Taylor expansion, we obtain $$\begin{aligned} &\left\| \nabla_x f(x_k + x, y_k + y) \right\|^2 + \left\| \nabla_y f(x_k + x, y_k + y) \right\|^2 -\|a\|^2 - \|b\|^2 \\ \leq& 2 x^{\top} H_{xx} a + 2 x^{\top} N b + 2 a^{\top} N y + 2b^{\top} H_{yy} y \\ &+ 4L( \|x\|^2 + \|y\|^2)(\|a\| + \|b\|)\\ =& + 2 \eta \left(- a^{\top} - b^{\top}{\tilde{N}}^{\top}\right) \left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx} a \\ &+ 2 x^{\top} N b + 2 a^{\top} N y \\ &+ 2 \eta b^{\top} H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} \left( b - {\tilde{N}}^{\top}a \right)\\ &+ 4L( \|x\|^2 + \|y\|^2)(\|a\| + \|b\|) = \ldots, \end{aligned}$$ By expanding zero to $\pm 2 \eta b^{\top} {\tilde{N}}^{\top} \left({\operatorname{Id}}+ {\bar{M}}\right)^{-1}H_{xx}a$ and $\pm 2 \eta b^{\top}H_{yy}\left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1}{\tilde{N}}^{\top} a $, we obtain $$\begin{aligned} \ldots =& -2 \eta a^{\top} H_{xx} a + 2 \eta a^{\top} {\bar{M}}\left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx} a \\ &- 2 \eta b^{\top}{\tilde{N}}^{\top} \left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx}a\\ &+ 2 x^{\top} N b + 2 a^{\top} N y \\ &+ 2 \eta b^{\top} H_{yy} b - 2 \eta 
b^{\top}H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{M}}b\\ &- 2 \eta b^{\top} H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1}{\tilde{N}}^{\top}a \\ &+ 4L( \|x\|^2 + \|y\|^2)(\|a\| + \|b\|) = \ldots. \end{aligned}$$ We now plug the update rule of CGD into $x$ and $y$ and observe that ${\tilde{N}}^{\top}({\operatorname{Id}}+ {\bar{M}})^{-1} = ({\operatorname{Id}}+ {\tilde{M}})^{-1}{\tilde{N}}^{\top}$ to obtain $$2 x^{\top} N b + 2 a^{\top} N y = -2 a^{\top} \left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} {\bar{M}}a - 2 b^{\top}\left({\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{M}}b.$$ By plugging this into our main computation, we obtain $$\begin{aligned} \ldots =& -2 \eta a^{\top} H_{xx} a + 2 \eta a^{\top} {\bar{M}}\left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx} a \\ &- 2 \eta b^{\top}{\tilde{N}}^{\top} \left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx}a\\ &- 2a^{\top} \left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} {\bar{M}}a - 2b^{\top} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{M}}b\\ &+ 2\eta b^{\top} H_{yy} b - 2 \eta b^{\top}H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{M}}b\\ &- 2\eta b^{\top} H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{N}}^{\top}a \\ &+ 4L( \|x\|^2 + \|y\|^2)(\|a\| + \|b\|) \leq \ldots. \end{aligned}$$ By positivity of squares, we have $$\begin{aligned} 2 \eta a^{\top} {\bar{M}}\left( {\operatorname{Id}}+ {\bar{M}}\right)^{-1} H_{xx} a &\leq a^{\top} \left({\bar{M}}\left({\operatorname{Id}}+ {\bar{M}}\right)^{-1} \right)^2 a + a^{\top} \left( \eta H_{xx} \right)^2 a \\ -2 \eta b^{\top} H_{yy} \left( {\operatorname{Id}}+ {\tilde{M}}\right)^{-1} {\tilde{M}}b &\leq b^{\top} \left({\tilde{M}}\left({\operatorname{Id}}+ {\tilde{M}}\right)^{-1} \right)^2 b + b^{\top} \left( \eta H_{yy}\right)^2 b. 
\end{aligned}$$ For $\lambda \in [-1,1]$ we have $-2 \lambda + \lambda^2 = -2 \lambda \left( 1 - \lambda/2 \right) \leq - h_{\pm}(\lambda)$, from which we deduce the result. Theorem 2.4 follows from Theorem 2.3 by relatively standard arguments: Since $\left( \nabla_{x}f(x^{*},y^{*}), \nabla_{y}f(x^{*},y^{*}) \right) = 0 $ and the gradient and Hessian of $f$ are continuous, there exists a neighbourhood $\mathcal{V}$ of $(x^{*},y^{*})$ such that for all possible starting points $(x_1,y_1) \in \mathcal{V}$, we have $\|(\nabla_{x} f(x_2,y_2), \nabla_{y} f(x_2, y_2))\| \leq (1-\lambda_{\min}/4) \|(\nabla_{x} f(x_1,y_1), \nabla_{y} f(x_1, y_1))\|$. Then, by convergence of the geometric series, there exists a closed neighbourhood $\mathcal{U} \subset \mathcal{V}$ of $(x^{*},y^{*})$, such that for $(x_0,y_0) \in \mathcal{U}$ we have $(x_k,y_k) \in \mathcal{V}, \forall k \in \mathbb{N}$, and thus $(x_k,y_k)$ converges at an exponential rate to a point in $\mathcal{U}$.

Details regarding the experiments
=================================

Experiment: Estimating a covariance matrix
------------------------------------------

We consider the problem $-g(V,W) = f(W,V) = \sum_{ij} W_{ij}\left(\hat{\Sigma}_{ij} - (V\hat{\Sigma} V^{\top})_{ij}\right)$, where the $\hat{\Sigma}$ are empirical covariance matrices obtained from samples distributed according to $\mathcal{N}(0,\Sigma)$. For our experiments, the matrix $\Sigma$ is created as $\Sigma = U U^{\top}$, where the entries of $U \in \mathbb{R}^{d \times d}$ are distributed i.i.d. standard Gaussian. We consider the algorithms OGDA, SGA, ConOpt, and CGD, with $\gamma = 1.0$, $\epsilon = 10^{-6}$, and let the stepsizes range over $\eta \in \{0.005, 0.025, 0.1, 0.4\}$. We begin with the deterministic case $\hat{\Sigma} = \Sigma$, corresponding to the limit of large sample size. 
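For this objective the gradients consumed by all of the first-order methods are available in closed form; the sketch below (ours, not the authors' code) derives them for the deterministic case and checks the $V$-gradient against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
U = rng.standard_normal((d, d))
Sigma = U @ U.T  # deterministic case: Sigma_hat = Sigma
W = rng.standard_normal((d, d))
V = rng.standard_normal((d, d))

def f(W, V):
    # f(W, V) = sum_ij W_ij (Sigma - V Sigma V^T)_ij
    return float(np.sum(W * (Sigma - V @ Sigma @ V.T)))

# f is linear in W and quadratic in V, so the gradients are
grad_W = Sigma - V @ Sigma @ V.T
grad_V = -(W + W.T) @ V @ Sigma

# finite-difference check of one entry of grad_V
h = 1e-6
E = np.zeros((d, d)); E[1, 2] = 1.0
fd = (f(W, V + h * E) - f(W, V - h * E)) / (2 * h)
```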
We let $d \in \{20, 40, 60\}$ and evaluate the algorithms according to the trade-off between the number of forward evaluations and the corresponding reduction of the residual $\|W+W^{\top}\|_{\operatorname{FRO}}/2+ \|UU^{\top} - V V^{\top}\|_{\operatorname{FRO}}$, starting with a random initial guess (the same for all algorithms) obtained as $W_{1} = \delta W$, $V_{1} = U + \delta V$, where the entries of $\delta W, \delta V$ are i.i.d. uniformly distributed in $[-0.5,0.5]$. We count the number of “forward passes” per outer iteration as follows.

- OGDA: 2

- SGA: 4

- ConOpt: 6

- CGD: 4 + 2 $\times$ number of CG iterations

The results are summarized in Figure \[fig:matrixDet\]. We see consistently that for the same stepsize, CGD has a convergence rate comparable to that of OGDA. However, as we increase the stepsize the other methods start diverging, thus allowing CGD to achieve significantly better convergence rates by using larger stepsizes. For larger dimensions ($d\in \{40, 60\}$) OGDA, SGA, and ConOpt become even more unstable, such that OGDA with the smallest stepsize is the only other method that still converges, although at a much slower rate than CGD with larger stepsizes.

![The decay of the residual as a function of the number of forward iterations ($d = 20, 40, 60$, from top to bottom). **Note that missing combinations of algorithms and stepsizes correspond to divergent experiments**. While the exact behavior of the different methods is subject to some stochasticity, results as above were typical during our experiments.[]{data-label="fig:matrixDet"}](apfigures/cvest_d_20.png "fig:") ![](apfigures/cvest_d_40.png "fig:") ![](apfigures/cvest_d_60.png "fig:")

We now consider the stochastic setting, where at each iteration a new $\hat{\Sigma}$ is obtained as the empirical covariance matrix of $N$ samples of $\mathcal{N}(0,\Sigma)$, for $N \in \{100, 1000, 10000\}$.

![The decay of the residual as a function of the number of forward iterations in the stochastic case, with $d = 20$ and batch sizes of $100, 1000, 10000$, from top to bottom.[]{data-label="fig:matrixStoch"}](apfigures/cvest_NUM_BATCH_100.png "fig:") ![](apfigures/cvest_NUM_BATCH_1000.png "fig:") ![](apfigures/cvest_NUM_BATCH_10000.png "fig:")

In this setting, the stochastic noise very quickly dominates the error, preventing CGD from achieving significantly better approximations than the other algorithms, while the other algorithms decrease the error more rapidly, initially. 
It might be possible to improve the performance of our algorithm by lowering the accuracy of the inner linear system solve, following the intuition that in a noisy environment a very accurate solve is not worth the cost. However, even without tweaking $\epsilon$ it is noticeable that the trajectories of CGD are less noisy than those of the other algorithms, and it is furthermore the only algorithm that does not diverge for any of the stepsizes. It is interesting to note that the trajectories of CGD are consistently more regular than those of the other algorithms, for comparable stepsizes.

Experiment: Fitting a bimodal distribution
------------------------------------------

We use a GAN to fit a Gaussian mixture of two Gaussian random variables with means $\mu_{1} = (0,1)^{\top}$ and $\mu_{2} = (2^{-1/2}, 2^{-1/2})^{\top}$, and standard deviation $\sigma = 0.1$. Generator and discriminator are given by dense neural nets with four hidden layers of $128$ units each that are initialized as orthonormal matrices, with ReLU nonlinearities after each hidden layer. The generator uses 512-variate standard Gaussian noise as input, and both networks use a linear projection as their final layer. At each step, the discriminator is shown 256 real and 256 fake examples. We interpret the output of the discriminator as a logit and use sigmoidal cross-entropy as a loss function. We tried stepsizes $\eta \in \{0.4, 0.1, 0.025, 0.005\}$ together with RMSProp ($\rho = 0.9$) and applied SGA, ConOpt ($\gamma = 1.0$), OGDA, and CGD. Note that the RMSProp version of CGD with diagonal scaling given by the matrices $S_x$, $S_y$ is obtained by replacing the quadratic penalties $x^{\top}x/(2 \eta)$ and $y^{\top}y/(2 \eta)$ in the local game by $x^{\top} S_x^{-1}x/(2 \eta)$ and $y^{\top} S_y^{-1}y/(2 \eta)$, and carrying out the remaining derivation as before. This also makes it possible to apply other adaptive methods like ADAM. 
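For scalar players, carrying out the derivation with the scaled penalties gives the update sketched below (our reading of the recipe above, not the authors' implementation; with $S_x = S_y = 1$ it reduces to the plain CGD update):

```python
import numpy as np

def scaled_cgd_step(gx, gy, D, eta, sx=1.0, sy=1.0):
    # One zero-sum CGD step for scalar players: gx, gy are the gradients of f,
    # D is the mixed derivative D_xy^2 f, and sx, sy are the diagonal
    # (here scalar) scalings. Solving the scaled local game gives
    #   (1 + eta^2 sx sy D^2) dx = -eta sx (gx + eta D sy gy)
    #   (1 + eta^2 sx sy D^2) dy =  eta sy (gy - eta D sx gx)
    denom = 1.0 + eta**2 * sx * sy * D * D
    dx = -eta * sx * (gx + eta * D * sy * gy) / denom
    dy = eta * sy * (gy - eta * D * sx * gx) / denom
    return dx, dy

# RMSProp-style scaling of the x-player (rho = 0.9), as in the experiments
rho, delta = 0.9, 1e-8
gx, gy, D, eta = 2.0, -1.0, 1.0, 0.1
avg_gx2 = (1 - rho) * gx**2  # running average of the squared gradient
sx = 1.0 / (np.sqrt(avg_gx2) + delta)
dx, dy = scaled_cgd_step(gx, gy, D, eta, sx=sx)
```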
On all methods, the generator and discriminator are initially chasing each other across the strategy space, producing the typical cycling pattern. When using SGA, ConOpt, or OGDA, however, the algorithm eventually diverges, with the generator either mapping all the mass far away from the modes or collapsing the generating map to zero. Therefore, we also tried decreasing the stepsize to $0.001$, which however did not prevent the divergence. For CGD, after some initial cycles the generator starts splitting the mass and distributes it roughly evenly among the two modes. During our experiments, this configuration appeared to be robust.

[^1]: Here and in the following, unless otherwise mentioned, all derivatives are evaluated at the point $(x_k, y_k)$.

[^2]: We could alternatively use the penalty $(x^{\top}x + y^{\top}y)/(2 \eta)$ for both players, without changing the solution.

[^3]: We note that the matrix inverses exist for all but one value of $\eta$, and for all $\eta$ in the case of a zero-sum game.

[^4]: Applying a damped and regularized Newton’s method to the optimization problem of Player 1 would amount to choosing $x_{k+1} = x_{k} - \eta({\operatorname{Id}}+ \eta D_{xx}^2 f)^{-1} \nabla_x f \approx x_{k} - \eta( \nabla_xf - \eta D_{xx}^{2}f \nabla_x f)$, for $\|\eta D_{xx}^2f\| \ll 1$.
--- abstract: 'We present low-resolution 5.5-35$\mu$m spectra for 103 galaxies from the 12$\mu$m Seyfert sample, a complete unbiased 12$\mu$m flux limited sample of local Seyfert galaxies selected from the [ *IRAS*]{} Faint Source Catalog, obtained with the Infrared Spectrograph (IRS) on-board [*Spitzer*]{} Space Telescope. For 70 of the sources observed in the IRS mapping mode, uniformly extracted nuclear spectra are presented for the first time. We performed an analysis of the continuum emission, the strength of the Polycyclic Aromatic Hydrocarbon (PAH) and astronomical silicate features of the sources. We find that on average, the 15-30$\mu$m slope of the continuum is $<\alpha_{15-30}>$=-0.85$\pm$0.61 for Seyfert 1s and -1.53$\pm$0.84 for Seyfert 2s, and there is substantial scatter in each type. Moreover, nearly 32% of Seyfert 1s, and 9% of Seyfert 2s, display a peak in the mid-infrared spectrum at 20$\mu$m, which is attributed to an additional hot dust component. The PAH equivalent width decreases with increasing dust temperature, as indicated by the global infrared color of the host galaxies. However, no statistical difference in PAH equivalent width is detected between the two Seyfert types, 1 and 2, of the same bolometric luminosity. The silicate features at 9.7 and 18$\mu$m in Seyfert 1 galaxies are rather weak, while Seyfert 2s are more likely to display strong silicate absorption. Those Seyfert 2s with the highest silicate absorption also have high infrared luminosity and high absorption (hydrogen column density N$_H>$10$^{23}$ cm$^{-2}$) as measured from the X-rays. Finally, we propose a new method to estimate the AGN contribution to the integrated 12$\mu$m galaxy emission, by subtracting the “star formation” component in the Seyfert galaxies, making use of the tight correlation between PAH 11.2$\mu$m luminosity and 12$\mu$m luminosity for star forming galaxies.' 
author: - 'Yanling Wu, Vassilis Charmandaris, Jiasheng Huang, Luigi Spinoglio, Silvia Tommasin' title: 'Spitzer/IRS 5-35$\mu$m Low-Resolution Spectroscopy of the 12$\mu$m Seyfert Sample' --- Introduction ============ Active galaxies are galaxies in which one detects radiation from their nucleus due to accretion onto a super-massive black hole (SMBH) located at the center, which produces most of their nuclear, and often bolometric, luminosity. The spectrum of an Active Galactic Nucleus (AGN) is typically flat in $\nu$f$_{\nu}$. The fraction of the energy emitted from the AGN compared with the total bolometric emission of the host can range from a few percent in moderate luminosity systems (L$_{bol} <10^{11}$L$_{\odot}$) to more than 90% in quasars (L$_{bol} >10^{12}$L$_{\odot}$) [see @Ho08 and references therein]. As a subclass, Seyfert galaxies are the nearest and brightest AGNs, with 2-10keV X-ray luminosities less than $\sim10^{44}$erg s$^{-1}$, and their observed spectral line emission originates principally from highly ionized gas. Seyferts have been studied at many wavelengths, from X-rays, ultraviolet, and optical, to infrared (IR) and radio. The analysis of their optical spectra has led to the identification of two types, Seyfert 1s (Sy 1s) and Seyfert 2s (Sy 2s), with the type 1s displaying both broad (FWHM$>$2000 km s$^{-1}$) and narrow emission lines, while the type 2s show only narrow-line emission. The differences between the two Seyfert types have been an intense field of study for many years. Are they due to intrinsic differences in their physical properties, or are they simply a result of dust obscuration that hides the broad-line region in Sy 2s? A so-called unified model has been proposed [see @Antonucci93; @Urry95], suggesting that Sy 1s and Sy 2s are essentially the same objects viewed at different angles.
A dust torus surrounding the central engine blocks the optical light when viewed edge on (Sy 2s) and allows the nucleus to be seen when viewed face on (Sy 1s). Optical spectra in polarized light [@Antonucci85] have indeed demonstrated for several Sy 2s the presence of broad lines, confirming for these objects the validity of the unified model. However, the exact nature of this orientation-dependent obscuration is not clear yet. Recently, more elaborate models, notably the ones of @Elitzur08, @Nenkova08, and @Thompson09 suggest that the same observational constraints can also be explained with discrete dense molecular clouds, without the need of a torus geometry. The study of Seyfert galaxies is interesting also from a cosmological perspective, as they trace the build up of SMBHs at the centers of galaxies. Observations up to 10keV have established that the cosmic X-ray background (CXB) is mostly due to Seyferts with a peak in their redshift distribution at z$\sim$0.7 [@Hasinger05]. Furthermore, theoretical modeling of the observed number counts suggests that CXB at 30keV is also dominated by obscured Seyferts at z$\sim$0.7 [@Gilli07; @Worsley05]. Given the strong ionization field produced by the accretion disk surrounding a SMBH, the dust present can be heated to near sublimation temperatures, making an AGN appear very luminous in the mid-infrared (mid-IR). Mid-IR spectroscopy is a powerful tool to examine the nature of the emission from AGNs, as well as the nuclear star-formation activity. Since IR observations are much less affected by dust extinction than those at shorter wavelengths, they have been instrumental in the study of obscured emission from optically thick regions in AGNs. This is crucial to understand the physical process of galaxy evolution. With the advent of the [*Infrared Space Observatory (ISO)*]{}, local Seyferts have been studied by several groups [see @Verma05 for a review]. 
Mid-IR diagnostic diagrams to quantitatively disentangle the emission from AGNs, starbursts and quiescent star-forming (SF) regions have been proposed, using both spectroscopy and broad-band photometry [e.g. @Genzel98; @Laurent00]. The recent launch of the [*Spitzer*]{} Space Telescope [@Werner04] has enabled the study of AGNs with substantially better sensitivity and spatial resolution. In particular, using the Infrared Spectrograph (IRS[^1]) [@Houck04a] on board [*Spitzer*]{}, @Weedman05 demonstrated early into the mission the variety in the morphology displayed by the mid-IR spectra of eight classical AGNs. Since then, large samples of AGNs have been studied in detail, in an effort to quantify their mid-IR properties [@Buchanan06; @Sturm06; @Deo07; @Gorjian07; @Hao07; @Tommasin08]. In addition, new mid-IR diagnostics have been developed to probe the physics of more complex sources, such as luminous and ultraluminous infrared galaxies (LIRGs/ULIRGs), which may also harbor AGNs. These were based on correlating the strength of the Polycyclic Aromatic Hydrocarbons (PAHs), high excitation fine-structure lines, as well as silicate features [e.g. @Armus07; @Spoon07; @Charmandaris08; @Nardini08]. The extended 12$\mu$m galaxy sample is a flux-limited (down to 0.22Jy at 12$\mu$m) sample of 893 galaxies selected from the IRAS Faint Source Catalog 2 [@Rush93]. As discussed by @Spinoglio89, all galaxies emit a nearly constant fraction ($\sim$7%) of their bolometric luminosity at 12$\mu$m. As a result, selecting active galaxies based on their rest-frame 12$\mu$m fluxes is the best approach to reduce selection bias due to the variations in their intrinsic spectral energy distributions (SED). A total of 116 objects from this sample have been optically classified as Seyfert galaxies (53 Sy 1s and 63 Sy 2s), providing one of the largest IR-selected unbiased AGN samples.
This sample also has ancillary data in virtually all wavelengths, thus making it the most complete data set for systematically studying the fundamental issues of AGNs in the infrared. Low-resolution 5.5-35$\mu$m Spitzer/IRS spectra of 51 Seyferts from the 12$\mu$m sample have been published by @Buchanan06, who focused on the study of the Seyfert types and the shape of the mid-IR SED using principal component analysis. Based on this analysis, and comparing with radio data where available, they estimate the starburst contribution to the observed spectrum and find it to appear stronger in Sy 2s, in contrast to the unified model. However, high-resolution [*Spitzer*]{} spectroscopy of 29 objects by @Tommasin08 does not find a clear indication of stronger star formation in Sy 2s than in Sy 1s. In this paper, we extend earlier work and study the mid-IR properties and the nature of the dust-enshrouded emission from 103 Seyferts of the 12$\mu$m sample, nearly 90% of the whole sample. We use low-resolution Spitzer/IRS spectra, focusing mainly on their broad emission and absorption features, and provide for the first time our measurements of the PAH emission and the strength of the silicate absorption features. Our observations and data reduction are presented in §2. In §3, we present our analysis of the mid-IR continuum shape, PAH emission and silicate strength of Seyfert galaxies. A new method to separate the star-formation and AGN contributions to the 12$\mu$m continuum is proposed in §4. Finally, we summarize our conclusions in §5. Observations and Data Reduction =============================== Since the launch of [*Spitzer*]{} in August 2003, a large fraction of the 12$\mu$m Seyfert galaxies have been observed by various programs using the low-resolution (R$\sim$64-128) and high-resolution (R$\sim$600) modules of Spitzer/IRS. These observations are publicly available in the [*Spitzer*]{} archive.
A total of 84 galaxies have been observed with the low-resolution spectral mapping mode of the IRS by the GO program “Infrared SEDs of Seyfert Galaxies: Starbursts and the Nature of the Obscuring Medium” (PID: 3269), and mid-IR spectra extracted from the central regions of the maps for 51 galaxies were published by @Buchanan06. Spectra for the remaining sources have been taken as part of a number of Guaranteed, Open Time, as well as Legacy programs with program identifications (PID) 14, 61, 86, 96, 105, 159, 3237, 3624, 30291, and 30572. For the purpose of this work, which is mainly to use the PAH emission features to diagnose starburst and AGN contribution, we are focusing on the low-resolution IRS spectra (Short Low, SL: 5-15$\mu$m; Long Low, LL: 15-37$\mu$m). We performed a complete search of the Spitzer Science Center (SSC) data archive and retrieved a total of 103 sources with at least Short Low observations. Among these objects, 47 are optically classified as Sy 1s and 56 as Sy 2s[^2]. We adopt the spectral classification of @Rush93 for the Seyfert types. Even though in a few cases, the classification may be ambiguous or may have changed, we do not expect our results, which are of statistical nature, to be affected. The complete list of the objects analyzed in this paper including their coordinates, IRAS fluxes, IR luminosity, redshift, Seyfert type and [*Spitzer*]{} program identification number are presented in Table \[tab1\]. The redshift and luminosity distribution of the 12$\mu$m Seyfert sample and the galaxies with IRS data studied in this paper is displayed in Figure \[fig:z\_L\]. With the exception of the data from the SINGS Legacy program (PID 159), of which we directly used spectra available at the SSC[^3], all other raw datasets are retrieved from the [ *Spitzer*]{} archive and reduced in the following manner. All except three[^4] of the 12$\mu$m Seyferts with cz$<$10,000kms$^{-1}$ have been observed with the IRS spectral mapping mode (PID 3269). 
This enabled us to also study the circumnuclear activity and examine the contribution of the host galaxy emission to the nuclear spectrum. Using datasets from this program, mid-IR spectra obtained from the central slit placement of each map with point-source extraction were published by @Buchanan06. However, as these authors have noted, since the observations had been designed as a spectral map, blind telescope pointing was used. As a result, no effort was made to accurately acquire each target and to ensure that the central slit of each map was indeed well centered on the source. Moreover, for sources where the mid-IR emission is extended, using a point-source extraction method would likely result in an overestimate of the flux densities, due to the slit loss correction function applied. This would consequently affect the SED of the galaxy. For most of the data reduction in this paper, we used the CUbe Builder for IRS Spectra Maps [CUBISM; @Smith07b], in combination with an image convolution method. This method was developed explicitly for IRS mapping mode observations where the map size is small, and involves the following steps. Spectral cubes were built from the IRS maps using CUBISM. Sky subtraction was performed by differencing the on- and off-source observations of the same order in each module (SL and LL). The observations from PID 3269 were designed to minimize redundancy and maximize the number of sources that could be observed using a limited amount of telescope time. As a result, these maps consisted of only 13 steps in SL and 5 steps in LL with no repetition; for a source with extended mid-IR emission, the small map may not encompass the entire source (e.g. NGC1365). To obtain an accurate SED from the extracted region of the galaxy, one needs to ensure that the same fraction of the source flux is sampled at all wavelengths.
Since the point spread function (PSF) changes from 5 to 35$\mu$m, we adopted an image convolution method to account for the change in the full width at half maximum (FWHM) of the PSF: 2-dimensional images at each wavelength were convolved to the resolution at the longest wavelength (35$\mu$m). Then low-resolution spectra were extracted with matched apertures, chosen to encompass the whole nuclear emission. Even though the image convolution method dilutes the fluxes included in the extraction aperture, it does ensure an accurate SED shape, especially for small maps that cannot encompass the extended emission of the source. A comparison of the LL spectra before and after image convolution for NGC1365 can be found in Figure \[fig:ngc1365\]. A number of tests with varying sizes for the spectral extraction aperture were performed in order to select the optimum size. An aperture of 4$\times$3 pixels in LL was adopted, which corresponds to an angular size of 20.4$\arcsec$$\times$15.3$\arcsec$. Then a complete 5–35$\mu$m spectrum of each galaxy centered on its nucleus was extracted[^5]. The main motivation behind this choice was to ensure that we could produce an accurate overall SED of the extracted region, even in cases where the emission was extended. This was essential so that we have a reliable measurement of the continuum emission, in order to calculate the mid-IR slope, and to estimate the strength of the silicate features. Further tests were performed by extracting just the SL spectra of the sample using the smallest aperture possible (2$\times$2 pixels in SL, or 3.6$\arcsec\times3.6\arcsec$). It was found that, in general, the measured fluxes and EWs of the PAH features did not differ by more than 20% from those of the spectra integrated over the larger apertures, which indicates that these objects are likely less than 20% more extended than the instrumental point spread function.
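The PSF-matching convolution described above can be sketched as follows, assuming Gaussian PSFs whose FWHMs add in quadrature; the function name and the handling of the pixel scale are illustrative, not the paper's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_psf(cube, fwhm_per_slice, fwhm_target, pixscale):
    """Convolve each wavelength slice of a spectral cube to a common
    target resolution, assuming Gaussian PSFs.  The kernel width follows
    from adding FWHMs in quadrature:
    fwhm_kernel**2 = fwhm_target**2 - fwhm_slice**2 (all in arcsec)."""
    fwhm_to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    out = np.empty_like(cube)
    for i, (plane, fwhm) in enumerate(zip(cube, fwhm_per_slice)):
        if fwhm >= fwhm_target:
            out[i] = plane              # already at (or beyond) the target
            continue
        kernel_fwhm = np.sqrt(fwhm_target**2 - fwhm**2)
        sigma_pix = kernel_fwhm * fwhm_to_sigma / pixscale
        out[i] = gaussian_filter(plane, sigma_pix)
    return out
```

Because every slice ends up at the 35$\mu$m resolution, a fixed extraction aperture then samples the same fraction of the source flux at all wavelengths, which is exactly the property the matched-aperture extraction relies on.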
This suggests that most of the sources were not very extended in the mid-IR and that more than 80% of their flux originates from a point source unresolved by Spitzer. Scaling the spectra between different orders and modules was not needed in most cases, and when an order mismatch was detected, the SL spectra were scaled to match LL. In Table \[tab2\], where we present our measurements, the sizes of the extraction apertures as well as their corresponding projected linear sizes on the sky are also listed. A histogram of the physical size of the extraction aperture for the whole sample is presented in Figure \[fig:size\_hist\] with the dotted line, while in the same figure we also show the distribution for sources observed in spectral mapping mode with the solid line. All the sources extracted from a projected size of more than 20 kpc were observed using the staring mode and reduced with the point-source extraction method. For data obtained with the IRS staring mode, the reduction was done in the following manner. We started from intermediate pipeline products, the “droop” files, which only lacked stray light removal and flat-field correction. Individual pointings to each nod position of the slit were co-added using median averaging. Then on- and off-source images were differenced to remove the contribution of the sky emission. Spectra from the final 2-D images were extracted with the Spectral Modeling, Analysis, and Reduction Tool [SMART, @Higdon04] in point-source extraction mode, which scales the extraction aperture with wavelength to recover the same fraction of the diffraction-limited instrumental PSF. Note that since the width of both the SL and LL slits is 2 pixels (3.6$\arcsec$ and 10.2$\arcsec$ respectively), no information could be retrieved along this direction from areas of the galaxy which were farther away. The spectra were flux calibrated using the IRS standard star $\alpha$ Lac, for which an accurate template was available [@Cohen03].
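The order-matching step mentioned above (scaling SL onto LL) can be sketched as follows; the overlap window and the use of a median ratio are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

def sl_to_ll_scale(wave_sl, flux_sl, wave_ll, flux_ll, lo=14.0, hi=15.0):
    """Median LL/SL flux-density ratio in the inter-module overlap window
    (wavelengths in microns); multiplying the SL spectrum by this factor
    stitches it onto the LL continuum level."""
    m_sl = (wave_sl >= lo) & (wave_sl <= hi)
    # interpolate the LL spectrum onto the SL grid inside the overlap
    ll_on_sl = np.interp(wave_sl[m_sl], wave_ll, flux_ll)
    return float(np.median(ll_on_sl / flux_sl[m_sl]))
```

A factor close to 1 indicates a point-like source well sampled by both slits, which is how the scaling-factor checks discussed in the text are interpreted.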
Finally, using the first-order LL (LL1, 20–36$\mu$m) spectrum to define the absolute value of the continuum, the flux-calibrated spectra of all other low-resolution orders were scaled to it. For nine sources in the 12$\mu$m Seyfert sample, both spectral mapping and staring mode observations had been obtained and were available in the SSC data archive. For these galaxies, the spectra were extracted following the above-mentioned recipes for all observations. In order to ascertain how extended the emission from these sources was, we compared the resulting spectra of the same source. With two exceptions, NGC4151 and NGC7213, all seven other spectra obtained in staring mode required a scaling factor larger than 1.15 between the SL and LL modules. This further suggests that the nuclear emission in those sources is indeed extended, and point-source extraction may not be appropriate. For galaxies with only staring mode observations, we also checked the difference between the overlap region ($\sim15\mu$m) in the SL and LL modules. None of them required a scaling factor of more than 1.15, suggesting that those objects are point-like at least along the direction of the IRS slits. Results ======= Global Mid-IR spectra of Seyfert Galaxies ----------------------------------------- It has been well established that the mid-IR spectra of Seyfert galaxies display a variety of features [see @Clavel00; @Verma05; @Weedman05; @Buchanan06; @Hao07 and references therein]. This is understood since, despite the optical classification of their nuclear activity, emission from the circumnuclear region, as well as from the host galaxy, also influences the integrated mid-IR spectrum of the source. Our complete 12$\mu$m selected Seyfert sample provides an unbiased framework to study the statistics of their mid-IR properties. Earlier work by @Buchanan06 on just 51 AGNs from this sample presented a grouping of them based on their continuum shapes and spectral features.
In this section, we explore the global shape of the mid-IR spectra for the complete Spitzer/IRS set of data on the 12$\mu$m Seyferts. We examine whether the average mid-IR spectrum of a Sy 1 galaxy is systematically different from that of a Sy 2. The IRS spectra for 47 Sy 1s and 54 Sy 2s with full 5.5-35$\mu$m spectral coverage, normalized at the wavelength of 22$\mu$m, are averaged and plotted in Figure \[fig:avespect\]. For comparison, we over-plot the average starburst template from @Brandl06. It is clear that the mid-IR continuum slope of the average Sy 1 spectrum is shallower than that of Sy 2, while the starburst template has the steepest spectral slope, indicating a different mixture of hot/cold dust components in these galaxies [also see @Hao07]. This would be consistent with the interpretation that our mid-IR spectra of Sy 2s display a strong starburst contribution, possibly due to circumnuclear star formation activity included in the aperture we used to extract the spectra, in agreement with the findings of @Buchanan06 that star formation is extended and is not a purely nuclear phenomenon. PAH emission, which is a good tracer of star formation activity [@Forster04], can be detected in the average spectra of both Seyfert types, while it is most prominent in the average starburst spectrum. PAH emission originates from photo-dissociation regions (PDRs) and can easily be destroyed by the UV/X-ray photons in the strong radiation field produced near massive stars and/or an accretion disk surrounding a SMBH [@Voit92; @Laurent00; @Clavel00; @Sturm02; @Verma05; @Weedman05]. In the 12$\mu$m Seyfert sample, we detect PAH emission in 37 Sy 1s and 53 Sy 2s, that is, 78% and 93% of each type, respectively. This is expected since the apertures we used to extract the mid-IR spectra for the 12$\mu$m sample correspond in most cases to areas of more than 1 kpc in linear dimensions (see Figure \[fig:size\_hist\]).
As a result, emission from the PDRs associated with the extended circumnuclear region and the disk of the host galaxy is also encompassed within the observed spectrum. High ionization fine-structure lines, such as \[NeV\]14.32$\mu$m/24.32$\mu$m, are clearly detected even in the low-resolution average spectrum of Sy 1s. This signature is also visible, though rather weak, in the average spectrum of Sy 2s, while it is absent in the average starburst template. Because it requires photons of energy higher than 97eV, which typically originate from the accretion disk of an AGN, \[NeV\] serves as an unambiguous indicator of an AGN. Even though the low-resolution module of IRS was not designed for studying fine-structure lines, we are still able to detect \[NeV\] emission in 29 Sy 1s and 32 Sy 2s, roughly 60% of both types. Another high ionization line, \[OIV\]25.89$\mu$m, with an ionization potential of 54eV, also appears in both Seyfert types (42 Sy 1s and 41 Sy 2s), and is stronger in the average spectrum of Sy 1s. The \[OIV\] emission line can be powered by shocks in intense star-forming regions or by AGNs [see @Lutz98; @Schaerer99; @Bernard-Salas09; @Hao09]. In our sample it is probably powered by both, given the large aperture we adopted for spectral extraction. More details and a complete analysis of mid-IR fine-structure lines for 29 galaxies from the 12$\mu$m Seyfert sample are presented in @Tommasin08, while the work for the entire sample is in progress (Tommasin et al. 2009). An atlas with mid-IR low-resolution spectra of the 12$\mu$m Seyfert sample is included at the end of this paper[^6]. We find that a fraction of our sources show a flattening or a local maximum in the mid-IR continuum at $\sim$20$\mu$m, which had also been noted as a “broken power-law” in some Seyfert galaxies by @Buchanan06.
A more extreme such case was the metal-poor blue compact dwarf galaxy SBS0335-052E [@Houck04b], which was interpreted as due to the possible absence of larger, cooler dust grains in the galaxy. Since the change of continuum slope appears at $\sim$20$\mu$m, we use the flux ratio at 20 and 30$\mu$m, F$_{20}$/F$_{30}$, to identify these objects. After further examination of the spectra, we find 15 Sy 1s and 4 Sy 2s which have F$_{20}$/F$_{30}$$\ge$0.95. We call these objects “20$\mu$m peakers”. To analyse the properties of the “20$\mu$m peakers”, we plot the average IRS spectra of the 19 sources in Figure \[fig:ave\_peaker\]. As most of these sources are type 1 Seyferts (15 out of 19), we also overplot the average Sy 1 spectrum for comparison. In addition to their characteristic continuum shape, a number of other differences between the “20$\mu$m peakers” and Sy 1s are also evident. PAH emission, which is clearly detected in the average Sy 1 spectrum, appears to be rather weak in the average “20$\mu$m peaker” spectrum. The high-ionization lines of \[NeV\] and \[OIV\] are seen in both spectra with similar strength, while low-ionization lines, especially \[NeII\] and \[SIII\], are much weaker in the average spectrum of the “20$\mu$m peakers”. If we calculate the infrared color of a galaxy using the ratio F$_{25}$/F$_{60}$ (see section 3.3 for a more detailed discussion of this issue), we find an average value of 0.75 for the “20$\mu$m peakers”, while it is 0.30 for the other “non-20$\mu$m peaker” Sy 1s in the 12$\mu$m sample. Finally, the average IR luminosities of the “20$\mu$m peakers” and Sy 1s do not show a significant difference, with log(L$_{\rm IR}/L_\odot$)=10.96 for the former and log(L$_{\rm IR}/L_\odot$)=10.86 for the latter.
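The “20$\mu$m peaker” selection above reduces to a simple flux-ratio cut; a minimal sketch follows, in which the narrow measurement windows around 20 and 30$\mu$m are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def local_flux(wave, flux, center, half_width=0.5):
    """Median flux density in a narrow window around `center` (microns);
    the window width is an illustrative choice, not from the paper."""
    m = (wave > center - half_width) & (wave < center + half_width)
    return float(np.median(flux[m]))

def is_20um_peaker(wave, flux, threshold=0.95):
    """Apply the F20/F30 >= 0.95 criterion used in the text."""
    ratio = local_flux(wave, flux, 20.0) / local_flux(wave, flux, 30.0)
    return ratio >= threshold
```

A spectrum that keeps rising towards 30$\mu$m fails the cut, while a flat or locally peaked continuum around 20$\mu$m passes it.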
These results are consistent with the “20$\mu$m peakers” being AGNs with dominant hot dust emission from a small-grain population heated to effective temperatures of $\sim$150K, and a possible contribution due to the distinct emissivity of astronomical silicates at 18$\mu$m. Their radiation field must also be stronger than that of a typical Sy 1, since it destroys the PAH molecules around the nuclear region more efficiently. We should stress that unlike SBS0335-052E, whose global IR SED peaks at $\sim$30$\mu$m and which likely has limited quantities of cold dust, the mid-IR peak we observe in these objects at $\sim$20$\mu$m is likely a local one[^7]. This becomes more evident in Figure \[fig:peaker\_sed\], where we also include the scaled average 60 and 100$\mu$m flux densities for the “20$\mu$m peakers”. It is clear that their far-IR emission increases with wavelength, thus confirming the presence of ample quantities of cold dust in these objects. To contrast the global SED of these objects with that of the whole Seyfert sample, we include in Figures \[fig:s1\_sed\] and \[fig:s2\_sed\] the average SEDs of the Sy 1s and Sy 2s of the sample. All SEDs have been normalized as in Figure \[fig:avespect\] at 22$\mu$m. Unlike for the “20$\mu$m peakers”, one can easily observe the regular increase of the flux from $\sim$15 to 60$\mu$m in the average Sy 1 and Sy 2 SEDs. A more detailed analysis of the dust properties of the “20$\mu$m peakers” in comparison with typical active galaxies will be presented in a future paper. The PAH emission in the 12$\mu$m Seyferts ----------------------------------------- In this section, we explore some of the properties of the PAH emission in our sample and contrast our findings with previous work. To quantify the strength of PAH emission, we follow the usual approach and measure the fluxes and equivalent widths (EWs) of the 6.2 and 11.2$\mu$m PAH features from the mid-IR spectra.
Even though the 7.7$\mu$m PAH is the strongest of all PAHs [@Smith07a], we prefer not to include it in our analysis. This is due to the fact that its measurement is affected by absorption and emission features next to it and depends strongly both on the assumptions of the various measurement methods (spline or Drude profile fitting) and on the underlying continuum. Furthermore, it often spans the two SL orders, which could also affect its flux estimate. The 6.2 and 11.2$\mu$m PAH EWs are derived by integrating the flux of the features above an adopted continuum and then dividing by the continuum flux in the integration range. The baseline is determined by fitting a spline function to the selected points (5.95-6.55$\mu$m for the 6.2$\mu$m PAH and 10.80-11.80$\mu$m for the 11.2$\mu$m PAH). The PAH EWs as well as the integrated fluxes are listed in Table \[tab2\]. The errors, including both flux calibration and measurement uncertainties, are estimated to be less than $\sim$15% on average. The first study of the PAH properties of a large sample of Seyferts was presented by @Clavel00, using ISO/PHOT-S 2.5–11$\mu$m spectra and ISO/CAM broad-band mid-IR imaging. The authors suggested that there was a statistical difference in the strength of the PAH emission and the underlying hot continuum ($\sim7\mu$m) emission between type 1 and type 2 objects. They also found that Sy 2s had stronger PAHs than Sy 1s, while Sy 1s had a higher hot continuum associated with emission from the AGN torus. This was consistent with an orientation-dependent suppression of the continuum in Sy 2s. The interpretation of these results was challenged by @Lutz04, who attributed it to the large (24$\arcsec\times24\arcsec$) aperture of ISO/PHOT-S and the possible contamination from the host galaxy. More recently, @Deo07 have found a relation between the 6.2$\mu$m PAH EWs and the 20-30$\mu$m spectral index[^8], with a steeper spectral slope seen in galaxies with a stronger starburst contribution.
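An equivalent-width measurement of the kind described above can be sketched as follows; a straight-line continuum anchored in two side windows stands in for the spline baseline of the text, and the side-window limits are illustrative assumptions:

```python
import numpy as np

def pah_ew(wave, flux, band=(10.80, 11.80), cont=((10.5, 10.8), (11.8, 12.1))):
    """Equivalent width (in microns) of a PAH feature above a local
    continuum.  A linear baseline through the medians of the two `cont`
    side windows approximates the spline fit used in the text."""
    (l1, l2), (r1, r2) = cont
    ml = (wave >= l1) & (wave <= l2)
    mr = (wave >= r1) & (wave <= r2)
    xl, yl = wave[ml].mean(), np.median(flux[ml])
    xr, yr = wave[mr].mean(), np.median(flux[mr])
    slope = (yr - yl) / (xr - xl)
    mb = (wave >= band[0]) & (wave <= band[1])
    fc = yl + slope * (wave[mb] - xl)        # continuum under the feature
    y = (flux[mb] - fc) / fc                 # feature excess in continuum units
    # trapezoidal integration over the feature window
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wave[mb])))
```

For a feature sitting on a flat continuum, this returns the integrated feature flux divided by the continuum level, i.e. an EW in microns, matching the units quoted in Table \[tab2\].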
This is understood since galaxies hosting an AGN are “warmer” and have an IR SED peaking at shorter wavelengths, thus appearing flatter in the mid-IR (see also the next section). Given the global correlations between star formation activity and PAH strength [e.g. @Forster04; @Peeters04; @Wu05; @Calzetti05; @Calzetti07], star-forming galaxies are expected to also have stronger PAH features. In Figure \[fig:pah\_index\], we plot the 15-30$\mu$m[^9] spectral index for the 12$\mu$m Seyfert sample as a function of their 11.2$\mu$m PAH EWs. The diamonds indicate the starburst galaxies from @Brandl06[^10]. A general trend of the PAH EWs decreasing as a function of the 15-30$\mu$m spectral index is observed in Figure \[fig:pah\_index\], even though it is much weaker than the anti-correlation presented by @Deo07. Starburst galaxies are located in the upper left corner of the plot, having very steep spectral slopes, with $<\alpha_{15-30}>$=-3.02$\pm$0.50, and large PAH EWs, nearly 0.7$\mu$m. Seyfert galaxies spread over a considerably larger range in spectral slopes as well as in PAH EWs. Sy 1s and Sy 2s are mixed on the plot. On average, the 15-30$\mu$m spectral index is $<\alpha_{15-30}>$=-0.85$\pm$0.61 for Sy 1s and $<\alpha_{15-30}>$=-1.53$\pm$0.84 for Sy 2s. Note that although the mean spectral slope is slightly steeper for Sy 2s, there is substantial scatter, as is evident from the uncertainties of the mean for each type (see also Figures 7 and 8). It is well known that the flux ratio of different PAH emission bands is a strong function of PAH size and ionization state [@Draine01]. The 6.2$\mu$m PAH emission is due to the C-C stretching mode, while the 11.2$\mu$m feature is produced by the C-H out-of-plane bending mode [@Draine03]. In Figure \[fig:pah\_hist\], we display a histogram of the 11.2$\mu$m to 6.2$\mu$m PAH flux ratios for the 12$\mu$m Seyferts.
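The 15-30$\mu$m spectral index quoted above follows the usual power-law convention; a sketch, assuming flux densities already measured at the two wavelengths:

```python
import numpy as np

def alpha_15_30(f15, f30):
    """Spectral index alpha defined via f_nu proportional to nu**alpha,
    evaluated between 15 and 30 microns (nu scales as 1/lambda, so
    nu_30/nu_15 = 15/30)."""
    return float(np.log(f30 / f15) / np.log(15.0 / 30.0))
```

With this convention a flat $\nu$f$_{\nu}$-like continuum in f$_{\nu}$ gives $\alpha=0$, while the starburst mean of $\alpha\approx-3$ corresponds to the 30$\mu$m flux density being about $2^3=8$ times the 15$\mu$m one, i.e. a continuum rising steeply towards longer wavelengths.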
Given the relatively small number of starburst galaxies in the @Brandl06 sample (16 sources), we also included 20 HII galaxies from the SINGS sample of @Smith07a, thus increasing the number of SF galaxies to 36 sources and making their comparison with the Seyferts more statistically meaningful. However, since @Smith07a adopted multiple Drude profile fitting for the measurement of the PAH features, for reasons of consistency we re-measured the PAH fluxes of the 20 SINGS galaxies with our spline fitting method. From Figure \[fig:pah\_hist\], we can see that both the Seyferts and the SF galaxies, indicated by the solid and dashed lines respectively, appear to have very similar distributions of the PAH 11.2$\mu$m/6.2$\mu$m band ratio. The average PAH flux ratio for the Seyfert sample is 0.94$\pm$0.37, while for the SF galaxies it is 0.87$\pm$0.24; the two agree within 1-$\sigma$. This is also consistent with the findings of @Shi07, who reported similar 11.2$\mu$m/7.7$\mu$m flux ratios between a sample of higher redshift AGNs (PG, 2MASS and 3CR objects) and the SINGS HII galaxies. This implies that even though the harsh radiation field in AGNs may destroy a substantial amount of the circumnuclear PAH molecules, and does so preferentially – the smaller PAHs being destroyed first [@Draine01; @Smith07a] – it likely does not do so over a large volume. Enough molecules in the circumnuclear regions do remain intact and, as a result, the aromatic features that we observe from Seyferts are essentially identical to those in SF galaxies. The relative strength of PAH emission can also be used to examine the validity of the unified AGN model. As mentioned earlier, this model attributes the variation in AGN types to dust obscuration and the relative orientation of the line of sight to the nucleus [@Antonucci93].
Sy 1s and Sy 2s are intrinsically the same but appear to be different in the optical, mainly because of the much larger extinction towards the nuclear continuum of Sy 2s when viewed edge-on. The latest analysis of the IRS high-resolution spectra of 87 galaxies from the 12$\mu$m Seyfert sample shows that the average 11.2$\mu$m PAH EW is 0.29$\pm$0.38$\mu$m for Sy 1s and 0.37$\pm$0.35$\mu$m for Sy 2s (Tommasin et al. 2009, in preparation). As we show in Table \[tab3\], the 11.2$\mu$m PAH EW of the whole 12$\mu$m Seyfert sample (90 objects excluding upper limits) is 0.21$\pm$0.22$\mu$m for the Sy 1s and 0.38$\pm$0.30$\mu$m for the Sy 2s. The difference we observe in the PAH EWs between the two Seyfert types is somewhat larger than the one reported by Tommasin et al. (2009, in preparation), though still consistent within 1$\sigma$. This further suggests that there is little discernible difference between Sy 1s and Sy 2s at this wavelength. If the observed AGN emission in the infrared does not depend on the line of sight of the observer, one can compare the circumnuclear PAH emission of Sy 1s and Sy 2s of similar bolometric luminosities to test the unified model. If we bin the sources according to their IR luminosity, we find an average PAH EW of 0.22$\pm$0.23$\mu$m for Sy 1s and 0.40$\pm$0.30$\mu$m for Sy 2s with L$_{\rm IR}<$10$^{11}$L$_\odot$, while the average PAH EWs are 0.19$\pm$0.19$\mu$m for Sy 1s and 0.37$\pm$0.30$\mu$m for Sy 2s with L$_{\rm IR}\ge$10$^{11}$L$_\odot$ (see Table 3 for a summary of these results). The difference between the two Seyfert types is still less than 1$\sigma$. This result is not in agreement with the findings of Buchanan et al. (2006), who, based on a principal component analysis, find that in their subset of 51 galaxies the Sy 2s show a stronger starburst eigenvector/template contribution than the Sy 1s. As the authors suggest, this might be due to selection effects.
Our result can be interpreted as indicating that there is some, but not substantial, obscuration in the mid-IR. As a consequence, we are able to probe deep into the nuclear region, sampling most of the volume responsible for the mid-IR emission. This result is consistent with a similar finding of @Buchanan06, who compared the mid-IR to radio ratio for their sample. They concluded that the observed factor of $\sim$2 difference between the two types would imply either a smooth torus which is optically thin in the mid-IR or a clumpy one containing a steep radial distribution of optically thick dense clumps [@Nenkova08]. Cold/Warm AGN diagnostics ------------------------- The [*IRAS*]{} 25 and 60$\mu$m flux ratio has long been used to define the infrared color (“warm” or “cold”) of a galaxy, with “warm” galaxies having a ratio of F$_{25}$/F$_{60}>$0.2 [@Sanders88]. In Figure \[fig:warm\_cold\], we plot the 11.2$\mu$m PAH EW as a function of the flux ratio between F$_{25}$ and F$_{60}$ for the 12$\mu$m Seyfert sample. The [*IRAS*]{} 25 and 60$\mu$m fluxes were compiled from @Rush93 and @Sanders03 and are listed in Table \[tab1\]. The aperture of the [*IRAS*]{} broad band filters is of the order of a few arcminutes on the sky[^11] and typically encompasses the whole galaxy, while the PAH EW is measured from a spectrum of a smaller region centered on the nucleus of the galaxy (see Table \[tab2\]). Nevertheless, we observe a clear trend of the 11.2$\mu$m PAH EW decreasing with the F$_{25}$/F$_{60}$ ratio in Figure \[fig:warm\_cold\]. On this plot, we also include the SF galaxies from @Brandl06 and @Smith07a. All SF galaxies appear to be clustered in the top left corner of the plot, having high PAH EWs and low F$_{25}$/F$_{60}$ values, suggesting strong star formation and cooler dust temperatures.
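The warm/cold criterion above is a simple threshold on the IRAS flux ratio; a minimal sketch (not code from the paper, and with illustrative placeholder fluxes rather than Table 1 entries) is:

```python
# Sketch of the IRAS color criterion used in the text: a galaxy is
# "warm" if F25/F60 > 0.2 (Sanders et al. 1988), otherwise "cold".
# The flux values below are illustrative placeholders.

def iras_color_class(f25, f60, threshold=0.2):
    """Classify a galaxy as 'warm' or 'cold' from its IRAS 25/60um fluxes (Jy)."""
    ratio = f25 / f60
    return ("warm" if ratio > threshold else "cold"), ratio

label, ratio = iras_color_class(f25=1.0, f60=2.0)
print(label, round(ratio, 2))  # warm 0.5
```
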
The observed suppression of PAH emission seen in the warm AGNs implies that the soft X-ray and UV radiation of the accretion disk, which destroys the PAH molecules, is also reprocessed by the dust and dominates the mid- and far-IR colors. More specifically, warm Sy 1s have an average 11.2$\mu$m PAH EW of 0.10$\pm$0.12$\mu$m, while for Sy 2s the value is 0.18$\pm$0.24$\mu$m. Similarly, for the cold sources the average PAH EW is 0.40$\pm$0.23$\mu$m for Sy 1s and 0.59$\pm$0.19$\mu$m for Sy 2s. We observe a $\sim$3$\sigma$ difference in the PAH EWs between the cold and warm sources, independent of their Seyfert type. This indicates that as the emission from the accretion disk surrounding the SMBH of the active nucleus contributes progressively more to the IR luminosity, its radiation field also destroys more of the PAH molecules and thus diminishes their mid-IR emission. The trend of PAH EWs decreasing with F$_{25}$/F$_{60}$ has also been detected in a large sample of ULIRGs studied by @Desai07. The luminosities of the 12$\mu$m Seyfert sample are more comparable with those of LIRGs, thus our work extends the correlation of @Desai07 to a lower luminosity range. This is rather interesting since deep photometric surveys with [*Spitzer*]{} can now probe normal galaxies as well as LIRGs at z$\sim$1 [@Lefloch05], a fraction of which are known to host an AGN based on optical spectra and mid-IR colors [@Fadda02; @Stern05; @Brand06]. We have also investigated the dependence of the 6.2$\mu$m PAH EW on the F$_{25}$/F$_{60}$ ratio for our sample. A similar trend of the 6.2$\mu$m EWs decreasing with F$_{25}$/F$_{60}$ is observed as well, though with larger scatter than seen for the 11.2$\mu$m feature. This is probably due to the fact that the 6.2$\mu$m PAH feature is intrinsically fainter and only upper limits could be measured for a number of sources (see Table \[tab2\]).
Despite the scatter, this trend is still a rather important finding, because for high redshift galaxies (z$>$2.5) the 6.2 and 7.7$\mu$m PAHs might be the only features available in the wavelength range covered by the IRS; thus measuring them can reveal essential information on the star-formation luminosity and dust composition of high-redshift galaxies [see @Houck05; @Yan05; @Teplitz07; @Weedman08]. The Silicate Strength of the 12$\mu$m Seyferts ---------------------------------------------- In the mid-IR regime, one can examine not only the structure of complex organic molecules and determine their aromatic or aliphatic nature, but can also probe the chemistry of dust grains [see @vanDishoeck04]. One of the most prominent continuum features in the 5-35$\mu$m range is the one associated with the presence of astronomical silicates in the grains, which are characterized by two peaks in the emissivity at 9.7 and 18$\mu$m [see @Dudley97]. The silicate features had been detected in absorption in SF galaxies, protostars and AGN for over 30 years [e.g. @Gillett75; @Rieke75], but it was the advent of space observatories such as [*ISO*]{} and [*Spitzer*]{} which allowed for the first time their study over a wide range of astronomical objects and physical conditions. According to the unified model, an edge-on view through the cool dust in the torus will cause the silicate feature to be seen in absorption, while with a face-on view, the hot dust at the inner surface of the torus will cause the silicate feature to appear in emission [@Efstathiou95]. Even though silicate emission at 9.7$\mu$m had already been observed in the SF region N66 [@Contursi00], emission at both 9.7$\mu$m and 18$\mu$m in AGNs and Quasars was detected with [*Spitzer*]{} [@Siebenmorgen05; @Hao05; @Sturm05], providing strong support for the unified model [@Antonucci93].
Using Spitzer/IRS data, @Hao07 compiled a large, though inhomogeneous, sample of AGNs and ULIRGs, and uniformly studied the silicate features in these galaxies. Using the same sample, @Spoon07 proposed a new diagnostic of mid-IR galaxy classification based on the strength of the silicate and PAH features. To put the properties of the silicate feature in the 12$\mu$m Seyfert sample in the same context, we also measured the strength of the silicate at 9.7$\mu$m, using the definition and approach of @Spoon07: $$S_{\mathrm{sil}}=\ln \frac{f_{\mathrm{obs}}(9.7\mu m)}{f_{\mathrm{cont}}(9.7\mu m)}$$ where f$_{\mathrm{cont}}$(9.7$\mu$m) is the flux density of a local mid-IR continuum, defined from the 5-35$\mu$m IRS spectrum. Sources with silicate in emission have positive strength and those in absorption negative. @Buchanan06 did identify two AGN with silicate emission and two more with broad silicate absorption out of the 51 sources they studied. In this paper, we provide for the first time measurements of the silicate features for this complete sample. We follow the prescription of @Spoon07 and @Sirocky08 for the continuum definition and identify the sources as PAH-dominated, continuum-dominated, or absorption-dominated. The values of S$_{\rm sil}$ measured from the 12$\mu$m Seyferts can be found in Table \[tab2\]. In Figure \[fig:sil\]a, we plot the 11.2$\mu$m PAH EWs as a function of the 9.7$\mu$m silicate strength[^12]. We observe that most Sy 1s are located close to S$_{\rm sil}$=0 and the range of their silicate strength is rather narrow, with the exception of one galaxy, UGC5101, which is also one of the ULIRGs of the Bright Galaxy Sample [@Armus04; @Armus07]. This is in agreement with the results of @Hao07, who found that Sy 1s are equally likely to display the 9.7$\mu$m silicate feature in emission and in absorption. The Sy 2s have a larger scatter in the value of the silicate strength, with most of them showing the feature in absorption.
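The silicate-strength definition of @Spoon07 quoted above reduces to a one-line computation; a minimal sketch (with illustrative flux densities, not measurements from the 12$\mu$m Seyfert sample) is:

```python
import math

# Sketch of the Spoon et al. (2007) silicate strength used in the text:
# S_sil = ln[ f_obs(9.7um) / f_cont(9.7um) ]. Positive values indicate
# silicate emission, negative values absorption. Fluxes are illustrative.

def silicate_strength(f_obs_9p7, f_cont_9p7):
    """Silicate strength from observed and interpolated continuum flux densities."""
    return math.log(f_obs_9p7 / f_cont_9p7)

print(round(silicate_strength(0.6, 1.0), 2))  # absorbed continuum: -0.51
print(round(silicate_strength(1.2, 1.0), 2))  # silicate emission:   0.18
```
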
The average silicate strength of the 12$\mu$m Seyfert sample is -0.07$\pm$0.29 for Sy 1s and -0.46$\pm$0.73 for Sy 2s, while the median values are -0.02 for Sy 1s and -0.23 for Sy 2s. Overall, though selected at 12$\mu$m, most objects have a rather weak silicate strength, with only 18 Sy 2s and 2 Sy 1s displaying values of S$_{\rm sil}<$-0.5. We also examined the dependence of the silicate strength on the IR luminosity of the objects and plot it in Figure \[fig:sil\]b. Except for one galaxy (NGC7172), all the sources with deep silicate absorption features have IR luminosities larger than 10$^{12}$L$_{\odot}$, and thus are also classified as ULIRGs. In Sy 1 galaxies, even when their luminosities are larger than 10$^{12}$L$_\odot$, the 9.7$\mu$m silicate strength is still near zero, while high-luminosity Sy 2s are more likely to have deep silicate absorption features. We also compare the silicate strength with galaxy color, as defined in §3.3. In Figure \[fig:S\_color\_nh\]a, we plot the silicate strength as a function of the [*IRAS*]{} flux ratio F$_{25}$/F$_{60}$. Since the majority of the galaxies do not have strong silicate features, no clear correlation between the two parameters is seen. We notice, however, that galaxies with S$_{\rm sil}<-1$ also appear to have colder colors. This indicates that more dust absorption is present in sources with colder IR SEDs, even though many sources with small F$_{25}$/F$_{60}$ ratios do not display any silicate absorption features. Finally, we investigate the relation between the mid-IR silicate strength and the hydrogen column density[^13], as measured from the X-rays. The latter can measure directly the absorption in active galaxies: the power-law spectrum in the 2-10keV range may be affected by a cutoff due to photo-electric absorption, from which column densities of $10^{22}-10^{24}$cm$^{-2}$ are derived [e.g. @Maiolino98].
One should note, though, that due to the substantially smaller physical size of the nuclear region emitting in (hard) X-rays, this measurement is more affected by the clumpiness of the intervening absorbing medium. The well established observational fact that Sy 2s are preferentially more obscured than Sy 1s, as has been shown both in the optical and in the X-ray spectra, is also apparent in the mid-IR spectra from our results given in Figure \[fig:S\_color\_nh\]b. We find that sources with weak silicate absorption or emission features span all values of the column density. However, most of the sources with strong silicate absorption (S$_{\rm sil}<$ -0.5) have $N_H > 10^{23}$cm$^{-2}$ (11 of 15 sources), and Sy 2s dominate this group (13 of 15 sources). What powers the 12$\mu$m luminosity in the 12$\mu$m Seyferts? ============================================================= The use of the global infrared dust emission as a tracer of the absorbed starlight and the associated star formation rate has been known since the first results of [*IRAS*]{} (see Kennicutt 1998 and references therein). At 12$\mu$m, the flux obtained from the [*IRAS*]{} broadband filter is dominated by the continuum emission, though it can also be affected by several discrete spectral features, including the silicate features, PAH emission, fine-structure lines, etc. In Figure \[fig:L12\], we present the usual plot of L$_{\rm 12\mu m}$/L$_{\rm IR}$ versus the total IR luminosity[^14] for the Seyfert galaxies and SF galaxies. A clear correlation between these two parameters, originally presented for the 12$\mu$m sample by @Spinoglio95, is seen. The two Seyfert types do not show significant differences. SF galaxies appear to have a lower fractional 12$\mu$m luminosity when compared to Seyferts of similar total IR luminosity.
This can be explained by the presence of hot dust emission (T$>$300K) originating from regions near the active nucleus, contributing more strongly at the shortest wavelengths (5 to 15$\mu$m) of the IR SED. This is consistent with the results of @Spinoglio95, who have shown that the 12$\mu$m luminosity is $\sim$15% of the bolometric luminosity[^15] in AGNs [@Spinoglio89], while it is only $\sim$7% in starburst and normal galaxies. Following these early [*IRAS*]{} results, a number of studies have explored the issue of distinguishing AGN from star formation signatures in the mid-IR and of determining the fractional contribution of each component to the IR luminosity for local [i.e. @Genzel98; @Laurent00; @Peeters04; @Buchanan06; @Farrah07; @Nardini08] and high redshift sources [i.e. @Brand06; @Weedman06; @Sajina07]. More recently, using the \[OIV\]25.89$\mu$m line emission as an extinction-free tracer of the AGN power, @Melendez08b were able to decompose the stellar and AGN contributions to the \[NeII\]12.81$\mu$m line. These authors compiled a sample from existing [*Spitzer*]{} observations by @Deo07 [@Tommasin08; @Sturm02; @Weedman05], as well as X-ray selected sources from @Melendez08a. They found that Sy 1 and Sy 2 galaxies differ in terms of the relative AGN/starburst contribution to the infrared emission, with star formation being responsible for $\sim$25% of the mid- and far-IR continuum in Sy 1s, nearly half of what was estimated for Sy 2s. In Figures \[fig:L\_pah\]a and \[fig:L\_pah\]b, we plot L$_{\rm 11.2\mu m PAH}$/L$_{\rm FIR}$ and L$_{\rm 11.2\mu m PAH}$/L$_{\rm IR}$ as a function of the far-infrared (FIR) luminosity[^16] and the total IR luminosity for the Seyferts and SF galaxies. For both the SF galaxies and the Seyferts, the PAH luminosity appears to be a nearly constant fraction of the FIR luminosity, which is expected since both quantities have been used as indicators of the star-formation rate.
However, for a given PAH luminosity, Seyfert galaxies display an excess in the total IR luminosity compared to starburst systems. This is also understood, as the total IR luminosity is the sum of the mid-IR and FIR luminosities and is consequently affected by the AGN emission in the mid-IR. We propose a simple method to quantify this mid-IR excess and estimate the AGN contribution to the 12$\mu$m luminosity. In Figure \[fig:L12\_pah\], we plot L$_{\rm 11.2\mu m PAH}$/L$_{\rm 12\mu m}$ versus the 12$\mu$m luminosity for the Seyfert and SF galaxies. There is a clear correlation for SF galaxies, with an average L(11.2$\mu$m PAH)/L(12$\mu$m) ratio of 0.044$\pm$0.010. Since there is no AGN contamination in the 12$\mu$m luminosity for these galaxies, we can attribute all their mid-IR continuum emission to star formation. Seyfert galaxies display a larger scatter on this plot, and we decompose their 12$\mu$m luminosity into two parts: one contributed by the star formation activity, which is proportional to their PAH luminosity, and one due to dust heated by the AGN. If we assume that the star formation component of the 12$\mu$m luminosity of Seyferts is associated with the 11.2$\mu$m PAH luminosity in the same manner as in SF galaxies, then we can estimate the star formation contribution to the integrated 12$\mu$m luminosity of the Seyfert sample. Subtracting this SF contribution from the total 12$\mu$m luminosity, we obtain, in a statistical sense, the corresponding AGN contribution. To check the validity of this method, we plot in Figure \[fig:agnfraction\] the “AGN fraction” as a function of the IRAC 8$\mu$m to [*IRAS*]{} 12$\mu$m flux ratio. We define the “AGN fraction” as the AGN luminosity estimated using the above method divided by the total 12$\mu$m luminosity: AGN fraction (12$\mu$m) = (L$_{\rm 12\mu m}$-L$_{\rm SF}$)/L$_{\rm 12\mu m}$.
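The decomposition described above can be sketched in a few lines. This is an assumed implementation, not the authors' code, and the luminosities used below are illustrative placeholders:

```python
# Sketch of the 12um decomposition described in the text: the
# star-formation part of L(12um) is estimated from the 11.2um PAH
# luminosity through the mean SF-galaxy ratio
# L(11.2um PAH)/L(12um) = 0.044, and the remainder is attributed
# to the AGN.

SF_PAH_TO_12UM = 0.044  # mean L(11.2um PAH)/L(12um) for SF galaxies

def agn_fraction_12um(l_12um, l_pah):
    """AGN fraction = (L_12um - L_SF)/L_12um, clipped at zero."""
    l_sf = l_pah / SF_PAH_TO_12UM  # SF contribution to L(12um)
    return max(0.0, (l_12um - l_sf) / l_12um)

# A Seyfert whose PAH/12um ratio is half the SF-galaxy value gets
# an AGN fraction of 0.5:
print(agn_fraction_12um(l_12um=1.0, l_pah=0.022))  # 0.5
```

The clipping at zero mirrors the physical constraint that the star-formation component cannot exceed the total 12$\mu$m luminosity.
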
The IRAC 8$\mu$m flux will be dominated by PAH emission when PAHs are present, thus normalizing it by the 12$\mu$m flux provides an estimate of the PAH EW[^17]. As one would expect, examining the Seyferts for which the AGN fraction was not a lower limit, an anti-correlation between the two parameters is visible. This suggests that our method of decomposing the 12$\mu$m luminosity is reasonable. Since the scatter in the linear fit for SF galaxies in Figure \[fig:L12\_pah\] is $\sim$25%, this propagates directly into the “AGN fraction”, whose uncertainty we therefore estimate to be no better than $\sim$25%. Conclusions =========== We have analyzed Spitzer/IRS data for a complete unbiased sample of Seyfert galaxies selected from the [*IRAS*]{} Faint Source Catalog based on their 12$\mu$m fluxes. We extend the earlier work on the same sample by @Buchanan06, who published spectra for 51 objects and explored the continuum shapes and the differences between Seyfert types. In our study, we present 5–35$\mu$m low-resolution spectra for 103 objects, nearly 90% of the whole 12$\mu$m Seyfert sample. The main results of our study are: 1\. The 12$\mu$m Seyferts display a variety of mid-IR spectral shapes. The mid-IR continuum slopes of Sy 1s and Sy 2s are on average $<\alpha_{15-30}>$=-0.85$\pm$0.61 and -1.53$\pm$0.84 respectively, though there is substantial scatter for both types. We identify a group of objects with a local maximum in their mid-IR continuum at $\sim$20$\mu$m, which is likely due to the presence of a warm $\sim$150K dust component and 18$\mu$m emission from astronomical silicates. Emission lines known to be signatures of an AGN, such as the \[NeV\]14.3$\mu$m/24.3$\mu$m and \[OIV\]25.9$\mu$m lines, are stronger in the average spectra of Sy 1s than in those of Sy 2s. 2\. PAH emission is detected in both Sy 1s and Sy 2s, with no statistical difference in the relative strength of PAHs between the two types.
This suggests that the volume responsible for the bulk of their emission is likely optically thin at $\sim$12$\mu$m. 3\. The 11.2$\mu$m PAH EW of the 12$\mu$m Seyfert sample correlates well with the [*IRAS*]{} color of the galaxies, as indicated by the flux ratio F$_{25}$/F$_{60}$. PAH emission is more suppressed in warmer galaxies, in which the strong AGN activity may destroy the PAH molecules. 4\. The 9.7$\mu$m silicate feature is rather weak in Sy 1s (S$_{\rm sil}$=-0.07$\pm$0.29), while Sy 2s mostly display silicate in absorption (S$_{\rm sil}$=-0.46$\pm$0.73). Deep silicate absorption is observed in high-luminosity Sy 2s that are classified as ULIRGs and in those with high hydrogen column densities estimated from their X-ray emission. 5\. The FIR luminosities of the 12$\mu$m Seyferts are dominated by star formation. Their mid-IR luminosity is enhanced by the additional AGN contribution. A method to estimate, in a statistical sense, the AGN contribution to the 12$\mu$m luminosity has been proposed and applied to the sample. We would like to acknowledge L. Armus, M. Malkan, Y. Shi and M. Elvis for helpful science discussions. We thank J. D. Smith for help with the use of CUBISM to extract spectral mapping data. We would also like to thank H. Spoon and L. Hao for help in measuring the silicate features. We thank an anonymous referee whose comments helped improve this manuscript. V.C. acknowledges partial support from the EU ToK grant 39965. L.S. and S.T. acknowledge support from the Italian Space Agency (ASI). Antonucci, R. R. J., & Miller, J. S. 1985, , 297, 621 Antonucci, R. 1993, , 31, 473 Armus, L., et al. 2004, , 154, 184 Armus, L., et al. 2007, , 656, 148 Bassani, L., et al. 2006, , 636, L65 Bernard-Salas, J. et al. 2009, , submitted Brand, K., et al. 2006, , 644, 143 Brandl, B. R., et al. 2006, , 653, 1129 Buchanan, C. L., Gallimore, J. F., O’Dea, C. P., Baum, S. A., Axon, D. J., Robinson, A., Elitzur, M., & Elvis, M. 2006, , 132, 401 Calzetti, D., et al.
2005, , 633, 871 Calzetti, D., et al. 2007, , 666, 870 Charmandaris, V. 2008, Infrared Diagnostics of Galaxy Evolution, 381, 3 Clavel, J., et al. 2000, , 357, 839 Cohen, M., Megeath, T.G., Hammersley, P.L., Martin-Luis, F., & Stauffer, J. 2003, , 125, 2645 Contursi, A., et al. 2000, , 362, 310 de Graauw, T., et al. 1996, , 315, L49 Deo, R. P., Crenshaw, D. M., Kraemer, S. B., Dietrich, M., Elitzur, M., Teplitz, H., & Turner, T. J. 2007, , 671, 124 Desai, V., et al. 2007, , 669, 810 Desert, F. X., & Dennefeld, M. 1988, , 206, 227 Draine, B. T. 2003, , 41, 241 Draine, B. T., & Li, A. 2001, , 551, 807 Dudley, C. C., & Wynn-Williams, C. G. 1997, , 488, 720 Efstathiou, A., & Rowan-Robinson, M. 1995, , 273, 649 Elitzur, M. 2008, New Astronomy Review, 52, 274 Engelbracht, C. W., Rieke, G. H., Gordon, K. D., Smith, J.-D. T., Werner, M. W., Moustakas, J., Willmer, C. N. A., & Vanzi, L. 2008, , 678, 804 Fadda, D., Flores, H., Hasinger, G., Franceschini, A., Altieri, B., Cesarsky, C. J., Elbaz, D., & Ferrando, P. 2002, , 383, 838 Farrah, D., et al. 2007, , 667, 149 F[ö]{}rster Schreiber, N. M., Roussel, H., Sauvage, M., & Charmandaris, V. 2004, , 419, 501 Galliano, F., Madden, S. C., Tielens, A. G. G. M., Peeters, E., & Jones, A. P. 2008, , 679, 310 Genzel, R., et al. 1998, , 498, 579 Gillett, F. C., Kleinmann, D. E., Wright, E. L., & Capps, R. W. 1975, , 198, L65 Gilli, R., Comastri, A., & Hasinger, G. 2007, , 463, 79 Gorjian, V., Cleary, K., Werner, M. W., & Lawrence, C. R. 2007, , 655, L73 Hao, L., et al. 2005, , 625, L75 Hao, L., Weedman, D. W., Spoon, H. W. W., Marshall, J. A., Levenson, N. A., Elitzur, M., & Houck, J. R. 2007, , 655, L77 Hao, L., Wu, Y., Charmandaris, V., et al. 2009,, (submitted) Hasinger, G., Miyaji, T., & Schmidt, M. 2005, , 441, 417 Higdon, S. J. U., et al. 2004, , 116, 975 Ho, L. C. 2008, , 46, 475 Houck, J. R., et al. 2004a, , 154, 18 Houck, J. R., et al. 2004b, , 154, 211 Houck, J. R., et al. 2005, , 622, 105 Kennicutt, R. C., Jr. 
1998, , 36, 189 Kennicutt, R. C., Jr., et al. 2003, , 115, 928 Laurent, O., Mirabel, I. F., Charmandaris, V., Gallais, P., Madden, S. C., Sauvage, M., Vigroux, L., & Cesarsky, C. 2000, , 359, 887 Le Floc’h, E., et al. 2005, , 632, 169 Li, M. P., Shi, Q. J., & Li, A. 2008, , 391, L49 Lutz, D., Maiolino, R., Spoon, H. W. W., & Moorwood, A. F. M. 2004, , 418, 465 Lutz, D., Spoon, H. W. W., Rigopoulou, D., Moorwood, A. F. M., & Genzel, R. 1998, , 505, L103 Maiolino, R., Salvati, M., Bassani, L., Dadina, M., della Ceca, R., Matt, G., Risaliti, G., & Zamorani, G. 1998, , 338, 781 Markwardt, C. B., Tueller, J., Skinner, G. K., Gehrels, N., Barthelmy, S. D., & Mushotzky, R. F. 2005, , 633, L77 Mel[é]{}ndez, M., et al. 2008a, , 682, 94 Mel[é]{}ndez, M., Kraemer, S. B., Schmitt, H. R., Crenshaw, D. M., Deo, R. P., Mushotzky, R. F., & Bruhweiler, F. C. 2008b, , 689, 95 Nardini, E., Risaliti, G., Salvati, M., Sani, E., Imanishi, M., Marconi, A., & Maiolino, R. 2008, , 385, L130 Nenkova, M., Sirocky, M. M., Nikutta, R., Ivezi[ć]{}, [Ž]{}., & Elitzur, M. 2008, , 685, 160 Peeters, E., Hony, S., Van Kerckhoven, C., Tielens, A. G. G. M., Allamandola, L. J., Hudgins, D. M., & Bauschlicher, C. W. 2002, , 390, 1089 Peeters, E., Spoon, H. W. W., & Tielens, A. G. G. M. 2004, , 613, 986 Rieke, G. H., & Low, F. J. 1975, , 199, L13 Roche, P. F., Aitken, D. K., Smith, C. H., & Ward, M. J. 1991, , 248, 606 Rush, B., Malkan, M. A., & Spinoglio, L. 1993, , 89, 1 Sajina, A., Yan, L., Armus, L., Choi, P., Fadda, D., Helou, G., & Spoon, H. 2007, , 664, 713 Sanders, D. B., Soifer, B. T., Elias, J. H., Neugebauer, G., & Matthews, K. 1988, , 328, L35 Sanders, D. B., & Mirabel, I. F. 1996, , 34, 749 Sanders, D. B., Mazzarella, J. M., Kim, D.-C., Surace, J. A., & Soifer, B. T. 2003, , 126, 1607 Sazonov, S., Revnivtsev, M., Krivonos, R., Churazov, E., & Sunyaev, R. 2007, , 462, 57 Schaerer, D., & Stasi[ń]{}ska, G. 1999, , 345, L17 Shi, Y., et al. 2007, , 669, 841 Shu, X. W., Wang, J. 
X., Jiang, P., Fan, L. L., & Wang, T. G. 2007, , 657, 167 Siebenmorgen, R., Haas, M., Kr[ü]{}gel, E., & Schulz, B. 2005, , 436, L5 Sirocky, M. M., Levenson, N. A., Elitzur, M., Spoon, H. W. W., & Armus, L. 2008, , 678, 729 Smith, J. D. T., et al. 2007a, , 656, 770 Smith,J.D.T., et al., 2007b, , 119, 1133 Spinoglio, L., & Malkan, M. A. 1989, , 342, 83 Spinoglio, L., Malkan, M. A., Rush, B., Carrasco, L., & Recillas-Cruz, E. 1995, , 453, 616 Spoon, H. W. W., Marshall, J. A., Houck, J. R., Elitzur, M., Hao, L., Armus, L., Brandl, B. R., & Charmandaris, V. 2007, , 654, L49 Stern, D., et al. 2005, , 631, 163 Sturm, E., Lutz, D., Verma, A., Netzer, H., Sternberg, A., Moorwood, A. F. M., Oliva, E., & Genzel, R. 2002, , 393, 821 Sturm, E., et al. 2005, , 629, L21 Sturm, E., et al. 2006, , 653, L13 Teplitz, H. I., et al. 2007, , 659, 941 Thompson, G. D., Levenson, N. A., Uddin, S. A., & Sirocky, M. M. 2009, , 697, 182 Tommasin, S., Spinoglio, L., Malkan, M. A., Smith, H., Gonz[á]{}lez-Alfonso, E., & Charmandaris, V. 2008, , 676, 836 Urry, C. M., & Padovani, P. 1995, , 107, 803 van Dishoeck, E. F. 2004, , 42, 119 Verma, A., Charmandaris, V., Klaas, U., Lutz, D., & Haas, M. 2005, Space Science Reviews, 119, 355 Voit, G. M. 1992, , 258, 84 Weedman, D. W., et al. 2005, , 633, 706 Weedman, D., et al. 2006, , 653, 101 Weedman, D. & Houck, J. R.,  2008, , 686, 127 Werner, M. W., et al. 2004, , 154, 1 Worsley, M. A., et al. 2005, , 357, 1281 Wu, H., Cao, C., Hao, C.-N., Liu, F.-S., Wang, J.-L., Xia, X.-Y., Deng, Z.-G., & Young, C. K.-S. 2005, , 632, L79 Yan, L., et al. 
2005, , 628, 604 [lrrrrrrrccc]{} Mrk335 & 00h06m19.5s & +20d12m10s & 0.27 & 0.45 & 0.35 & 0.57 & 10.72 & 0.026 & Sy 1 & 3269\ Mrk938 & 00h11m06.5s & -12d06m26s & 0.35 & 2.39 & 17.05 & 16.86 & 11.48 & 0.020 & Sy 2 & 3269\ E12-G21 & 00h40m46.1s & -79d14m24s & 0.22 & 0.19 & 1.51 & 3.22 & 11.03 & 0.030 & Sy 1 & 3269\ Mrk348 & 00h48m47.1s & +31d57m25s & 0.49 & 1.02 & 1.43 & 1.43 & 10.62 & 0.015 & Sy 2 & 3269\ IZw1 & 00h53m34.9s & +12d41m36s & 0.47 & 1.17 & 2.24 & 2.87 & 11.95 & 0.061 & Sy 1 & 14\ NGC424 & 01h11m27.6s & -38d05m00s & 1.22 & 1.76 & 2.00 & 1.74 & 10.67 & 0.012 & Sy 2 & 3269\ NGC526A & 01h23m54.4s & -35d03m56s & 0.23 & 0.48 & 2.31 & 4.08 & 10.78 & 0.019 & Sy 1 & 30572\ NGC513 & 01h24m26.8s & +33d47m58s & 0.25 & 0.48 & 0.41 & 1.32 & 10.52 & 0.020 & Sy 2 & 3269\ F01475-0740 & 01h50m02.7s & -07d25m48s & 0.32 & 0.84 & 1.10 & 1.05 & 10.62 & 0.018 & Sy 2 & 3269\ NGC931 & 02h28m14.5s & +31d18m42s & 0.62 & 1.42 & 2.80 & 5.66 & 10.92 & 0.017 & Sy 1 & 3269\ NGC1056 & 02h42m48.3s & +28d34m27s & 0.34 & 0.48 & 5.33 & 10.20 & 9.93 & 0.005 & Sy 2 & 3269\ NGC1097 & 02h46m19.0s & -30d16m30s & 2.96 & 7.30 & 53.35 & 104.79 & 10.78 & 0.004 & Sy 2 & 159\ NGC1125 & 02h51m40.3s & -16d39m04s & 0.32 & 1.00 & 3.71 & 4.04 & 10.46 & 0.011 & Sy 2 & 3269\ NGC1143/4 & 02h55m12.2s & -00d11m01s & 0.26 & 0.62 & 5.35 & 11.60 & 10.46 & 0.029 & Sy 2 & 3269\ M-2-8-39 & 03h00m30.6s & -11d24m57s & 0.35 & 0.46 & 0.54 & 0.85 & 10.95 & 0.029 & Sy 2 & 3269\ NGC1194 & 03h03m49.1s & -01d06m13s & 0.28 & 0.85 & 0.92 & 0.71 & 10.34 & 0.014 & Sy 2 & 3269\ NGC1241 & 03h11m14.6s & -08d55m20s & 0.33 & 0.60 & 4.37 & 10.74 & 10.75 & 0.014 & Sy 2 & 3269\ NGC1320 & 03h24m48.7s & -03d02m32s & 0.33 & 1.32 & 2.21 & 2.82 & 10.21 & 0.009 & Sy 2 & 3269\ NGC1365 & 03h33m36.4s & -36d08m25s & 5.12 & 14.28 & 94.31 & 165.67 & 11.23 & 0.005 & Sy 1 & 3269\ NGC1386 & 03h36m46.2s & -35d59m57s & 0.52 & 1.46 & 5.92 & 9.55 & 9.53 & 0.003 & Sy 2 & 3269\ F03450+0055 & 03h47m40.2s & +01d05m14s & 0.29 & 0.39 & 0.87 & 3.92 & 11.10 & 
0.031 & Sy 1 & 3269\ NGC1566 & 04h20m00.4s & -54d56m16s & 1.91 & 3.02 & 22.53 & 58.05 & 10.61 & 0.005 & Sy 1 & 159\ 3C120 & 04h33m11.1s & +05d21m16s & 0.43 & 0.67 & 1.55 & 4.82 & 11.33 & 0.033 & Sy 1 & 86\ F04385-0828 & 04h40m54.9s & -08d22m22s & 0.59 & 1.70 & 2.91 & 3.55 & 10.82 & 0.015 & Sy 2 & 3269\ NGC1667 & 04h48m37.1s & -06d19m12s & 0.63 & 0.71 & 6.27 & 14.92 & 11.02 & 0.015 & Sy 2 & 3269\ E33-G2 & 04h55m58.9s & -75d32m28s & 0.24 & 0.47 & 0.82 & 1.84 & 10.52 & 0.018 & Sy 2 & 3269\ M-5-13-17 & 05h19m35.8s & -32d39m28s & 0.23 & 0.57 & 1.28 & 2.34 & 10.28 & 0.012 & Sy 1 & 3269\ F05189-2524 & 05h21m01.5s & -25d21m45s & 0.74 & 3.47 & 13.25 & 11.84 & 12.17 & 0.043 & Sy 2 & 86\ Mrk6 & 06h52m12.2s & +74d25m37s & 0.26 & 0.73 & 1.25 & 0.90 & 10.63 & 0.019 & Sy 1 & 3269\ Mrk9 & 07h36m57.0s & +58d46m13s & 0.23 & 0.39 & 0.76 & 0.98 & 11.15 & 0.040 & Sy 1 & 3269\ Mrk79 & 07h42m32.8s & +49d48m35s & 0.36 & 0.73 & 1.55 & 2.35 & 10.90 & 0.022 & Sy 1 & 3269\ F07599+6508 & 08h04m33.1s & +64d59m49s & 0.33 & 0.54 & 1.75 & 1.47 & 12.57 & 0.148 & Sy 1 & 105\ NGC2639 & 08h43m38.1s & +50d12m20s & 0.24 & 0.27 & 2.03 & 7.18 & 10.34 & 0.011 & Sy 1 & 3269\ F08572+3915 & 09h00m25.4s & +39d03m54s & 0.33 & 1.76 & 7.30 & 4.77 & 12.15 & 0.058 & Sy 2 & 105\ Mrk704 & 09h18m26.0s & +16d18m19s & 0.42 & 0.60 & 0.36 & 0.45 & 10.97 & 0.029 & Sy 1 & 704\ UGC5101 & 09h35m51.6s & +61d21m11s & 0.25 & 1.02 & 11.68 & 19.91 & 12.00 & 0.039 & Sy 1 & 105\ NGC2992 & 09h45m42.0s & -14d19m35s & 0.63 & 1.38 & 7.51 & 17.22 & 10.51 & 0.008 & Sy 1 & 3269\ Mrk1239 & 03h10m53.7s & -02d33m11s & 0.76 & 1.21 & 1.68 & 2.42 & 11.32 & 0.029 & Sy 1 & 3269\ NGC3031 & 09h55m33.2s & +69d03m55s & 5.86 & 5.42 & 44.73 & 174.02 & 9.70 & 0.001 & Sy 1 & 159\ 3C234 & 10h01m49.5s & +28d47m09s & 0.22 & 0.35 & 0.24 & 0.34 & 12.42 & 0.185 & Sy 1 & 3624\ NGC3079 & 10h01m57.8s & +55d40m47s & 2.54 & 3.61 & 50.67 & 104.69 & 10.62 & 0.004 & Sy 2 & 3269\ NGC3227 & 10h23m30.6s & +19d51m54s & 0.94 & 1.83 & 8.42 & 17.30 & 9.97 & 0.004 & Sy 1 & 96\ 
NGC3511 & 11h03m23.8s & -23d05m12s & 1.03 & 0.83 & 8.98 & 21.87 & 9.95 & 0.004 & Sy 1 & 3269\ NGC3516 & 11h06m47.5s & +72d34m07s & 0.39 & 0.96 & 2.09 & 2.73 & 10.17 & 0.009 & Sy 1 & 3269\ M+0-29-23 & 11h21m12.2s & -02d59m03s & 0.48 & 0.76 & 5.85 & 9.18 & 11.36 & 0.025 & Sy 2 & 3269\ NGC3660 & 11h23m32.3s & -08d39m31s & 0.42 & 0.64 & 2.03 & 4.47 & 10.47 & 0.012 & Sy 2 & 3269\ NGC3982 & 11h56m28.1s & +55d07m31s & 0.47 & 0.97 & 7.18 & 16.24 & 9.81 & 0.004 & Sy 2 & 3269\ NGC4051 & 12h03m09.6s & +44d31m53s & 1.35 & 2.20 & 10.53 & 24.93 & 9.66 & 0.002 & Sy 1 & 3269\ UGC7064 & 12h04m43.3s & +31d10m38s & 0.22 & 0.88 & 3.48 & 6.25 & 11.18 & 0.025 & Sy 1 & 3269\ NGC4151 & 12h10m32.6s & +39d24m21s & 2.01 & 4.87 & 6.46 & 8.88 & 9.95 & 0.003 & Sy 1 & 14\ Mrk766 & 12h18m26.5s & +29d48m46s & 0.35 & 1.47 & 3.89 & 4.20 & 10.67 & 0.013 & Sy 1 & 3269\ NGC4388 & 12h25m46.7s & +12d39m44s & 1.01 & 3.57 & 10.27 & 14.22 & 10.73 & 0.008 & Sy 2 & 3269\ 3C273 & 12h29m06.7s & +02d03m09s & 0.82 & 1.43 & 2.09 & 2.53 & 12.93 & 0.158 & Sy 1 & 105\ NGC4501 & 12h31m59.2s & +14d25m14s & 2.29 & 2.98 & 19.68 & 62.97 & 10.98 & 0.008 & Sy 2 & 3269\ NGC4579 & 12h37m43.5s & +11d49m05s & 1.12 & 0.78 & 5.93 & 21.39 & 10.17 & 0.005 & Sy 1 & 159\ NGC4593 & 12h39m39.4s & -05d20m39s & 0.47 & 0.96 & 3.43 & 6.26 & 10.35 & 0.009 & Sy 1 & 3269\ NGC4594 & 12h39m59.4s & -11d37m23s & 0.74 & 0.50 & 4.26 & 22.86 & 9.75 & 0.003 & Sy 1 & 159\ NGC4602 & 12h40m36.8s & -05d07m59s & 0.58 & 0.65 & 4.75 & 13.30 & 10.44 & 0.008 & Sy 1 & 3269\ Tol1238-364 & 12h40m52.8s & -36d45m21s & 0.72 & 2.54 & 8.90 & 13.79 & 10.87 & 0.011 & Sy 2 & 3269\ M-2-33-34 & 12h52m12.4s & -13d24m53s & 0.36 & 0.65 & 1.23 & 2.36 & 10.49 & 0.015 & Sy 1 & 3269\ Mrk231 & 12h56m14.2s & +56d52m25s & 1.83 & 8.84 & 30.80 & 29.74 & 12.54 & 0.042 & Sy 1 & 105\ NGC4922 & 13h01m24.9s & +29d18m40s & 0.27 & 1.48 & 6.21 & 7.33 & 11.31 & 0.024 & Sy 2 & 3237\ NGC4941 & 13h04m13.1s & -05d33m06s & 0.39 & 0.46 & 1.87 & 4.79 & 9.39 & 0.004 & Sy 2 & 30572\ NGC4968 & 
13h07m06.0s & -23d40m37s & 0.62 & 1.16 & 2.48 & 3.39 & 10.39 & 0.010 & Sy 2 & 3269\ NGC5005 & 13h10m56.2s & +37d03m33s & 1.65 & 2.26 & 22.18 & 63.40 & 10.20 & 0.003 & Sy 2 & 3269\ NGC5033 & 13h13m27.5s & +36d35m38s & 1.77 & 2.14 & 16.20 & 50.23 & 10.05 & 0.003 & Sy 1 & 159\ M-3-34-63 & 13h22m19.0s & -16d42m30s & 0.95 & 2.88 & 6.22 & 6.37 & 11.38 & 0.021 & Sy 2 & 3269\ NGC5135 & 13h25m44.0s & -29d50m01s & 0.63 & 2.38 & 16.86 & 30.97 & 11.27 & 0.014 & Sy 2 & 3269\ NGC5194 & 13h29m52.7s & +47d11m43s & 7.21 & 9.56 & 97.42 & 221.21 & 10.18 & 0.002 & Sy 2 & 159\ M-6-30-15 & 13h35m53.8s & -34d17m44s & 0.33 & 0.97 & 1.39 & 2.26 & 9.98 & 0.008 & Sy 1 & 3269\ F13349+2438 & 13h37m18.7s & +24d23m03s & 0.61 & 0.72 & 0.85 & 0.90 & 12.32 & 0.108 & Sy 1 & 61\ NGC5256 & 13h38m17.5s & +48d16m37s & 0.32 & 1.07 & 7.25 & 10.11 & 11.51 & 0.028 & Sy 2 & 3269\ Mrk273 & 13h44m42.1s & +55d53m13s & 0.24 & 2.36 & 22.51 & 22.53 & 12.17 & 0.038 & Sy 2 & 105\ I4329A & 13h49m19.2s & -30d18m34s & 1.11 & 2.26 & 2.15 & 2.31 & 10.97 & 0.016 & Sy 1 & 3269\ NGC5347 & 13h53m17.8s & +33d29m27s & 0.30 & 1.22 & 1.43 & 3.33 & 10.04 & 0.008 & Sy 2 & 3269\ Mrk463 & 13h56m02.9s & +18d22m19s & 0.47 & 1.49 & 2.21 & 1.87 & 11.78 & 0.050 & Sy 2 & 105\ NGC5506 & 14h13m14.8s & -03d12m27s & 1.29 & 4.17 & 8.42 & 8.87 & 10.44 & 0.006 & Sy 2 & 3269\ NGC5548 & 14h17m59.5s & +25d08m12s & 0.43 & 0.81 & 1.07 & 2.07 & 10.66 & 0.017 & Sy 1 & 30572\ Mrk817 & 14h36m22.1s & +58d47m39s & 0.38 & 1.42 & 2.33 & 2.35 & 11.35 & 0.031 & Sy 1 & 3269\ NGC5929 & 15h26m06.1s & +41d40m14s & 0.43 & 1.67 & 9.52 & 13.84 & 10.58 & 0.008 & Sy 2 & 3269\ NGC5953 & 15h34m32.4s & +15d11m38s & 0.82 & 1.58 & 11.79 & 19.89 & 10.49 & 0.007 & Sy 2 & 3269\ Arp220 & 15h34m57.1s & +23d30m11s & 0.61 & 8.00 &104.09 & 115.29 & 12.18 & 0.018 & Sy 2 & 105\ M-2-40-4 & 15h48m24.9s & -13d45m28s & 0.41 & 1.45 & 4.09 & 7.06 & 11.32 & 0.025 & Sy 2 & 3269\ F15480-0344 & 15h50m41.5s & -03d53m18s & 0.24 & 0.72 & 1.09 & 4.05 & 11.14 & 0.030 & Sy 2 & 3269\ F19254-7245 & 
19h31m21.4s & -72d39m18s & 0.26 & 1.35 & 5.24 & 8.03 & 12.14 & 0.062 & Sy 2 & 105\ NGC6810 & 19h43m34.4s & -58d39m21s & 1.27 & 3.55 & 18.20 & 32.60 & 10.74 & 0.007 & Sy 2 & 3269\ NGC6860 & 20h08m46.9s & -61d06m01s & 0.25 & 0.31 & 0.96 & 2.19 & 10.35 & 0.015 & Sy 1 & 3269\ NGC6890 & 20h18m18.1s & -44d48m25s & 0.36 & 0.80 & 4.01 & 8.26 & 10.27 & 0.008 & Sy 2 & 3269\ Mrk509 & 20h44m09.7s & -10d43m25s & 0.30 & 0.73 & 1.39 & 1.36 & 11.21 & 0.034 & Sy 1 & 86\ I5063 & 20h52m02.3s & -57d04m08s & 1.11 & 3.94 & 5.87 & 4.25 & 10.87 & 0.011 & Sy 2 & 30572\ UGC11680 & 21h07m43.6s & +03d52m30s & 0.37 & 0.86 & 2.97 & 5.59 & 11.23 & 0.026 & Sy 2 & 3269\ NGC7130 & 21h48m19.5s & -34d57m05s & 0.58 & 2.16 & 16.71 & 25.89 & 11.38 & 0.016 & Sy 2 & 3269\ NGC7172 & 22h02m01.9s & -31d52m11s & 0.42 & 0.88 & 5.76 & 12.42 & 10.47 & 0.009 & Sy 2 & 30572\ NGC7213 & 22h09m16.2s & -47d10m00s & 0.65 & 0.81 & 2.70 & 8.99 & 10.01 & 0.006 & Sy 1 & 86\ NGC7314 & 22h35m46.2s & -26d03m01s & 0.55 & 0.96 & 5.24 & 16.57 & 10.00 & 0.005 & Sy 1 & 30572\ M-3-58-7 & 22h49m37.1s & -19d16m26s & 0.25 & 0.98 & 2.60 & 3.62 & 11.30 & 0.031 & Sy 2 & 3269\ NGC7469 & 23h03m15.6s & +08d52m26s & 1.59 & 5.96 & 27.33 & 35.16 & 11.65 & 0.016 & Sy 1 & 3269\ NGC7496 & 23h09m47.3s & -43d25m41s & 0.58 & 1.93 & 10.14 & 16.57 & 10.28 & 0.006 & Sy 2 & 3269\ NGC7582 & 23h18m23.5s & -42d22m14s & 2.30 & 7.39 & 52.20 & 82.86 & 10.91 & 0.005 & Sy 2 & 3269\ NGC7590 & 23h18m54.8s & -42d14m21s & 0.69 & 0.89 & 7.69 & 20.79 & 10.19 & 0.005 & Sy 2 & 3269\ NGC7603 & 23h18m56.6s & +00d14m38s & 0.40 & 0.24 & 1.25 & 2.00 & 11.05 & 0.030 & Sy 1 & 3269\ NGC7674 & 23h27m56.7s & +08d46m45s & 0.68 & 1.92 & 5.36 & 8.33 & 11.57 & 0.029 & Sy 2 & 3269\ CGCG381-051 & 23h48m41.7s & +02d14m23s & 0.51 & 0.18 & 1.75 & 2.76 & 11.19 & 0.031 & Sy 2 & 3269\ [lccccccc]{} Mrk335 & $<$0.074 & $<$5.17 & $<$0.039 & $<$1.33 & 0.157 & 20.4$\times$15.3 & 11.3$\times$8.4\ Mrk938 & 0.440$\pm$0.018 & 33.5$\pm$0.9 & 0.690$\pm$0.042 & 22.1$\pm$0.7 & -0.991 & 20.4$\times$15.3 
& 8.6$\times$6.5\ E12-G21 & 0.278$\pm$0.017 & 10.9$\pm$0.3 & 0.325$\pm$0.005 & 7.53$\pm$0.03 & -0.022 & 20.4$\times$15.3 & 13.0$\times$9.8\ Mrk348 & $<$0.083 & $<$5.78 & 0.058$\pm$0.013 & 2.32$\pm$0.50 & -0.333 & 20.4$\times$15.3 & 6.4$\times$4.8\ IZw1 & $<$0.018 & $<$2.90 & 0.018$\pm$0.002 & 2.10$\pm$0.27 & 0.284 & 10.2 & 13.6\ NGC424 & $<$0.024 & $<$6.95 & 0.010$\pm$0.001 & 1.53$\pm$0.09 & -0.111 & 20.4$\times$15.3 & 5.1$\times$3.9\ NGC526A & $<$0.240 & $<$1.60 & $<$0.012 & $<$0.62 & 0.033 & 10.2 & 4.1\ NGC513 & 0.334$\pm$0.107 & 9.50$\pm$1.5 & 0.474$\pm$0.033 & 9.35$\pm$0.37 & 0.149 & 20.4$\times$15.3 & 8.6$\times$6.5\ F01475-0740 & $<$0.177 & $<$4.74 & 0.081$\pm$0.009 & 2.75$\pm$0.22 & 0.188 & 20.4$\times$15.3 & 7.7$\times$5.8\ NGC931 & $<$0.060 & $<$7.47 & 0.065$\pm$0.002 & 4.53$\pm$0.20 & -0.026 & 20.4$\times$15.3 & 7.3$\times$5.5\ NGC1056 & 0.486$\pm$0.125 & 26.9$\pm$3.6 & 0.803$\pm$0.039 & 22.4$\pm$0.5 & 0.084 & 20.4$\times$15.3 & 2.1$\times$1.6\ NGC1097 & 0.327$\pm$0.003 & 103$\pm$3 & 0.657$\pm$0.001 & 116$\pm$1 & 0.099 & 50$\times$33 & 4.2$\times$2.8\ NGC1125 & 0.258$\pm$0.049 & 7.75$\pm$1.17 & 0.426$\pm$0.025 & 7.16$\pm$0.15 & -1.022 & 20.4$\times$15.3 & 4.7$\times$3.5\ NGC1143/4 & 0.343$\pm$0.067 & 18.2$\pm$1.5 & 0.574$\pm$0.028 & 13.0$\pm$0.3 & & 19.8$\times$16.2 & 12.2$\times$10.0\ M-2-8-39 & $<$0.128 & $<$2.98 & $<$0.047 & $<$1.19 & -0.076 & 20.4$\times$15.3 & 12.6$\times$9.4\ NGC1194 & $<$0.059 & $<$5.05 & $<$0.077 & $<$2.47 & -0.978 & 20.4$\times$15.3 & 6.0$\times$4.5\ NGC1241 & 0.461$\pm$0.024 & 8.13$\pm$0.34 & 0.501$\pm$0.026 & 5.01$\pm$0.01 & -0.908 & 20.4$\times$15.3 & 6.0$\times$4.5\ NGC1320 & 0.082$\pm$0.019 & 6.86$\pm$1.24 & 0.074$\pm$0.003 & 4.60$\pm$0.11 & -0.065 & 20.4$\times$15.3 & 3.8$\times$2.9\ NGC1365 & 0.368$\pm$0.003 & 173$\pm$1 & 0.432$\pm$0.015 & 120$\pm$3 & -0.229 & 20.4$\times$15.3 & 2.1$\times$1.6\ NGC1386 & 0.053$\pm$0.022 & 5.89$\pm$1.81 & 0.133$\pm$0.002 & 9.13$\pm$0.07 & -0.542 & 20.4$\times$15.3 & 1.3$\times$1.0\ 
F03450+0055 & $<$0.103 & $<$6.58 & $<$0.038 & $<$1.62 & 0.027 & 20.4$\times$15.3 & 13.5$\times$10.1\ NGC1566 & 0.223$\pm$0.034 & 27.6$\pm$2.8 & 0.470$\pm$0.004 & 32.2$\pm$0.4 & 0.105 & 50$\times$33 & 5.2$\times$3.4\ 3C120 & $<$0.015 & $<$1.35 & 0.014$\pm$0.002 & 0.904$\pm$0.142 & 0.130 & 10.2 & 7.2\ F04385-0828 & $<$0.058 & $<$8.03 & 0.036$\pm$0.011 & 2.16$\pm$0.62 & -0.766 & 20.4$\times$15.3 & 6.4$\times$4.8\ NGC1667 & 0.391$\pm$0.073 & 17.3$\pm$1.5 & 0.731$\pm$0.091 & 15.6$\pm$1.0 & -0.050 & 20.4$\times$15.3 & 6.4$\times$4.8\ E33-G2 & $<$0.102 & $<$6.25 & $<$0.076 & $<$2.70 & -0.247 & 20.4$\times$15.3 & 7.7$\times$5.8\ M-5-13-17 & 0.200$\pm$0.004 & 6.65$\pm$0.06 & 0.193$\pm$0.047 & 5.22$\pm$0.92 & -0.206 & 20.4$\times$15.3 & 5.1$\times$3.9\ F05189-2524 & 0.037$\pm$0.001 & 6.54$\pm$0.16 & 0.062$\pm$0.003 & 8.02$\pm$0.35 & -0.315 & 10.2 & 9.3\ Mrk6 & $<$0.097 & $<$6.32 & 0.044$\pm$0.001 & 1.59$\pm$0.03 & -0.036 & 20.4$\times$15.3 & 8.2$\times$6.1\ Mrk9 & $<$0.142 & $<$8.12 & 0.116$\pm$0.030 & 3.54$\pm$0.82 & 0.050 & 20.4$\times$15.3 & 17.5$\times$13.1\ Mrk79 & $<$0.039 & $<$4.05 & 0.043$\pm$0.011 & 2.22$\pm$0.54 & -0.079 & 20.4$\times$15.3 & 9.5$\times$7.1\ F07599+6508 & 0.027$\pm$0.001 & 3.36$\pm$0.14 & 0.018$\pm$0.001 & 0.97$\pm$0.06 & 0.113 & 10.2 & 35.0\ NGC2639 & $<$0.207 & $<$5.40 & 0.530$\pm$0.036 & 4.70$\pm$0.19 & -0.127 & 20.4$\times$15.3 & 4.7$\times$3.5\ F08572+3915 & $<$0.021 & $<$5.25 & $<$0.099 & $<$2.36 & -3.509 & 10.2 & 12.9\ Mrk704 & $<$0.071 & $<$7.38 & $<$0.029 & $<$1.58 & -0.075 & 20.4$\times$15.3 & 12.6$\times$9.4\ UGC5101 & 0.229$\pm$0.004 & 12.3$\pm$0.2 & 0.423$\pm$0.007 & 10.1$\pm$0.1 & -1.619 & 10.2 & 8.6\ NGC2992 & 0.151$\pm$0.006 & 15.3$\pm$0.2 & 0.237$\pm$0.014 & 15.8$\pm$0.6 & -0.200 & 20.4$\times$15.3 & 3.4$\times$2.6\ Mrk1239 & $<$0.029 & $<$7.18 & 0.027$\pm$0.001 & 3.19$\pm$0.20 & 0.010 & 20.4$\times$15.3 & 12.6$\times$9.4\ NGC3031 & $<$0.034 &$<$16.8 & 0.183$\pm$0.002 & 23.2$\pm$0.2 & -0.035 & 50$\times$33 & 1.0$\times$0.7\ 3C234 & 
$<$0.019 & $<$0.82 & $<$0.013 & $<$0.45 & -0.007 & 10.2 & 44.6\ NGC3079 & 0.458$\pm$0.006 & 111$\pm$2 & 0.818$\pm$0.050 & 62.6$\pm$2.0 & -0.828 & 20.4$\times$15.3 & 1.7$\times$1.3\ NGC3227 & 0.138$\pm$0.003 & 24.0$\pm$0.5 & 0.249$\pm$0.007 & 29.0$\pm$0.6 & -0.234 & 10.2 & 0.8\ NGC3511 & 0.638$\pm$0.168 & 18.0$\pm$1.5 & 0.764$\pm$0.046 & 12.7$\pm$0.4 & 0.009 & 20.4$\times$15.3 & 1.7$\times$1.3\ NGC3516 & $<$0.061 & $<$6.57 & 0.024$\pm$0.004 & 1.22$\pm$0.24 & -0.158 & 20.4$\times$15.3 & 3.8$\times$2.9\ M+0-29-23 & 0.437$\pm$0.099 & 21.0$\pm$2.5 & 0.619$\pm$0.082 & 16.7$\pm$1.1 & -0.507 & 20.4$\times$15.3 & 10.8$\times$8.1\ NGC3660 & $<$0.434 & $<$5.10 & 0.539$\pm$0.100 & 2.97$\pm$0.34 & -0.020 & 20.4$\times$15.3 & 5.1$\times$3.9\ NGC3982 & 0.467$\pm$0.058 & 14.4$\pm$1.1 & 0.787$\pm$0.049 & 14.0$\pm$0.3 & 0.110 & 20.4$\times$15.3 & 1.7$\times$1.3\ NGC4051 & 0.079$\pm$0.015 & 9.20$\pm$1.20 & 0.114$\pm$0.002 & 9.99$\pm$0.13 & 0.065 & 20.4$\times$15.3 & 0.8$\times$0.6\ UGC7064 & 0.364$\pm$0.015 & 7.90$\pm$0.21 & 0.527$\pm$0.055 & 7.87$\pm$0.43 & -0.031 & 20.4$\times$15.3 & 10.8$\times$8.1\ NGC4151 & $<$0.011 & $<$6.98 & 0.011$\pm$0.002 & 4.82$\pm$0.91 & 0.030 & 10.2 & 0.7\ Mrk766 & 0.036$\pm$0.013 & 3.08$\pm$1.06 & 0.073$\pm$0.023 & 4.38$\pm$1.21 & -0.274 & 20.4$\times$15.3 & 5.6$\times$4.2\ NGC4388 & 0.128$\pm$0.013 & 15.0$\pm$1.2 & 0.203$\pm$0.046 & 14.3$\pm$2.5 & -0.699 & 20.4$\times$15.3 & 3.4$\times$2.6\ 3C273 & $<$0.013 & $<$2.17 & $<$0.014 & $<$1.03 & 0.064 & 10.2 & 37.6\ NGC4501 & $<$0.187 & $<$6.96 & 0.629$\pm$0.051 & 6.91$\pm$0.26 & -0.232 & 20.4$\times$15.3 & 3.4$\times$2.6\ NGC4579 & 0.152$\pm$0.039 & 14.4$\pm$2.6 & 0.262$\pm$0.054 & 8.86$\pm$1.43 & 0.218 & 50$\times$33 & 5.3$\times$3.5\ NGC4593 & 0.044$\pm$0.005 & 4.82$\pm$0.57 & 0.089$\pm$0.014 & 6.10$\pm$0.85 & 0.063 & 20.4$\times$15.3 & 3.8$\times$2.9\ NGC4594 & 0.052$\pm$0.002 & 13.1$\pm$0.5 & 0.161$\pm$0.024 & 7.56$\pm$0.97 & -0.036 & 50$\times$33 & 3.6$\times$2.3\ NGC4602 & 0.326$\pm$0.194 & 
4.61$\pm$1.54 & 0.598$\pm$0.038 & 5.21$\pm$0.17 & -0.010 & 20.4$\times$15.3 & 3.4$\times$2.6\ Tol1238-364 & 0.185$\pm$0.003 & 15.1$\pm$0.2 & 0.194$\pm$0.005 & 14.6$\pm$0.3 & -0.312 & 20.4$\times$15.3 & 4.7$\times$3.5\ M-2-33-34 & $<$0.304 & $<$5.11 & 0.205$\pm$0.016 & 3.01$\pm$0.17 & -0.209 & 20.4$\times$15.3 & 6.4$\times$4.8\ Mrk231 & 0.011$\pm$0.001 & 7.50$\pm$0.39 & 0.028$\pm$0.001 & 8.77$\pm$0.35 & -0.640 & 10.2 & 9.2\ NGC4922 & 0.152$\pm$0.008 & 7.22$\pm$0.35 & 0.122$\pm$0.006 & 5.82$\pm$0.29 & & 10.2 & 5.1\ NGC4941 & $<$0.041 & $<$1.06 & 0.038$\pm$0.016 & 0.82$\pm$0.34 & -0.084 & 10.2 & 0.8\ NGC4968 & 0.172$\pm$0.010 & 9.11$\pm$0.44 & 0.127$\pm$0.003 & 6.56$\pm$0.13 & -0.206 & 20.4$\times$15.3 & 4.3$\times$3.2\ NGC5005 & 0.192$\pm$0.015 & 18.2$\pm$1.3 & 0.735$\pm$0.018 & 28.8$\pm$0.1 & -0.425 & 20.4$\times$15.3 & 1.3$\times$1.0\ NGC5033 & 0.319$\pm$0.060 & 76.8$\pm$8.8 & 0.764$\pm$0.065 & 84.4$\pm$3.3 & -0.116 & 50$\times$33 & 3.1$\times$2.1\ M-3-34-63 & 0.915$\pm$0.325 & 4.27$\pm$0.61 & 1.128$\pm$0.215 & 3.24$\pm$0.23 & 0.090 & 20.4$\times$15.3 & 9.1$\times$6.8\ NGC5135 & 0.384$\pm$0.032 & 41.6$\pm$1.9 & 0.594$\pm$0.039 & 39.2$\pm$1.3 & -0.367 & 20.4$\times$15.3 & 6.0$\times$4.5\ NGC5194 & 0.372$\pm$0.014 & 111$\pm$3 & 0.686$\pm$0.010 & 118$\pm$1 & 0.109 & 50$\times$33 & 2.1$\times$1.4\ M-6-30-15 & $<$0.063 & $<$6.33 & 0.052$\pm$0.007 & 3.01$\pm$0.40 & -0.108 & 20.4$\times$15.3 & 3.4$\times$2.6\ F13349+2438 & $<$0.008 & $<$2.06 & $<$0.013 & $<$1.69 & 0.038 & 10.2 & 24.7\ NGC5256 & 0.608$\pm$0.045 & 19.9$\pm$0.9 & 0.545$\pm$0.068 & 10.8$\pm$0.9 & -0.692 & 20.4$\times$15.3 & 12.1$\times$9.1\ Mrk273 & 0.192$\pm$0.003 & 13.0$\pm$0.2 & 0.335$\pm$0.006 & 8.35$\pm$0.12 & -1.746 & 10.2 & 8.2\ I4329A & $<$0.020 & $<$6.20 & $<$0.016 & $<$2.85 & -0.077 & 20.4$\times$15.3 & 6.9$\times$5.1\ NGC5347 & $<$0.112 & $<$4.37 & 0.068$\pm$0.003 & 3.29$\pm$0.15 & -0.251 & 20.4$\times$15.3 & 3.4$\times$2.6\ Mrk463 & $<$0.008 & $<$1.79 & 0.024$\pm$0.002 & 2.78$\pm$0.22 & -0.464 & 
10.2 & 11.1\ NGC5506 & 0.023$\pm$0.001 & 10.8$\pm$0.3 & 0.060$\pm$0.004 & 9.80$\pm$0.67 & -0.852 & 20.4$\times$15.3 & 2.6$\times$1.9\ NGC5548 & 0.018$\pm$0.005 & 1.08$\pm$0.32 & 0.047$\pm$0.007 & 2.71$\pm$0.38 & 0.040 & 10.2 & 3.7\ Mrk817 & $<$0.109 & $<$7.60 & $<$0.035 & $<$1.72 & -0.032 & 20.4$\times$15.3 & 13.5$\times$10.1\ NGC5929 & $<$0.480 & $<$4.44 & 0.775$\pm$0.087 & 3.50$\pm$0.22 & 0.245 & 20.4$\times$15.3 & 3.4$\times$2.6\ NGC5953 & 0.684$\pm$0.039 & 52.9$\pm$1.4 & 0.877$\pm$0.050 & 40.7$\pm$0.1 & -0.068 & 20.4$\times$15.3 & 3.0$\times$2.2\ Arp220 & 0.344$\pm$0.006 & 22.2$\pm$0.3 & 0.552$\pm$0.023 & 15.4$\pm$0.6 & -2.543 & 10.2 & 3.9\ M-2-40-4 & 0.066$\pm$0.002 & 8.25$\pm$0.17 & 0.119$\pm$0.004 & 8.29$\pm$0.21 & -0.068 & 20.4$\times$15.3 & 10.8$\times$8.1\ F15480-0344 & $<$0.190 & $<$5.29 & 0.075$\pm$0.016 & 2.45$\pm$0.48 & -0.159 & 20.4$\times$15.3 & 13.0$\times$9.8\ F19254-7245 & 0.064$\pm$0.002 & 5.09$\pm$0.18 & 0.134$\pm$0.006 & 5.06$\pm$0.22 & -1.345 & 10.2 & 13.7\ NGC6810 & 0.419$\pm$0.020 & 56.1$\pm$1.4 & 0.463$\pm$0.002 & 46.4$\pm$0.3 & -0.158 & 20.4$\times$15.3 & 3.0$\times$2.2\ NGC6860 & 0.084$\pm$0.018 & 6.32$\pm$1.28 & 0.095$\pm$0.008 & 3.59$\pm$0.28 & 0.005 & 20.4$\times$15.3 & 6.4$\times$4.8\ NGC6890 & 0.237$\pm$0.020 & 9.66$\pm$0.57 & 0.277$\pm$0.028 & 8.33$\pm$0.58 & -0.054 & 20.4$\times$15.3 & 3.4$\times$2.6\ Mrk509 & 0.042$\pm$0.002 & 4.92$\pm$0.19 & 0.068$\pm$0.002 & 4.77$\pm$0.16 & -0.002 & 10.2 & 7.5\ I5063 & 0.011$\pm$0.002 & 2.14$\pm$0.34 & 0.019$\pm$0.002 & 3.96$\pm$0.40 & -0.263 & 10.2 & 2.4\ UGC11680 & $<$0.334 & $<$7.77 & 0.166$\pm$0.019 & 2.50$\pm$0.25 & 0.142 & 20.4$\times$15.3 & 11.3$\times$8.4\ NGC7130 & 0.493$\pm$0.044 & 32.7$\pm$1.9 & 0.430$\pm$0.001 & 25.2$\pm$0.1 & -0.227 & 20.4$\times$15.3 & 6.9$\times$5.2\ NGC7172 & 0.045$\pm$0.001 & 9.25$\pm$0.29 & 0.204$\pm$0.009 & 7.52$\pm$0.05 & -1.795 & 10.2 & 1.9\ NGC7213 & 0.022$\pm$0.002 & 1.73$\pm$0.20 & 0.037$\pm$0.007 & 2.88$\pm$0.51 & 0.236 & 10.2 & 1.2\ NGC7314 & 
0.063$\pm$0.007 & 1.63$\pm$0.18 & 0.087$\pm$0.013 & 1.87$\pm$0.27 & -0.476 & 10.2 & 1.0\ M-3-58-7 & 0.074$\pm$0.002 & 5.58$\pm$0.13 & 0.099$\pm$0.024 & 5.03$\pm$1.10 & -0.047 & 20.4$\times$15.3 & 13.5$\times$10.1\ NGC7469 & 0.293$\pm$0.003 & 60.8$\pm$0.5 & 0.288$\pm$0.012 & 45.4$\pm$1.2 & -0.159 & 20.4$\times$15.3 & 6.9$\times$5.2\ NGC7496 & 0.912$\pm$0.189 & 44.6$\pm$4.5 & 0.590$\pm$0.002 & 21.6$\pm$0.1 & -0.626 & 20.4$\times$15.3 & 2.6$\times$1.9\ NGC7582 & 0.274$\pm$0.023 & 101$\pm$5 & 0.457$\pm$0.006 & 77.1$\pm$0.3 & -0.833 & 20.4$\times$15.3 & 2.2$\times$1.6\ NGC7590 & 0.496$\pm$0.005 & 14.0$\pm$0.1 & 0.854$\pm$0.082 & 13.0$\pm$0.5 & 0.023 & 20.4$\times$15.3 & 2.2$\times$1.6\ NGC7603 & 0.056$\pm$0.004 & 7.08$\pm$0.53 & 0.121$\pm$0.006 & 6.22$\pm$0.24 & 0.211 & 20.4$\times$15.3 & 13.0$\times$9.8\ NGC7674 & 0.132$\pm$0.006 & 14.2$\pm$0.4 & 0.129$\pm$0.013 & 10.0$\pm$0.8 & -0.215 & 20.4$\times$15.3 & 12.6$\times$9.4\ CGCG381-051 & 0.542$\pm$0.031 & 5.07$\pm$0.16 & 0.211$\pm$0.037 & 4.04$\pm$0.47 & 0.316 & 20.4$\times$15.3 & 13.5$\times$10.1\ [lcccc]{} Whole sample & 0.154$\pm$0.216 & 47 & 0.352$\pm$0.312 & 56\ Whole sample & 0.205$\pm$0.215 & 37 & 0.384$\pm$0.299 & 52\ L$_{\rm IR}$$<$10$^{11}$L$_\odot$ & 0.171$\pm$0.235 & 29 & 0.356$\pm$0.318 & 35\ L$_{\rm IR}$$\ge$10$^{11}$L$_\odot$& 0.127$\pm$0.184 & 18 & 0.345$\pm$0.310 & 21\ L$_{\rm IR}$$<$10$^{11}$L$_\odot$ & 0.217$\pm$0.232 & 24 & 0.395$\pm$0.303 & 32\ L$_{\rm IR}$$\ge$10$^{11}$L$_\odot$& 0.185$\pm$0.188 & 13 & 0.367$\pm$0.300 & 20\ F25/F60$>$0.2 & 0.068$\pm$0.116 & 33 & 0.143$\pm$0.237 & 30\ F25/F60$\le$0.2 & 0.358$\pm$0.262 & 14 & 0.592$\pm$0.191 & 26\ F25/F60$>$0.2 & 0.102$\pm$0.119 & 24 & 0.177$\pm$0.237 & 26\ F25/F60$\le$0.2 & 0.397$\pm$0.227 & 13 & 0.592$\pm$0.191 & 26\ [^1]: The IRS was a collaborative venture between Cornell University and Ball Aerospace Corporation funded by NASA through the Jet Propulsion Laboratory and the Ames Research Center. 
[^2]: The 12$\mu$m Seyferts not included in this study due to lack of Spitzer/IRS SL spectra are 6 Sy 1s (Mrk1034, M-3-7-11, Mrk618, F05563-3820, F15091-2107, E141-G55) and 7 Sy 2s (F00198-7926, F00521-7054, E541-IG12, NGC1068, F03362-1642, E253-G3, F22017+0319). [^3]: The SINGS data products are available at: http://data.spitzer.caltech.edu/popular/sings/. The nuclear spectra were extracted over a 50$\arcsec$$\times$33$\arcsec$ region on the nucleus. [^4]: The three exceptions are NGC1097, NGC1566, NGC5033, and they were also observed by SINGS. [^5]: For two Sy 2s NGC1143/4 and NGC4922, only SL data were available, thus only 5–15$\mu$m spectra were obtained. [^6]: All spectra are also available in electronic format. [^7]: However, some objects, such as Mrk335, Mrk704 and 3C234, do have IRAS ratios of F$_{\rm 25}$/F$_{\rm 60}>$1. [^8]: The spectral index $\alpha$ is defined as log(F1/F2)/log($\nu1$/$\nu2$). [^9]: We choose to use $\alpha_{15-30}$ so that we can directly make use of the spectral index measurement for the starburst galaxies in the @Brandl06 sample. [^10]: We have excluded the 6 galaxies that have AGN signatures from the @Brandl06 starburst sample. [^11]: According to the IRAS Explanatory Supplement Document for unenhanced coadded IRAS images, the resolution is approximately 1’$\times$ 5’, 1’$\times$ 5’, 2’$\times$ 5’ and 4’$\times$ 5’ at 12, 25, 60 and 100$\mu$m, respectively (see http://lambda.gsfc.nasa.gov/product/iras/docs/exp.sup). [^12]: A similar plot using the 6.2$\mu$m PAH EWs was proposed by @Spoon07 as a mid-IR galaxy classification method. [^13]: The values of the hydrogen column density have been taken from @Markwardt05 [@Bassani06; @Sazonov07; @Shu07]. [^14]: Calculated from the [*IRAS*]{} flux densities following the prescription of @Sanders96: L$_{\rm IR}$=5.6$\times$10$^5$D$_{Mpc}^2$($13.48S_{12}+5.16S_{25}+2.58S_{60}+S_{100}$). 
[^15]: According to @Spinoglio95, the bolometric luminosity is derived by combining the blue photometry, the near-IR and FIR luminosities, as well as an estimate of the flux contribution from cold dust longward of 100$\mu$m. [^16]: Calculated from the [*IRAS*]{} flux densities following the prescription of @Sanders96: L$_{\rm FIR}$=5.6$\times$10$^5$D$_{Mpc}^2$($2.58S_{60}+S_{100}$). [^17]: A similar approach using Spitzer broad band filters was used successfully by @Engelbracht08 to estimate the PAH contribution in starburst galaxies.
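The IRAS luminosity prescriptions and the spectral-index definition quoted in the footnotes above can be collected into a short sketch. The footnotes do not state units, so this assumes the usual convention for the Sanders & Mirabel prescription: flux densities in Jy, distance in Mpc, luminosities in L$_\odot$.

```python
import math

def l_ir(d_mpc, s12, s25, s60, s100):
    """Total IR luminosity (L_sun): 5.6e5 * D^2 * (13.48 S12 + 5.16 S25 + 2.58 S60 + S100)."""
    return 5.6e5 * d_mpc**2 * (13.48 * s12 + 5.16 * s25 + 2.58 * s60 + s100)

def l_fir(d_mpc, s60, s100):
    """Far-IR luminosity (L_sun): 5.6e5 * D^2 * (2.58 S60 + S100)."""
    return 5.6e5 * d_mpc**2 * (2.58 * s60 + s100)

def spectral_index(f1, f2, nu1, nu2):
    """Spectral index alpha = log(F1/F2) / log(nu1/nu2), as in footnote 8."""
    return math.log10(f1 / f2) / math.log10(nu1 / nu2)
```

For example, a galaxy at 10 Mpc with S$_{60}$ = 5 Jy and S$_{100}$ = 10 Jy has `l_fir(10, 5, 10)` $\approx 1.3\times10^{9}$ L$_\odot$.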
--- abstract: 'The primary goal of this paper is to make a direct comparison between the measured and model-predicted abundances of He, C and N in a sample of 35 well-observed Galactic planetary nebulae (PN). All observations, data reductions, and abundance determinations were performed in house to ensure maximum homogeneity. Progenitor star masses ($M\le4~M_{\odot}$) were inferred using two published sets of post-AGB model tracks and L and T$_{eff}$ values. We conclude the following: 1) the mean values of N/O across the progenitor mass range exceed the solar value, indicating significant N enrichment in the majority of our objects; 2) the onset of hot bottom burning appears to begin around 2 M$_{\odot}$, i.e., lower than the $\sim5~M_{\odot}$ implied by theory; 3) most of our objects show a clear He enrichment, as expected from dredge-up episodes; 4) the average sample C/O value is 1.23, consistent with the effects of third dredge-up; and 5) model grids used to compare to observations successfully span the distribution over metallicity space of all C/O and many He/H data points but mostly fail to do so in the case of N/O. The evident enrichment of N in PN and the general discrepancy between the observed and model-predicted N/O abundance ratios signal the need for extra-mixing as an effect of rotation and/or thermohaline mixing in the models. The unexpectedly high N enrichment that is implied here for low mass stars, if confirmed, will likely impact our conclusions about the source of N in the Universe.' author: - 'R.B.C. Henry' - 'B.G. Stephenson' - 'M.M. Miller Bertolami' - 'K.B. Kwitter' - 'B. Balick' title: 'On the Production of He, C and N by Low and Intermediate Mass Stars: A Comparison of Observed and Model-Predicted Planetary Nebula Abundances' --- Introduction ============ Galaxies evolve chemically because hydrogen-rich interstellar material forms stars which subsequently convert a fraction of the hydrogen into heavier elements. 
These nuclear products are expelled into the interstellar medium and thereby enrich it. As this cycle is continuously repeated, the mass fraction of metals rises. Additional factors which influence the metal abundances in galaxies include the exchange of gas with the intergalactic medium via inflow and outflow. A crucial component for understanding the rate at which the interstellar abundance of a specific element rises over time is the amount of the element that is synthesized and expelled by a star of a specific mass during its lifetime, i.e., the stellar yield. Generally, stellar yields are estimated by computing stellar evolution models that predict them. These models are constrained using elemental abundance measurements of the material that is cast off from the star in the form of winds propelled by radiation pressure, periodic expulsions by stellar pulsations, or sudden ejection caused by explosions. In the current study, we are interested in the production of He, C and N by low and intermediate mass stars (LIMS), that is, those stars typically considered to occupy the mass range of 1-8 M$_{\odot}$. Stellar models suggest that internal temperatures become sufficiently high either in the cores or outer shells of these stars to drive not only the conversion of H to He via the proton-proton chain reactions, but also the triple alpha process as well as the CN(O) cycle to produce C and N, respectively. Observationally, there is overwhelming evidence that LIMS do indeed synthesize and eventually expel measurable amounts of elements such as He, C, N and perhaps O, as well as s-process elements \[see articles by @herwig05, @kwitter12, @karakas14, @delgado16, @maciel17 and @sterling17\]. However, the impact that LIMS actually have, relative to massive stars, on the chemical evolution of these elements in a galaxy is still very much open for debate. 
The material that is cast off by LIMS during and after the AGB stage in the form of winds of varying speeds can subsequently form large-scale density enhancements and become photoionized by the UV photons produced by the hot, shrinking stellar remnant, forming a planetary nebula (PN). The photon energy absorbed by the nebula results in the production of detectable emission lines that can be analyzed in detail to infer abundance, temperature and density information about the PN. PN abundance patterns reflect the nature of the chemical composition of the LIMS atmospheres at the end of stellar evolution and are therefore useful in two ways. First, the abundances of alpha, Fe-peak and r-process elements relative to H, especially O/H, Ne/H, S/H, Ar/H and Cl/H in PN, evidently represent the levels of these elements that were present in the interstellar material out of which the progenitor star formed. This conclusion is strongly supported by a recent study by @maciel17. This team has recently compiled and analyzed a database containing abundance measurements of 1318 PN along with a second database containing similar information about 936 H II regions, the latter objects representing the current ISM abundance picture. Through the use of histograms and scatter plots, the authors show that both object types exhibit the same lockstep behavior of Ne/H, S/H and Ar/H, all versus O/H[^1]. This familiar result strongly supports the idea that LIMS do not themselves alter the levels of the alpha elements that were present in the interstellar material out of which they formed. As a result, PN can be used as probes of ISM conditions at the time of progenitor star formation[^2]. Second, and more relevant to our current study, elements such as He, C, N and s-process elements are found to be enriched in PN, and so measurements of their abundances provide valuable information about the nucleosynthesis that occurs during the lifetime of PN progenitor stars. 
Figures \[he2hvo2h\_BIG\], \[c2ovo2h\_BIG\] and \[n2ovo2h\_BIG\] are plots showing He/H, log(C/O) and log(N/O), respectively, versus 12+log(O/H), where O/H is taken as the gauge of overall metallicity. Each plot contrasts the values for PN (open symbols) with analogous values of objects such as H II regions and F and G dwarfs, all of which measure the interstellar values of the two ratios involved either currently (H II regions) or at the time of their formation (stars). Original data for the MWG Disk PN points in these three figures can be found in @henry00 [@henry04; @milingo10; @kwitter12; @dufour15]. ![He/H versus 12+log(O/H). Open black circles refer to PN located in the MWG disk and taken from our extended sample, filled red squares represent Galactic H II regions from @deharveng00 and the filled magenta triangle shows the solar position [@asplund09]. The position of Orion [@esteban04] is indicated.[]{data-label="he2hvo2h_BIG"}](f1he2hvo2h_BIG.ps){width="6in"} ![log(C/O) versus 12+log(O/H). Open black circles refer to PN located in the MWG disk and taken from our extended sample, red filled circles represent H II regions from Garnett (1995, 1997, 1999), MWG disk stars from @gustafsson99 are shown with blue filled circles, MWG metal poor halo stars from @akerman04 are indicated with orange filled diamonds, green open squares and diamonds indicate LMC and SMC PN from @stanghellini05 [@stanghellini09] respectively, and red filled squares correspond to low metallicity dwarf galaxies by @berg16. The maroon filled triangle and magenta filled diamond represent Orion [@esteban04] and the Sun [@asplund09], respectively.[]{data-label="c2ovo2h_BIG"}](f2c2ovo2h_BIG.ps){width="6in"} ![log(N/O) versus 12+log(O/H). 
Open black circles refer to PN located in the MWG disk and taken from our extended sample, filled blue circles represent blue compact galaxies [@izotov99], filled red circles are H II regions [@vanzee98], open green squares and open maroon diamonds are PN from the LMC and SMC, respectively, from @stasinska98, and filled orange circles are low metallicity galaxies from @izotov12. The maroon filled triangle and magenta filled diamond represent Orion [@esteban04] and the Sun [@asplund09], respectively. []{data-label="n2ovo2h_BIG"}](f3n2ovo2h_BIG.ps){width="6in"} The relatively narrow horizontal band (especially in the cases of C/O and N/O) populated by the H II regions and stars in each graph demonstrates how He/H, C/O and N/O generally behave as metallicity changes. These patterns of chemical evolution are reflections of the details of stellar evolution and nucleosynthesis, processes which apparently are universal and space invariant. Presumably, when the progenitor stars of the PN in the plots began their lives on the main sequence, they were located along these bands at a position near the PN’s current O/H value. PN values of He/H, C/O and N/O clearly fall above these bands in nearly every case, strongly suggesting that [*He, C, and N have been significantly enriched by nucleosynthesis in nearly all progenitor stars over their lifetimes*]{}. High PN values for these three abundance ratios have been observed previously. For example, @henry90 compiled the He/H and log(N/O) measurements by @aller83 [@aller87] for 84 Galactic PN and found the average values of these two ratios to be 0.11 and -0.38, respectively. From their large sample of southern PN, @kb94 found similar average values for He/H and log(N/O) of 0.115 and -0.33, respectively. In the case of C/O, the log of our average value for objects in the current study (see Table \[abuns\]) is log(C/O)=0.088 compared with 0.06 from @kb94 (see their Table 14)[^3]. 
In addition, simple eyeball comparisons of the ranges of all three ratios shown in @henry90 with @henry90err [erratum], the figures in @kb94 and our Figs. \[he2hvo2h\_BIG\]-\[n2ovo2h\_BIG\] in the current paper show good consistency among these studies and reinforce the point that these ratios in PN are generally enhanced relative to the levels found in H II regions of similar metallicity. Regarding the apparent N enhancement in PN in particular, theory predicts that the hot bottom burning process that explains the extent of the enrichment occurs in the AGB stage of stars whose progenitors were at least 3-4 M$_{\odot}$ depending upon the star’s metallicity. Yet, based upon the properties of the stellar initial mass function, we also know that in the absence of an unknown selection effect, most of the PN included in these figures must be the products of relatively low mass progenitors, i.e., 1-2 M$_{\odot}$, and should therefore show very little N enrichment. How can we reconcile this observational result with theory? [*The purpose of our investigation here is to confront recently published stellar model predictions of PN abundances with the observed abundances of He, C, N and O.*]{} We consider the models of four different research groups and evaluate each set of models based upon how well they appear to explain the observed abundances. Since these same models also predict the total stellar yield of each element, only a fraction of which is present in the visible nebula, our results can be used to assess the relevance of the yield predictions for use in chemical evolution models. Previous studies comparing model predictions and observations have been carried out by @marigo03 [@marigo11], @stanghellini09, @delgado15, @ventura15, @lugaro16, and @garcia16. 
The principal method of comparison for these studies features plots of two different element-to-element ratios, e.g., C/O versus N/O, showing both the observed abundances and model tracks computed for a range of stellar masses. Most authors find that abundance trends involving He, C and N can be explained by various amounts of 3rd dredge-up, which elevates C, and hot bottom burning, which does likewise to N. However, explanations of PN abundance patterns based upon progenitor masses are typically not included. Our study augments earlier analyses by also considering each of the ratios of He/H, C/O or N/O separately [*as a function of an object’s progenitor mass*]{}. The sample of PN abundances which we compare to model predictions consists of 35 objects that have previously been observed and analyzed by our group. We have observed all objects in the optical with ground-based telescopes and 13 out of the 35 PN in the UV using either IUE or HST. We describe the PN sample in detail in section 2. Our methods for determining the necessary abundances and progenitor mass for each object are provided in section 3. A description of each stellar modelling code used to predict the PN abundances and stellar yields of He, C and N, along with an analysis of our comparison of theory and observation, is presented in section 4. Our summary and conclusions appear in section 5. Object Sample ============= For nearly 25 years our team has been building a spectroscopic database comprising 166 planetary nebulae located primarily in the disk and halo of the Milky Way Galaxy. While a vast majority of the observations have necessarily been restricted to the optical region of the spectrum, i.e., 3700Å to 10,000Å, we have also collected UV data for a smaller sample using both the IUE and HST facilities. Most of these data, along with derived abundances of He, N, O, Ne, S, Ar, and in several cases C, have been published. 
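The progenitor masses used in this comparison are inferred, as described in the following paragraphs, by placing each central star in a theoretical HR diagram and interpolating between post-AGB tracks labeled by initial mass. A minimal numerical sketch of that interpolation step is shown below; the track values here are purely illustrative placeholders, not taken from any published model grid.

```python
import numpy as np

# Illustrative post-AGB tracks: initial mass (Msun) -> points of
# (log Teff, log L/Lsun) along the track.  Placeholder numbers only,
# standing in for a real set of model tracks.
TRACKS = {
    1.0: np.array([[4.6, 3.4], [4.9, 3.3], [5.1, 3.0]]),
    2.0: np.array([[4.6, 3.8], [4.9, 3.7], [5.1, 3.5]]),
    4.0: np.array([[4.6, 4.2], [4.9, 4.1], [5.1, 3.9]]),
}

def logl_on_track(mass, log_teff):
    """log L/Lsun along one track, linearly interpolated in log Teff."""
    track = TRACKS[mass]
    return np.interp(log_teff, track[:, 0], track[:, 1])

def infer_initial_mass(log_teff, log_l):
    """Evaluate every track at the star's Teff, then interpolate in log L
    between the tracks that bracket the star to estimate its initial mass."""
    masses = sorted(TRACKS)
    logl_at_teff = [logl_on_track(m, log_teff) for m in masses]
    return float(np.interp(log_l, logl_at_teff, masses))

# A star lying halfway between the 1 and 2 Msun tracks in log L:
print(infer_initial_mass(4.9, 3.5))
```

With these placeholder tracks the example star comes out at roughly 1.5 M$_\odot$; a real application would substitute published track tabulations and handle stars falling outside the grid.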
Because we are currently interested in comparing our observed CNO abundances in PN with theoretical predictions of the abundances of these same elements as a function of initial stellar mass, it is necessary to identify a subset of our database for which we can infer progenitor star masses that are based upon carefully and consistently determined central star luminosities and effective temperatures. Initial stellar masses can be derived by using published values for T$_{eff}$ and log(L/L$_{\odot}$) of each central star to place the star in a theoretical HR diagram. After plotting post-AGB evolutionary tracks labeled by mass in the same diagram, stellar masses can be inferred by interpolating between tracks[^4]. The extensive compilation of stellar data by @frew08 [Tables 9.5 and 9.6, each comprising 210 objects] was adopted as our source of T$_{eff}$ and log(L/L$_{\odot}$) for reasons of consistency. A total of 32 objects with N and O abundances from our database were also listed in the Frew paper. We have also measured C abundances using UV emission lines of C III\] $\lambda\lambda$1907,1909 for 10 of the 32 PN. Besides abundances of N and O, we have determined C abundances for three other objects in our database which are not part of the Frew list and have included these objects in order to maximize the sample size for objects with measured C abundances. Thus, our final object list contains 35 PN (about 1/5 of the objects in our original database), all of which have measured N and O abundances, including 13 objects with measured C abundances. We emphasize the fact that the spectroscopic observations of the 35 PN, as well as the data reductions and abundance analyses, were carried out exclusively by members of our team. Our final sample of 35 objects is listed in Table \[objects\][^5]. For each PN identified in column 1 we provide a morphological description in column 2 and the Peimbert type in column 3. 
Column 4 indicates the spectral range over which we have observed the object (OP=optical; IUE/HST=UV data source). Finally, columns 5 and 6 list the galactocentric distance in kiloparsecs and the vertical height in parsecs above the Galactic plane for each object. Taking the distance of the Sun from the Galactic center as 8 kpc and the scale height of the thin disk as about 350 pc, we see that most of the PN in our sample are located near the solar neighborhood and within the thin disk. We also note that while the values of the He/H, C/O and N/O abundance ratios over the MW disk are sensitive to metallicity as measured by O/H, the O/H ratio only decreases by 0.23 dex between 6 and 10 kpc in galactocentric distance, assuming an O/H gradient of -0.058 dex/kpc [@henry10]. From Figs. \[he2hvo2h\_BIG\]-\[n2ovo2h\_BIG\], this corresponds to only minor changes in He/H, C/O and N/O, and so we can ignore the effects of the disk’s metallicity gradient. [cccccc]{} FG1 & elliptical; bipolar jets & II & OP & 7.56 & 251.07\ IC2149 & round/complex & II & OP & 8.46 & 276.46\ IC2165 & elliptical & II & HST,OP & 9.98 & -536.80\ IC3568 & round & II & HST,OP & 9.53 & 1642.47\ IC418 & elliptical & II & IUE,OP & 8.92 & -493.40\ IC4593 & elliptical & II & IUE,OP & 6.94 & 1026.64\ N1501 & elliptical & II & OP & 8.59 & 82.12\ N2371 & barrel & II & OP & 9.31 & 478.51\ N2392 & elliptical & II & IUE,OP & 9.17 & 382.74\ N2438 & round & II & OP & 8.95 & 102.01\ N2440 & pinched-waist/multisymmetric & I & HST,OP & 9.23 & 80.22\ N2792 & round & II & OP & 8.39 & 144.41\ N3195 & barrel & I & OP & 7.35 & -688.73\ N3211 & elliptical & II & OP & 7.69 & -162.14\ N3242 & elliptical/shells/ansae & II & HST,OP & 8.18 & 530.62\ N3918 & barrel & II & OP & 7.42 & 151.08\ N5315 & elliptical/multisymmetric & I & HST,OP & 6.93 & -80.02\ N5882 & elliptical/shells & II & HST,OP & 6.64 & 297.52\ N6369 & round/shells & II & OP & 6.46 & 157.97\ N6445 & irregular/lobe-remnants & I & OP & 6.63 & 94.78\ N6537 & 
pinched-waist & I & OP & 6.04 & 25.83\ N6563 & elliptical/lobes & II & OP & 6.34 & -213.34\ N6567 & unresolved & II & OP & 6.36 & -19.06\ N6572 & elliptical/multisymmetric/lobes & I & OP & 6.58 & 381.92\ N6629 & elliptical/halo & II & OP & 6.04 & -176.04\ N6751 & round/flocculent & I & OP & 6.34 & -206.96\ N6804 & barrel/shell & II & OP & 7.06 & -117.63\ N6826 & elliptical/shell/halo/ansae & II & IUE,OP & 7.96 & 287.77\ N6894 & round & I & OP & 7.64 & -59.88\ N7008 & elliptical & II & OP & 8.07 & 66.97\ N7009 & elliptical/shell/halo/ansae & II & IUE,OP & 7.09 & -822.70\ N7027 & elliptical/multisymmetric/hourglass-shell/halo & II & OP & 7.97 & -51.89\ N7293 & round/shells/bowshocks & I & IUE,OP & 7.90 & -184.75\ N7354 & barrel/shell/jets & I & OP & 8.62 & 64.76\ N7662 & elliptical/shell/halo & II & HST,OP & 8.42 & -380.96\ Table \[journal\] provides the details concerning the observations of each of our 35 sample objects. The name of the PN appears in column 1. Columns 2-7 list the observation date, the telescope(s) and instrument(s) used, the exposure times for the blue and red spectra, and the offset from the central star, respectively. The relevant references for the observations are given in column 8. Beginning with our first project in 1993, all data have been reduced and measured manually by one of us (KBK) using the same techniques throughout. Uncertainties were explicitly measured and calculated in our early papers; then experience taught us that we could estimate them from the line strengths themselves. ELSA (see §3.1) calculates statistical uncertainties, but no systematics are included. These statistical uncertainties are then propagated through to the final intensities and diagnostics. Systematic errors are minimized by employing the same set of atomic data for abundance determinations throughout and by having a homogeneous data reduction and measuring pipeline, all performed by the same individual. 
The original line strengths are available in the relevant papers provided in Table \[journal\]. [llllllll]{} Fg1 & March-April 1997 & CTIO 1.5-m & Cass spec & 1500 & 900 & & 1\ IC 2149 & 2007 Jan & APO 3.5m & DIS & 90 & 90 & & 2\ IC 2165 & 1996 Dec; HST Cy 19 & KPNO 2.1m & Goldcam & 330 & 90 & & 3,4\ IC 3568 & 1996 May; HST Cy 19 & KPNO 2.1m & Goldcam & 120 & 120 & 4"N & 5,4\ IC 418 & 1996 Dec & KPNO 2.1m & Goldcam & 30 & 440 & 5"N & 6\ IC 4593 & 1996 May; & KPNO 2.1m & Goldcam & 180 & 600 & 3"S & 5\ NGC 1501 & 2007 Jan & APO 3.5m & DIS & 240 & 240 & & 2\ NGC 2371 & 1996 Dec & KPNO 2.1m & Goldcam & 300 & 300 & 9.7"S, 15.7"W & 3\ NGC 2392 & 1996 Dec & KPNO 2.1m & Goldcam & 450 & 2520 & 14"S & 6\ NGC 2438 & 1996 Dec & KPNO 2.1m & Goldcam & 300 & 300 & 16.3"N & 3\ NGC 2440 & 1996 Dec & KPNO 2.1m & Goldcam & 100 & 360 & 4"S & 3,4\ NGC 2792 & March-April 1997 & CTIO 1.5-m & Cass spec & 2400 & 300 & & 1\ NGC 3195 & March-April 1997 & CTIO 1.5-m & Cass spec & 1200 & 900 & & 1\ NGC 3211 & March-April 1997 & CTIO 1.5-m & Cass spec & 480 & 600 & & 1\ NGC 3242 & 1996 Dec & KPNO 2.1m & Goldcam & 450 & 480 & 8"S & 1,4\ NGC 3918 & March-April 1997 & CTIO 1.5-m & Cass spec & 100 & 720 & & 3\ NGC 5315 & 2004 August; HST Cy 19 & CTIO 1.5-m & Cass spec & 961 & 855 & & 7,4\ NGC 5882 & March-April 1997; HST Cy 19 & CTIO 1.5-m & Cass spec & 390 & 480 & & 3,4\ NGC 6369 & 2003 June & KPNO 2.1m & Goldcam & 2000 & 2200 & 10"N & 7\ NGC 6445 & 2003 June & KPNO 2.1m & Goldcam & 1200 & 1200 & 25"N & 7\ NGC 6537 & 2003 June & KPNO 2.1m & Goldcam & 725 & 300 & & 7\ NGC 6563 & March-April 1997 & CTIO 1.5-m & Cass spec & 1200 & 600 & & 1\ NGC 6567 & March-April 1997 & CTIO 1.5-m & Cass spec & 390 & 330 & & 3\ NGC 6572 & 1999 June & KPNO 2.1m & Goldcam & 72 & 72 & & 8\ NGC 6629 & March-April 1997 & CTIO 1.5-m & Cass spec & 420 & 360 & & 1\ NGC 6751 & 2003 June & KPNO 2.1m & Goldcam & 1500 & 1500 & 6"S & 7\ NGC 6804 & 2003 June & KPNO 2.1m & Goldcam & 1800 & 5100 & 10"S & 7\ NGC 6826 & 1996 May; & 
KPNO 2.1m & Goldcam & 240 & 720 & 9"S & 5\ NGC 6894 & 1999 June & KPNO 2.1m & Goldcam & 600 & 960 & & 8\ NGC 7008 & 2004 August & KPNO 2.1m & Goldcam & 1200 & 1926 & 29"N, 11"E & 7\ NGC 7009 & 1996 May; & KPNO 2.1m & Goldcam & 90 & 60 & 9"S & 5\ NGC 7027 & 1996 May & KPNO 2.1m & Goldcam & 25 & 110 & & 3\ NGC 7293 & 1996 Dec & KPNO 2.1m & Goldcam & 1800 & 1800 & 97"E, 171"N & 9\ NGC 7354 & 2003 June & KPNO 2.1m & Goldcam & 3503 & 4500 & & 7\ NGC 7662 & 1999 June; HST Cy 19 & KPNO 2.1m & Goldcam & 90 & 100 & & 3,4\

Methods
=======

Nebular Abundances
------------------

We have published abundances of He, N, O and in some cases C previously in papers indicated in the footnote to column 8 in Table \[journal\]. However, we sought to render the abundances more homogeneous by recomputing all of them using the same updated abundance code along with the newly published ionization correction factors by @delgado14 in the cases of total He, C and O abundances. Ionic abundances were determined using the code ELSA (Emission Line Spectral Analysis), a program whose core is a 5-level atom routine. Emission line strengths and their uncertainties used as input to ELSA were taken from the references listed in Table \[journal\]. We used an updated version of the program originally introduced by @johnson06, where the major change was the addition of a C III\] density diagnostic routine based upon the $\lambda$1907$/\lambda$1909 line strength ratio (C III\] $\lambda$1909 was already included in the program). The important emission lines besides H$\beta$ that were used in the ionic abundance computations for each object were He I $\lambda$5876, He II $\lambda$4686, C III\] $\lambda\lambda$1907,1909, \[N II\] $\lambda$6584, \[O II\] $\lambda$3727, \[O III\] $\lambda$5007 and \[O III\] $\lambda$4363. The resulting ionic abundances and uncertainties with respect to H$^+$ produced by ELSA are presented in Table \[ions\]. 
The object names are given in column 1 followed by column pairs containing the abundances and uncertainties for each ion labeled in the header. Uncertainties for the ionic abundances are computed internally by ELSA and are the result of contributions from: 1)  the uncertainties in the line strength ratios, e.g., I$_{\lambda}$/I$_{H\beta}$; and 2)  the uncertainties in the reaction rate coefficients (radiative recombination or collisional excitation rate coefficients) that stem from errors in electron temperature. =3.0 in [lcccccccccccc]{} FG1 & 1.16E-01 & 1.26E-02 & 1.37E-02 & 1.88E-03 & & & 1.15E-05 & 1.40E-06 & 1.90E-05 & 5.50E-06 & 2.69E-04 & 2.45E-05\ IC2149 & 1.05E-01 & 1.20E-02 & 8.32E-05 & 4.03E-05 & & & 5.87E-06 & 8.00E-07 & 4.14E-05 & 9.80E-06 & 1.79E-04 & 1.72E-05\ IC2165 & 5.70E-02 & 7.09E-03 & 4.90E-02 & 5.15E-03 & 1.72E-04 & 1.30E-05 & 4.61E-06 & 1.00E-07 & 1.27E-05 & 2.00E-06 & 1.36E-04 & 5.00E-06\ IC3568 & 1.14E-01 & 1.26E-02 & 3.42E-03 & 1.33E-04 & 1.55E-04 & 2.60E-05 & 3.92E-07 & 8.10E-08 & 8.85E-06 & 5.46E-06 & 2.82E-04 & 1.70E-05\ IC418 & 7.00E-02 & 8.26E-03 & & & 3.58E-04 & 2.03E-04 & 3.78E-05 & 7.00E-06 & 8.38E-05 & 2.11E-05 & 6.90E-05 & 6.00E-06\ IC4593 & 1.02E-01 & 1.18E-02 & 5.40E-04 & 8.30E-05 & 8.54E-04 & 1.21E-04 & 3.45E-06 & 9.00E-07 & 4.38E-05 & 2.50E-05 & 3.71E-04 & 3.01E-05\ N1501 & 8.56E-02 & 9.30E-03 & 3.86E-02 & 5.30E-03 & & & 1.95E-06 & 2.00E-07 & 9.89E-06 & 1.94E-06 & 3.20E-04 & 2.56E-05\ N2371 & 2.62E-02 & 2.93E-03 & 8.03E-02 & 1.13E-02 & & & 9.37E-06 & 2.25E-06 & 1.66E-05 & 9.60E-06 & 1.36E-04 & 1.66E-05\ N2392 & 5.80E-02 & 6.50E-03 & 2.10E-02 & 2.91E-03 & 1.26E-04 & 5.90E-05 & 1.37E-06 & 1.32E-06 & 2.53E-05 & 7.50E-06 & 1.14E-04 & 1.60E-05\ N2438 & 7.53E-02 & 1.95E-02 & 2.08E-02 & 2.82E-03 & & & 2.95E-05 & 5.90E-06 & 5.04E-05 & 2.81E-05 & 2.72E-04 & 2.69E-05\ N2440 & 5.35E-02 & 6.89E-03 & 7.39E-02 & 6.72E-03 & 1.42E-04 & 1.60E-05 & 7.46E-05 & 8.00E-06 & 4.38E-05 & 1.50E-05 & 1.25E-04 & 7.00E-06\ N2792 & 1.92E-02 & 2.54E-03 & 
9.16E-02 & 1.33E-02 & & & 3.92E-07 & 2.96E-07 & 1.64E-06 & 9.90E-07 & 1.11E-04 & 1.61E-05\ N3195 & 1.24E-01 & 1.35E-02 & 1.16E-02 & 1.62E-03 & & & 9.59E-05 & 1.17E-05 & 1.49E-04 & 4.50E-05 & 3.06E-04 & 2.66E-05\ N3211 & 3.13E-02 & 3.71E-03 & 8.39E-02 & 1.21E-02 & & & 1.22E-06 & 4.60E-07 & 4.39E-06 & 4.21E-06 & 1.85E-04 & 2.58E-05\ N3242 & 6.88E-02 & 1.09E-02 & 4.58E-02 & 3.77E-03 & 1.65E-04 & 8.00E-06 & 5.72E-07 & 5.40E-08 & 5.42E-06 & 1.46E-06 & 2.40E-04 & 5.00E-06\ N3918 & 7.02E-02 & 8.78E-03 & 4.43E-02 & 6.12E-03 & & & 1.00E-05 & 1.70E-06 & 1.98E-05 & 6.00E-06 & 2.62E-04 & 3.24E-05\ N5315 & 1.32E-01 & 1.59E-02 & & & 2.22E-04 & 2.70E-05 & 1.91E-05 & 1.70E-06 & 1.21E-05 & 3.30E-06 & 3.47E-04 & 1.70E-05\ N5882 & 1.03E-01 & 1.33E-02 & 6.92E-03 & 4.65E-04 & 7.83E-05 & 2.06E-05 & 1.81E-06 & 5.00E-08 & 5.34E-06 & 8.90E-07 & 4.11E-04 & 3.10E-05\ N6369 & 1.30E-01 & 1.48E-02 & 1.65E-03 & 2.24E-04 & & & 1.11E-05 & 1.40E-06 & 2.22E-05 & 5.35E-06 & 4.72E-04 & 6.78E-05\ N6445 & 9.73E-02 & 1.05E-02 & 4.02E-02 & 5.50E-03 & & & 8.72E-05 & 1.28E-05 & 1.01E-04 & 3.40E-05 & 3.62E-04 & 3.92E-05\ N6537 & 9.60E-02 & 1.44E-02 & 7.43E-02 & 1.08E-02 & & & 2.89E-05 & 5.90E-06 & 3.25E-06 & 1.08E-06 & 1.10E-04 & 1.72E-05\ N6563 & 1.11E-01 & 1.16E-02 & 1.50E-02 & 2.04E-03 & & & 5.35E-05 & 7.40E-06 & 1.22E-04 & 4.30E-05 & 2.80E-04 & 2.81E-05\ N6567 & 1.01E-01 & 1.44E-02 & 1.37E-03 & 1.08E-02 & & & 1.50E-06 & 3.30E-07 & 4.80E-06 & 1.67E-06 & 2.22E-04 & 2.38E-05\ N6572 & 1.25E-01 & 1.50E-02 & 5.64E-04 & 1.64E-04 & & & 6.89E-06 & 1.19E-06 & 7.27E-06 & 1.70E-06 & 3.72E-04 & 3.63E-05\ N6629 & 1.09E-01 & 1.24E-02 & 1.03E-03 & 3.15E-04 & & & 2.33E-06 & 5.50E-07 & 1.54E-05 & 7.40E-06 & 3.93E-04 & 3.30E-05\ N6751 & 1.36E-01 & 1.51E-02 & & & & & 4.68E-05 & 6.20E-06 & 7.93E-05 & 2.14E-05 & 3.17E-04 & 3.00E-05\ N6804 & 2.10E-02 & 2.67E-03 & 8.86E-02 & 1.27E-02 & & & 1.38E-07 & 9.30E-08 & 1.04E-06 & 2.70E-07 & 1.09E-04 & 1.46E-05\ N6826 & 1.07E-01 & 1.38E-02 & & & 3.93E-04 & 1.18E-04 & 2.01E-06 & 5.50E-06 
& 1.67E-05 & 9.70E-06 & 3.59E-04 & 3.20E-05\ N6894 & 1.14E-01 & 1.22E-02 & 1.55E-02 & 2.12E-03 & & & 7.37E-05 & 9.60E-06 & 9.36E-05 & 2.95E-05 & 2.61E-04 & 3.91E-05\ N7008 & 7.80E-02 & 8.39E-03 & 7.02E-02 & 9.79E-03 & & & 1.22E-06 & 2.40E-07 & 2.15E-06 & 1.09E-06 & 3.02E-04 & 3.07E-05\ N7009 & 1.10E-01 & 1.26E-02 & 9.43E-03 & 9.79E-03 & 6.89E-04 & 4.31E-04 & 8.46E-07 & 1.09E-07 & 2.05E-06 & 4.60E-07 & 4.85E-04 & 4.50E-05\ N7027 & 6.16E-02 & 9.36E-03 & 4.35E-02 & 6.11E-03 & & & 6.79E-06 & 1.05E-06 & 6.65E-06 & 8.10E-07 & 1.84E-04 & 2.50E-05\ N7293 & 1.12E-01 & 1.41E-02 & 7.99E-03 & 4.07E-04 & 1.84E-04 & 3.35E-04 & 5.52E-05 & 7.42E-06 & 7.50E-05 & 5.90E-05 & 3.40E-04 & 3.60E-05\ N7354 & 9.02E-02 & 1.02E-02 & 3.96E-02 & 5.44E-03 & & & 7.82E-06 & 1.11E-06 & 6.38E-06 & 1.84E-06 & 3.47E-04 & 3.56E-05\ N7662 & 6.72E-02 & 6.18E-03 & 5.47E-02 & 7.91E-03 & 1.28E-04 & 1.40E-05 & 5.68E-07 & 3.50E-08 & 4.64E-06 & 1.20E-06 & 1.95E-04 & 1.00E-05\ Ionic abundances in Table \[ions\] were converted to the total elemental abundance ratios of interest here, i.e., He/H, C/O, N/O, and O/H, by multiplying the value of (He$^{+2}$ + He$^+$)/H$^+$, C$^{+2}$/O$^{+2}$, N$^+$/O$^+$ and (O$^{+2}$ + O$^+$)/H$^+$, respectively, by a relevant ionization correction factor (ICF). Except in the case of N/O, ICFs and their uncertainties were determined using the schemes of @delgado14. The ICF value for He/H was taken as unity for each object, since negligible amounts of neutral He are expected to be present in PN (see the next paragraph). On the other hand, the values for the C/O and O/H ICFs along with their uncertainties are different for each object and are therefore provided in Table \[icfs\]. For N/O we followed @kb94 and @kwitter01 and assumed that N/O=(N$^+$/O$^+$).[^6] We have assumed throughout that the contribution of neutral He is negligible in all objects. 
With the possible exception of IC 418, this is justified by the fact that the O$^{+2}$/O$^+$ abundance ratio is greater than unity (see Table \[ions\]), since the ionization potential of O$^+$ (35.1 eV) greatly exceeds that of He$^o$ (24.6 eV). Concerning IC 418, @dopita17 recently published the results of new high resolution integral field spectroscopy for this PN. Their observations show both moderate \[O II\] $\lambda$3727 and \[O III\] $\lambda$5007 strengths, weak \[O I\] $\lambda$6300 and no He II $\lambda$4686, qualitatively similar to the findings in @henry00 and @sharpee03. @dopita17 also construct a detailed nebular model that implies an abundance ratio of He/H=0.11, significantly higher than our value of 0.07. Therefore, our neglect of neutral He in IC 418 may be unwarranted, in which case our inferred He abundance may in fact be too low. This uncertainty obviously affects the position of IC 418, currently at He/H=0.07, in Figs. \[he2h\], \[c2ovhe2h\] and \[n2ovhe2h\]. [lcccc]{} FG1 & & & 1.06 & 0.17\ IC2149 & & & 1.00 & 0.15\ IC2165 & 1.09 & 0.11 & 1.50 & 0.24\ IC3568 & 1.17 & 0.13 & 1.02 & 0.17\ IC418 & 0.64 & 0.04 & 1.00 & 0.11\ IC4593 & 1.06 & 0.45 & 1.00 & 0.16\ N1501 & & & 1.26 & 0.21\ N2371 & & & 2.93 & 0.46\ N2392 & 0.96 & 0.08 & 1.21 & 0.18\ N2438 & & & 1.15 & 0.18\ N2440 & 0.87 & 0.06 & 1.84 & 0.27\ N2792 & & & 3.99 & 0.66\ N3195 & & & 1.05 & 0.14\ N3211 & & & 2.68 & 0.44\ N3242 & 1.19 & 0.13 & 1.39 & 0.23\ N3918 & & & 1.37 & 0.22\ N5315 & 1.17 & 0.13 & 1.00 & 0.16\ N5882 & 1.21 & 0.14 & 1.04 & 0.17\ N6369 & & & 1.01 & 0.16\ N6445 & & & 1.23 & 0.18\ N6537 & & & 1.45 & 0.24\ N6563 & & & 1.07 & 0.15\ N6567 & & & 1.01 & 0.17\ N6572 & & & 1.00 & 0.16\ N6629 & & & 1.01 & 0.16\ N6751 & & & 1.00 & 0.15\ N6804 & & & 3.65 & 0.60\ N6826 & 1.15 & 0.12 & 1.00 & 0.16\ N6894 & & & 1.07 & 0.15\ N7008 & & & 1.53 & 0.25\ N7009 & 1.22 & 0.14 & 1.05 & 0.17\ N7027 & & & 1.41 & 0.23\ N7293 & 0.96 & 0.08 & 1.04 & 0.16\ N7354 & & & 1.25 & 0.21\ N7662 & 1.19 & 0.13 & 
1.48 & 0.24\ [lccccccccc]{} FG1 & 0.6 & 0.13 & 0.01 & & & 0.61 & 0.19 & 3.07E-04 & 5.60E-05\ IC2149 & 1.2 & 0.11 & 0.01 & & & 0.14 & 0.04 & 2.20E-04 & 3.89E-05\ IC2165 & 2.3 & 0.11 & 0.01 & 1.37 & 0.18 & 0.36 & 0.06 & 2.23E-04 & 3.66E-05\ IC3568 & 2.5 & 0.12 & 0.01 & 0.65 & 0.14 & 0.04 & 0.03 & 2.96E-04 & 5.18E-05\ IC418 & 1.4 & 0.07 & 0.01 & 3.34 & 1.99 & 0.45 & 0.14 & 1.53E-04 & 2.74E-05\ IC4593 & 0.7 & 0.10 & 0.01 & 2.43 & 0.45 & 0.08 & 0.05 & 4.16E-04 & 7.65E-05\ N1501 & 1.7 & 0.12 & 0.01 & & & 0.20 & 0.04 & 4.15E-04 & 7.52E-05\ N2371 & 0.6 & 0.11 & 0.01 & & & 0.56 & 0.35 & 4.47E-04 & 9.04E-05\ N2392 & 1.6 & 0.08 & 0.01 & 1.06 & 0.54 & 0.05 & 0.05 & 1.68E-04 & 3.35E-05\ N2438 & 1.8 & 0.10 & 0.02 & & & 0.59 & 0.35 & 3.72E-04 & 7.27E-05\ N2440 & 2.8 & 0.13 & 0.01 & 0.99 & 0.14 & 1.71 & 0.61 & 3.10E-04 & 5.44E-05\ N2792 & 1.2 & 0.11 & 0.01 & & & 0.24 & 0.23 & 4.50E-04 & 9.81E-05\ N3195 & 2.1 & 0.14 & 0.01 & & & 0.64 & 0.21 & 4.78E-04 & 8.57E-05\ N3211 & 2.5 & 0.12 & 0.01 & & & 0.28 & 0.29 & 5.08E-04 & 1.09E-04\ N3242 & 1.2 & 0.11 & 0.01 & 0.82 & 0.10 & 0.10 & 0.03 & 3.40E-04 & 5.64E-05\ N3918 & 2 & 0.11 & 0.01 & & & 0.51 & 0.18 & 3.85E-04 & 7.68E-05\ N5315 & 2.4 & 0.13 & 0.02 & 0.75 & 0.09 & 1.58 & 0.45 & 3.59E-04 & 6.11E-05\ N5882 & 1.1 & 0.11 & 0.01 & 0.23 & 0.07 & 0.34 & 0.06 & 4.32E-04 & 7.81E-05\ N6369 & 3 & 0.13 & 0.01 & & & 0.50 & 0.14 & 4.98E-04 & 1.06E-04\ N6445 & 2.3 & 0.14 & 0.01 & & & 0.87 & 0.32 & 5.73E-04 & 1.06E-04\ N6537 & 3.7 & 0.17 & 0.02 & & & 8.84 & 3.47 & 1.65E-04 & 3.67E-05\ N6563 & 1.7 & 0.13 & 0.01 & & & 0.44 & 0.17 & 4.32E-04 & 8.17E-05\ N6567 & 0.7 & 0.10 & 0.02 & & & 0.31 & 0.13 & 2.28E-04 & 4.46E-05\ N6572 & 1.4 & 0.13 & 0.01 & & & 0.95 & 0.28 & 3.80E-04 & 7.20E-05\ N6629 & 1.6 & 0.11 & 0.01 & & & 0.15 & 0.08 & 4.10E-04 & 7.54E-05\ N6751 & 2.7 & 0.14 & 0.02 & & & 0.59 & 0.18 & 3.96E-04 & 7.03E-05\ N6804 & 1.4 & 0.11 & 0.01 & & & 0.12 & 0.10 & 4.01E-04 & 8.48E-05\ N6826 & 1.6 & 0.11 & 0.01 & 1.26 & 0.47 & 0.12 & 0.34 & 3.76E-04 & 
6.96E-05\ N6894 & 1 & 0.13 & 0.01 & & & 0.79 & 0.27 & 3.81E-04 & 7.59E-05\ N7008 & 0.7 & 0.15 & 0.01 & & & 0.57 & 0.31 & 4.66E-04 & 8.99E-05\ N7009 & 1.4 & 0.12 & 0.02 & 1.74 & 1.16 & 0.41 & 0.11 & 5.10E-04 & 9.66E-05\ N7027 & 2.7 & 0.11 & 0.01 & & & 1.02 & 0.20 & 2.69E-04 & 5.63E-05\ N7293 & 2.1 & 0.12 & 0.01 & 0.52 & 0.91 & 0.74 & 0.59 & 4.31E-04 & 9.75E-05\ N7354 & 2.5 & 0.13 & 0.01 & & & 1.22 & 0.39 & 4.42E-04 & 8.51E-05\ N7662 & 1.2 & 0.12 & 0.01 & 0.78 & 0.13 & 0.12 & 0.03 & 2.95E-04 & 5.06E-05\ Our final elemental abundances and uncertainties appear in Table \[abuns\]. Object names are provided in column 1, while column 2 contains our estimate of the progenitor star mass for that object. These masses were inferred according to the method described in the next subsection. Beginning with column 3, pairs of columns list the elemental number abundances and uncertainties for He/H, C/O, N/O and O/H. The uncertainties were rigorously determined by adding in quadrature the partial uncertainty contributions from each ion involved in the total element computation as well as the ICF uncertainty[^7]. The results provided in Table \[abuns\] will be analyzed in detail in §\[results\] following our detailed discussion of our method for determining progenitor masses. Progenitor Star Masses\[progmass\] ---------------------------------- Central star and progenitor masses were estimated by plotting the position of each central star in the log(L/L$_{\odot}$)-log T$_{eff}$ plane along with theoretical post-AGB evolutionary tracks and interpolating between tracks for each of our 35 objects. The values of log(L/L$_{\odot}$) and log T$_{eff}$ were taken from @frew08 for 32 of our 35 sample objects. For the three sample objects not included in @frew08 (IC2165, IC3568 and NGC5315) we assumed the L and T values derived from models in @henry15. 
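The interpolation step described above can be sketched in code. The following is a minimal illustration only: the flat "tracks" and the test star below are hypothetical placeholders, not values from the VW or MB grids (real post-AGB tracks are of course not flat in the log L-log T$_{eff}$ plane), but the bracketing-and-interpolating logic is the same as the graphical procedure described in the text.

```python
# Sketch of interpolating an initial mass between two bracketing
# post-AGB tracks in the log L - log Teff plane. Track data are
# hypothetical placeholders, not VW or MB grid values.
import numpy as np

def logL_on_track(track, log_teff):
    """Interpolate a track's log L at a given log Teff.
    `track` is a list of (log Teff, log L) points along the track."""
    t = np.asarray(track)
    order = np.argsort(t[:, 0])
    return np.interp(log_teff, t[order, 0], t[order, 1])

def interpolate_mass(star, tracks):
    """Estimate an initial mass for `star` = (log Teff, log L) by linear
    interpolation between the two tracks that bracket it in log L.
    `tracks` maps initial mass -> list of (log Teff, log L) points."""
    log_teff, log_l = star
    # log L of every track evaluated at the star's effective temperature
    levels = sorted((logL_on_track(tr, log_teff), m) for m, tr in tracks.items())
    lo = max((lv for lv in levels if lv[0] <= log_l), default=levels[0])
    hi = min((lv for lv in levels if lv[0] >= log_l), default=levels[-1])
    if hi[0] == lo[0]:
        return lo[1]
    f = (log_l - lo[0]) / (hi[0] - lo[0])
    return lo[1] + f * (hi[1] - lo[1])

# Hypothetical flat tracks at log L = 3.2 (1.5 Msun) and 3.8 (2.5 Msun)
tracks = {1.5: [(4.5, 3.2), (5.2, 3.2)], 2.5: [(4.5, 3.8), (5.2, 3.8)]}
print(round(interpolate_mass((4.9, 3.5), tracks), 2))  # halfway -> 2.0
```

A star lying outside the grid is clamped to the nearest track here; in practice such objects (like the low-luminosity central stars discussed below) require extrapolation and carry larger mass uncertainties.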
We decided to base our analysis on the log(L/L$_{\odot}$) and log T$_{eff}$ values for each of our objects found in @frew08 because of the thoroughness of the procedures which he used to obtain these values. In his compilation of log(L/L$_{\odot}$) values, Frew vetted all published V magnitude estimates for quality and then averaged the best values for each central star. Absolute visual magnitudes were then determined via a distance modulus, where distances were inferred from a new relation developed in @frew08 between the H$\alpha$ surface brightness and nebular radius of a PN. Following the application of a bolometric correction, bolometric magnitudes were converted to solar luminosities. The effective temperature of each central star was determined by Frew using the H and He Zanstra temperature methods in most cases. Table \[masses\] contains our adopted values for log(L/L$_{\odot}$) and log T$_{eff}$ in columns 2 and 3, respectively, for each PN listed in column 1. [lccccc]{} FG1 & 3.23 & 4.9 & 0.4 & 0.8 & 0.6\ IC2149 & 3.66 & 4.62 & 1.3 & 1.2 & 1.2\ IC2165 & 3.87 & 5.06 & 2.3 & 2.2 & 2.3\ IC3568 & 3.98 & 4.71 & 2.5 & 2.4 & 2.5\ IC418 & 3.72 & 4.58 & 1.5 & 1.3 & 1.4\ IC4593 & 3.41 & 4.6 & 0.5 & 0.9 & 0.7\ N1501 & 3.66 & 5.13 & 1.8 & 1.5 & 1.7\ N2371 & 2.98 & 5 & 0.5 & 0.8 & 0.6\ N2392 & 3.82 & 4.67 & 1.8 & 1.5 & 1.6\ N2438 & 2.31 & 5.09 & 2.0 & 1.5 & 1.8\ N2440 & 3.32 & 5.32 & 3.0 & 2.6 & 2.8\ N2792 & 3.18 & 5.1 & 1.3 & 1.2 & 1.2\ N3195 & 2.56 & 5.15 & 2.2 & 1.9 & 2.1\ N3211 & 2.76 & 5.21 & 2.6 & 2.3 & 2.5\ N3242 & 3.54 & 4.95 & 1.2 & 1.2 & 1.2\ N3918 & 3.7 & 5.18 & 2.0 & 2.0 & 2.0\ N5315 & 3.95 & 4.78 & 2.5 & 2.3 & 2.4\ N5882 & 3.52 & 4.83 & 1.0 & 1.1 & 1.1\ N6369 & 4.07 & 4.82 & 3.1 & 2.8 & 3.0\ N6445 & 2.97 & 5.23 & 2.4 & 2.1 & 2.3\ N6537 & 3.3 & 5.4 & 4.2 & 3.1 & 3.7\ N6563 & 2.34 & 5.09 & 1.9 & 1.4 & 1.7\ N6567 & 3.35 & 4.78 & 0.5 & 0.9 & 0.7\ N6572 & 3.72 & 4.84 & 1.5 & 1.3 & 1.4\ N6629 & 3.82 & 4.67 & 1.8 & 1.5 & 1.6\ N6751 & 3.97 & 5.02 & 2.7 & 2.6 & 
2.7\ N6804 & 3.71 & 4.93 & 1.6 & 1.3 & 1.4\ N6826 & 3.81 & 4.7 & 1.7 & 1.5 & 1.6\ N6894 & 2.23 & 5 & 0.9 & 1.0 & 1.0\ N7008 & 3.12 & 4.99 & 0.5 & 0.8 & 0.7\ N7009 & 3.67 & 4.94 & 1.5 & 1.2 & 1.4\ N7027 & 3.87 & 5.24 & 2.8 & 2.6 & 2.7\ N7293 & 1.95 & 5.04 & 2.2 & 2.0 & 2.1\ N7354 & 3.95 & 4.98 & 2.5 & 2.5 & 2.5\ N7662 & 3.42 & 5.05 & 1.2 & 1.2 & 1.2\ We experimented with two sets of post-AGB evolutionary tracks: those by @vw94 [Z=0.016, VW] and @mb16 [Z=0.010, MB]. Model sets differing in authorship as well as metallicity were chosen deliberately in order to test the effect upon inferred masses. For each set we plotted tracks in a separate log(L/L$_{\odot}$)-log T$_{eff}$ diagram and then placed our sample objects in the graph using our adopted values of these two stellar properties listed in Table \[masses\]. Figures \[fvw94\] and \[fmb16\] show the positions of our sample objects in a log(L/L$_{\odot}$)-log T$_{eff}$ plane along with the model tracks of VW and MB, respectively. ![log L/L$_{\odot}$ versus log T$_{eff}$. Solid colored lines show the post-AGB tracks of @vw94 for Z=0.016. The legend indicates the correspondence between line color and remnant/progenitor mass. The positions of our 35 objects are shown with filled circles. The representative error bars located in the lower right are taken from Fig. 9.8 of @frew08.[]{data-label="fvw94"}](f4vw94016_tracks.ps){width="6in"} ![log L/L$_{\odot}$ versus log T$_{eff}$. Solid colored lines show the post-AGB tracks of @mb16 for Z=0.010. The legend indicates the correspondence between line color and remnant/progenitor mass. The positions of our 35 objects are shown with filled circles. The representative error bars located in the lower right are taken from Fig. 9.8 of @frew08.[]{data-label="fmb16"}](f5mb1601_tracks.ps){width="6in"} The final/initial mass associated with each track is designated by track color as defined in each figure’s legend. 
Representative error bars for the observed values, shown in the lower right of each figure, are taken directly from Fig. 9.8 of @frew08, since uncertainties for individual objects were not provided. Because each track is associated with a specific initial and final mass, we carefully measured each object's displacement from adjacent tracks and interpolated to find the mass values. The resulting initial masses determined in Figs. \[fvw94\] and \[fmb16\] are listed in columns 4 and 5 of Table \[masses\], respectively. The average of these two masses is listed in column 6 of that table as well as in column 2 of Table \[abuns\]. Figure \[vwmb\] is a plot of masses from column 5 versus those in column 4 of Table \[masses\]. ![Comparison of progenitor masses of our sample objects derived from the post-AGB tracks of @mb16 (Fig. \[fmb16\]; vertical axis) and @vw94 (Fig. \[fvw94\]; horizontal axis). The solid line indicates the one-to-one correspondence.[]{data-label="vwmb"}](f6vw_vs_mb_mass.ps){width="6in"} The straight line shows the one-to-one relation. For the vast majority of objects, the progenitor masses ($M_i$) determined using the MB tracks tend to be smaller than those determined from the VW tracks by about 0.3 M$_{\odot}$. This systematic difference is a direct consequence of the higher luminosity of the MB models during the constant-luminosity stage, which results from the updated treatment of the evolutionary stages that precede the post-AGB stage. However, this offset is less than our estimated uncertainty of $\pm$0.5 M$_{\odot}$ and is therefore likely insignificant for our purposes here. Interesting exceptions are the five objects with $M_i\lesssim 1 M_\odot$, for which the MB tracks are slightly less luminous than those of VW. This leads to significantly higher extrapolated masses ($M_i\sim 0.8 M_\odot$) for three of the objects when using the MB tracks, instead of the unrealistically low $M_i\sim 0.5 M_\odot$ obtained with the VW tracks. 
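The column-6 averaging and the MB-minus-VW offset can be illustrated with a few entries from Table \[masses\] (N1501, IC2165, IC418 and N2440); this is a quick arithmetic sketch of the bookkeeping, not part of the original analysis.

```python
# Four (VW, MB) initial-mass pairs from Table [masses], in Msun:
# N1501, IC2165, IC418, N2440.
vw = [1.8, 2.3, 1.5, 3.0]   # M_i from the VW tracks
mb = [1.5, 2.2, 1.3, 2.6]   # M_i from the MB tracks

# Column 6 of Table [masses] is the simple average of the two estimates
# (the table rounds these to one decimal place).
adopted = [(a + b) / 2 for a, b in zip(vw, mb)]

# Mean MB-minus-VW offset; negative because the MB masses tend to be smaller.
mean_offset = sum(b - a for a, b in zip(vw, mb)) / len(vw)

print(adopted)       # averaged masses adopted in the analysis
print(mean_offset)   # roughly -0.25 Msun for this subset
```

For this subset the mean offset is about $-$0.25 M$_{\odot}$, in line with the $\sim$0.3 M$_{\odot}$ systematic difference noted above and well within the $\pm$0.5 M$_{\odot}$ mass uncertainty.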
Results & Discussion\[results\] =============================== We now present a comparison of observed abundance ratios for our sample objects to several sets of theoretical model predictions of PN abundances in Figures \[he2h\]-\[c2ovn2o\]. Note that in these figures, model tracks differing in metallicity but produced by the same code share the same line color. The metallicity of each model follows the code name in the legend and generally increases in value from solid to dashed to dotted line types. Solar abundance values from @asplund09 are shown with black dotted lines. To understand the differences in the predictions of the different theoretical models, and also to extract some physical insight from their comparison with the observations, it is necessary to keep in mind the different physical assumptions of each grid. The evolution of the surface abundances of AGB stellar models is particularly sensitive to the adopted physics on the AGB. In addition, the properties of the stellar models in advanced evolutionary stages, such as the AGB, are affected by the modeling of previous evolutionary stages. The latter is particularly true regarding the treatment of mixing processes such as rotationally induced mixing or convective boundary mixing (or overshooting) during H- and He-core burning stages. Description of the Model Codes ------------------------------ We now briefly review the treatment of these key ingredients in the four grids adopted here for the comparison: the MONASH grid [@2014MNRAS.445..347K; @karakas16], the LPCODE grid [@mb16], the ATON grid [@ventura15; @2016MNRAS.462..395D] and the FRUITY database [@cristallo11; @cristallo15]. While all the models discussed here include an up-to-date treatment of the microphysics, and all of them neglect the impact of rotation, the theoretical models discussed in this section have some key differences in the modeling of winds and convective boundary mixing processes. 
These differences will affect the predicted evolution and final abundances during the TP-AGB. Based on the treatment of winds on the AGB, the models can be roughly divided into two groups. On the one hand, we have the MONASH and FRUITY models, which adopt a single relation between the pulsational period $P$ and the mass loss rate $\dot{M}$ for both C-rich and O-rich AGB stars. The mass loss recipe $\dot{M}(P)$ adopted by the MONASH models is the well-known formula by @1993ApJ...413..641V [eqs. 1, 2, & 5], while the FRUITY models adopt a similar prescription derived by @2006NuPhA.777..311S [see their §5]. On the other hand, we have the implementations by the ATON and LPCODE grids, which incorporate different treatments for the C-rich and O-rich AGB winds. The ATON code adopts an empirical mass loss law, reduced by a factor of 50, for the O-rich phase and theoretical mass loss rates for C-rich winds. The LPCODE models adopt the empirical law by @1998MNRAS.293...18G for the C-rich phase, while winds for the O-rich phase mostly follow the @2005ApJ...630L..73S law. These laws appear as eqs. 1, 2, 3, & 5 in @mb16. Even more important than the treatment of winds is the treatment of convective boundary mixing (or overshooting) during the TP-AGB phase as well as in previous evolutionary stages. Again the models can be roughly separated into two groups regarding the treatment of overshooting during core-burning stages. As before, on the one hand we have the MONASH and FRUITY models, which do not include any kind of convective boundary mixing processes on the upper main sequence where stars have convective cores. However, later during the He-core burning stage, FRUITY models include convective boundary mixing in the form of semiconvection [@cristallo11]. 
While the MONASH models do not include any explicit prescription for convective boundary mixing, a similar result would be expected from their adopted numerical algorithm, which searches for a neutrally stable point at the outer boundary of the convective core [@1986ApJ...311..708L]. On the other hand, the ATON and LPCODE models include overshooting on top of the H-burning core with its extension calibrated to fit the width of the upper main sequence. Both grids keep the same calibrated overshooting for the convective core during the core He-burning stage. From this difference alone in the treatment of convective boundary mixing before the TP-AGB, [*one should expect third dredge up (TDU) and hot bottom burning (HBB) to develop at lower initial masses ($M_i$) in the ATON and LPCODE models than in the models of the MONASH and FRUITY grids*]{}. Regarding convective boundary mixing on the TP-AGB, two convective boundaries are key for the strength of TDU events. These are the boundary mixing at the bottom of the pulse-driven convective zone (PDCZ) that develops in the intershell region during the thermal pulses, and the boundary mixing at the bottom of the convective envelope (CE). The inclusion of overshooting at both convective boundaries increases the efficiency of TDU and lowers the threshold in initial stellar mass above which TDU develops. In addition, the inclusion of overshooting at the bottom of the PDCZ leads to the dredging up of O from the CO core, increasing the intershell and surface O abundances. The treatment of these convective boundaries varies widely in the four grids discussed here. The MONASH models do not include any explicit prescription for convective boundary mixing. However, some overshooting at convective boundaries does occur as a consequence of the adoption of the numerical algorithm for the determination of the convective boundaries [@1986ApJ...311..708L]. 
By contrast, the FRUITY, ATON and LPCODE models adopt different implementations of an exponentially decaying mixing coefficient beyond the formally convective boundaries, with different intensities. While FRUITY models include strong overshooting at the bottom of the CE but no overshooting at the PDCZ, LPCODE models adopt a moderate overshooting at the base of the PDCZ and no overshooting at the bottom of the CE. Finally, the ATON models adopt a very small amount of overshooting both at the bottom of the PDCZ and the CE. While there are strong arguments in favor of the inclusion of moderate overshooting during the main sequence, the situation on the AGB is much less clear. In fact, trying to fit all available observational constraints by means of a simple overshooting prescription might not even be possible \[see @weiss09 [@karakas14; @mb16]\]. This fact, together with the lack of compelling theoretical arguments and the lack of a common observational benchmark for theoretical AGB evolution models, has led authors to adopt very different approaches. We also note that convection in the ATON code is computed with the full spectrum of turbulence (FST) model, which leads to stronger HBB when compared with models that adopt the standard mixing length theory \[@cristallo11, @karakas16 and @mb16\]. In summary, we can roughly divide the four grids into two main groups: 1) the MONASH and FRUITY models, which neglect convective boundary mixing during the main sequence, do not include overshooting in the PDCZ and adopt a single wind formula for both the C- and O-rich phases; and 2) the ATON and LPCODE models, which calibrate overshooting during core H-burning to the width of the main sequence, adopt the same overshooting for the core He-burning phase, include some overshooting at the bottom of the PDCZ, and adopt different wind prescriptions for the C- and O-rich phases. 
Note, however, that all grids adopt different treatments of convective boundary mixing during the TP-AGB. In addition to the differences in the adopted physics, there is another difference related to the point at which each sequence is terminated. Due to the numerical convergence problems experienced by stellar models at the end of the AGB, different authors choose to stop their sequences at some point before the end of the AGB, thereby missing the last thermal pulse(s). Although the efficiency of third dredge up drops at the end of the AGB, some significant changes in the surface abundances can still happen in the last thermal pulses. This is because, once the H-rich envelope mass has been reduced by more than an order of magnitude, a much smaller amount of processed material needs to be dredged up to the surface to affect the final surface abundances. This is an important difference between the FRUITY, MONASH and ATON models, which do not reach the post-AGB phase, and the LPCODE grid models, which are computed up to the white dwarf stage. LPCODE models therefore show abundance variations due to the timing of the last AGB thermal pulse.

Analysis\[analysis\]
--------------------

Our primary results involving the behavior of He/H, C/O and N/O versus progenitor mass appear in Figs. \[he2h\]-\[n2o\]. ![He/H versus central star birth mass in solar units. PN are shown with connected pairs of open symbols. The squares represent objects whose progenitor masses were determined using the evolutionary tracks of @vw94 [Fig. \[fvw94\]], while the circles similarly refer to the tracks of @mb16 [Fig. \[fmb16\]]. Error bars for individual objects have been suppressed for clarity, while a representative set of error bars is provided in the upper left corner of the plot. The horizontal black dotted line indicates the solar He/H value of 0.085 as determined by @asplund09. 
Model predictions by the MONASH (red lines) and LPCODE (green lines) codes are shown for the metallicities given in the legend and designated in the graph by line type.[]{data-label="he2h"}](f7he2h_vw_mb.ps){width="6in"} ![C/O versus central star birth mass in solar units. PN are shown with connected pairs of open symbols. The squares represent objects whose progenitor masses were determined using the evolutionary tracks of @vw94 [Fig. \[fvw94\]], while the circles similarly refer to the tracks of @mb16 [Fig. \[fmb16\]]. Error bars for individual objects have been suppressed for clarity, while a representative set of error bars is provided in the lower left corner of the plot. The horizontal black dotted line indicates the solar C/O value of 0.55 as determined by @asplund09. Model predictions by the MONASH (red lines), LPCODE (green lines), ATON (blue lines) and FRUITY (violet lines) codes are shown for the metallicities given in the legend and designated in the graph by line type.[]{data-label="c2o"}](f8c2o_vw_mb.ps){width="6in"} ![N/O versus central star birth mass in solar units. PN are shown with connected pairs of open symbols. The squares represent objects whose progenitor masses were determined using the evolutionary tracks of @vw94 [Fig. \[fvw94\]], while the circles similarly refer to the tracks of @mb16 [Fig. \[fmb16\]]. Error bars for individual objects have been suppressed for clarity, while a representative set of error bars is provided in the upper right corner of the plot. The horizontal black dotted line indicates the solar N/O value of 0.14 as determined by @asplund09. Model predictions by the MONASH (red lines), LPCODE (green lines), ATON (blue lines) and FRUITY (violet lines) codes are shown for the metallicities given in the legend and designated in the graph by line type.[]{data-label="n2o"}](f9n2o_vw_mb.ps){width="6in"} Objects in our sample are shown with connected pairs of open squares and circles. 
The squares represent objects whose progenitor masses were determined using the evolutionary tracks of @vw94 [our Fig. \[fvw94\]], while the circles similarly refer to the tracks of @mb16 [our Fig. \[fmb16\]]. Unpaired green circles represent objects for which the two derived masses were identical. For clarity, only a representative set of error bars is provided in each graph, where the vertical bar indicates the average of the relevant uncertainties given in Table \[abuns\]. Also included in the plots are model abundance predictions for PN ejecta by the MONASH, LPCODE, ATON and FRUITY grids (He/H predictions by the FRUITY and ATON grids were roughly constant at 0.10 and 0.095, respectively, and were not included in Fig. \[he2h\]). Line colors and types refer to the specific grid and metallicity, respectively, as defined in the figure legend. The horizontal and vertical black dotted lines show the solar values [@asplund09]. The behavior of He/H versus progenitor mass is shown in Fig. \[he2h\]. Relative to the solar value, all of our sample members except two show He enrichment. Conspicuous outliers include NGC 6537 (He/H=0.17$\pm$.02) in the upper right and IC 418 (He/H=0.07$\pm$.01) and NGC 2392 (He/H=0.08$\pm$.01) both located below the solar line. NGC 6537 is a Peimbert Type I PN, a class which characteristically shows an enhanced He abundance. Considering the He/H uncertainties, the MONASH and LPCODE model grids span the area occupied by the majority of points. Note, though, that in the case of the MONASH models, some of this success is achieved only by including the Z=0.030 model set, i.e., a metallicity roughly twice the solar value. This result is at odds with the metallicities which we measured for our sample of objects, where nearly all have O/H values[^8] in Table \[abuns\] at or below the solar level of $4.90 \times 10^{-4}$. 
In addition, both the MONASH and LPCODE models predict a slight rise in He/H with metallicity, but the observational uncertainties in He/H likely obscure this predicted trend in the data. While deeper spectra may increase the S/N, accuracy would continue to be compromised by the errors introduced by flux calibration, dereddening, instrumental effects and uncertainties associated with atomic constants, including collisional corrections. We estimate that uncertainties no smaller than $\pm$0.005 (a vertical error bar of 0.01) could likely be obtained. In general, the fact that most measured He/H ratios are above the solar value is in line with the expectations from stellar evolution theory, as all dredge up events during post main sequence evolution lead to increases in the He/H ratio. It is well known that extra-mixing processes are needed to explain the abundance patterns in first red giant branch (RGB) stars located above the RGB bump. We refer here to mixing processes in addition to overshooting, such as rotationally induced mixing[^9] or thermohaline mixing[^10]. The fact that all grids fail to achieve the maximum observed values of He/H might be related to their neglect of extra-mixing processes during the pre-AGB evolution. Figure \[c2o\] features the comparison of observations and models pertaining to C/O versus progenitor mass. It is interesting to note that the C/O values are centered around 1.23 with a standard deviation of 0.85. Thus, despite the uncertainty in C/O indicated by the example error bar, the distribution of the 13 objects favors a supersolar value, the result of TDU. Both IC 418 (C/O=3.34$\pm$1.99) and IC 4593 (C/O=2.43$\pm$.45) exhibit C/O values which are at least twice the sample average. From our results in §\[progmass\] and Table \[masses\], IC 418 had a progenitor mass of roughly 1.4$\pm$.5 M$_{\odot}$, while IC 4593’s mass was originally 0.7$\pm$.5 M$_{\odot}$. 
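As a rough consistency check on the statement that the C/O distribution favors a supersolar value, one can compare the quoted sample mean against the solar ratio in units of the standard error of the mean (a sketch using only the summary statistics quoted above, not the individual measurements):

```python
import math

# Summary statistics quoted in the text for the 13 PN with C measurements
mean_co = 1.23    # sample mean C/O
std_co = 0.85     # sample standard deviation of C/O
n = 13
solar_co = 0.55   # solar C/O from @asplund09

# Standard error of the mean, and the offset from solar in those units
sem = std_co / math.sqrt(n)
excess_sigma = (mean_co - solar_co) / sem

print(f"SEM = {sem:.3f}; sample mean lies {excess_sigma:.1f} SEM above solar")
```

Under this simple estimate the sample mean sits close to three standard errors above the solar ratio, consistent with the conclusion that the sample as a whole is C-enriched by TDU.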
The only model in Fig. \[c2o\] which predicts this much excess C within the mass range of the two progenitor stars is the one with $M_i=1.25 M_\odot$ and $Z=0.010$ in the LPCODE grid. Interestingly, that model attains its high surface carbon abundance through a final thermal pulse occurring when the mass of the central star has already been reduced to $0.593 M_\odot$. In this circumstance TDU leads to the mixing of $M_{\rm TDU}\simeq 0.003 M_\odot$ from the H-free core into an H-rich envelope of $M^H_{\rm env}\simeq 0.027 M_\odot$, significantly increasing the surface carbon abundance of the star. This example shows why it is necessary to keep in mind that final AGB thermal pulses coupled with low envelope masses can significantly change the surface abundances from those predicted by AGB stellar evolution models which are not computed to the very end of the AGB. Yet, it is necessary to emphasize that if the mass ejected after the last thermal pulse is too small, the final abundances of the central stars might differ from those displayed by their surrounding PN. The nebula might not be homogeneous and may be dominated by the material ejected before the star altered its surface composition in the last thermal pulse. Each of the four sets of model tracks displayed in Fig. \[c2o\] generally predicts two trends regarding C/O in PN: 1) as progenitor mass increases, C/O increases slowly, peaks around 2.5-3.0 M$_{\odot}$ and then decreases; and 2) for constant progenitor mass, C/O increases with decreasing metallicity. Both of these predicted trends are well-known and the presumed causes are nicely summarized in @karakas14 [§3.3]. In an AGB star, C is produced (and also dredged-up from the CO core) within the periodically unstable He shell by the triple alpha process and is subsequently transported to the H-rich outer envelope during TDU. 
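The late-pulse dilution argument above can be illustrated with a back-of-the-envelope mixing estimate. In this sketch the two masses are those quoted for the LPCODE model, while the carbon mass fractions are illustrative assumptions (AGB intershell material is typically a few tens of percent carbon by mass), not values taken from that model:

```python
# Late thermal pulse: a small mass of C-rich intershell material is mixed
# into an already-reduced H-rich envelope (masses quoted in the text).
m_tdu = 0.003   # M_sun dredged up from the H-free core
m_env = 0.027   # M_sun remaining H-rich envelope

# Illustrative (assumed) mass fractions, NOT taken from the LPCODE model:
x_c_intershell = 0.22   # ~20-25% C by mass is typical of intershell material
x_c_env = 0.005         # mildly C-enriched envelope before the pulse

# Mass-weighted average after complete mixing of the dredged-up material
x_c_after = (x_c_env * m_env + x_c_intershell * m_tdu) / (m_env + m_tdu)
enhancement = x_c_after / x_c_env

print(f"surface C mass fraction: {x_c_env} -> {x_c_after:.4f} "
      f"(factor {enhancement:.1f})")
```

Even though the dredged-up mass is only about a tenth of the envelope mass, the surface carbon abundance rises by a factor of several, illustrating why pulses occurring after the envelope has been strongly reduced can dominate the final surface composition.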
According to models, the amount of C mixed into the envelope is directly related to the efficiency of the dredge-up process, characterized by the ratio of the mass of material brought to the surface to the increase in mass of the C-O core during the process. Models indicate that this efficiency increases both with increasing progenitor mass and with decreasing metallicity. However, the resulting C enrichment begins to be damped as the stellar mass approaches 4 M$_{\odot}$ in the case of the MONASH grid and 2.5-3 M$_{\odot}$ for the other three grids, as C is converted to N via the CN cycle during HBB. The difference between the MONASH grid and the other three grids could be related to the lack of convective boundary mixing in the high mass models of the former grid, which leads to less efficient HBB. We turn now to the behavior of N/O versus progenitor mass featured in Fig. \[n2o\]. Here we see that roughly 30% of our objects exceed the solar value of 0.14 for N/O by more than their uncertainties. We also observe an upward trend in N/O in the data with increasing birth mass up to about 3 M$_{\odot}$. The apparent nitrogen enrichment below 1.5 M$_{\odot}$ is likely the result of dredge-up events before the AGB phase. However, the upward trend beyond this point is rather substantial and is likely the product of HBB. Interestingly, the lowest mass at which HBB is predicted by the ATON and LPCODE models to begin is around 3 M$_{\odot}$, while the MONASH and FRUITY models predict the onset of HBB at around 5 M$_{\odot}$ (outside of the figure range). This difference is mostly due to the implementation of overshooting during the main sequence evolution in the ATON and LPCODE models. Yet, the upward trend of N/O in our PN sample occurs at an even lower progenitor mass, with high N/O values corresponding to $M_i\gtrsim 2.25~M_{\odot}$. 
If our stellar mass determinations are reasonably correct, this result confirms the well established need to include overshooting in the modeling of the upper main sequence, and perhaps the need to include additional mixing processes like rotation-induced mixing in main sequence intermediate mass stars. An additional shortcoming of the models is that none of the sets spans the entire region occupied by our PN. [*In particular, the observations clearly suggest that stars with progenitor masses below 3 M$_{\odot}$ produce higher levels of N than are predicted by any of the models.*]{} As mentioned above, the failure of the models to account for the observed abundances of N in low-mass stars might be pointing to the need to include other mixing processes, such as rotation-induced mixing, during previous evolutionary stages. Figures \[c2ovhe2h\], \[n2ovhe2h\] and \[c2ovn2o\] compare observations and models in terms of one element ratio versus another. Model tracks apply only to progenitor masses between 1 and 4 solar masses. ![C/O versus He/H. PN are shown with filled black circles. Solar ratios from @asplund09 are shown with dotted black lines. Model predictions by the MONASH (red lines), LPCODE (green lines) and ATON (blue lines) codes are shown for the metallicities given in the legend and designated in the graph by line type. Note that the line for the ATON 0.014 model is purposely offset slightly to the right of the ATON 0.008 model to distinguish them, as otherwise they would lie on top of each other.[]{data-label="c2ovhe2h"}](f10c2ovhe2h.ps){width="6in"} ![N/O versus He/H. PN are shown with filled circles. Solar ratios from @asplund09 are shown with dotted black lines. Model predictions by the MONASH (red lines), LPCODE (green lines) and ATON (blue lines) codes are shown for the metallicities given in the legend and designated in the graph by line type. 
Note that the line for the ATON 0.014 model is purposely offset slightly to the right of the ATON 0.008 model to distinguish them, as otherwise they would lie on top of each other.[]{data-label="n2ovhe2h"}](f11n2ovhe2h.ps){width="6in"} ![C/O versus N/O. PN are shown with filled black circles. Solar ratios from @asplund09 are shown with dotted black lines. Model predictions by the MONASH (red lines), LPCODE (green lines), ATON (blue lines) and FRUITY (violet lines) codes are shown for the metallicities given in the legend and designated in the graph by line type.[]{data-label="c2ovn2o"}](f12c2ovn2o13.ps){width="6in"} Figure \[c2ovhe2h\] is a plot of C/O versus He/H, where we observe no apparent correlation between the values of these two ratios. As we saw earlier in Fig. \[he2h\], the observed He/H ratio for all but IC 418 and NGC 2392 is above the solar value. This strongly suggests that a majority of the objects in our sample experienced significant He enrichment during their evolution. All of the ATON models within the 1-4 M$_{\odot}$ range predict a He/H value of 0.10, hence the straight vertical lines for those models. We have offset their track for the 0.014 metallicity models slightly to the right to help distinguish the two tracks. The model tracks of the MONASH and LPCODE grids are consistent with the observations in the sense that each model set spans the space occupied by the bulk of the sample objects, i.e., those 11 PN which have He/H$\ge$0.10. The ATON models appear to span the observed C/O values, but lack the range in He/H exhibited by the data. The observational data in Fig. \[n2ovhe2h\] suggest that the N enrichment seen earlier in Fig. \[n2o\] may be coupled with He enrichment in the sense that large N/O values occur at high levels of He/H, although the large uncertainties in both N/O and He/H cloud the issue. 
Interestingly, a clear positive trend in N/O versus He/H was reported by @kaler79 for Galactic disk PN, while @kb94 observed similar behavior in Type I PN only. The MONASH models fail to predict such behavior, as does the Z=0.01 LPCODE model track. This is due to the lack of efficient HBB in these models, as the MONASH models do not show HBB for $M_i<4\,M_\odot$ and the Z=0.01 LPCODE set only reaches $M_i=3\,M_\odot$. On the contrary, LPCODE models with Z=0.02 do show an upward trend in N/O as He/H increases due to the action of TDU and HBB during a large number of thermal pulses in the $M_i=4 M_\odot$ model. The ATON models seem to span the observed N/O values, although again there is no reported range in their He/H values. Overall, there is little theoretical evidence that any of the model grids completely spans the point positions of the observational data, a result we also see in Fig. \[n2o\]. We conclude that the possible observational trend in Fig. \[n2ovhe2h\], previously seen in Fig. \[n2o\], likely reflects the action of TDU and HBB. Finally, Fig. \[c2ovn2o\] shows the relation of C/O versus N/O for the 13 objects for which we have C measurements. As we saw in Fig. \[c2o\], these data exhibit a wide variation in the C/O ratio, with several objects having values significantly larger than the solar value. These same objects also have relatively low values of N/O, with ratios ranging from near solar to slightly above it. Then there are the three PN with solar C/O values that appear to be decidedly enriched with N. All model sets predict a significant variation in enhanced C/O at relatively low N/O, while at higher N/O levels the C/O values approach the solar value of 0.55. The data appear to be consistent with the models, and generally speaking, all model sets appear to span the empirical data sufficiently, although the high N/O region contains only three PN. 
The data in this figure are consistent with the theoretical expectation that C and N are anti-correlated, as C from TDU is subsequently destroyed during HBB to produce N. Summarizing our detailed comparison of models and observations, the empirical trends seen in Figs. \[n2o\] and \[c2ovn2o\], and perhaps \[n2ovhe2h\], suggest the existence of HBB in stars with birth masses less than 4 M$_{\odot}$, something that is only attained by models that include overshooting on the main sequence (ATON, LPCODE). In more general terms, however, the observations, when combined with the predictions of four independent model grids, demonstrate that all four grids are compatible with the data except in the case of N/O. That is, all grids seem capable of spanning the distribution of points in the cases of C/O and He/H. We suggest that future computational efforts consider the implication that the onset of HBB occurs at a lower initial mass than previously believed. This is the most important result of our study.

Summary and Conclusions
=======================

Helium, carbon and nitrogen are known through observations to be synthesized by stars within the mass range of 1-8 M$_{\odot}$ (low and intermediate mass stars, or LIMS). We demonstrated this plainly in Figs. \[he2hvo2h\_BIG\]-\[n2ovo2h\_BIG\], where we saw that the He/H, C/O and N/O abundance ratios as a function of metallicity in a large sample of PN systematically fall above the ISM values for the same ratios measured in stars and H II regions. To evaluate the significance of the relative contribution that LIMS make to the galactic chemical evolution of these three elements, we need to determine the amount of He, C and N that a star produces and releases into the interstellar medium, i.e., the stellar yield. Fortunately, a portion of this ejected matter forms a planetary nebula, and from the emission spectra produced by these objects we are able to measure the abundances of He, C and N, among other elements. 
Since theoretical models of LIMS predict both the total yield and the PN abundance, by comparing the observed abundances to the theoretical predictions we can simultaneously infer the yield. The goal of this project has been to make a detailed comparison between observationally determined abundances of the elements He, C and N in planetary nebulae and theoretical predictions of the same by four different grids of stellar evolution models. We have carefully selected PN for which high quality spectra and good determinations of the luminosity and effective temperature of each associated central star are available. The optical and UV spectra consist exclusively of our own observations made with ground-based telescopes as well as HST/STIS and IUE. To ensure homogeneity, all spectral data were reduced and measured in a consistent manner, and abundances were all determined using the same algorithms. Central star luminosities and effective temperatures in all but three cases were taken from @frew08, and central and progenitor star masses were inferred by plotting these values in L-T diagrams containing evolutionary tracks from @vw94 and @mb16. Our final sample contained 35 Galactic PN, 13 of which have C abundances measured from UV lines. These 35 objects vary widely in morphology. All are categorized as either Peimbert Type I or II, and most are located in the Galactic thin disk within 2 kpc of the Sun. Combining the inferred abundances and stellar masses, we conclude the following: 1. [*The mean values of N/O across the observed progenitor mass range of 1-3 M$_{\odot}$ are well above the solar value.*]{} With respect to current theory, this is an unexpected result and suggests that extra-mixing is required in this stellar group to explain the N enrichment. Our results also suggest an increase in N/O with progenitor mass for M$>$2 M$_{\odot}$, implying that the onset of hot bottom burning occurs at lower masses than previously thought. 2. 
[*All but two of our sample PN clearly show evidence of He enrichment relative to the solar value.*]{} This is expected, since both first and third dredge-up mix He-rich material into the stellar atmosphere prior to PN formation from expelled atmospheric matter. 3. [*The average value of measured C/O within our sample is 1.23, well above the solar value of 0.55 [@asplund09].*]{} The standard deviation for the sample is 0.85. Evidence of C enrichment is present in roughly half of the sample of 13 objects for which we measured the C abundance. Interestingly, the PN with the highest C/O values seem to come from low mass progenitors with $M\approx 1\ M_{\odot}$. 4. [*The model grids to which we compared the observations successfully span the data points in the case of C/O. The models are also consistent with some, but not all, of the objects in terms of He/H. However, all of the models seem to fail in the case of N/O.*]{} Our finding of elevated N/O in low mass stars, possibly due to an earlier-than-expected onset of HBB and/or the presence of extra-mixing, is the most significant result of our study. Further confirmation of this result will help markedly in the ongoing efforts to determine the provenance of N in the context of galactic chemical evolution. Because stars with masses between 1 and 3 M$_{\odot}$ are roughly five times more numerous than stars between 3 and 8 M$_{\odot}$ (assuming a simple Salpeter initial mass function), the potential impact of these low mass stars on the question of the chemical evolution of nitrogen is obviously significant. The anonymous referee of our paper offered many helpful suggestions for improvement, and we thank him/her for performing such a careful review. We also thank Paolo Ventura and Sergio Cristallo for providing answers to our enquiries regarding the details of the ATON (Ventura) and FRUITY (Cristallo) model predictions, and in some cases for sending us additional output. 
We also appreciate the help provided by Gloria Delgado-Inglada concerning her group’s recently updated ionization correction factors. Portions of the UV data employed in our project came from HST Program number GO-12600. B.G.S. is grateful for summer support by the NSF through the Research Experience for Undergraduates program. M.M.M.B. was supported by ANPCyT through grant PICT-2014-2708 and by a Return Fellowship from the Alexander von Humboldt Foundation. Finally, R.B.C.H., B.G.S., K.B.K., and B.B. are grateful to their home institutions for travel support. Akerman, C.J., Carigi, L., Nissen, P.E., et al. 2004, , 414, 931 Aller, L.H., & Czyzak, S.J. 1983, , 51, 211 Aller, L.H., & Keyes, C.D. 1987, , 65, 405 Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481 Berg, D.A., Skillman, E.D., Henry, R.B.C., Erb, D.K., & Carigi, L. 2016, , 827, 126 Bloecker, T. 1995, , 297, 727 Boffin, H.M.J., Miszalski, B., Rauch, T., et al. 2012, Science, 338, 773 Charbonnel, C., & Zahn, J.-P. 2007, , 467, L15 Charbonnel, C., & Lagarde, N. 2010, , 522, A10 Cristallo, S., Piersanti, L., Straniero, O., et al. 2011, , 197, 17 Cristallo, S., Straniero, O., Piersanti, L., & Gobrecht, D. 2015, , 219, 40 Deharveng, L., Peña, M., Caplan, J., & Costero, R. 2000, , 311, 329 Delgado-Inglada, G., Morisset, C., and Stasińska, G. 2014, , 440, 536 Delgado-Inglada, G. 2016, Proceedings IAU Symposium No. 323, Planetary Nebulae: Multi-wavelength probes of stellar and galactic evolution, X. Liu, L. Stanghellini & A. Karakas, eds. Delgado-Inglada, G., Rodríguez, M., Peimbert, M., et al. 2015, , 449, 1797 De Marco, O., Long, J., Jacoby, G.H., et al. 2015, , 448, 3587 Di Criscienzo, M., Ventura, P., Garc[í]{}a-Hern[á]{}ndez, D. A., et al. 2016, , 462, 395 Dopita, M.A., Ali, A., Sutherland, R.S., et al. 2017, arXiv:1705.03974v1 Dufour, R.J., Kwitter, K.B., Shaw, R.A., Henry, R.B.C., Balick, B., and Corradi, R.L.M. 2015, , 803, 23 Ekstr[ö]{}m, S., Georgy, C., Eggenberger, P., et al. 
2012, , 537, A146 Esteban, C., Peimbert, M., García-Rojas, J., et al. 2004, , 355, 229 Frew, D. 2008, PhD thesis, Macquarie University Frew, D.J., and Parker, Q.A. 2010, , 27, 129 Freytag, B., Ludwig, H.-G., & Steffen, M. 1996, , 313, 497 García-Hernández, D.A., Ventura, P., Delgado-Inglada, G., et al. 2016, , 458, L118 Garnett, D.R., Skillman, E.D., Dufour, R.J., et al. 1995, , 443, 64 Garnett, D.R., Skillman, E.D., Dufour, R.J., & Shields, G.A. 1997, , 481, 174 Garnett, D.R., Shields, G.A., Peimbert, M., et al. 1999, , 513, 168 Groenewegen, M. A. T., Whitelock, P. A., Smith, C. H., & Kerschbaum, F. 1998, , 293, 18 Gustafsson, B., Karlsson, T., Olsson, E., et al. 1999, , 342, 426 Henry, R.B.C. 1990, , 356, 229 Henry, R.B.C. 1990, , 363, 728 Henry, R.B.C., Kwitter, K.B., & Dufour, R.J. 1999, , 517, 782 Henry, R.B.C., Kwitter, K.B., & Bates, J.A. 2000, , 531, 928 Henry, R.B.C., Kwitter, K.B., & Balick, B. 2004, , 127, 2284 Henry, R.B.C., Kwitter, K.B., Jaskot, A.E., Balick, B., Morrison, M., & Milingo, J.B. 2010, , 724, 748 Henry, R.B.C., Balick, B., Dufour, R.J., et al. 2015, , 813, 121 Herwig, F., Bloecker, T., Schoenberner, D., & El Eid, M. 1997, , 324, L81 Herwig, F. 2000, , 360, 952 Herwig, F. 2005, , 43, 435 Izotov, Y.I., & Thuan, T.X. 1999, , 511, 639 Izotov, Y.I., Thuan, T.X., & Guseva, N.G. 2012, , 546, A122 Johnson, M. D., Levitt, J. S., Henry, R. B. C., & Kwitter, K. B. 2006, in IAU Symp. 234, ed. M. J. Barlow & R. Méndez (Cambridge: Cambridge Univ. Press), 439 Kaler, J.B. 1979, , 228, 163 Karakas, A.I. 2010, , 403, 1413 Karakas, A. I. 2014, , 445, 347 Karakas, A.I., and Lugaro, M. 2016, , 825, 26 Karakas, A.I., and Lattanzio, J.C. 2014, , 31, 30 Kingsburgh, R.L., & Barlow, M.J. 1994, , 271, 257 Kwitter, K.B., & Henry, R.B.C. 1998, , 493, 247 Kwitter, K.B., & Henry, R.B.C. 2001, , 562, 804 Kwitter, K.B., Henry, R.B.C., & Milingo, J.B. 2003, , 115, 80 Kwitter, K.B., & Henry, R.B.C. 2012, Proceedings IAU Symposium No. 
283, 2011, Planetary Nebulae: An Eye to the Future, A. Manchado, L. Stanghellini & D. Schönberner, eds., p. 119 Kwitter, K.B., Lehman, E.M.M., Balick, B., & Henry, R.B.C. , 753, 12 Lagarde, N., Decressin, T., Charbonnel, C., et al. 2012, , 543, A108 Lattanzio, J. C. 1986, , 311, 708 Lugaro, M., Karakas, A.I., Pignatari, M., & Doherty, C.L. 2016, in Proceedings IAU Symposium No. 323, Planetary Nebulae: Multi-wavelength probes of stellar and galactic evolution, X. Liu, L. Stanghellini, & A.I. Karakas, eds., IAU, arXiv:1703.00280 Maciel, W.J., Costa, R.D.D., & Cavichia, O. 2017, , 53, 151 Maeder, A., Meynet, G., Lagarde, N., & Charbonnel, C. 2013, , 553, A1 Marigo, P., Bernard-Salas, J., Pottasch, S.R., Tielens, A.G.G.M., & Wessellius, P.R. 2003, , 409, 619 Marigo, P., Bressan, A., Girardi, L., et al. 2011, in ASP Conference Series, 445, Why Galaxies Care About AGB Stars II: Shining Examples and Common Inhabitants, Kerschbaum, Lebzelter, & Wing, eds., PASP, p. 431 Milingo, J.B., Kwitter, K.B., Henry, R.B.C., & Cohen, R.E. 2002, , 138, 279 Milingo, J.B., Kwitter, K.B., Henry, R.B.C., & Souza, S.P. 2010, , 711, 619 Miller Bertolami, M.M. 2016, , 588, A25 Osterbrock, D.E., and Ferland, G.J. 2006, in Astrophysics of Gaseous Nebulae and Active Galactic Nuclei (Sausalito: University Science Books) Péquignot, D., Walsh, J.R., Zijlstra, A.A., & Dudziak, G. 2000, , 361, L1 Pietrinferni, A., Cassisi, S., Salaris, M., & Castelli, F. 2004, , 612, 168 Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 1992, , 96, 269 Schr[ö]{}der, K.-P., & Cuntz, M. 2005, , 630, L73 Sharpee, B., Williams, R., Baldwin, J.A., & van Hoof, P.A.M. 2003, , 149, 157 Stanghellini, L., Shaw, R.A., & Gilmore, D. 2005, , 622, 294 Stanghellini, L., Lee, T-H., Shaw, R.A., Balick, B., & Villaver, E. 2009, , 702, 733 Stasińska, G., Richer, M.G., & McCall, M.L. 1998, , 336, 667 Sterling, N.C. 2017, Proceedings IAU Symposium No. 
323, Planetary Nebulae: Multi-wavelength probes of stellar and galactic evolution, X. Liu, L. Stanghellini & A. Karakas, eds. Straniero, O., Gallino, R., & Cristallo, S. 2006, Nuclear Physics A, 777, 311 van Zee, L., Salzer, J.J., Haynes, M.P., O’Donoghue, A.A., & Balonek, T.J. 1998, , 116, 2805 Vassiliadis, E., & Wood, P. R. 1993, , 413, 641 Vassiliadis, E., & Wood, P. R. 1994, , 92, 125 Ventura, P., & D’Antona, F. 2005, , 431, 279 Ventura, P., Stanghellini, L., Dell’Agli, F., & García-Hernández, D.A. 2012, , 452, 3395 Wachlin, F. C., Miller Bertolami, M. M., & Althaus, L. G. 2011, , 533, A139 Wachter, A., Winters, J. M., Schr[ö]{}der, K.-P., & Sedlmayr, E. 2008, , 486, 497 Weiss, A., & Ferguson, J.W. 2009, , 508, 1343 [^1]: The alpha elements O, Ne, S and Ar are apparently forged in massive stars by similar nuclear processes which transcend position and environment. Therefore, their relative abundances track each other. [^2]: We qualify this seemingly tidy picture by pointing out that oxygen enrichment in PN has been reported by @pequignot00 and more recently in C-rich PN by @delgado15 and @garcia16. [^3]: Note that we have estimated their averages for N/O and C/O from their separate averages of N, C and O. [^4]: We are very much aware of the pitfalls of using this method to determine central star and progenitor star masses. Problems stem primarily from the small separation between adjacent model evolutionary tracks in the luminosity-temperature plane that are used to infer these masses, given the uncertainties of the observed values of these two parameters. However, we are confident that in using this method we can at least tell if a progenitor star is inside or outside of a mass range for which theory predicts C enrichment through triple alpha burning and dredge-up, or N enrichment through hot bottom burning. [^5]: Fg1 and NGC 6826 are the only objects in our sample with any evidence of binary central stars. According to @boffin12, Fg1 has a period of 1.2d. 
NGC 6826 has a fast rotating central star, which is something that can only be achieved in a merger [@demarco15]. However, neither of these objects exhibits any abundance peculiarities, according to our data. For now, we have assumed that the presence of a secondary star does not affect our results. [^6]: Since the radiation- or density-bounded natures of our PN are unknown, Delgado-Inglada (private communication) recommended that we use this ICF instead of the one published in @delgado14. [^7]: The ICF uncertainty was unavailable in the case of N/O. [^8]: We note that oxygen may not be a reliable metallicity indicator if significant amounts of O are dredged up to the surface or destroyed by hot bottom burning during the TP-AGB, as predicted by some models; see section 3.1.1 in [@2016MNRAS.462..395D] and Table 3 in @mb16. [^9]: Rotationally induced mixing includes different types of mixing processes caused by the existence of rotation. These include mixing by meridional circulation and diffusion by shear turbulence in differentially rotating stars [@lagarde12; @maeder13]. [^10]: Thermohaline mixing is a double diffusive process that can develop in low-mass stars. This thermohaline instability takes place when the stabilizing agent (heat) diffuses away faster than the destabilizing agent (chemical composition), leading to a slow mixing process. Thermohaline mixing can happen in low mass stars after the RGB bump and on the early AGB [@lagarde12], where an inversion of molecular weight is created by the $^3$He($^3$He,2p)$^4$He reaction on a dynamically stable structure.